This is Part 2. If you need the cloud-to-homelab translation and requirement framing, read Part 1: From Cloud Sizing to Requirements first.
When you architect a Kubernetes cluster, you don’t think about heat dissipation or power consumption. You think in abstractions: N2 instances, vCPUs, memory tiers. Click, deploy, bill. The infrastructure vanishes behind APIs and Terraform declarations. But the moment you decide to build that same cluster in your homelab, those abstractions collapse into very real decisions: which CPU, how much RAM, what kind of storage, and critically, how much will this cost me in electricity every month?
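That last question is simple arithmetic once you have a wattage figure. A minimal sketch, assuming illustrative numbers (a 45 W average draw and $0.30/kWh are placeholders, not figures from this article):

```python
# Back-of-the-envelope monthly electricity cost for a homelab node.
# The wattage and price-per-kWh below are illustrative assumptions.
def monthly_power_cost(avg_watts: float, price_per_kwh: float,
                       hours_per_month: float = 24 * 30) -> float:
    """Cost of running a machine continuously at avg_watts for one month."""
    kwh = avg_watts / 1000 * hours_per_month
    return kwh * price_per_kwh

# Example: a small node averaging 45 W at $0.30/kWh
print(f"${monthly_power_cost(45, 0.30):.2f}/month")  # → $9.72/month
```

Multiply by the number of nodes in the cluster and the idle-power question stops being abstract very quickly.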