Introduction
Part 1 - Why I Chose Raspberry Pi (And Paid the ARM Tax)
I decided to run my Kubernetes cluster on Raspberry Pis.
Not because it was trendy. Not because it was the cheapest possible setup. But because I wanted something that felt closer to infrastructure and less like a lab experiment running on a leftover desktop.
Three small ARM machines. Silent. Low power. Always on.
On paper, it looked perfect.
Cheap. Efficient. Kubernetes-ready. Close to what real edge infrastructure actually looks like.
Reality, as usual, was more nuanced.
This is not a guide about installing K3s. That part takes minutes.
This is about what changes when your cluster runs on ARM.
Why Raspberry Pi?
I wanted low energy consumption. A small physical footprint. Hardware I fully control. Something I could leave running without feeling like I was powering a small data center in my living room.
More importantly, I wanted a cluster that behaves like infrastructure, not a toy. Something always on. Something that forces you to think in terms of services, not processes.
K3s runs beautifully on ARM. Installation is trivial. Within minutes you have a working control plane, nodes registered, and pods scheduling.
That part is easy.
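For reference, the standard install from the K3s docs really is this short (the hostname and token below are placeholders, not my actual setup):

```bash
# On the first Pi: install the K3s server (control plane)
curl -sfL https://get.k3s.io | sh -

# Grab the join token the other nodes will need
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional Pi: join the cluster as an agent
curl -sfL https://get.k3s.io | K3S_URL=https://pi-master:6443 K3S_TOKEN=<node-token> sh -
```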
The hard part starts the moment you deploy your own software.
The ARM Surprise Nobody Talks About
When your cluster runs on ARM, everything must run on ARM.
Not just Kubernetes itself. Everything.
Your application images.
Your base images.
Your native dependencies.
Your CI pipelines.
Most Docker images default to amd64. And you don’t notice. Until you do.
The first time a pod crashes with:
exec format error
That is the moment you realize the image was built for the wrong architecture.
Nothing is “wrong” with the container. The binaries inside simply cannot execute on that CPU.
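When it happens, the quickest way to confirm the mismatch is to compare what the image was built for with what the node actually runs. Roughly like this (the pod, image, and registry names are made up):

```bash
# The pod events show the container dying instantly with "exec format error"
kubectl describe pod my-app-7d9c4b

# Which architecture was the image built for?
docker image inspect registry.example.com/my-app:latest \
  --format '{{.Os}}/{{.Architecture}}'
# -> linux/amd64

# Which architecture do the nodes actually run?
kubectl get nodes -L kubernetes.io/arch
# -> arm64
```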
This is the ARM tax.
You are no longer on the default path. And the ecosystem is still very much optimized for x86.
Building for ARM Was Painful
At the time, my development machine was x86.
My GitLab runners were x86.
Which meant I could not just build locally, push, and deploy.
If I built an image the usual way, it would default to amd64. The cluster would pull it happily and fail at runtime. So now I needed ARM-compatible images, consistently.
Emulation via QEMU was technically possible. It was also slow, fragile, and not something I wanted to depend on for every build.
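For completeness, the emulation route looks roughly like this (the image name is a placeholder). Every build step runs under QEMU, which is exactly why it crawls:

```bash
# Register QEMU emulators so an x86 machine can build ARM images
docker run --privileged --rm tonistiigi/binfmt --install all

# Cross-build for arm64 and push it; functional, but painfully slow
docker buildx build --platform linux/arm64 \
  -t registry.example.com/my-app:latest --push .
```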
So I grabbed an old Raspberry Pi I had in a drawer.
And turned it into an ARM GitLab runner.
The Drawer Pi CI Phase
That old Pi became my build machine.
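Turning it into a runner is the easy part. Registration looks something like this, depending on your GitLab version (URL and token are placeholders):

```bash
# On the Pi: register a Docker-executor runner and tag it arm64,
# so only ARM build jobs get scheduled on it
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com \
  --registration-token "$REGISTRATION_TOKEN" \
  --executor docker \
  --docker-image docker:stable \
  --tag-list arm64
```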
It worked. That’s the good news.
The bad news: builds were slow. Really slow.
Multi-stage Docker builds felt heavy. Java builds dragged. Node builds were not much better. CI pipelines became “trigger and go do something else” pipelines.
It forced discipline.
I started caring more about layer caching. I reduced unnecessary dependencies. I trimmed images aggressively. You learn quickly when every inefficiency costs minutes.
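One concrete way to care about layer caching is to back the cache with the registry, so the Pi does not rebuild layers it already built last time. A rough sketch, assuming buildx with a docker-container builder (registry and image names are placeholders):

```bash
# Pull cached layers from the registry, push the updated cache back after the build
docker buildx build \
  --platform linux/arm64 \
  --cache-from type=registry,ref=registry.example.com/my-app:buildcache \
  --cache-to   type=registry,ref=registry.example.com/my-app:buildcache,mode=max \
  -t registry.example.com/my-app:latest \
  --push .
```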
But iteration speed suffered.
And slow CI quietly kills momentum.
You start batching changes instead of deploying small increments. You hesitate before refactoring because you know you will pay for it in build time.
That is not the feedback loop you want.
The Apple Silicon Turning Point
Then I got a MacBook Air with Apple Silicon.
That changed everything.
Suddenly my development machine was ARM. Docker built ARM images natively. No emulation. No remote ARM runner required. No architectural mismatch between dev and cluster.
buildx stopped being a theoretical feature and became part of my daily workflow.
I could build ARM images locally. Push directly to the registry. Deploy instantly to the cluster.
Multi-arch builds became practical instead of painful.
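In practice that whole workflow is a couple of commands (builder, registry, and image names are placeholders):

```bash
# One-time: create a buildx builder and make it the default
docker buildx create --name homelab --use

# Build arm64 natively on Apple Silicon (and amd64 via emulation if you need it),
# then push a single multi-arch manifest to the registry
docker buildx build \
  --platform linux/arm64,linux/amd64 \
  -t registry.example.com/my-app:latest \
  --push .
```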
Iteration speed increased massively.
The Raspberry Pi cluster stayed exactly the same.
But the CI bottleneck disappeared.
And that alone made the whole setup feel mature instead of experimental.
Lessons From Running ARM Kubernetes
Running ARM in production-style environments is absolutely viable.
But you must accept a few things upfront.
You are not on the default path. Some images will not support ARM. Debugging architecture issues will cost you time. And your CI strategy matters far more than you expect.
The upside is that you actually learn how containers work.
You start understanding image manifests instead of ignoring them. You become explicit about platform flags. You think about multi-arch builds instead of assuming Docker will magically handle the architecture for you.
You see the difference between cross-compilation and native builds. You stop mixing architecture assumptions across environments.
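Concretely, that means inspecting manifests and stating the platform instead of trusting defaults. For example:

```bash
# Which architectures does an image actually ship?
docker buildx imagetools inspect nginx:latest
# lists linux/amd64, linux/arm64, linux/arm/v7, ...

# Be explicit about the platform instead of relying on the daemon's default
docker pull --platform linux/arm64 nginx:latest
```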
Docker stops being magic.
And that is a good thing.
What I Would Do Differently From Day One
If I started again, I would define the architecture strategy upfront.
I would standardize on multi-arch builds immediately. I would make CI architecture-aware from the beginning. And I would treat build pipelines as first-class infrastructure instead of something that “just builds images.”
Because once your cluster runs on ARM, architecture is not an implementation detail.
It is a design constraint.
And ignoring it will cost you later.
Next: Storage Is Where Things Really Broke
ARM was not the biggest long-term challenge.
Storage was.
Raspberry Pis are surprisingly stable for control-plane workloads. Stateless services run fine. Even moderate traffic is not an issue.
But the moment you introduce state, things change.
Stateful services.
Persistent volumes.
Logging stacks.
Databases.
Disk I/O becomes the real bottleneck.
In the next part, I will break down why storage latency nearly destabilized the cluster, what I learned about PVCs on small hardware, and the architectural changes that finally made the cluster predictable.
Because Kubernetes does not forgive slow disks.
And I learned that the hard way.
