Control Planes and Cluster Size #11719
-
I've been building out a Pi-based K3s cluster, and things have been working great. As of today I have 10 nodes, more for the fact that it's interesting and fun, and less for the illusion of having a powerful cluster of machines. I have a few workloads running (Homepage, Traefik, cert-manager, kube-vip, and Longhorn) and so far things are running great, and I couldn't be happier. But....

Originally I had 3 control planes and 6 workers (9 nodes). Once I added Longhorn I wanted something with more robust storage (only 2 of the original 9 Pis have NVMe drives) than the SD cards that are in most of the Pis. So I grabbed an old workstation, slapped a nice shiny NVMe in it, spun it up, and added it to the cluster... as a control plane... sigh. And now I have a situation: an even number of control-plane/etcd nodes.

After my initial "awwww maaaannnnnn" moment, I thought through approaches to deal with this, along with an overall architectural question about designing a cluster like this with more than a couple of nodes. I know that general wisdom and convention say you should have an odd number of etcd control-plane nodes: three at a minimum, and possibly up to seven before running into I/O-related issues (speaking generically here). I totally get it, and it makes ooooodles of sense to me.

So with my "fat-fingered" 4 control planes, what am I better off doing? I'm not really sure what approach makes sense in a situation like this (considering a somewhat nonsensical number of overall nodes for a small 2-person homelab environment). None of my workloads have any sustained compute or I/O (for now), but that could easily change. I'm still exploring, and it wouldn't be bad to move some of my VM solutions to orchestrated containers. Any perspective and discussion would be greatly appreciated.
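To spell out the quorum arithmetic behind that odd-number guidance as I understand it (standard Raft/etcd math, nothing specific to my setup):

$$\text{quorum}(n) = \left\lfloor \tfrac{n}{2} \right\rfloor + 1, \qquad \text{failures tolerated} = n - \text{quorum}(n)$$

so a 3-member etcd cluster has a quorum of 2 and tolerates one member failure, while a 4-member cluster has a quorum of 3 and still only tolerates one failure; the extra member just adds another vote to wait for on every write.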
-
I'd probably just `kubectl delete node` the node you just added, and restart it as an agent. That will also delete it from the etcd cluster and you can go back to having 3. If you don't need the extra redundancy, more just makes things slower as it has to wait for a quorum of cluster members to ack each write.
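For anyone following along later, a minimal sketch of that procedure on K3s, assuming the extra server was installed with the standard install script; the hostname `workstation-01`, server IP, and token are placeholders for your own values:

```sh
# From any machine with kubectl access: drain, then remove the extra server.
# In K3s, deleting the node also removes its etcd member.
kubectl drain workstation-01 --ignore-daemonsets --delete-emptydir-data
kubectl delete node workstation-01

# On the workstation itself: remove the K3s server install...
/usr/local/bin/k3s-uninstall.sh

# ...then rejoin the cluster as an agent, pointing at one of the remaining Pi servers.
# The join token lives at /var/lib/rancher/k3s/server/node-token on a server node.
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -
```

After it rejoins, `kubectl get nodes` should show it without the control-plane/etcd roles, and etcd is back to the three Pi servers.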