Open Source – Kubernetes on Metal

Patrick McClory – DevOps Automation

Ready for some Kubernetes on metal? It might not be the first thing you think of when you hear ‘Kubernetes,’ but we had some hardware sitting around, and deploying K8s on it gave us a lot of insight into how things work ‘under the hood.’ We’re sharing our ‘lab’ setup on our GitHub account. From end to end, we’re able to deploy machines, stand up a single-master Kubernetes cluster, and cycle it in minutes. Who doesn’t want k8s running on their own machines in 10 minutes?


If you’re looking for deeper detail on how we chose these tools, or for other takes on getting to know this toolset, check out the README.md in the repo for this project.

Here’s the basic overview:

On and Off – Metal as a Service

We had a few machines that were PXE boot enabled (you know, that annoying ‘network boot’ screen you usually ignore?) and after making a few changes to the BIOS settings, we were good to go:

  • Set the power settings to turn the machine on when AC power is restored after failure
  • Make sure boot order is set to boot from network before anything else

From there, we’ve got a pretty basic MaaS setup – a single machine serving as both our rack and region controller. Next, we used a Digital Loggers IP-accessible PDU to manage turning things on and off.

For more on supported PDUs, check out the MaaS documentation on BMC Power Types.

MaaS also has an API, which we used to drive a custom dynamic inventory for Ansible. We added a few command-line options to kill (release) and reset (release, then deploy) hosts based on a predetermined set of tags (we used k8s-master and k8s-node). The dynamic inventory feeds infrastructure-level data from MaaS to Ansible; the sketch below shows the shape of what it hands back.
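
Whatever the internals of the script, Ansible only cares that an executable inventory prints JSON in a well-known shape when invoked with --list. Here’s roughly what ours emits for the two tags; the host names and addresses are illustrative, not our actual rack:

```json
{
  "k8s-master": { "hosts": ["metal-01"] },
  "k8s-node": { "hosts": ["metal-02", "metal-03"] },
  "_meta": {
    "hostvars": {
      "metal-01": { "ansible_host": "10.0.0.11" },
      "metal-02": { "ansible_host": "10.0.0.12" },
      "metal-03": { "ansible_host": "10.0.0.13" }
    }
  }
}
```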

Up and Running – Ansible

Once we’ve got servers, our Ansible playbooks layer configuration on top: first the OS and supporting services, then Kubernetes itself via kubeadm, loosely following the upstream single-master setup documentation. The sketch below shows the general shape of that layering.
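
Here’s a rough sketch of that layering; the role names are illustrative stand-ins, not the actual roles from our repo:

```yaml
# Sketch of the playbook layering (role names are illustrative).
- hosts: all
  become: true
  roles:
    - base-os        # swap, DNS, apt packages, container runtime
    - kube-packages  # kubelet, kubeadm, kubectl

- hosts: k8s-master
  become: true
  roles:
    - kube-master    # kubeadm init, pod network add-on

- hosts: k8s-node
  become: true
  roles:
    - kube-node      # kubeadm join against the master
```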

Deploying Resources – Kubectl and Helm

We didn’t need to reinvent the wheel to deploy resources, but it was important to use kubectl and helm in specific ways. Specifically, we found it helpful to use kubectl apply and helm upgrade --install for all create and update calls, rather than splitting the work into separate install/create and update workflows. A sketch of how we drive those calls from Ansible follows.
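
Since Ansible is already driving everything for us, those calls wrap neatly as tasks. A minimal sketch, with the manifest path, release name, and chart as placeholders:

```yaml
# Sketch: drive kubectl/helm from Ansible with idempotent calls.
# The manifest path, release name, and chart are placeholders.
- name: Apply manifests (same call whether creating or updating)
  ansible.builtin.command: kubectl apply -f /opt/k8s/manifests/
  register: apply_result
  changed_when: "'created' in apply_result.stdout or 'configured' in apply_result.stdout"

- name: Install or upgrade a release in one idempotent call
  ansible.builtin.command: helm upgrade --install my-release stable/my-chart
```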

Here’s what it looks like when we’re done with the basics:

[Image: Kubernetes on metal]

Lessons Learned and ‘Little Tweaks’

We used this setup to cycle rapidly as we made some pretty spectacular mistakes getting k8s running. Here are just a few of the ‘bigger’ examples; you can find fixes for each in our playbooks:

  • We tried to be ‘fancy’ with our local DNS servers. Between MaaS’s default behavior and Ubuntu’s local DNS caching setup, we ran into problems, and we found it easier to be VERY specific about how K8s resolves DNS outside the cluster (see the first sketch after this list).
  • Kubelet won’t start (by design, and by default) while swap is on. Turning swap off was no problem, but we neglected to also remove it from /etc/fstab, so when a node restarted, it wouldn’t come back up.
    • Once we fixed that (second sketch below), we could restart nodes without surprises.
  • Helm and kubectl both resolve the location of your kubeconfig file by way of the executing user’s home directory, identified by the $HOME variable. When tasks run as another user (say, root under sudo), $HOME isn’t what you expect, and setting KUBECONFIG explicitly avoids the surprise.
  • MaaS marks nodes as ‘deployed’ once the OS has rebooted after the install process. That doesn’t mean first-boot tasks, like updating the apt cache, are finished, so installs failed on nodes ‘every once in a while,’ especially when we were ‘too quick’ to kick things off.
    • Ubuntu updates its apt cache early on first boot and locks the update/upgrade process for a short period, so we got to learn about retry loops in Ansible (third sketch below).
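
For the DNS fix, the usual suspect on Ubuntu is the local caching resolver’s 127.0.0.53 stub landing in /etc/resolv.conf, which kubelet then hands to pods. Here’s a minimal sketch of one way to be explicit, assuming systemd-resolved and kubeadm’s /etc/default/kubelet drop-in:

```yaml
# Sketch: point kubelet at the real upstream resolv.conf instead of
# systemd-resolved's 127.0.0.53 stub so pods resolve names outside
# the cluster. Assumes kubeadm reads /etc/default/kubelet.
- name: Be explicit about which resolv.conf kubelet hands to pods
  ansible.builtin.lineinfile:
    path: /etc/default/kubelet
    create: true
    regexp: '^KUBELET_EXTRA_ARGS='
    line: 'KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf'

- name: Restart kubelet so the flag takes effect
  ansible.builtin.systemd:
    name: kubelet
    state: restarted
```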
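
For the swap gotcha, both halves matter: turn it off now and keep it off after a reboot. A minimal sketch:

```yaml
# Sketch: kubelet refuses to start with swap on, so disable it
# immediately AND comment it out of /etc/fstab so it stays off.
- name: Turn swap off right now
  ansible.builtin.command: swapoff -a
  when: ansible_swaptotal_mb | default(0) > 0

- name: Comment out swap entries so reboots don't bring swap back
  ansible.builtin.replace:
    path: /etc/fstab
    regexp: '^(\s*[^#\s]+\s+\S+\s+swap\s+.*)$'
    replace: '# \1'
```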
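
And the apt-lock race is exactly what Ansible’s until/retries loop is for. A sketch, with the package list as a hypothetical variable:

```yaml
# Sketch: first-boot jobs hold the apt/dpkg lock briefly, so retry
# instead of failing outright. k8s_packages is a placeholder.
- name: Install packages, waiting out the first-boot apt lock
  ansible.builtin.apt:
    name: "{{ k8s_packages }}"
    state: present
    update_cache: true
  register: apt_result
  until: apt_result is succeeded
  retries: 10
  delay: 30
```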