I recently released a new Ruby gem called Metatron. This gem aims to make it very easy to create Kubernetes controllers, either to implement the Operator pattern or to respond to events related to built-in resource types. It does this by deferring to Metacontroller for the Kubernetes API interactions and handling the boilerplate work of providing the JSON API it expects. To be clear, Metacontroller does most of the heavy lifting here. It provides some fantastic examples for creating various kinds of controllers, but sadly none of them are Ruby-based. This really seems like a great place for Ruby to shine. So, I decided to roll up my sleeves and get to work on that.
I touched on a lot of advanced Kubernetes topics in that short paragraph. It might not be clear precisely what value Metatron is adding to the equation, so I’m going to dive deep into the concept of Kubernetes controllers, Metacontroller, and finally Metatron itself in a three-part miniseries. Buckle up and prepare to learn lots about Kubernetes!
What is a Controller, anyway? How is it different from an Operator?
A Kubernetes Controller is a mechanism that attempts to make the observed state of your Kubernetes cluster match your desired state by performing some action(s) (such as adding, updating, or removing resources via the Kubernetes API) if necessary. Controllers are clients of the Kubernetes API server and use so-called “watch streams” to track changes to resources within a cluster. This loop of retrieving changes and then performing actions is called the control loop, which is where Controllers get their name.
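The control loop can be sketched in a few lines of Ruby. This is purely illustrative (no real controller or client library works exactly like this): `reconcile` compares a desired state against an observed state and returns the actions needed to converge.

```ruby
# A minimal sketch of the reconcile step in a control loop: compare the
# desired state against the observed state and compute the actions needed
# to make them match. All names here are illustrative.
def reconcile(desired, observed)
  actions = []
  desired.each do |name, spec|
    if !observed.key?(name)
      actions << [:create, name, spec]
    elsif observed[name] != spec
      actions << [:update, name, spec]
    end
  end
  observed.each_key do |name|
    actions << [:delete, name] unless desired.key?(name)
  end
  actions
end

desired  = { "web" => { replicas: 3 }, "worker" => { replicas: 1 } }
observed = { "web" => { replicas: 2 }, "stale" => { replicas: 1 } }
reconcile(desired, observed)
# => [[:update, "web", {:replicas=>3}], [:create, "worker", {:replicas=>1}], [:delete, "stale"]]
```

A real Controller runs this comparison continuously as the watch stream delivers changes, which is why it is called a loop.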
Notice that none of that mentions custom resources; Controllers can track changes to any kind of resource they’re allowed to monitor, including built-in types and custom types. Those that track custom resource types, use those custom types as storage for state and status, and act as a parent for other Kubernetes resources are called Operators.
Controllers: How Do They Work?
Controllers are just Kubernetes API clients. They request information about collections of resources based on their type. They do this using “watch” requests based on a resourceVersion provided during their initial request. The Controller can then poll on a loop, retrieving only changes that occurred since the resourceVersion marker. This allows many Controllers to efficiently poll for changes.
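The list-then-watch pattern boils down to two requests against the API server. Here's a sketch of the URLs involved; the host and the resourceVersion value are placeholders, and the paths follow the standard Kubernetes list/watch convention:

```ruby
require "uri"

# Sketch of the two requests a Controller makes. The host and the
# resourceVersion value ("12345") are placeholders, not real values.
api = "https://kubernetes.default.svc"

# 1. Initial LIST request; the response's metadata includes a
#    resourceVersion marking "now" in the collection's history.
list_uri = URI("#{api}/api/v1/pods")

# 2. WATCH request resuming from that marker; the server streams only
#    changes that happened after that resourceVersion.
watch_uri = URI("#{api}/api/v1/pods?watch=true&resourceVersion=12345")

watch_uri.query
# => "watch=true&resourceVersion=12345"
```

Because each watch picks up exactly where the last response left off, the API server never has to re-send the full collection to every Controller.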
When the Controller sees changes it actually cares about (based on criteria like labels, annotations, etc.), it usually takes some action. This might be adding annotations, creating Pods, or connecting to some external API. After performing these actions (if any), the Controller might then report the status somewhere, though this isn’t strictly required.
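Filtering "changes it actually cares about" might look like this; a hypothetical label-selector check, not code from any real client:

```ruby
# Sketch of filtering watched resources down to the ones a Controller
# cares about, here by matching a simple label selector. Illustrative only.
def relevant?(resource, selector)
  labels = resource.dig("metadata", "labels") || {}
  selector.all? { |key, value| labels[key] == value }
end

pod = { "metadata" => { "labels" => { "app" => "web", "tier" => "frontend" } } }

relevant?(pod, { "app" => "web" }) # => true
relevant?(pod, { "app" => "db" })  # => false
```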
Operators: How Do They Work?
Since they’re Controllers, everything above applies to Operators. The main distinction is that Operators care about Custom Resources. These are usually used as parent resources and as storage for the current state and status of child resources. They’re usually bundled with Custom Resource Definitions that must be installed into the cluster. Operators then create a watch stream to subscribe to changes to those custom resources. They’ll also usually subscribe to the child resource types and use ownerReferences to identify the parent.
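An ownerReferences entry is just a bit of metadata on the child pointing back at the parent. The fields below are the real ownerReference fields from the Kubernetes API, though the resource names and UID here are made-up placeholders:

```ruby
require "json"

# Sketch of the metadata an Operator sets on a child resource so the
# parent can be identified later (and so garbage collection works).
# The apiVersion, kind, names, and uid are illustrative placeholders.
owner_reference = {
  "apiVersion" => "example.com/v1",
  "kind" => "MyApp",
  "name" => "my-app-instance",
  "uid" => "00000000-0000-0000-0000-000000000000",
  "controller" => true,
  "blockOwnerDeletion" => true
}

child_metadata = {
  "name" => "my-app-instance-web",
  "ownerReferences" => [owner_reference]
}

puts JSON.pretty_generate(child_metadata)
```

When the Operator later sees a change to a child, it can walk this reference back to the parent custom resource and reconcile the whole family together.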
Where Does Metacontroller Come In?
Writing a Controller from scratch, or even using a framework, can be a fair amount of work. Not everyone loves Golang, and there tends to be a lot of boilerplate and repetition. Controllers, especially those implementing finalizers (logic that must run before a resource is deleted), can become a pretty critical part of your infrastructure as you’re extending the Kubernetes API.
Metacontroller provides the control loop and does so in a safe, efficient way. It offers two flavors: as a DecoratorController, it listens for existing resources (Pods, Ingresses, etc.) and adds annotations or labels to them; as a CompositeController, it listens for custom resources and creates child resources. Both of these can cause new resources to be created in your cluster, but more on them in part 2!
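To give a taste of where this is heading: Metacontroller delegates the "what should exist" decision to your code via a JSON webhook. The sketch below shows roughly the shape of a CompositeController sync hook response (the parent comes in, desired children and a status go out). The function name, field values, and child spec are illustrative; see the Metacontroller docs for the exact request/response contract.

```ruby
require "json"

# Rough sketch of a Metacontroller-style sync hook: it receives the
# parent custom resource as JSON and replies with the desired children
# plus a status. Names and specs here are illustrative placeholders.
def sync(request)
  parent = request["parent"]
  replicas = parent.dig("spec", "replicas") || 1

  desired_child = {
    "apiVersion" => "v1",
    "kind" => "Pod",
    "metadata" => { "name" => "#{parent.dig('metadata', 'name')}-pod" },
    "spec" => { "containers" => [{ "name" => "app", "image" => "nginx" }] }
  }

  { "status" => { "observedReplicas" => replicas }, "children" => [desired_child] }
end

request = JSON.parse('{"parent": {"metadata": {"name": "demo"}, "spec": {"replicas": 2}}}')
sync(request)
```

Metatron's job, as we'll see in part 3, is to handle this JSON plumbing so you only write the interesting part.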