Keep Your Tooling Simple
Apr 27, 2018
As a DevOps architect, you hold the responsibility of keeping your tooling as simple as possible, for your own benefit as well as that of others. Simple architectures carry less technical debt, require less cognitive load to understand, and have fewer moving parts to break.
Kubernetes is a powerful open-source container orchestration platform that’s gained a lot of popularity over the past few years, and for any non-trivial software project it’s a solid investment. With declarative configuration and a growing list of manageable resources (deployments, ingresses, services, secrets, configmaps, namespaces, etc.), Kubernetes fits the best practices of immutable infrastructure very well. That growing resource list, together with its own tooling, pushes Kubernetes into a space that’s shared by many other DevOps tools.
Because of this, it’s easy to find ourselves wanting to manage Kubernetes through the other tools we’ve grown accustomed to using. Its popularity has driven a host of plug-ins and extensions from some of the most widely used DevOps tools, making that desire to manage Kubernetes in a familiar way somewhat attainable, but this isn’t always a good thing.
Pursuit of the golden pipeline
In our typical DevOps fashion of automating everything, decreasing repetition, and creating abstractions, it’s all too easy to design ourselves into a black hole.
To illustrate, let’s use two examples:
Example 1: Terraform Kubernetes Plugin
Terraform is a widely used DevOps tool, and it works well for provisioning resources at various cloud providers where existing methods of declarative resource management are poor or non-existent. When used with Kubernetes, however, the result is a “Terraform state” generated in addition to Kubernetes’ own state, and a resource declaration that offers few (if any) improvements over Kubernetes-native resource declarations. Unless you simply prefer Terraform’s own configuration syntax (HCL) to YAML or JSON, this creates an unnecessary abstraction. Plus, if you need to create a resource that’s not supported by Terraform, you’ll have a mixture of both native Kubernetes YAML and Terraform HCL.
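To make the overlap concrete, here’s roughly what the same Namespace looks like in Terraform’s Kubernetes provider versus a native manifest (the namespace name here is illustrative):

```hcl
# Terraform HCL: creates a Terraform state entry for a resource
# the cluster already tracks itself.
resource "kubernetes_namespace" "staging" {
  metadata {
    name = "staging"
  }
}
```

```yaml
# Native Kubernetes YAML: applied directly with kubectl,
# state lives only in the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Same resource, same declarative intent, but the HCL version adds a second source of truth that has to be kept in sync.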
Don’t get me wrong, Terraform is an excellent tool and I personally use it for several of my own projects, but I specifically don’t use it to interact with Kubernetes because it overlaps functionality that’s already provided by Kubernetes itself. On top of that, having both a Terraform state and an internal Kubernetes state that both manage the same resources doesn’t feel right.
Example 2: Helmfile
We’re going to define our configuration in YAML, template it, generate templated YAML, package that YAML into Helm charts, publish those Helm charts, then define the charts we want to deploy with Helmfile, and finally use Helmfile to synchronize to the desired state. Yikes.
I think the Helm ecosystem is an excellent addition to Kubernetes for a handful of common use cases, but I also think having to use Helm just for simple templating is unfortunate and something that kubectl should support natively. If kubectl leveraged Go’s templating library, many use cases could avoid additional tooling entirely.
Keeping it simple
There’s certainly no lack of DevOps tools that we can use to manage our systems, so we need to carefully weigh how complicated we’re making our stack when we evaluate the addition of new tooling. Keeping the smallest set of moving parts that accomplish our requirements should be a constant goal.
Don’t proudly stand beside a complicated and over-engineered deployment pipeline.
For simple projects, a CI/CD service that applies your latest Docker images with kubectl can be sufficient.
For projects that heavily rely on resource templating, running helm upgrade against a local set of charts can be sufficient.
By all means, use Terraform when it makes things simple, but recognize that it doesn’t automatically make everything simple.
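For the first two cases, the whole deployment step can be a couple of CLI commands in a CI job. This sketch uses GitLab-style CI syntax and hypothetical image and deployment names; the point is the size, not the specifics:

```yaml
deploy:
  script:
    # Simple case: roll the Deployment to the freshly pushed image.
    - kubectl set image deployment/app app=registry.example.com/app:$CI_COMMIT_SHA
    # Templating-heavy case: upgrade from a local chart instead.
    # - helm upgrade --install app ./charts/app
```

No extra state, no extra abstraction layer: the cluster remains the single source of truth.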
Open source tooling
The current state of DevOps tooling is extraordinary. On a daily basis we rely on open and freely available software that others have developed to solve major pain points and those developers rarely receive much recognition for their work. But we need to be diligent with the technology we adopt and make sure that we’re not unnecessarily complicating things just in the name of using a particular tool.