The appeal of Kubernetes is universal. Application development, operations and infrastructure teams each recognise distinct reasons for its immediate utility and growing potential, a testament to Kubernetes' empathetic design. Web apps galvanised by the twelve-factor pattern, as well as microservice-structured applications, find a native habitat in Kubernetes. A growing list of analytics and data-streaming applications, Function-as-a-Service platforms and deep/machine learning frameworks also benefit from its functionality. Add to the mix a strong desire to decouple applications from VMs, increase portability for hybrid cloud operations, and satisfy the business's voracious appetite for continuous innovation. This intrinsic diversity of goals and expectations makes choosing the most appropriate Kubernetes solution challenging. Here, we will explore what constitutes a minimum viable Kubernetes environment from a developer and operations perspective.
We have learned much from the rise and fall of the "move fast and break things" development mantra. Implementing and testing ideas quickly may rely on unverified approximations and assumptions. Achieving consistent and reliable behaviour in any engineering endeavour, by contrast, requires in-depth understanding and hypothesis validation. Developers need to write and debug code in the comfort of their IDE, complete unit tests on their laptop, and collaborate with their DevOps peers on integration testing and production lifecycle management. A working knowledge of Kubernetes drastically improves a developer's efficiency, but using a production-grade Kubernetes cluster for such experimentation is a cumbersome experience. A self-contained, isolated and disposable cluster is preferable. With MicroK8s, anyone can install such a cluster on a laptop or a VM (local or in the cloud) in a matter of minutes. MicroK8s is an entire Kubernetes cluster in a snap, and it installs easily on the most common Linux distributions.
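As a sketch of how quickly such a disposable cluster comes up, the following commands install MicroK8s on a snap-enabled Linux distribution (the snap channel and the need for sudo depend on your setup):

```shell
# Install MicroK8s from the snap store; --classic grants the snap
# the system access it needs.
sudo snap install microk8s --classic

# Block until the cluster reports a ready state.
sudo microk8s status --wait-ready

# Inspect the single-node cluster with the bundled kubectl.
sudo microk8s kubectl get nodes
```

When the experiment is over, `snap remove microk8s` tears the cluster down again, which is precisely what makes it disposable.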
The definition of a minimal production environment for Kubernetes comes with a broader set of prerequisites. Target environments for a production cluster range from the data centre to the cloud, and on to the elusive edge. Further, a production deployment reflects governance needs and possibly regulatory mandates. Characterising scale, elasticity, availability, portability, security and compliance should drive design decisions. Nevertheless, there is a minimal subset of properties and attributes that must be carefully considered.
Automation: There are myriad ways to deploy a Kubernetes cluster. Best-in-class tooling performs across multiple dimensions. It can deploy Kubernetes on a variety of substrates, including bare metal, virtualised infrastructure, private cloud and public cloud. It enables repeatable and predictable operations: updating and upgrading a cluster, scaling out (adding more worker nodes), scaling up (adding higher-capacity nodes) and scaling back, as well as simplifying recovery from physical server or virtual machine failure. Finally, automation needs to facilitate extensibility of the core feature set, allowing integration with third-party components from the Kubernetes ecosystem.
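As an illustration of repeatable operations and extensibility, MicroK8s exposes both through single commands. Add-on names and the join procedure vary between releases, so treat the specifics below as indicative rather than definitive:

```shell
# Extensibility: integrate ecosystem components as add-ons.
microk8s enable dns ingress

# Scaling out: generate a join token on an existing node...
sudo microk8s add-node
# ...then run the printed join command on the new machine, e.g.:
# microk8s join <control-plane-ip>:25000/<token>
```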
Observability: An in-depth understanding of a cluster's state is essential for a production environment. Two main types of data stream provide insight into the control plane's health: logs and monitoring metrics. Diagnostic logging records an event timeline and describes state changes. Metrics provide lightweight instrumentation of system resources and are used for a variety of tasks: capacity planning, triggering alarms, and initial triangulation during root cause analysis of a pathological state. Logs provide detailed, context-specific descriptions, which are used to troubleshoot and remediate erroneous cluster conditions. The logging and monitoring solution needs to scale horizontally alongside the worker nodes.
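To make the two data streams concrete, the commands below pull metrics and logs from a running cluster. They assume the bundled kubectl and a metrics add-on such as metrics-server are available; add-on names differ between MicroK8s releases:

```shell
# Metrics: lightweight instrumentation of node and pod resources.
microk8s enable metrics-server
microk8s kubectl top nodes

# Logs: the detailed event timeline for a control-plane component
# (substitute an actual pod name from the kube-system namespace).
microk8s kubectl logs -n kube-system <pod-name>
```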
Artifact management: Kubernetes uses text-based manifests to deploy and manage binary containers, which are typically built from source. Two types of tool are needed: source version control and binary repository management. Version control systems store and track predominantly source code, configuration files and documentation. Binary repositories provide analogous functionality for containers, OS packages and built executable binaries. A binary repository can integrate directly with Kubernetes as a container registry. Most public clouds offer registries as a service, which can be utilised both for cloud-based Kubernetes deployments and for on-premises ones. Private registries can be used when compliance requirements do not allow for hosted solutions.
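As a sketch of a private registry integrated with the cluster, MicroK8s ships a registry add-on. The default port (32000) and the image name "myapp" below are assumptions to verify against your release:

```shell
# Enable the built-in private registry add-on.
microk8s enable registry

# Tag and push a locally built image; "myapp" is a hypothetical image.
docker tag myapp:latest localhost:32000/myapp:latest
docker push localhost:32000/myapp:latest
```

Pods on the cluster can then reference the image as `localhost:32000/myapp:latest` in their manifests.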
High availability (HA): Production-grade is virtually synonymous with a highly available control plane. Kubernetes includes components that accommodate active/passive, active/active and clustered HA configurations. A minimum of three nodes (typically physical servers or appropriately sized virtual machines) is necessary to host the associated services. Isolating the respective services, controlling resource allocation at a fine grain, and streamlining updates and upgrades can additionally be accommodated by running each cluster service in a machine container.
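As an indicative example of the three-node minimum, recent MicroK8s releases form a highly available, Dqlite-backed control plane automatically once three or more nodes have joined; the exact behaviour depends on the release:

```shell
# On the first node: print a join command for each additional node.
sudo microk8s add-node
# On nodes two and three, run the printed command, e.g.:
# microk8s join <first-node-ip>:25000/<token>

# Once three voting nodes are present, status reports HA as enabled.
microk8s status
```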
At the current stage of the Kubernetes wave, a minimalistic cluster is a nimble cluster. As Kubernetes and its ecosystem evolve quickly, it is crucial to cherry-pick additional components progressively and retain the agility to adopt alternatives in the future.
To speak to us about Kubernetes, click below.
Ubuntu offers all the training, software infrastructure, tools, services and support you need for your public and private clouds.