What is container orchestration?
Container orchestration is about managing the lifecycle of containers, particularly in large, dynamic environments. It automates the deployment, networking, scaling, and availability of containerised workloads and services. Running containers, which are lightweight and usually ephemeral by nature, is easy enough to do manually in small numbers. However, managing them at scale in production environments can be a significant challenge without the automation that container orchestration platforms offer. Kubernetes has become the standard for container orchestration in the enterprise world.
We asked Devs, DevOps and businesses to tell us how they are using Kubernetes, and it made for a fascinating read.
65% Improve maintenance, monitoring, and automation
46% Modernizing infrastructure
26.6% Faster time to market
What is a Kubernetes cluster?
A Kubernetes cluster is what you get when you deploy Kubernetes on physical or virtual machines. It consists of two types of machines:
- Workers: the resources used to run the services needed to host containerised workloads
- Control plane hosts: used to manage the workers and monitor the health of the entire system
Every cluster has at least one worker, and the control plane services can be colocated on a single machine. In production environments, there is typically a large number of workers, depending on the number of containers to be run, and the control plane is distributed across multiple machines for high-availability and fault-tolerance purposes.
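With a running cluster and a configured kubectl, this split between control plane and workers is directly visible; a sketch (node names and versions are illustrative):

```shell
# List the machines in the cluster and their roles
# (assumes kubectl is installed and pointed at a running cluster)
kubectl get nodes

# Typical output shows one or more control plane nodes and the workers, e.g.:
#   NAME       STATUS   ROLES           AGE   VERSION
#   cp-0       Ready    control-plane   10d   v1.30.0
#   worker-0   Ready    <none>          10d   v1.30.0
#   worker-1   Ready    <none>          10d   v1.30.0
```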
Kubernetes is popular for its appealing architecture, its large and active community, and its extensibility, which together enable countless development teams to deliver and maintain software at scale by automating container orchestration.
Kubernetes maps out how applications should work and interact with other applications. Due to its elasticity, it can scale services up and down as required, perform rolling updates, and switch traffic between different versions of your applications to test features or roll back problematic deployments.
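These lifecycle operations map to ordinary kubectl commands; a sketch against a hypothetical deployment named "web" (the name and image are placeholders):

```shell
# Scale the deployment up or down as demand changes
kubectl scale deployment/web --replicas=5

# Rolling update: change the container image and Kubernetes
# replaces pods gradually, keeping the service available
kubectl set image deployment/web web=nginx:1.27
kubectl rollout status deployment/web

# Roll back if the new version misbehaves
kubectl rollout undo deployment/web
```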
Kubernetes has emerged as a leading choice for organisations looking to build their multi-cloud environments. All public clouds have adopted Kubernetes and offer their own distributions, such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS).
Kubernetes history and ecosystem
Kubernetes (from the Greek 'κυβερνήτης' meaning 'helmsman') was originally developed by Google. Kubernetes' design has been heavily influenced by Google's 'Borg' project – a similar system used by Google to run much of its infrastructure. Kubernetes has since been donated to the Cloud Native Computing Foundation (CNCF), a collaborative project between the Linux Foundation and Google, Cisco, IBM, Docker, Microsoft, AWS and VMware.
What does Kubernetes do?
Kubernetes is a platform for running your applications and services. It manages the full lifecycle of container-based applications, by automating tasks, controlling resources, and abstracting infrastructure. Enterprises adopt Kubernetes to cut down operational costs, reduce time-to-market, and transform their business. Developers like container-based development, as it helps break up monolithic applications into more maintainable microservices. Kubernetes allows their work to move seamlessly from development to production, and results in faster time-to-market for a business's applications.
Kubernetes works by:
- Orchestrating containerised applications across multiple hosts
- Ensuring that containerised apps behave in the same way in all environments, from testing to production
- Controlling and automating application deployments and updates
- Making more efficient use of hardware to minimise resources needed to run containerised applications
- Mounting and adding storage to run stateful apps
- Scaling and load balancing containerised applications and their resources on the fly and reacting to changes in the workload
- Exposing containers to the internet, to other containers and to other clusters
- Health-checking and self-healing applications with auto-placement, auto-restart, auto-replication and auto-scaling
- Declaratively managing services, which guarantees that applications are always running as intended
- Being open source (all Kubernetes code is on GitHub) and maintained by a large, active community
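The declarative model above can be illustrated with a minimal Deployment; the names and image here are placeholders:

```shell
# Declare the desired state: three replicas of an nginx container.
# Kubernetes continuously reconciles the cluster towards this state,
# restarting or rescheduling pods as needed.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
EOF
```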
What Kubernetes is not
Kubernetes enables configuration, automation and management capabilities around containers. It has a vast tooling ecosystem and can address complex use cases, and this is the reason why many mistake it for a traditional Platform-as-a-Service (PaaS).
Kubernetes, as opposed to PaaS, does not:
- Limit the types of supported applications or require a dependency handling framework
- Require applications to be written in a specific programming language, nor does it dictate a specific configuration language or system
- Deploy source code and does not build applications, although it can be used to build CI/CD pipelines
- Provide application-level services, such as middleware, databases and storage clusters, out of the box. Such components can be integrated with Kubernetes through add-ons
- Provide or dictate specific logging, monitoring and alerting components
- Manage and provision certificates for the applications running in containers
How does Kubernetes work?
Kubernetes works by joining a group of physical or virtual host machines, referred to as "nodes", into a cluster. This creates a "supercomputer" for running containerised applications, with more processing power, storage capacity, and network bandwidth than any single machine would have on its own. The nodes run all the services needed to host "pods", which in turn run one or more containers. A pod corresponds to a single instance of an application in Kubernetes.
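A minimal pod definition makes the relationship concrete; the name and image here are illustrative:

```shell
# A pod hosting a single container; in practice pods are usually
# created indirectly, via higher-level objects such as Deployments
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: nginx:1.27
EOF
```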
One node of the cluster (or more, for larger or highly available clusters) is designated as the "control plane". The control plane then assumes responsibility for the cluster as the orchestration layer – scheduling and allocating tasks to the "worker" nodes in a way which optimises the resources of the cluster. All administrative and operational tasks on the cluster are done through the control plane, whether these are changes to the configuration, executing or terminating workloads, or controlling ingress and egress on the network.
The control plane is also responsible for monitoring all aspects of the cluster, enabling it to perform additional useful functions such as automatically reallocating workloads in case of failure, scaling up tasks which need more resources and otherwise ensuring that the assigned workloads are always operating correctly.
How to get started with K8s
Run Kubernetes locally and at the edge
MicroK8s is a production-grade, CNCF-certified, lightweight Kubernetes that deploys a single-node cluster with a single command. It's a Linux snap that runs all Kubernetes services natively on Ubuntu, or any operating system that supports snaps, including 20+ Linux distributions, Windows and macOS.
MicroK8s is the simplest distribution of Kubernetes, and eliminates the barrier to entry for container orchestration and cloud-native development. Because of its small footprint, it is ideal for clusters, workstations, CI/CD pipelines, IoT devices, and small edge clouds.
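Getting a single-node cluster with MicroK8s is, as described, a single command; shown here for Ubuntu (requires snapd):

```shell
# Install MicroK8s as a snap and wait until the node reports ready
sudo snap install microk8s --classic
microk8s status --wait-ready

# MicroK8s bundles its own kubectl
microk8s kubectl get nodes
```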
Run Kubernetes on your infrastructure of choice
Deploy Canonical's Charmed Kubernetes, a highly available, pure upstream, multi-node Kubernetes cluster. It's a fully automated, model-driven approach to Kubernetes that takes care of logging, monitoring and alerting, while also providing application lifecycle automation capabilities.
Charmed Kubernetes has been tested across a wide range of infrastructures, and deploys on bare metal, private and public clouds. Alongside its Kubernetes distribution, Canonical supports a rich ecosystem of software that can be integrated across the stack. Canonical is one of Microsoft's primary development partners, as Ubuntu is the operating system of choice on AKS.
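As a rough sketch, a Charmed Kubernetes deployment driven by Juju looks like this; the cloud name is illustrative, and Canonical's documentation covers the exact steps for each substrate:

```shell
# Install the Juju client, bootstrap a controller on your cloud,
# and deploy the charmed-kubernetes bundle
sudo snap install juju
juju bootstrap aws
juju deploy charmed-kubernetes

# Watch the model converge
juju status --watch 5s
```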
Have us run Kubernetes
Don't want the hassle and cost of hiring your own K8s team of experts? Get peace of mind and focus on your business, by leveraging our proven experience in Kubernetes deployments and operations.
Canonical offers a fully managed service which takes on the complex operations that many organisations lack the skills to implement in-house, such as installing, patching, scaling, monitoring, and upgrading with zero downtime.
If you have clusters built, we can manage them for you. We can also build and manage them for you, handing the keys over when and if you're ready to take full control.