What is Kubernetes?

Kubernetes, or k8s for short, is an open source platform pioneered by Google, which started as a simple container orchestration tool and has grown into a cloud-native platform. It’s one of the most significant advancements in IT since the advent of the public cloud, with an unparalleled five-year 30% growth rate in both market revenue and overall adoption.

Kubernetes has risen to popularity thanks to its appealing architecture, a large and active community, its extensibility, and the need for development teams to deliver and maintain software at scale. It reduces many of the manual tasks traditionally associated with co-ordinating containers in production by automating their deployment, scheduling and management.

Kubernetes maps out how applications should run and how they should interact with other applications. Thanks to its elasticity, it can scale services up and down as required, perform rolling updates, switch traffic between different versions of your applications to test features, and roll back problematic deployments.
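As a rough sketch, the operations described above map onto `kubectl` commands like the following; the deployment name `my-app` and the image reference are hypothetical:

```shell
# Scale a deployment up or down on demand (names are illustrative)
kubectl scale deployment my-app --replicas=5

# Perform a rolling update by switching to a new container image
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2

# Watch the rollout progress, and roll back if the new version misbehaves
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
```

Because the update is rolling, old pods are only terminated as new ones become ready, so the service stays available throughout.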

The technology has emerged as a leading choice for organisations building multi-cloud environments, thanks to the widespread adoption of its APIs. All major public clouds offer their own Kubernetes distributions, such as Amazon Elastic Kubernetes Service, Google Kubernetes Engine and Azure Kubernetes Service.

What are containers?

Containers are a technology that allows the user to divide up a machine so that it can run more than one application (in the case of process containers) or operating system instance (in the case of system containers) on the same kernel and hardware, while maintaining isolation between these workloads. Containers are a modern way to virtualise infrastructure and provide a more lightweight approach than traditional virtual machines: they all share a single host OS kernel, require less memory, achieve greater resource utilisation and start up orders of magnitude faster.

Inside Google alone, over two billion containers are launched each week to run its vast operations.

Kubernetes history and ecosystem

Kubernetes (from the Greek ‘κυβερνήτης’, meaning ‘helmsman’) was developed by Google; its design was heavily influenced by Google’s ‘Borg’ project – a similar system used by Google to run much of its infrastructure. Kubernetes has since been donated to the Cloud Native Computing Foundation, a collaborative project between the Linux Foundation and industry members including Google, Cisco, IBM, Docker, Microsoft, AWS and VMware.

Did you know? The font used in the Kubernetes logo is the Ubuntu font!


How does Kubernetes work?

Kubernetes works by joining a group of physical or virtual host machines, referred to as “nodes”, into a cluster that manages containers. The resulting cluster offers greater compute capacity, more storage and higher network throughput than any single machine would have on its own. The nodes run all the services necessary to host “pods”, which in turn run one or more containers. A pod corresponds to a single instance of an application in Kubernetes.
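To make the pod concept concrete, here is a minimal, illustrative manifest that asks the cluster to run a single-container pod (the name and image are examples only):

```shell
# Define and submit a minimal pod running one nginx container
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      ports:
        - containerPort: 80
EOF
```

The scheduler then picks a suitable node for the pod; `kubectl get pods -o wide` shows where it landed.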

One node of the cluster (or more, for larger or highly available clusters) is designated as the "master". The master node then assumes responsibility for the cluster as the orchestration layer - scheduling and allocating tasks to the other "worker" nodes in a way that makes the best use of the cluster's resources. All operator interaction with the cluster goes through this master node, whether that is changing the configuration, executing or terminating workloads, or controlling ingress and egress on the network.

The master node is also responsible for monitoring all aspects of the cluster, enabling it to perform additional useful functions such as automatically reallocating workloads after a failure, scaling tasks that need more resources, and otherwise ensuring that the assigned workloads are always operating correctly.

Download Enterprise Kubernetes datasheet ›

Speaking Kubernetes

  • Cluster

    A set of nodes that run containerized applications managed by Kubernetes.

  • Pod

The smallest deployable unit in the Kubernetes object model, used to host one or more containers.

  • Master node

    a.k.a. control plane
    The orchestration layer that provides interfaces to define, deploy, and manage the lifecycle of containers.

  • Worker Node

A node that hosts applications as containers. A Kubernetes cluster has at least one worker node, and usually several.

  • API server

The primary control plane component, which exposes the Kubernetes API and enables communication between cluster components.

  • Controller-manager

    A control plane daemon that monitors the state of the cluster and makes all necessary changes for the cluster to reach its desired state.

  • Container runtime

    The software responsible for running containers by coordinating the use of system resources across containers.

  • Kubelet

    An agent that runs on each worker node in the cluster and ensures that containers are running in a pod.

  • Kubectl

    A command-line tool for controlling Kubernetes clusters.

  • Kube-proxy

    A network proxy that runs on each node, maintaining the network rules that enable network communication to pods from inside or outside the cluster.

  • CNI

    The Container Network Interface is a specification and a set of tools to define networking interfaces between network providers and Kubernetes.

  • CSI

    The Container Storage Interface is a specification for data storage tools and applications to integrate with Kubernetes clusters.
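Several of the concepts above can be seen directly with `kubectl`, assuming it is configured to talk to a cluster:

```shell
kubectl cluster-info   # address of the API server
kubectl get nodes      # control plane (master) and worker nodes
kubectl get pods -A    # pods across all namespaces
```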

Why use Kubernetes?

Kubernetes is a platform to run your applications and services. It’s cloud-native, delivers operational cost savings and faster time-to-market, and is maintained by a large community. Developers like container-based development because it helps break up monolithic applications into more maintainable microservices. Kubernetes allows their work to move seamlessly from development to production, resulting in faster time-to-market for a business’s applications.

Kubernetes works by:

  • Orchestrating containers across multiple hosts
  • Ensuring that containerised apps behave in the same way in all environments, from testing to production
  • Controlling and automating application deployments and updates
  • Making more efficient use of hardware to minimise resources needed to run containerised applications
  • Mounting and adding storage to run stateful apps
  • Scaling containerised applications and their resources on the fly
  • Declaratively managing services, which guarantees that applications are always running as intended
  • Health-checking and self-healing applications with auto-placement, auto-restart, auto-replication and auto-scaling
  • Being open source (all Kubernetes code is on GitHub) and maintained by a large, active community
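Declarative management, in the list above, means stating a desired state and letting Kubernetes converge on it. This illustrative deployment declares three replicas; whenever fewer than three pods are running, the controller-manager creates replacements (all names and the image are examples only):

```shell
# Declare a desired state of three replicas; Kubernetes keeps it true
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: nginx:1.17
EOF
```

Deleting one of the resulting pods by hand demonstrates self-healing: a new pod appears within seconds to restore the declared count.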

What Kubernetes is not

Kubernetes is a platform that enables configuration, automation and management capabilities around containers. It has a vast tooling ecosystem and can address complex use cases, which is why many confuse it with a traditional Platform-as-a-Service (PaaS).

It is important to distinguish between the two solutions. Unlike a PaaS, Kubernetes does not:

  • Limit the types of supported applications or require a dependency handling framework
  • Require applications to be written in a specific programming language, nor dictate a specific configuration language/system
  • Deploy source code or build applications, although it can be used to build CI/CD pipelines
  • Provide application-level services, such as middleware, databases and storage clusters, out of the box – such components can be integrated with k8s through add-ons
  • Provide or dictate specific logging, monitoring and alerting components

Using Kubernetes for development

Run Kubernetes locally and at the edge

MicroK8s is a production-grade, lightweight Kubernetes that deploys a single-node cluster with a single install command. It’s a CNCF-certified distribution, delivered as a snap, that runs all Kubernetes services natively on Ubuntu or any Linux distribution that supports snaps, as well as on Windows and macOS.

MicroK8s is the simplest distribution of Kubernetes, lowering the barrier to entry for container orchestration and cloud-native development. Because of its small footprint, it is ideal for clusters, workstations, CI/CD pipelines, IoT devices, and small edge clouds.
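On a snap-enabled Linux machine, that single install command looks like this:

```shell
# Install MicroK8s and wait for the single-node cluster to come up
sudo snap install microk8s --classic
microk8s status --wait-ready

# The bundled kubectl can then inspect the new cluster
microk8s kubectl get nodes
```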

Install MicroK8s ›

Run Kubernetes on your infrastructure of choice

Deploy Canonical’s Charmed Kubernetes, a highly available, pure upstream, multi-node Kubernetes cluster. It’s a fully automated, model-driven approach to Kubernetes that takes care of logging, monitoring and alerting and provides application lifecycle automation capabilities.

It’s tested across the widest range of infrastructure and deploys on bare metal, private clouds and public clouds. Alongside Kubernetes itself, Canonical supports a rich ecosystem of complementary software that can be integrated at every layer of the stack. Canonical is also one of Microsoft’s primary development partners, as Ubuntu is the operating system of choice on AKS.

Deploy Charmed Kubernetes ›

Have us run Kubernetes
for you

Don't want the hassle and cost of hiring your own K8s team of experts? Get peace of mind and focus on your business by leveraging our proven experience in Kubernetes deployments and operations.

Canonical offers a fully managed service that handles the complex operations many teams lack the in-house skills for, such as installing, patching, scaling, monitoring, and upgrading with zero downtime.

If you already have clusters built, we can manage them for you. We can also build and manage them for you, handing over the keys when and if you’re ready to take full control.

Learn about our Managed Kubernetes service ›

What's new in Kubernetes 1.18

Kubernetes 1.18, the latest stable version, was released in March 2020 and is fully supported by Canonical for enterprise use. Here are the highlights of the release:

  • Kubectl debug - the CLI now provides a way to do interactive troubleshooting by running an ephemeral container alongside the pod being investigated
  • OIDC discovery for service account tokens - Kubernetes API tokens can be used as a general authentication mechanism, simplifying service integration
  • Node Topology Manager - enables NUMA alignment of CPU and devices, such as SR-IOV VFs, allowing workloads to run in low-latency optimised environments
  • ContainerD support on Windows - the container runtime is now supported on Windows, a big step to ensure Kubernetes compatibility on Windows nodes
  • Ingress extensions - Ingress is enhanced with the pathType field and IngressClass resource
  • CertificateSigningRequest API - extending the Certificate API capabilities to provision certificates for non-core components of a Kubernetes cluster

Learn more about Kubernetes 1.18 features ›

Other Kubernetes resources

We’re here to revolutionise the way you think about Kubernetes clusters.

Schedule a Kubernetes demo