Release notes

1.15+ck1 Bugfix release

August 15, 2019 - charmed-kubernetes-209


A list of bug fixes and other minor feature updates in this release can be found at


June 28, 2019 - charmed-kubernetes-142

What's new

  • Containerd support

Although Docker is still supported, containerd is now the default container runtime in Charmed Kubernetes. Containerd brings significant performance improvements and prepares the way for Charmed Kubernetes integration with Kata in the future.

Container runtime code has been moved out of the kubernetes-worker charm, and into subordinate charms (one for Docker and one for containerd). This allows the operator to swap the container runtime as desired, and even mix container runtimes within a cluster. It also allows for additional container runtimes to be supported in the future. Because this is a significant change, you are advised to read the upgrade notes before upgrading from a previous version.
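As a sketch of how a runtime swap looks in practice (charm and relation endpoint names are assumptions; consult the upgrade notes for the exact steps for your release):

```shell
# Deploy the Docker subordinate runtime charm and attach it to the
# workers in place of containerd (endpoint names may differ by release).
juju deploy docker
juju remove-relation containerd kubernetes-worker
juju relate docker kubernetes-worker
```

Because the runtime charms are subordinates, different worker applications in the same cluster can each be related to a different runtime.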

  • Calico 3.x support

The Calico and Canal charms have been updated to install Calico 3.6.1 by default. For users currently running Calico 2.x, the next time you upgrade your Calico or Canal charm, the charm will automatically upgrade to Calico 3.6.1 with no user intervention required.

The Calico charm's ipip config option has been changed from a boolean to a string to allow for the addition of a new mode. This change is illustrated in the table below:

New value       Old value   Description
"Never"         false       Never use IPIP encapsulation. (The default)
"Always"        true        Always use IPIP encapsulation.
"CrossSubnet"   (none)      Only use IPIP encapsulation for cross-subnet traffic.

  • Calico BGP support

Several new config options have been added to the Calico charm to support BGP functionality within Calico. These additions make it possible to configure external BGP peers, route reflectors, and multiple IP pools. For instructions on how to use the new config options, see the [CNI with Calico documentation][cni-calico].

  • Custom load balancer addresses

Support has been added, in the kubeapi-load-balancer and kubernetes-master charms, for specifying the IP address of an external load balancer. This allows either a virtual IP address on the kubeapi-load-balancer charm or the IP address of an external load balancer to be used. See the custom load balancer page for more information.

  • Container image registry

By default, all container images required by the deployment come from the Canonical image registry. This includes images used by the cdk-addons snap, ingress, dns, storage providers, etc. The registry can be configured with the new image-registry config option on the kubernetes-master charm.

The addons-registry config option is now deprecated. If set, this will take precedence over the new image-registry option when deploying images from the cdk-addons snap. However, the addons-registry option will be removed in 1.17. Users are encouraged to migrate to the new image-registry option as soon as possible.
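To migrate, the new option can be set on the kubernetes-master charm (the registry URL below is illustrative):

```shell
# Pull cdk-addons, ingress, dns, and storage-provider images from a
# private mirror instead of the Canonical image registry
juju config kubernetes-master image-registry="registry.example.com/cdk"
```
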


A list of bug fixes and other minor feature updates in this release can be found at

Known Issues

  • Docker-registry interface does not support containerd (bug 1833579)

When a docker-registry charm is related, kubernetes-worker units will attempt to configure the Docker daemon.json configuration file and may also attempt to use docker login to authenticate with the connected registry. This will not work in a containerd environment, as there is no daemon.json file nor docker command available to invoke.

Users relying on docker-registry to serve container images to Kubernetes deployments should continue to use the Docker subordinate runtime as outlined in the upgrade notes, under the heading "To keep Docker as the container runtime".

We intend to fix this shortly after release. For now, if you want to deploy Charmed Kubernetes on LXD, we recommend using the Docker subordinate charm instead. Instructions for this can be found in the [Container runtimes][container-runtime] section of our documentation.

  • New provisioner value for Cinder storage classes

The switch to the external cloud provider for OpenStack includes an upstream change to the provisioner field for storage classes using Cinder. A cdk-cinder storage class will be automatically created with the correct value, but any manually created storage classes will need to be edited and the provisioner field changed to csi-cinderplugin. Existing volumes will be unaffected, but new PVCs using those storage classes will hang until the storage class is updated.
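A manually created storage class updated for the new provisioner value would look something like the following (the class name is illustrative; only the provisioner field is mandated by the change described above):

```shell
# Recreate a manually created Cinder storage class with the new
# provisioner value so that new PVCs bind correctly
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-cinder-class
provisioner: csi-cinderplugin
EOF
```
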

1.14 Bugfix release

June 19th, 2019 - charmed-kubernetes-124


  • Fixed leader_set being called by kubernetes-master followers (Issue)

1.14 Bugfix release

June 6th, 2019 - charmed-kubernetes-96


  • Fixed leader_get import error in .reactive/ (Issue)
  • Fixed kernel network tunables to have better defaults and be configurable (Issue)
  • Fixed proxy-extra-args config missing from kubernetes-master (Issue)

1.14 Bugfix release

May 23rd, 2019 - charmed-kubernetes-74


  • Fixed missing core snap resource for etcd, kubernetes-master, kubernetes-worker, and kubernetes-e2e charms (Issue)
  • Fixed kubernetes-master charm resetting user changes to basic_auth.csv (Issue)
  • Fixed charm upgrades removing /srv/kubernetes directory (Issue)
  • Fixed docker-opts charm config being ignored on kubernetes-worker (Issue)
  • Fixed master services constantly restarting due to cert change (Issue)
  • Fixed kubernetes-worker tag error on GCP (Issue)

1.14 Bugfix release

April 23rd, 2019 - charmed-kubernetes-31


  • Added automatic and manual cleanup for subnet tags (Issue)
  • Added action apply-manifest (Issue)
  • Added label to inform Juju of cloud (Issue)
  • Added support for loadbalancer-ips (Issue)
  • Fixed handling "not found" error message (Issue)
  • Fixed snapd_refresh smashed by subordinate charm (Issue)
  • Fixed making sure cert has proper IP as well as DNS (Issue)
  • Fixed etcd charm stuck on "Requesting tls certificates" (Issue)
  • Fixed cert relation thrashing due to random SAN order (Issue)
  • Fixed contact point for keystone to be public address (Issue)
  • Fixed cluster tag not being sent to new worker applications (Issue)
  • Fixed removal of ceph relations causing trouble (Issue)
  • Fixed pause/resume actions (Issue)
  • Fixed ingress address selection to avoid fan IPs (Issue)
  • Fixed snapd_refresh handler (Issue)
  • Fixed credentials fields to allow for fallback and override (Issue)

1.14 Bugfix release

April 4th, 2019 - [canonical-kubernetes-471][bundle]


  • Fixed Ceph PV fails to mount in pod (Issue)
  • Fixed Problems switching from kube-dns to CoreDNS (Issue)
  • Fixed defaultbackend-s390x image (Issue)
  • Fixed keystone-ssl-ca config description (Issue)
  • Partial fix for using custom CA with Keystone (Issue)


March 27, 2019 - canonical-kubernetes-466

What's new

  • Tigera Secure EE support

CDK extends its support for CNI solutions by adding the option of using [Tigera Secure EE][tigera-home], the enterprise-ready alternative to Calico. Users are now able to deploy CDK with Tigera Secure EE installed and subsequently configure additional features such as ElasticSearch and the CNX secure connectivity manager. For further details, please see the [CDK CNI documentation][tigera-docs].

  • Additional options for High Availability

Version 1.13 of CDK introduced support for keepalived to provide HA for the api-loadbalancer. This new release adds support for both HAcluster and MetalLB. See the relevant [HAcluster][hacluster-docs] and [MetalLB][metallb-docs] pages in the documentation, as well as the [HA overview][haoverview] for more information.

  • Added CoreDNS support

All new deployments of CDK 1.14 will install CoreDNS 1.4.0 by default instead of KubeDNS.

Existing deployments that are upgraded to CDK 1.14 will continue to use KubeDNS until the operator chooses to upgrade to CoreDNS. See the [upgrade notes][upgrade-notes] for details.

  • Docker upgrades: Docker 18.09.2 is the new default in Ubuntu. CDK now includes a charm action to simplify [upgrading Docker across a set of worker nodes][upgrading-docker].
  • Registry enhancements: Read-only mode, frontend support, and additional TLS configuration options have been added to the Docker registry charm.

  • Cloud integrations: New configuration options have been added to the vSphere (folder and respool_path) and OpenStack (ignore-volume-az, bs-version, trust-device-path) integrator charms.


  • Added an action to upgrade Docker (Issue)
  • Added better multi-client support to EasyRSA (Issue)
  • Added block storage options for OpenStack (Issue)
  • Added dashboard-auth config option to master (Issue)
  • Added docker registry handling to master (Issue)
  • Added more TLS options to Docker registry (Issue)
  • Added new folder/respool_path config for vSphere (Issue)
  • Added proxy support to Docker registry (Issue)
  • Added read-only mode for Docker registry (Issue)
  • Fixed allow-privileged not enabled when Ceph relation joins (Issue)
  • Fixed apt install source for VaultLocker (Issue)
  • Fixed Ceph relation join not creating necessary pools (Issue)
  • Fixed Ceph volume provisioning fails with "No such file or directory" (Issue)
  • Fixed detecting of changed AppKV values (Issue)
  • Fixed docker-ce-version config not working for non-NVIDIA configuration (Issue)
  • Fixed Docker registry behavior with multiple frontends (Issue)
  • Fixed Docker registry not cleaning up old relation data (Issue)
  • Fixed Docker registry to correctly handle frontend removal (Issue)
  • Fixed Docker registry to work behind a TLS-terminating frontend (Issue)
  • Fixed error: snap "etcd" is not compatible with --classic (Issue)
  • Fixed file descriptor limit on api server (Issue)
  • Fixed GCP NetworkUnavailable hack when only some pods pending (Issue)
  • Fixed handle_requests being called when no clients are related (Issue)
  • Fixed handling of nameless and SANless server certificates (Issue)
  • Fixed inconsistent cert flags (Issue)
  • Fixed ingress=false not allowing custom ingress to be used (Issue)
  • Fixed installing from outdated Docker APT repository (Issue)
  • Fixed IPv6 disabled on kubeapi-loadbalancer machines leads to error during installation (Issue)
  • Fixed Keystone not working with multiple masters (Issue)
  • Fixed kubeconfig should contain the VIP when keepalived used with kubeapi-load-balancer (Issue)
  • Fixed metrics server for k8s 1.11 (Issue)
  • Fixed proxy var to apply when adding an apt-key (Issue)
  • Fixed RBAC enabled results in error: unable to upgrade connection (Issue)
  • Fixed registry action creating configmap in the wrong namespace (Issue)
  • Fixed rules for metrics-server (Issue)
  • Fixed status when writing kubeconfig file (Issue)
  • Fixed "subnet not found" to be non-fatal (Issue)
  • Fixed vSphere integrator charm not updating cloud-config when setting new charm defaults (Issue)
  • Removed deprecated allow-privileged config from worker (Issue)
  • Removed use of global / shared client certificate (Issue)
  • Updated default nginx-ingress controller to 0.22.0 for amd64 and arm64 (Issue)

1.13 Bugfix Release

February 21, 2019 - canonical-kubernetes-435


  • Fixed docker does not start when docker_runtime is set to nvidia (Issue)
  • Fixed snapd_refresh charm option conflict (Issue)


January 10, 2019

What happened

  • A security vulnerability was found in the Kubernetes dashboard that affected all versions of the dashboard.

A new dashboard version, v1.10.1, was released to address this vulnerability. This includes an important change to logging in to the dashboard: the Skip button has been removed from the login page, and a username and password are now required. The easiest way to log in to the dashboard is to select your ~/.kube/config file and use the credentials from there.

1.13 Release Notes

December 10, 2018

What's new

  • LDAP and Keystone support

Added support for LDAP-based authentication and authorisation via Keystone. Please read the documentation for details on how to enable this.

  • Vault PKI support

Added support for using Vault for PKI in place of EasyRSA. Vault is more secure and robust than EasyRSA and supports more advanced features for certificate management. See the documentation for details of how to add Vault to Charmed Kubernetes and configure it as a root or intermediary CA.

  • Encryption-at-rest support using Vault

Added support for encryption-at-rest for cluster secrets, leveraging Vault for data protection. This ensures that even the keys used to encrypt the data are protected at rest, unlike many configurations of encryption-at-rest for Kubernetes. Please see the documentation for further details.

  • Private Docker registry support

Added support for the Docker Registry charm to provide Docker images to cluster components without requiring access to public registries. Full instructions on using this feature are in the documentation.

  • Keepalived support

The keepalived charm can be used to run multiple kube-api-loadbalancers behind a virtual IP. For more details, please see the documentation.

  • Nginx update

Nginx was updated to v0.21.0, which brings a few changes of which to be aware. The first is that nginx is now in a namespace by itself, which is derived from the application name. By default this will be ingress-nginx-kubernetes-worker. The second change relates to custom configmaps. The name has changed to nginx-configuration and the configmap needs to reside in the same namespace as the nginx deployment.
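Under the new layout, a custom configmap would be created along these lines (the `proxy-body-size` key is an illustrative nginx-ingress setting; the namespace shown is the default derived from the application name):

```shell
# Custom nginx configuration must now be named nginx-configuration and
# reside in the same namespace as the nginx deployment
kubectl create configmap nginx-configuration \
  --namespace ingress-nginx-kubernetes-worker \
  --from-literal=proxy-body-size=10m
```
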


  • Added post deployment script for jaas/jujushell (Issue)
  • Added support for load-balancer failover (Issue)
  • Added always restart for etcd (Issue)
  • Added Xenial support to Azure integrator (Issue)
  • Added Bionic support to Openstack integrator (Issue)
  • Added support for ELB service-linked role (Issue)
  • Added ability to configure Docker install source (Issue)
  • Fixed EasyRSA does not run as an LXD container on 18.04 (Issue)
  • Fixed ceph volumes cannot be attached to the pods after 1.12 (Issue)
  • Fixed ceph volumes fail to attach with "node has no NodeID annotation" (Issue)
  • Fixed ceph-xfs volumes failing to format due to "executable file not found in $PATH" (Issue)
  • Fixed ceph volumes not detaching properly (Issue)
  • Fixed ceph-csi addons not getting cleaned up properly (Issue)
  • Fixed Calico/Canal not working with kube-proxy on master (Issue)
  • Fixed issue with Canal charm not populating the kubeconfig option in 10-canal.conflist (Issue)
  • Fixed cannot access logs after enabling RBAC (Issue)
  • Fixed RBAC breaking prometheus/grafana metric collection (Issue)
  • Fixed upstream Docker charm config option using wrong package source (Issue)
  • Fixed a timing issue where ceph can appear broken when it's not (Issue)
  • Fixed status when cni is not ready (Issue)
  • Fixed an issue with calico-node service failures not surfacing (Issue)
  • Fixed empty configuration due to timing issue with cni. (Issue)
  • Fixed an issue where the calico-node service failed to start (Issue)
  • Fixed updating policy definitions during upgrade-charm on AWS integrator (Issue)
  • Fixed parsing credentials config value (Issue)
  • Fixed pvc stuck in pending (azure-integrator)
  • Fixed updating properties of the openstack integrator charm do not propagate automatically (openstack-integrator)
  • Fixed flannel error during install hook due to incorrect resource (flannel)
  • Updated master and worker to handle upstream changes from OpenStack Integrator (Issue)
  • Updated to CNI 0.7.4 (Issue)
  • Updated to Flannel v0.10.0 (Issue)
  • Updated Calico and Canal charms to Calico v2.6.12 (Issue, Issue)
  • Updated to latest CUDA and removed version pins of nvidia-docker stack (Issue)
  • Updated to nginx-ingress-controller v0.21.0 (Issue)
  • Removed portmap from Calico resource (Issue)
  • Removed CNI bins from flannel resource (Issue)

Known issues

  • A current bug in Kubernetes could prevent the upgrade from properly deleting old pods. kubectl delete pod <pod_name> --force --grace-period=0 can be used to clean them up.

1.12 Release Notes

  • Added support for Ubuntu 18.04 (Bionic)

New deployments will get Ubuntu 18.04 machines by default. We will also continue to support Charmed Kubernetes on Ubuntu 16.04 (Xenial) machines for existing deployments.

  • Added kube-proxy to kubernetes-master

The kubernetes-master charm now installs and runs kube-proxy along with the other master services. This makes it possible for the master services to reach Service IPs within the cluster, making it easier to enable certain integrations that depend on this functionality (e.g. Keystone).

For operators of offline deployments, please note that this change may require you to attach a kube-proxy resource to kubernetes-master.
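For such deployments, attaching the resource is a one-line operation (the file path is illustrative; `juju attach-resource` is the newer spelling of `juju attach` on recent Juju versions):

```shell
# Supply the kube-proxy snap to kubernetes-master out-of-band
juju attach-resource kubernetes-master kube-proxy=./kube-proxy.snap
```
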

  • New kubernetes-worker charm config: kubelet-extra-config

In Kubernetes 1.10, a new KubeletConfiguration file was introduced, and many of Kubelet's command line options were moved there and marked as deprecated. In order to accommodate this change, we've introduced a new charm config to kubernetes-worker: kubelet-extra-config.

This config can be used to override KubeletConfiguration values provided by the charm, and is usable on any Canonical cluster running Kubernetes 1.10+.

The value for this config must be a YAML mapping that can be safely merged with a KubeletConfiguration file. For example:

juju config kubernetes-worker kubelet-extra-config="{evictionHard: {memory.available: 200Mi}}"

For more information about KubeletConfiguration, see upstream docs:

  • Added support for Dynamic Kubelet Configuration

While we recommend kubelet-extra-config as a more robust and approachable way to configure Kubelet, we've also made it possible to configure kubelet using the Dynamic Kubelet Configuration feature that comes with Kubernetes 1.11+. You can read about that here:

  • New etcd charm config: bind_to_all_interfaces (PR)

Default is true, which retains the old behavior of binding to all interfaces. Setting this to false makes etcd bind only to the addresses it expects traffic on, as determined by the configuration of Juju endpoint bindings.
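For example, to opt in to the stricter binding:

```shell
# Bind etcd only to the addresses implied by its Juju endpoint bindings
juju config etcd bind_to_all_interfaces=false
```
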

Special thanks to @rmescandon for this contribution!

  • Updated proxy configuration

For operators who currently use the http-proxy, https-proxy and no-proxy Juju model configs, we recommend using the newer juju-http-proxy, juju-https-proxy and juju-no-proxy model configs instead. See the Proxy configuration page for details.
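The newer settings are applied at the model level; for example (proxy addresses are illustrative):

```shell
# Set the juju-prefixed proxy settings on the current model
juju model-config \
  juju-http-proxy=http://squid.internal:3128 \
  juju-https-proxy=http://squid.internal:3128 \
  juju-no-proxy=localhost,127.0.0.1,10.0.0.0/8
```
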


  • Fixed kube-dns constantly restarting on 18.04 (Issue)
  • Fixed LXD machines not working on 18.04 (Issue)
  • Fixed kubernetes-worker unable to restart services after kubernetes-master leader is removed (Issue)
  • Fixed kubeapi-load-balancer default timeout might be too low (Issue)
  • Fixed unable to deploy on NVidia hardware (Issue)

We appreciate your feedback on the documentation. You can edit this page or file a bug here.