Charmed Kubernetes on OpenStack
Charmed Kubernetes will run seamlessly on OpenStack. With the addition of the openstack-integrator, your cluster will also be able to directly use OpenStack native features.
The openstack-integrator charm simplifies working with Charmed Kubernetes on OpenStack. Using the credentials provided to Juju, it acts as a proxy between Charmed Kubernetes and the underlying cloud, granting permissions to dynamically create, for example, Cinder volumes.
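The charm only gains access to those credentials once the application has been granted trust. The overlay below does this with trust: true, but if you deploy the integrator separately you can grant access afterwards with the standard Juju command:

juju trust openstack-integrator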
OpenStack integration requires Octavia to be available in the underlying OpenStack cloud, both to support Kubernetes LoadBalancer services and to support creation of a load balancer for the Kubernetes API.
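If you are unsure whether Octavia is enabled in your cloud, one quick check (assuming the openstack CLI is configured against the same cloud) is to look for the load-balancer service in the service catalogue:

openstack service list | grep load-balancer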
When installing Charmed Kubernetes using the Juju bundle, you can add the openstack-integrator at the same time by using the following overlay file (download it here):
description: Charmed Kubernetes overlay to add native OpenStack support.
applications:
  openstack-integrator:
    annotations:
      gui-x: "600"
      gui-y: "300"
    charm: openstack-integrator
    num_units: 1
    trust: true
relations:
  - ['openstack-integrator', 'kubernetes-control-plane:openstack']
  - ['openstack-integrator', 'kubernetes-worker:openstack']
To use the overlay with the Charmed Kubernetes bundle, specify it during deploy like this:
juju deploy charmed-kubernetes --overlay ~/path/openstack-overlay.yaml --trust
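The deployment can take some time to settle. You can watch progress until all units report an active/idle status with:

juju status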
... and remember to fetch the configuration file!
juju ssh kubernetes-control-plane/leader -- cat config > ~/.kube/config
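With the config in place, you can confirm that kubectl can reach the cluster (assuming kubectl is installed on the machine where you fetched the file):

kubectl get nodes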
For more configuration options and details of the permissions which the integrator uses, please see the charm docs.
Using Octavia Load Balancers
There are two ways in which Octavia load balancers can be used with Charmed Kubernetes: as load balancers automatically created by Kubernetes for Services which are defined with type=LoadBalancer, and as a replacement for the load balancer in front of the API server itself.
In either case, the load balancers can optionally have floating IPs (FIPs) attached to them to allow for external access.
LoadBalancer-type Pod Services
To use Octavia for LoadBalancer type services in the cluster, you will also need to set the subnet-id config to the appropriate tenant subnet where your nodes reside, and, if desired, the floating-network-id config to whatever network you want FIPs created in. See the Charm config docs for more details.
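Both options are set on the integrator application. For example (the IDs below are placeholders; substitute the subnet and network IDs from your own tenant):

juju config openstack-integrator subnet-id=<tenant-subnet-uuid> floating-network-id=<external-network-uuid>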
As an example of this usage, the following will create a simple application, scale it to five pods, and expose it with a LoadBalancer type service:

kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0
kubectl scale deployment hello-world --replicas=5
kubectl expose deployment hello-world --type=LoadBalancer --name=hello --port 8080
You can verify that the application and replicas have been created with:
kubectl get deployments hello-world
Which should return output similar to:
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-world   5/5     5            5           2m38s
To check that the service is running correctly:
kubectl get service hello
...which should return output similar to:
NAME    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
hello   LoadBalancer   10.152.183.136   18.104.22.168   8080:32662/TCP   2m
You can see that the External IP is now in front of the five endpoints of the example deployment. You can test the ingress address:
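For example, using the EXTERNAL-IP and port shown in the output above (your address will differ, and the exact response depends on the sample image; node-hello replies with a short greeting):

curl http://18.104.22.168:8080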
API Server Load Balancer
If desired, the openstack-integrator can also replace kubeapi-load-balancer, creating a native OpenStack load balancer for the Kubernetes API server. This simplifies the model and provides proper high availability, which kubeapi-load-balancer on its own does not. To enable this, use this overlay instead (download it here):
applications:
  kubeapi-load-balancer: null
  openstack-integrator:
    annotations:
      gui-x: "600"
      gui-y: "300"
    charm: openstack-integrator
    num_units: 1
    trust: true
relations:
  - ['openstack-integrator', 'kubernetes-control-plane:loadbalancer']
  - ['openstack-integrator', 'kubernetes-control-plane:openstack']
  - ['openstack-integrator', 'kubernetes-worker:openstack']
You will also need to set the lb-subnet config to the appropriate tenant subnet where your nodes reside, and, if desired, the lb-floating-network config to whatever network you want the FIP created in. See the Charm config docs for more details.
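As with the LoadBalancer services above, these options are set on the integrator application (placeholder IDs; substitute your own):

juju config openstack-integrator lb-subnet=<tenant-subnet-uuid> lb-floating-network=<external-network-uuid>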
Using Cinder Volumes
Many pods you may wish to deploy will require storage. Although you can use any type of storage supported by Kubernetes (see the storage documentation), you also have the option to use Cinder storage volumes, if supported by your OpenStack.
A cdk-cinder storage class will be automatically created when the integrator is used; you can confirm it is present as shown below, and then use it when creating a Persistent Volume Claim.
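A quick check that the class exists (cdk-cinder is the name created by the integrator, as noted above):

kubectl get storageclass cdk-cinder

With the storage class in place, create a claim that references it: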
kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cdk-cinder
EOY
This should finish with a confirmation. You can check the current PVCs with:
kubectl get pvc
...which should return something similar to:
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testclaim   Bound    pvc-54a94dfa-3128-11e9-9c54-028fdae42a8c   1Gi        RWO            cinder         9s
This PVC can then be used by pods operating in the cluster. As an example, the following creates a busybox pod which mounts the volume backed by this claim at /pv:
kubectl create -f - <<EOY
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
    - mountPath: "/pv"
      name: testvolume
  restartPolicy: Always
  volumes:
  - name: testvolume
    persistentVolumeClaim:
      claimName: testclaim
EOY
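Once the pod is running, you can check that the Cinder-backed volume is mounted inside it:

kubectl exec busybox -- df -h /pv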
Using Keystone Authentication / Authorisation
The openstack-integrator also provides an interface for authentication and authorisation using Keystone. This is covered in detail in the Keystone and LDAP documentation.
Upgrading the integrator charm
The openstack-integrator is not specifically tied to the version of Charmed Kubernetes installed and may generally be upgraded at any time with the following command:
juju refresh openstack-integrator
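You can check which revision of the charm is deployed, before and after the refresh, with:

juju status openstack-integrator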
If you have any specific problems with the openstack-integrator, you can report bugs on Launchpad.
For logs of what the charm itself believes the world to look like, you can use Juju to replay the log history for that specific unit:
juju debug-log --replay --include openstack-integrator/0