How to add Ceph storage

Many of the things you will want to use your Kubernetes cluster for require some form of persistent storage. Storage is quite a large topic; this guide focuses on quickly adding storage backed by Ceph, so you can get up and running.

What you'll need

  • A Charmed Kubernetes environment set up and running. See the [quickstart][quickstart] if you haven't already.
  • An existing Ceph cluster or the ability to create one.
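
Before moving on, it is worth confirming that the model is healthy. A minimal sanity check, assuming your Juju client is already pointed at the Charmed Kubernetes model:

    # All applications should settle to an active/idle state
    juju status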

Deploying Ceph

Setting up a Ceph cluster is easy with Juju. For this example, we will deploy three Ceph monitor (ceph-mon) nodes:

 juju deploy -n 3 ceph-mon

...and then we'll add three storage nodes. For the storage nodes, we will also specify some actual storage for these nodes to use with the --storage flag. In this case, the ceph-osd charm uses labels for the different types of storage:

 juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1

This will deploy three storage nodes, attaching two 32GB devices for storage and one 8GB device for journalling to each. As we have asked for three machines, this means a total of 192GB of storage and 24GB of journal space. The storage itself comes from whatever the default storage pool is for the cloud (e.g., on AWS this will be EBS volumes).

With both applications deployed, relate the storage nodes to the monitors:

juju integrate ceph-osd ceph-mon
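
Once the relation is established, you can watch the cluster settle and confirm that the requested volumes were attached. A quick check (unit and volume names will vary with your deployment):

    # Wait for the ceph-mon and ceph-osd units to report active
    juju status ceph-mon ceph-osd

    # List the storage instances Juju provisioned and attached
    juju storage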

For more on how Juju makes use of storage, please see the relevant Juju documentation.

Relating to Charmed Kubernetes

Making Charmed Kubernetes aware of your Ceph cluster just requires a Juju relation.

juju integrate ceph-mon kubernetes-control-plane

Note that the Ceph CSI containers require privileged access:

juju config kubernetes-control-plane allow-privileged=true
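
Running juju config with no value reads a setting back, so you can confirm the change took effect:

    # Should now print 'true'
    juju config kubernetes-control-plane allow-privileged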

And finally, create the pools that are defined in the storage classes:

juju run ceph-mon/0 create-pool name=xfs-pool
  id: c12f0688-f31b-4956-8314-abacd2d6516f
  status: completed
    completed: 2018-08-20 20:49:34 +0000 UTC
    enqueued: 2018-08-20 20:49:31 +0000 UTC
    started: 2018-08-20 20:49:31 +0000 UTC
  unit: ceph-mon/0
juju run ceph-mon/0 create-pool name=ext4-pool
  id: 4e82d93d-546f-441c-89e1-d36152c082f2
  status: completed
    completed: 2018-08-20 20:49:45 +0000 UTC
    enqueued: 2018-08-20 20:49:41 +0000 UTC
    started: 2018-08-20 20:49:43 +0000 UTC
  unit: ceph-mon/0
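
To double-check that both pools now exist, the ceph-mon charm also provides a list-pools action (run `juju actions ceph-mon` to see all the actions the charm offers):

    juju run ceph-mon/0 list-pools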

Verifying things are working

Now you can look at your Charmed Kubernetes cluster to verify things are working. Running:

kubectl get sc,po

... should return output similar to:

NAME                                             PROVISIONER     AGE
storageclass.storage.k8s.io/ceph-ext4            csi-rbdplugin   7m
storageclass.storage.k8s.io/ceph-xfs (default)   csi-rbdplugin   7m

NAME                                                   READY     STATUS    RESTARTS   AGE
pod/csi-rbdplugin-attacher-0                           1/1       Running   0          7m
pod/csi-rbdplugin-cnh9k                                2/2       Running   0          7m
pod/csi-rbdplugin-lr66m                                2/2       Running   0          7m
pod/csi-rbdplugin-mnn94                                2/2       Running   0          7m
pod/csi-rbdplugin-provisioner-0                        1/1       Running   0          7m
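
You can also exercise the provisioner directly with a minimal PersistentVolumeClaim. The sketch below assumes the default storage class is named ceph-xfs, as in the output above; adjust storageClassName to match your cluster:

    # Request a small test volume from the ceph-xfs storage class
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ceph-test-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: ceph-xfs
      resources:
        requests:
          storage: 1Gi
    EOF

    # The claim should reach 'Bound' once Ceph provisions a volume
    kubectl get pvc ceph-test-claim

    # Remove the test claim (and its volume) when done
    kubectl delete pvc ceph-test-claim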

If you have installed Helm, you can then install a chart to verify that persistent volumes are created for you automatically:

helm install stable/phpbb
kubectl get pvc

Which should return something similar to:

NAME                            STATUS    VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
calling-wombat-phpbb-apache     Bound     pvc-b1d04079a4bd11e8   1Gi        RWO            ceph-xfs       34s
calling-wombat-phpbb-phpbb      Bound     pvc-b1d1131da4bd11e8   8Gi        RWO            ceph-xfs       34s
data-calling-wombat-mariadb-0   Bound     pvc-b1df7ac9a4bd11e8   8Gi        RWO            ceph-xfs       34s
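
When you are finished experimenting, you can delete the release. Note that the release name (calling-wombat here) is generated, so substitute your own, and that claims created by stateful workloads may need to be removed separately:

    helm delete calling-wombat
    kubectl get pvc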


You now have a Ceph cluster talking to your Kubernetes cluster. From here you can install any workloads which require storage out of the box.
