Getting Started

This guide contains the essential steps for deploying a Ceph cluster on MAAS with Juju. Ensure that the base requirements have been met.

Note:

This guide gets you up and running with Charmed Ceph quickly. If you want to explore how to customise your install, please see the Charmed Ceph manual install page.

What you will need

  • A snapd-compatible host to run the Juju client
  • A MAAS cluster (with a user account at your disposal, and internet access configured)

Cluster specifications

The Ceph cluster will have three Ceph Monitors and three Ceph OSDs. The OSDs will be provided by three storage nodes, with one OSD per node (backed by device /dev/sdb).

A Monitor will be containerised on each storage node. This means that you will require three machines for the Ceph cluster. One additional machine will be needed for the Juju controller. The MAAS cluster must therefore consist of at least four machines.

The MAAS nodes will be running Ubuntu 20.04 LTS (Focal), as will any LXD containers created during the deployment. Ceph Octopus will be deployed.

Procedure

Run the commands below on the host allocated to the Juju client.

Install the Juju client:

sudo snap install juju --classic
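If you want to confirm the installation, the client can report its version (the exact output will vary with the snap revision):

juju version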

Inform the Juju client about the MAAS cluster. During the interactive session, choose ‘maas’ as the cloud type, name the cloud ‘my-maas’, and supply the MAAS API endpoint:

juju add-cloud --client
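If you prefer a non-interactive setup, the cloud can instead be defined in a YAML file and passed to add-cloud. The following is a minimal sketch; the file name maas-cloud.yaml is arbitrary and <your-maas-ip> is a placeholder for the address of your MAAS API server:

# Define the MAAS cloud declaratively (endpoint is a placeholder)
cat > maas-cloud.yaml <<EOF
clouds:
  my-maas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://<your-maas-ip>:5240/MAAS
EOF

juju add-cloud --client my-maas maas-cloud.yaml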

Add your MAAS user’s API key:

juju add-credential my-maas
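The add-credential command will prompt for the key interactively; your MAAS API key can be copied from your user preferences page in the MAAS web UI. As a sketch, the credential can also be supplied from a YAML file, where the credential name my-maas-user and the key value are placeholders:

# Supply the MAAS API key from a file instead of interactively
cat > maas-creds.yaml <<EOF
credentials:
  my-maas:
    my-maas-user:
      auth-type: oauth1
      maas-oauth: <your-maas-api-key>
EOF

juju add-credential my-maas -f maas-creds.yaml --client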

Create a Juju controller to manage the Ceph deployment:

juju bootstrap my-maas my-controller
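Bootstrapping allocates one of the MAAS machines for the controller and may take several minutes. Once it completes, the new controller should appear in:

juju controllers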

Create a Juju model:

juju add-model my-model

Deploy the OSDs (adjust the block device to match your MAAS nodes). The following configuration option is needed:

osd-devices
This option lists the block devices that can be used for OSDs across the cluster. The list may affect newly added ceph-osd units as well as existing units (the option can be modified after units have been added). The charm will attempt to activate any listed device that is visible to the unit’s underlying machine as Ceph storage.

juju deploy -n 3 --config osd-devices=/dev/sdb ceph-osd
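As the option description notes, osd-devices can be changed after deployment. For example, assuming your storage nodes also expose a second disk at /dev/sdc (a hypothetical device), it could be added to the list with:

juju config ceph-osd osd-devices='/dev/sdb /dev/sdc'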

Deploy the MONs:

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 ceph-mon
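The placement directives above assume that the three ceph-osd units landed on machines 0, 1, and 2, which is normally the case in a fresh model. The machine IDs can be verified with:

juju status ceph-osd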

Connect the OSDs and MONs together:

juju integrate ceph-osd:mon ceph-mon:osd
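This connects the mon endpoint of ceph-osd to the osd endpoint of ceph-mon. The relation should then be listed in the output of juju status (on Juju versions where relations are not shown by default, add the --relations flag):

juju status --relations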

Monitor the deployment:

watch -c juju status --color

You now have a Ceph cluster up and running.
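As a final check, you can query Ceph itself from one of the monitor units (assuming the first unit is named ceph-mon/0); a healthy cluster reports HEALTH_OK with three OSDs up and in:

juju ssh ceph-mon/0 sudo ceph status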
