LXD

LXD is a system container manager that has native support for Ceph. The images of running containers can reside in a remote Ceph cluster in the form of RADOS Block Devices, or RBD.

Note: LXD is installed by default on all supported Ubuntu releases. The snap install is recommended and is the delivery method starting with Ubuntu 20.04 LTS (Focal).

LXD+RBD client usage

This section provides optional instructions for integrating LXD with Ceph via RBD by setting up a simple client environment. Deploy the client using the steps provided in the Client setup appendix.

Note: These instructions use the string ‘lxd-rbd’ for a Ceph pool name, a Ceph user name, and an LXD pool name. This is merely for convenience; each entity can be named differently.

Important: LXD hosts cannot share the same Ceph pool. Develop a naming convention for pool names if using multiple hosts.
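As an illustration, one possible convention is to embed the host in each pool name (the names below are hypothetical):

```shell
# Hypothetical naming convention: one Ceph pool per LXD host,
# e.g. 'lxd-rbd-host1' for the first host and 'lxd-rbd-host2' for the second
juju run-action --wait ceph-mon/0 create-pool name=lxd-rbd-host1 app-name=rbd
juju run-action --wait ceph-mon/0 create-pool name=lxd-rbd-host2 app-name=rbd
```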

Create a Ceph pool (‘lxd-rbd’), an RBD user (‘lxd-rbd’), collect the user’s keyring file, and transfer it to the client:

juju run-action --wait ceph-mon/0 create-pool name=lxd-rbd app-name=rbd

juju run --unit ceph-mon/0 -- \
   sudo ceph auth get-or-create client.lxd-rbd \
   mon 'profile rbd' osd 'profile rbd pool=lxd-rbd' | \
   tee ceph.client.lxd-rbd.keyring

juju scp ceph.client.lxd-rbd.keyring ceph-client/0:

From the LXD client,

Configure the client using the keyring file and set up the correct permissions:

sudo mv ~ubuntu/ceph.client.lxd-rbd.keyring /etc/ceph
sudo chmod 600 /etc/ceph/ceph.client.lxd-rbd.keyring
sudo chown ubuntu: /etc/ceph/ceph.client.lxd-rbd.keyring

Ensure the current user is a member of the ‘lxd’ group and then initialise LXD (here the user is named ‘ubuntu’):

sudo adduser ubuntu lxd
newgrp lxd
lxd init --auto

Now create an LXD storage pool (‘lxd-rbd’) of type ‘ceph’ that is linked to the previously created ‘lxd-rbd’ Ceph user:

lxc storage create lxd-rbd ceph source=lxd-rbd ceph.user.name=lxd-rbd

Note: The source option explicitly refers to the existing Ceph pool (the LXD and Ceph pool names can differ). If the names are the same then this option is not strictly required.
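For instance, an LXD pool can be given a name different from that of the backing Ceph pool (the LXD pool name ‘remote’ below is hypothetical):

```shell
# The LXD pool 'remote' is backed by the existing Ceph pool 'lxd-rbd';
# because the names differ, the source option is mandatory here
lxc storage create remote ceph source=lxd-rbd ceph.user.name=lxd-rbd
```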

If the last command throws an error such as “error connecting to the cluster”, it may be resolved by configuring the lxd snap to use the client host’s Ceph binaries:

sudo snap set lxd ceph.external=true
sudo systemctl reload snap.lxd.daemon

Configure the LXD default profile to use the new LXD storage pool:

lxc profile device set default root pool lxd-rbd
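The change can be confirmed by querying the profile’s root disk device:

```shell
# Should print 'lxd-rbd'
lxc profile device get default root pool
```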

LXD images will still be stored locally (under /var/snap/lxd/common/lxd/images). To have Ceph also store images:

lxc storage volume create lxd-rbd images size=10GiB
lxc config set storage.images_volume lxd-rbd/images
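To confirm that the images volume setting took effect:

```shell
# Should print 'lxd-rbd/images'
lxc config get storage.images_volume
```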

Note: Support for Ceph-backed LXD images starts with lxd 4.0.4.

Launch a test container named ‘focal-1’:

lxc launch ubuntu:20.04 focal-1
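Once the container has started, its state can be checked on the LXD client:

```shell
# The container should appear with status RUNNING
lxc list focal-1
```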

From the Juju client,

The location of the running container can be verified:

juju ssh ceph-mon/0 sudo rbd ls -l --pool lxd-rbd

The space used by the pool can be viewed in this way:

juju ssh ceph-mon/0 sudo rados df --pool lxd-rbd
