
How to set up a basic LXD cluster

1. Overview

While single-node LXD is quite powerful and more than suitable for running advanced workloads, you may face limitations depending on your hardware and the size of your storage. To overcome these limitations, LXD can be run in clustering mode, allowing any number of LXD servers to share the same distributed database and be managed uniformly. This is suitable for testing and development environments as well as for production.

What you’ll learn

  • How to set up a basic LXD cluster
  • How to add nodes to the cluster
  • How to deploy instances to the cluster

What you’ll need

  • LXD snap (version 4.2 or above) installed and running
  • Minimum two physical or virtual (cloud) servers
  • Some basic command-line knowledge

2. Initializing clustering on the first server

First, we run lxd init on the initial server.

The terminal will prompt us with a number of questions, starting with “Would you like to use LXD clustering?”. This will allow us to select a number of different options when setting up our cluster, including configuring local or remote storage, connecting to a MAAS server, and different networking options.

For a basic clustering setup, the selections would look as follows:

lxd init

Would you like to use LXD clustering? (yes/no) [default=no]: yes

What IP address or DNS name should be used to reach this node? [default=]:

Are you joining an existing cluster? (yes/no) [default=no]:

What name should be used to identify this node in the cluster? [default=Marvin-II]: node1

Setup password authentication on the cluster? (yes/no) [default=no]:

Do you want to configure a new local storage pool? (yes/no) [default=yes]:

Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:

Create a new ZFS pool? (yes/no) [default=yes]:

Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:

Size in GB of the new loop device (1GB minimum) [default=30GB]:

Do you want to configure a new remote storage pool? (yes/no) [default=no]:

Would you like to connect to a MAAS server? (yes/no) [default=no]:

Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]:

Would you like to create a new Fan overlay network? (yes/no) [default=yes]:

What subnet should be used as the Fan underlay? [default=auto]:

Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:

Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

Once you opt into clustering, you will notice that the default values are usually sufficient. Optionally, you can create a network bridge and a remote storage pool.
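If you prefer a non-interactive setup, the same answers can be supplied as a preseed. The sketch below approximates the interactive answers above; the address, server name and pool names are placeholders and should be adapted to your environment:

```yaml
# Approximate preseed for the answers above (placeholders; adapt before use)
config:
  core.https_address: 192.168.0.18:8443   # address other members will reach this node on
cluster:
  server_name: node1
  enabled: true
storage_pools:
- name: local
  driver: zfs
  config:
    size: 30GB          # loop-device size, as in the interactive prompt
networks:
- name: lxdfan0
  type: bridge
  config:
    bridge.mode: fan    # Fan overlay network, as chosen interactively
profiles:
- name: default
  devices:
    root:
      path: /
      pool: local
      type: disk
    eth0:
      name: eth0
      network: lxdfan0
      type: nic
```

You would feed this to the daemon with: cat preseed.yaml | lxd init --preseed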

3. Configuring additional nodes

For each additional node we would like to add to the cluster, we first need to generate a join token on an existing cluster member with the following command:

lxc cluster add <new member name>

Please note that additional nodes need to be fresh LXD servers. If they are not, we need to clear their contents before adding them, because any existing data on a joining server is wiped.
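One way to return a previously used server to a clean state (assuming LXD was installed as a snap, as in this tutorial) is to remove and reinstall the snap. This destroys all local LXD data, so only do it on machines you intend to wipe:

```shell
# WARNING: this destroys all existing LXD instances, images and storage on this machine
sudo snap remove --purge lxd
sudo snap install lxd
```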

For each of the remaining nodes we would like to add, we should connect to them, and run

sudo lxd init

Like before, we will be prompted with several questions similar to those below. We answer yes when asked whether we are joining an existing cluster, and paste the join token generated for this node.

Would you like to use LXD clustering? (yes/no) [default=no]: yes

What IP address or DNS name should be used to reach this node?


Are you joining an existing cluster? (yes/no) [default=no]: yes

Do you have a join token? (yes/no/[token]) [default=no]: eyJzZXJ2ZXJfbmFtZSI6Im1hcnZpbklJSSIsImZpbmdlcnByaW50IjoiNjhlMjljYzBlN2IxNTkzNWY1MGM5YjI3NjM0NmFhNDU1OTc2ZWQ1N2Y4ODAyZTYxMTc4MzUwOThlNjNkNmFmYSIsImFkZHJlc3NlcyI6WyIxOTIuMTY4LjAuMTg6ODQ0MyJdLCJzZWNyZXQiOiIxODg3ZWQxMmIxN2MwOTYwZDM1NTU0Zjc3M2IxNzU2NmZlNWExZjQ1M2VhZTc1NmVmYzk0OGExMDUyYjYwNTE5In0=

All existing data is lost when joining a cluster, continue? (yes/no) [default=no]: yes

Choose “size” property for storage pool “local”:

Choose "source" property for storage pool "local":

Choose "zfs.pool_name" property for storage pool "local":

Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

ⓘ You’ll notice we haven’t entered values for the “size” and “source” properties of the storage pool; leaving them empty simply creates a loop device. Similarly, leaving “zfs.pool_name” empty defaults it to the name of the LXD pool.
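As an aside, the join token is just base64-encoded JSON carrying the issuing server’s name, certificate fingerprint, addresses and a one-time secret. You never need to decode it, but it is easy to inspect; the snippet below uses the example token from this tutorial (yours will differ):

```shell
# The example join token shown above (a real token is single-use and expires)
TOKEN='eyJzZXJ2ZXJfbmFtZSI6Im1hcnZpbklJSSIsImZpbmdlcnByaW50IjoiNjhlMjljYzBlN2IxNTkzNWY1MGM5YjI3NjM0NmFhNDU1OTc2ZWQ1N2Y4ODAyZTYxMTc4MzUwOThlNjNkNmFmYSIsImFkZHJlc3NlcyI6WyIxOTIuMTY4LjAuMTg6ODQ0MyJdLCJzZWNyZXQiOiIxODg3ZWQxMmIxN2MwOTYwZDM1NTU0Zjc3M2IxNzU2NmZlNWExZjQ1M2VhZTc1NmVmYzk0OGExMDUyYjYwNTE5In0='

# Decode to see the JSON payload (server name, fingerprint, addresses, secret)
printf '%s' "$TOKEN" | base64 -d
```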

We should repeat this process for each of the nodes we’d like to add to the cluster.

Once completed, we can check what’s running in the cluster with

lxc cluster list
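The output is a table of cluster members that looks roughly like the following (names, addresses and the exact columns depend on your setup and LXD version):

```
+-------+---------------------------+----------+--------+-------------------+
| NAME  |            URL            |  ROLES   | STATE  |      MESSAGE      |
+-------+---------------------------+----------+--------+-------------------+
| node1 | https://192.168.0.18:8443 | database | ONLINE | Fully operational |
| node2 | https://192.168.0.19:8443 | database | ONLINE | Fully operational |
+-------+---------------------------+----------+--------+-------------------+
```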

4. Deploying instances on the cluster

Deploying an instance on the cluster is fairly simple: we use the same basic commands as for deploying system containers or virtual machines on a single server.

System containers:

lxc launch ubuntu:20.04 A1

Virtual machines:

lxc launch ubuntu:20.04 A2 --vm

When deploying instances, you can designate a specific target node if needed, as follows:

lxc launch --target node2 ubuntu:22.04 A3

ⓘ A1, A2 and A3 are just the names we chose for the instances.

If you do not designate a target node, the instance will be launched on the server that is running the lowest number of instances. Once an instance is launched, you can operate it from any node in the cluster.
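For example (a sketch, assuming the instances launched above and a hypothetical member name node3), you can inspect and relocate instances from any member:

```shell
# List all instances in the cluster; the LOCATION column shows which member runs each
lxc list

# Run a command inside A1, regardless of which member you are connected to
lxc exec A1 -- hostname

# Relocate a stopped instance to another cluster member
lxc stop A1
lxc move A1 --target node3
```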

5. Additional information

This tutorial shows the basic clustering setup that is suitable for testing and development purposes. If you would like to use this in your production environment, you should consider setting up an HA cluster with Ceph and OVN for providing remote storage and advanced networking.

You can find more information in the following video walkthroughs:

6. That’s all

Now you’ve learned to set up a simple LXD cluster.

For more about LXD in general, take a look at the following resources:

If you have further questions or need help, you can get help here: