
Snapping Cuberite


on 27 October 2016


This is a guest post by James Tait, Software Engineer at Canonical. If you would like to contribute a guest post, please contact

I’m a father of two pre-teens, and like many kids their age (and many adults, for that matter) they got caught up in the craze that is Minecraft. In our house we adopted Minetest as a Free alternative to begin with, and had lots of fun and lots of arguments! Somewhere along the way, they decided they’d like to run their own server and share it with their friends. But most of those friends were using Windows and there was no Windows client for Minetest at the time. And so it came to pass that I would trawl the internet looking for Free Minecraft server software, and eventually stumble upon Cuberite (formerly MCServer), “a lightweight, fast and extensible game server for Minecraft”.

Cuberite is an actively developed project. At the time of writing, there are 16 open pull requests against the server itself, of which five are from the last week. Support for protocol version 1.10 has recently been added, along with spectator view and a steady stream of bug fixes. It is automatically built by Jenkins on each commit to master, and the resulting artefacts are made available on the website as .tar.gz and .zip files. The server itself runs in-place; that is to say that you just unpack the archive and run the Cuberite binary and the data files are created alongside it, so everything is self-contained. This has the nice side-effect that you can download the server once, copy or symlink a few files into a new directory and run a separate instance of Cuberite on a different port, say for testing.
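The copy-or-symlink trick deserves a quick illustration. A minimal sketch of setting up a second, test instance from one shared download (the directory paths, and the choice of which files to link versus copy, are my own; adjust to taste):

```shell
# Sketch: a second Cuberite instance sharing one unpacked download.
# SRC is wherever the unpacked Server directory lives (illustrative path).
SRC="$HOME/cuberite/Server"
INSTANCE="$HOME/cuberite-test"
mkdir -p "$INSTANCE"
cd "$INSTANCE"
# Share the binary and the static data via symlinks ...
for f in Cuberite Plugins Prefabs lang webadmin favicon.png; do
    ln -sfn "$SRC/$f" .
done
# ... but give each instance its own copies of the config files, so the
# test instance can listen on a different port (set in settings.ini).
for f in "$SRC"/*.ini "$SRC"/*.txt; do
    if [ -e "$f" ]; then cp -n "$f" .; fi
done
# ./Cuberite    # then start the second instance from this directory
```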

All of this sounds great, and mostly it is. But there are a few wrinkles that just made it a bit of a chore:

  • No formal releases. There are official build artefacts, but no milestones and no version numbers
  • No package management! No version numbers means no managed package. We just get an archive with a self-contained build directory
  • No init scripts. When I restart my server, I want the Minecraft server to be ready to play, so I need init scripts

Now, none of these problems is insurmountable. We could put the work in to build distro packages for each distribution from git HEAD. We could contribute upstart, systemd and sysvinit scripts. We could run a cron job to poll for new releases. But, frankly, it just seems like a lot of work.

In truth I’d done a lot of manual work already to build Cuberite from source, create a couple of independent instances, and write init scripts. I’d become somewhat familiar with the build process, which basically amounted to something like:

    $ cd src/cuberite
    $ git pull
    $ git submodule update --init
    $ cd Release
    $ make

This builds the release binaries and copies the plugins and base data files into the Server subdirectory, which is what the Jenkins builds then compress and make available as artefacts. I’d then do a bit of extra work: I’ve been running this in a dedicated LXC container, keeping a production and a test instance running so we could experiment with custom plugins, so I would:

    $ cd ../Server
    $ sudo cp Cuberite /var/lib/lxc/miners/rootfs/usr/games/Cuberite
    $ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/production
    $ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/testing
    $ sudo cp -r favicon.png lang Plugins Prefabs webadmin /var/lib/lxc/miners/rootfs/usr/share/games/cuberite

Then, in the container, /srv/cuberite/production and /srv/cuberite/testing contain symlinks to everything we just copied, plus some runtime data files under /var/lib/cuberite/production and /var/lib/cuberite/testing, and init scripts chdir to each of those directories and run Cuberite.
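The init scripts themselves are simple. On a systemd-based distro, a minimal unit for the production instance might look like this (a sketch: the paths follow the layout above, but the dedicated cuberite user and the Type choice are assumptions):

```ini
[Unit]
Description=Cuberite Minecraft server (production instance)
After=network.target

[Service]
# Run from the instance directory so Cuberite finds its symlinked
# data files and writes its runtime state here.
WorkingDirectory=/srv/cuberite/production
ExecStart=/usr/games/Cuberite
User=cuberite
Restart=on-failure

[Install]
WantedBy=multi-user.target
```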

All this is fine, and could no doubt be moulded into packages for the various distros with a bit of effort. But wouldn’t it be nice if we could do all of that for all the most popular distros in one fell swoop? Enter snaps and snapcraft. Cuberite is statically linked and already distributed as a run-in-place archive, so it’s inherently relocatable, which means it lends itself perfectly to distribution as a snap.

This is the part where I confess to working on the Ubuntu Store and being more than a little curious as to what things looked like coming from the opposite direction. So in the interests of eating my own dogfood, I jumped right in.

Now snapcraft makes getting started pretty easy:

    $ mkdir cuberite
    $ cd cuberite
    $ snapcraft init

And you have a template snapcraft.yaml with comments to instruct you. Most of this is straightforward, but for the version here I just used the current date. With the basic metadata filled in, I moved on to the snapcraft “parts”.
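For reference, the metadata ended up along these lines (a sketch: the summary is lifted from the project’s own description, but the exact wording of the description field and the grade setting are my own):

```yaml
name: cuberite
version: '20161023'
summary: A lightweight, fast and extensible Minecraft game server
description: |
  Cuberite (formerly MCServer) is a Free, open-source Minecraft
  server, written in C++ and extensible with Lua plugins.
confinement: strict
grade: stable
```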

Parts in snapcraft are the basic building blocks for your package. They might be libraries or apps or glue, and they can come from a variety of sources. The obvious starting point for Cuberite was the git source, and as you may have noticed above, it uses CMake as its build system. The snapcraft part is pretty straightforward:

    parts:
      cuberite:
        plugin: cmake
        source: https://github.com/cuberite/cuberite.git
        configflags:
          - -DCMAKE_BUILD_TYPE=RELEASE
        build-packages:
          - gcc
          - g++
        snap:
          - -include
          - -lib

That last section warrants some explanation. When I first built Cuberite, the result included header files and library files from some of the bundled libraries that are statically linked. Since we’re not interested in shipping these files, and they would just add bloat to the final package, the leading dash on each entry specifies that they are excluded.

That gives us our distributable Server directory, but it’s tucked away under the snapcraft parts hierarchy. So I added a release part to just copy the full contents of that directory and locate them at the root of the snap:

    release:
      after: [cuberite]
      plugin: dump
      source: parts/cuberite/src/Server
      organize:
        "*": "."

Some projects let you specify the output directory with a --prefix flag to a configure script or similar methods, and won’t need this little packaging hack, but it seems to be necessary here.

At this stage I thought I was done with the parts and could just define the Cuberite app: the executable that gets run as a daemon. So I went ahead and did the simplest thing that could work:

    apps:
      cuberite:
        command: Cuberite
        daemon: forking
        plugs:
          - network
          - network-bind

But I hit a snag. Although this would work with a traditional package, the snap is mounted read-only, and Cuberite writes its data files to the current directory. So instead I needed to write a wrapper script to switch to a writable directory, copy the base data files there, and then run the server:

    #!/bin/bash

    for file in brewing.txt crafting.txt favicon.png furnace.txt items.ini \
                monsters.ini README.txt; do
        if [ ! -f "$SNAP_USER_DATA/$file" ]; then
            cp --preserve=mode "$SNAP/$file" "$SNAP_USER_DATA"
        fi
    done

    for dir in lang Plugins Prefabs webadmin; do
        if [ ! -d "$SNAP_USER_DATA/$dir" ]; then
            cp -r --preserve=mode "$SNAP/$dir" "$SNAP_USER_DATA"
        fi
    done

    cd "$SNAP_USER_DATA"
    exec "$SNAP/Cuberite" -d

Then add the wrapper as a part:

    wrapper:
      plugin: dump
      source: .
      organize:
        Cuberite.wrapper: bin/Cuberite.wrapper

And update the snapcraft app:

    apps:
      cuberite:
        command: bin/Cuberite.wrapper
        daemon: forking
        plugs:
          - network
          - network-bind

And with that we’re done! Right? Well, not quite… While this works in snap’s devmode, in strict mode it results in the server being killed. A little digging in the output from scanlog showed that seccomp was taking exception to Cuberite using the fchown system call. Some Google-fu turned up a bug with a suggested workaround, which I applied to the two places (both in sqlite submodules) that used the offending system call, then rebuilt the snap. Et voilà! Our Cuberite server now happily runs in strict mode, and can be released in the stable channel.

My build process now looks like this:

    $ vim snapcraft.yaml
    $ # Update version
    $ snapcraft pull cuberite
    $ # Patch two fchown calls
    $ snapcraft

I can then push it to the edge channel:

    $ snapcraft push cuberite_20161023_amd64.snap --release edge
    Revision 1 of cuberite created.

And when people have had a chance to test and verify, promote it to stable:

    $ snapcraft release cuberite 1 stable

There are a couple of things I’d like to see improved in the process:

  • It would be nice not to have to edit snapcraft.yaml on each build to change the version. Some kind of template might work for this
  • It would be nice to be able to apply patches as part of the pull phase of a part
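The first wishlist item at least has a stopgap: the version bump can be scripted. A small sketch that stamps the current date into snapcraft.yaml before each build (the snapcraft.yaml written here is a stand-in for the real one, and the temporary directory is just for demonstration):

```shell
# Stopgap for the manual version bump: rewrite the version line with
# today's date before building. Demonstrated against a stand-in file.
workdir="${TMPDIR:-/tmp}/cuberite-version-demo"
mkdir -p "$workdir"
cd "$workdir"
printf "name: cuberite\nversion: '20161023'\n" > snapcraft.yaml
# Stamp the current date in, matching the date-as-version scheme above.
sed -i "s/^version:.*/version: '$(date +%Y%m%d)'/" snapcraft.yaml
cat snapcraft.yaml
# snapcraft    # then build as usual
```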

With those two wishlist items fixed, I could fully automate the Cuberite builds and have a fresh snap released to the edge channel on each commit to git master! I’d also like to make the wrapper a little more advanced and add another command so that I can easily manage multiple instances of Cuberite. But for now, this works – my boys have never had it so good!

Download the Cuberite Snap
