April 1st 2021 – Today, Ceph upstream released the first stable version of ‘Pacific’, a full year after the last stable release, ‘Octopus’. Pacific focuses on usability and cross-platform integrations, with exciting features such as the iSCSI and NFS gateways being promoted to stable, as well as major dashboard enhancements. This makes it easier to integrate, operate and monitor Ceph as a unified storage system. Ceph packages are built for Ubuntu 20.04 LTS and Ubuntu 21.04 to ensure a uniform experience across clouds.
You can try the Ceph Pacific beta by following these instructions, and your deployment will automatically upgrade to the final release as soon as it’s made available from Canonical.
What’s new in Ceph Pacific?
As usual, the Ceph community grouped the latest enhancements into five themes, listed here in descending order of significance: usability, quality, performance, multi-site usage, and ecosystem & integrations.
The highlight of Pacific is the cross-platform availability of Ceph with a new native Windows RBD driver and the iSCSI and NFS gateways becoming stable. These allow a wide variety of platforms to take advantage of Ceph: from your Linux native workloads to your VMware clusters to your Windows estate, you can leverage scalable software-defined storage to drive infrastructure costs down.
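To make the Windows story concrete, here is an illustrative sketch of attaching an RBD image to a Windows host with the new native driver. It assumes Ceph for Windows and the WNBD driver are installed, with a valid ceph.conf and keyring on the host; the pool and image names are placeholders.

```shell
# Create a 10 GiB RBD image from any Ceph client
# ("datapool/win-disk01" is an example name):
rbd create datapool/win-disk01 --size 10G

# On the Windows host, map the image so it appears as a local disk
# (served by the WNBD kernel driver):
rbd device map datapool/win-disk01

# The disk can then be initialised and formatted with NTFS through
# Disk Management or diskpart, like any other block device.
```

The same image remains an ordinary RBD volume on the Ceph side, so snapshots, cloning and replication all apply to it unchanged.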
It is also worth mentioning that the Ceph dashboard now covers all core Ceph services and extensions – i.e. object, block, file, iSCSI, NFS Ganesha – as it evolves into a robust and responsive management GUI in front of the Ceph API. It also brings new observability and management capabilities for Ceph OSDs and multi-site deployments, along with the ability to enforce RBAC, define security policies and more.
A new host maintenance mode reduces unexpected outages, as the cluster is informed when a node is about to be taken down for maintenance. Cephadm, the orchestrator module, got a new exporter/agent mode that increases performance when monitoring large clusters. Other notable usability enhancements in Pacific include a simplified status output and a progress bar for cluster recovery processes, MultiFS marked stable, and MDS-side encrypted file support in CephFS.
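The maintenance workflow above boils down to a pair of orchestrator commands, sketched here with the command names documented for Pacific ("node1" is a placeholder hostname):

```shell
# Put a host into maintenance: its daemons are stopped gracefully and the
# cluster is told not to rebalance or alert on the expected downtime.
ceph orch host maintenance enter node1

# ...perform kernel updates, hardware swaps, reboots, etc...

# Bring the host back: daemons restart and the cluster resumes normal
# operation against it.
ceph orch host maintenance exit node1
```

Because the cluster knows the outage is planned, it avoids the recovery traffic and alert noise that an unannounced node disappearance would trigger.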
RADOS is, as usual, the focal point when it comes to quality improvements to make Ceph more robust and reliable. Placement groups can be deleted significantly faster, with a smaller impact on client workloads. On CephFS, a new feature bit lets administrators mark file system features as required, so that older clients lacking those features are rejected rather than admitted in a degraded state. Lastly, enhanced public dashboards based on Ceph’s telemetry feature are now available, giving users insights about the use of Ceph clusters and storage devices in the wild, helping drive data-based design and business decisions.
The RADOS BlueStore backend now supports RocksDB sharding to reduce disk space requirements, a hybrid allocator lowers memory use and disk fragmentation, and work was done to bring finer-grained memory tracking. The mclock scheduler and extensive testing on SSDs helped improve QoS and system performance. For CephFS, ephemeral pinning, improved cache management and asynchronous unlink/create improve performance and scalability while reducing unnecessary round trips to the MDS.
Ceph Crimson, the project to rewrite the Ceph OSD module to better support persistent memory and fast NVMe storage, got a prototype of the new SeaStore backend, alongside a compatibility layer for the legacy BlueStore backend. New recovery, backfill and scrub implementations are also available for Crimson with the Pacific release.
The snapshot-based multi-site mirroring feature in CephFS enables automatic replication of snapshots of any directory from a source cluster to remote clusters. Similarly, the per-bucket multi-site replication feature in RGW, which received significant stability enhancements, allows async data replication at the bucket, site or zone level while federating multiple sites at once.
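On the source cluster, enabling CephFS snapshot mirroring can be sketched as follows (command names per the Pacific documentation; "cephfs" and the directory path are placeholders, and the peer bootstrap exchange with the remote cluster is omitted for brevity):

```shell
# Enable the mirroring manager module, then mirroring on the file system:
ceph mgr module enable mirroring
ceph fs snapshot mirror enable cephfs

# Register the remote cluster as a peer here
# (bootstrap token steps omitted; see the CephFS mirroring docs).

# Pick a directory whose snapshots should be replicated to the peers:
ceph fs snapshot mirror add cephfs /projects/critical
```

From then on, the cephfs-mirror daemon synchronises new snapshots of the chosen directory to the configured remote clusters.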
Ecosystem & integrations
Enhancing the user experience while onboarding to Ceph is the focus of the ecosystem theme, with ongoing projects to revamp the documentation and the ceph.io website while removing instances of racially charged terms. Support for ARM64 is also in progress, with new CI, release builds and testing workflows, and Pacific will be the first Ceph release available on ARM, although initially with limited support.
On the integrations front, Rook is now able to operate stretch clusters across two data centres with a MON in a third location, and can manage CephFS mirroring using CRDs. The container storage interface (CSI) allows OpenStack Manila to integrate with container and cloud platforms, bringing enhanced management and security capabilities to CephFS and RBD.
Ceph Pacific available on Ubuntu
Try Ceph Pacific now on Ubuntu to combine the benefits of a unified storage system with a secure and reliable operating system. You can install the Ceph Pacific beta from the OpenStack Wallaby Ubuntu Cloud Archive for Ubuntu 20.04 LTS or using the development version of Ubuntu 21.04 (Hirsute Hippo).
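One way to get the Pacific beta packages on Ubuntu 20.04 LTS is to enable the OpenStack Wallaby Ubuntu Cloud Archive and install Ceph from it:

```shell
# Enable the Wallaby Ubuntu Cloud Archive on Ubuntu 20.04 LTS:
sudo add-apt-repository cloud-archive:wallaby

# Refresh the package index and install the Ceph packages:
sudo apt update
sudo apt install ceph
```

On Ubuntu 21.04 (Hirsute Hippo), the Pacific packages come from the development release's main archive, so a plain `sudo apt install ceph` is sufficient.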
Canonical supports all Ceph releases as part of the Ubuntu Advantage for Infrastructure enterprise support offering. Canonical’s Charmed Ceph packages the upstream Ceph images in wrappers called charms, which add lifecycle automation capabilities, significantly simplifying Ceph deployments and day-2 operations thanks to the Juju model-driven framework. Charmed Ceph Pacific will be released in tandem with the Canonical OpenStack Wallaby release in late April 2021.
Suggested resources for Ceph
Ceph is a software-defined storage (SDS) solution designed to address the object, block, and file storage needs of both small and large data centres.
It's an optimised and easy-to-integrate solution for companies adopting open source as the new norm for high-growth block storage, object stores and data lakes.