Metrics and alerts

This reference page lists the metrics exported by the native Ceph exporter and the default alerts that use them. It is divided into two major sections: Metrics and Alerts.

Metrics

The native Ceph exporter allows Prometheus to scrape hundreds of metrics. Not all of them, however, are of operational importance to Ceph cluster monitoring. This section contains a limited set of metrics and their associated default alerts.

Every metric has:

  • a name
  • a description
  • a type (gauge, counter, or untyped)
  • labels (these add dimensionality to the otherwise single-dimensional time series; see the example query below)
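
For example, the pool_id label on the placement group metrics gives one time series per pool under a single metric name. A minimal PromQL sketch, using the ceph_pg_active metric documented below, aggregates over that label:

    sum by (pool_id) (ceph_pg_active)

Dropping the by clause would instead return a single, cluster-wide total.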

ceph_health_detail

Healthcheck status by type (0=inactive, 1=active). Metric type gauge.

Default alerts:

CephMonDownQuorumAtRisk, CephMonDiskspaceCritical, CephMonDiskspaceLow, CephMonClockSkew, CephOSDHostDown, CephOSDDown, CephOSDNearFull, CephOSDFull, CephOSDBackfillFull, CephOSDTooManyRepairs, CephOSDTimeoutsPublicNetwork, CephOSDTimeoutsClusterNetwork, CephOSDInternalDiskSizeMismatch, CephDeviceFailurePredicted, CephDeviceFailurePredictionTooHigh, CephDeviceFailureRelocationIncomplete, CephOSDReadErrors, CephFilesystemDamaged, CephFilesystemOffline, CephFilesystemDegraded, CephFilesystemMDSRanksLow, CephFilesystemInsufficientStandby, CephFilesystemFailureNoStandby, CephFilesystemReadOnly, CephMgrModuleCrash, CephPGsDamaged, CephPGRecoveryAtRisk, CephPGUnavilableBlockingIO, CephPGBackfillAtRisk, CephPGNotScrubbed, CephPGsHighPerOSD, CephPGNotDeepScrubbed, CephPoolBackfillFull, CephPoolFull, CephPoolNearFull, CephadmUpgradeFailed, CephadmDaemonFailed, CephadmPaused, and CephObjectMissing
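
The alerts above key off individual healthcheck types. As a hedged illustration (the name label used here is an assumption; check the label set your exporter actually emits), an expression such as the following would be non-empty while the OSD_NEARFULL healthcheck is active:

    ceph_health_detail{name="OSD_NEARFULL"} == 1

The packaged rules are built from expressions of this general shape, each with its own duration and severity.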


ceph_pg_active

PG active per pool. Metric type gauge.

Default alert: CephPGsInactive

Sample: ceph_pg_active{pool_id="1"} 128.0
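
Because ceph_pg_active counts only the active PGs in a pool, comparing it against ceph_pg_total (documented below) is one way to surface inactive PGs, which is the idea behind CephPGsInactive. A minimal sketch; the exact default expression and threshold may differ:

    (ceph_pg_total - ceph_pg_active) > 0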


ceph_osd_numpg

Placement groups per OSD. Metric type gauge.

Default alert: CephPGImbalance

Sample: ceph_osd_numpg{ceph_daemon="osd.1"} 161.0
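
CephPGImbalance compares each OSD's PG count against the cluster average. A minimal PromQL sketch of that idea, using the 30% deviation mentioned in the alert description later on this page (the shipped rule may scope the average differently, for example per job):

    abs(ceph_osd_numpg - scalar(avg(ceph_osd_numpg)))
      / scalar(avg(ceph_osd_numpg)) * 100 > 30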


ceph_osd_up

OSD status up. Metric type untyped.

Default alerts: CephOSDDownHigh and CephOSDFlapping

Sample: ceph_osd_up{ceph_daemon="osd.0"} 1.0
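
Since the metric is 1 for an OSD that is up and 0 otherwise, simple counts give the fraction of OSDs that are down, which is roughly what CephOSDDownHigh checks (the 10% figure comes from the alert description later on this page):

    count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= 10

When no OSD is down, the left-hand count returns no data, so the expression simply does not fire.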


ceph_pg_total

PG Total Count per Pool. Metric type gauge.

Default alert: CephPGsInactive

Sample: ceph_pg_total{pool_id="1"} 128.0


ceph_healthcheck_slow_ops

OSD or Monitor requests taking a long time to process. Metric type gauge.

Default alert: CephSlowOps

Sample: ceph_healthcheck_slow_ops 0.0


ceph_pool_percent_used

DF pool percent_used. Metric type gauge.

Default alert: CephPoolGrowthWarning

Sample: ceph_pool_percent_used{pool_id="1"} 0.0


ceph_pool_metadata

POOL Metadata. Metric type untyped.

Default alerts: CephPGsInactive and CephPGsUnclean

Sample: ceph_pool_metadata{pool_id="1",name="default.rgw.buckets.data",type="replicated",description="replica:3",compression_mode="none"} 1.0
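
Metadata metrics like this one carry a constant value of 1 and exist mainly to be joined onto other series, so that alerts can report human-readable labels such as the pool name. A hedged sketch of that pattern, building on the ceph_pg_active example above:

    ((ceph_pg_total - ceph_pg_active) > 0)
      * on (pool_id) group_left (name) ceph_pool_metadata

The multiplication preserves the left-hand value (the metadata value is 1.0) while group_left copies the name label onto the result.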


ceph_osd_metadata

OSD Metadata. Metric type untyped.

Default alerts: CephOSDFlapping and CephPGImbalance

Sample: ceph_osd_metadata{back_iface="",ceph_daemon="osd.0",cluster_addr="10.5.2.89",device_class="hdd",front_iface="",hostname="juju-080297-zaza-5e7b19357667-5",objectstore="bluestore",public_addr="10.5.2.89",ceph_version="ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)"} 1.0


ceph_mon_metadata

MON Metadata. Metric type untyped.

Default alert: CephMonDownQuorumAtRisk

Sample: ceph_mon_metadata{ceph_daemon="mon.juju-080297-zaza-5e7b19357667-9",hostname="juju-080297-zaza-5e7b19357667-9",public_addr="10.5.1.21",rank="0",ceph_version="ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)"} 1.0


ceph_health_status

Cluster health status. Metric type untyped.

Default alert: CephHealthError

Sample: ceph_health_status 0.0


ceph_pg_clean

PG clean per pool. Metric type gauge.

Default alert: CephPGsUnclean

Sample: ceph_pg_clean{pool_id="1"} 128.0


ceph_mon_quorum_status

Monitors in quorum. Metric type gauge.

Default alert: CephMonDownQuorumAtRisk

Sample: ceph_mon_quorum_status{ceph_daemon="mon.juju-080297-zaza-5e7b19357667-9"} 1.0
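
The sample above suggests a value of 1 for a monitor that is in quorum. Quorum requires a strict majority of monitors, so one hedged way to express ‘quorum at risk or lost’ is to compare the in-quorum count against half the monitor count (the shipped CephMonDownQuorumAtRisk expression may differ):

    sum(ceph_mon_quorum_status == 1) <= count(ceph_mon_metadata) / 2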


ceph_data_sync_from_zone_fetch_bytes_sum

Number of object bytes replicated (total). Metric type counter.

Default alerts: none

Sample: ceph_data_sync_from_zone_fetch_bytes_sum{instance_id="7524",source_zone="foo_sec"} 1500.0
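
As a counter, this metric is most useful through a rate. For example, the following gives the multisite replication throughput, in bytes per second, averaged over the last five minutes for each source zone (the 5m window is just an illustrative choice):

    rate(ceph_data_sync_from_zone_fetch_bytes_sum[5m])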


ceph_data_sync_from_zone_fetch_bytes_count

Number of object bytes replicated (count). Metric type counter.

Default alerts: none

Sample: ceph_data_sync_from_zone_fetch_bytes_count{instance_id="7524",source_zone="foo_sec"} 10.0


ceph_data_sync_from_zone_fetch_errors

Number of object replication errors. Metric type counter.

Default alerts: CephRGWMultisiteFetchError and CephRGWMultisiteFetchErrorCritical

Sample: ceph_data_sync_from_zone_fetch_errors{instance_id="7524",source_zone="foo_sec"} 2.0
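
The CephRGWMultisiteFetchError alerts described later on this page apply a threshold of 2 (warning) or 50 (critical) errors per 15 minutes to this kind of error count. A hedged sketch of a warning-level expression:

    increase(ceph_data_sync_from_zone_fetch_errors[15m]) > 2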


ceph_data_sync_from_zone_fetch_not_modified

Number of objects already replicated. Metric type counter.

Default alerts: none

Sample: ceph_data_sync_from_zone_fetch_not_modified{instance_id="7524",source_zone="foo_sec"} 0.0


ceph_data_sync_from_zone_poll_errors

Number of replication log request errors. Metric type counter.

Default alerts: CephRGWMultisitePollError and CephRGWMultisitePollErrorCritical

Sample: ceph_data_sync_from_zone_poll_errors{instance_id="7524",source_zone="foo_sec"} 2.0


ceph_data_sync_from_zone_poll_latency_sum

Average latency of replication log requests (sum of observed latencies). Metric type counter.

Default alert: CephRGWMultisitePollLatency

Sample: ceph_data_sync_from_zone_poll_latency_sum{instance_id="7524",source_zone="foo_sec"} 18212.350834507


ceph_data_sync_from_zone_poll_latency_count

Average latency of replication log requests (number of observations). Metric type counter.

Default alert: CephRGWMultisitePollLatency

Sample: ceph_data_sync_from_zone_poll_latency_count{instance_id="7524",source_zone="foo_sec"} 1685428.0
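
The _sum and _count pair behave like a Prometheus summary: dividing their rates yields the average poll latency over a window, which is presumably what CephRGWMultisitePollLatency evaluates against its 600-second threshold. A minimal sketch:

    rate(ceph_data_sync_from_zone_poll_latency_sum[15m])
      / rate(ceph_data_sync_from_zone_poll_latency_count[15m])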


Alerts

Ceph has a multitude of alert rules configured by default for monitoring a general set of operational events. These include OSDs nearing full capacity, daemon crashes, and device failures, to name a few. This section contains a summary and a longer description of each of these alerts.
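
Each alert is ultimately a Prometheus alerting rule: a PromQL expression, a ‘for’ duration, and summary/description annotations. As a hedged illustration only (the expressions and thresholds shipped by default may differ), a rule for CephHealthError could look roughly like this, assuming the conventional 0=OK, 1=WARN, 2=ERR encoding of ceph_health_status:

    groups:
      - name: cluster-health
        rules:
          - alert: CephHealthError
            # assumption: 2 corresponds to HEALTH_ERR in ceph_health_status
            expr: ceph_health_status == 2
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: Ceph is in the ERROR state
              description: >-
                The cluster state has been HEALTH_ERROR for more than 5
                minutes. Please check 'ceph health detail' for more information.

The alerts below are grouped by the subsystem they relate to (mon, osd, mds, mgr, pgs, pools, cephadm, rados, and so on), shown in parentheses after each alert name.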

CephMonDownQuorumAtRisk (mon)

Monitor quorum is at risk

Quorum requires a majority of monitors to be active. Without quorum the cluster will become inoperable, affecting all services and connected clients.

CephMonDiskspaceCritical (mon)

Filesystem space on at least one monitor is critically low

The free space available to a monitor’s store is critically low. You should increase the space available to the monitor(s). The default directory is /var/lib/ceph/mon-*/data/store.db on traditional deployments, and /var/lib/rook/mon-*/data/store.db on the mon pod’s worker node for Rook. Look for old, rotated versions of *.log and MANIFEST* files. Do NOT touch any *.sst files. Also check any other directories under /var/lib/rook and other directories on the same filesystem, often /var/log and /var/tmp are culprits.

CephMonDiskspaceLow (mon)

Drive space on at least one monitor is approaching full

The space available to a monitor’s store is approaching full (>70% is the default). You should increase the space available to the monitor(s). The default directory is /var/lib/ceph/mon-*/data/store.db on traditional deployments, and /var/lib/rook/mon-*/data/store.db on the mon pod’s worker node for Rook. Look for old, rotated versions of *.log and MANIFEST* files. Do NOT touch any *.sst files. Also check any other directories under /var/lib/rook and other directories on the same filesystem, often /var/log and /var/tmp are culprits.

CephMonClockSkew (mon)

Clock skew detected among monitors

Ceph monitors rely on closely synchronized time to maintain quorum and cluster consistency. This event indicates that the time on at least one mon has drifted too far from the lead mon. Review cluster status with ceph -s. This will show which monitors are affected. Check the time sync status on each monitor host with ‘ceph time-sync-status’ and the state and peers of your ntpd or chrony daemon.

CephOSDHostDown (osd)

An OSD host is offline

An OSD host is offline; the alert lists the OSDs on that host that have been marked down.

CephOSDDown (osd)

An OSD has been marked down

One or more OSDs have been marked down for over 5 minutes; the alert lists the affected OSDs.

CephOSDNearFull (osd)

OSD(s) running low on free space (NEARFULL)

One or more OSDs have reached the NEARFULL threshold. Use ‘ceph health detail’ and ‘ceph osd df’ to identify the problem. To resolve, add capacity to the affected OSD’s failure domain, restore down/out OSDs, or delete unwanted data.

CephOSDFull (osd)

OSD full, writes blocked

An OSD has reached the FULL threshold. Writes to pools that share the affected OSD will be blocked. Use ‘ceph health detail’ and ‘ceph osd df’ to identify the problem. To resolve, add capacity to the affected OSD’s failure domain, restore down/out OSDs, or delete unwanted data.

CephOSDBackfillFull (osd)

OSD(s) too full for backfill operations

An OSD has reached the BACKFILL FULL threshold. This will prevent rebalance operations from completing. Use ‘ceph health detail’ and ‘ceph osd df’ to identify the problem. To resolve, add capacity to the affected OSD’s failure domain, restore down/out OSDs, or delete unwanted data.

CephOSDTooManyRepairs (osd)

OSD reports a high number of read errors

Reads from an OSD have used a secondary PG to return data to the client, indicating a potential failing drive.

CephOSDTimeoutsPublicNetwork (osd)

Network issues delaying OSD heartbeats (public network)

OSD heartbeats on the cluster’s ‘public’ network (frontend) are running slow. Investigate the network for latency or loss issues. Use ‘ceph health detail’ to show the affected OSDs.

CephOSDTimeoutsClusterNetwork (osd)

Network issues delaying OSD heartbeats (cluster network)

OSD heartbeats on the cluster’s ‘cluster’ network (backend) are slow. Investigate the network for latency issues on this subnet. Use ‘ceph health detail’ to show the affected OSDs.

CephOSDInternalDiskSizeMismatch (osd)

OSD size inconsistency error

One or more OSDs have an internal inconsistency between metadata and the size of the device. This could lead to the OSD(s) crashing in future. You should redeploy the affected OSDs.

CephDeviceFailurePredicted (osd)

Device(s) predicted to fail soon

The device health module has determined that one or more devices will fail soon. To review device status use ‘ceph device ls’. To show a specific device use ‘ceph device info <device_id>’. Mark the OSD out so that data may migrate to other OSDs. Once the OSD has drained, destroy the OSD, replace the device, and redeploy the OSD.

CephDeviceFailurePredictionTooHigh (osd)

Too many devices are predicted to fail, unable to resolve

The device health module has determined that devices predicted to fail cannot be remediated automatically, since too many OSDs would be removed from the cluster to ensure performance and availability. Prevent data integrity issues by adding new OSDs so that data may be relocated.

CephDeviceFailureRelocationIncomplete (osd)

Device failure is predicted, but unable to relocate data

The device health module has determined that one or more devices will fail soon, but the normal process of relocating the data on the device to other OSDs in the cluster is blocked. Ensure that the cluster has available free space. It may be necessary to add capacity to the cluster to allow data from the failing device to successfully migrate, or to enable the balancer.

CephOSDReadErrors (osd)

Device read errors detected

An OSD has encountered read errors, but the OSD has recovered by retrying the reads. This may indicate an issue with hardware or the kernel.

CephFilesystemDamaged (mds)

CephFS filesystem is damaged.

Filesystem metadata has been corrupted. Data may be inaccessible. Analyze metrics from the MDS daemon admin socket, or escalate to support.

CephFilesystemOffline (mds)

CephFS filesystem is offline

All MDS ranks are unavailable. The MDS daemons managing metadata are down, rendering the filesystem offline.

CephFilesystemDegraded (mds)

CephFS filesystem is degraded

One or more metadata daemons (MDS ranks) are failed or in a damaged state. At best the filesystem is partially available, at worst the filesystem is completely unusable.

CephFilesystemMDSRanksLow (mds)

Ceph MDS daemon count is lower than configured

The filesystem’s ‘max_mds’ setting defines the number of MDS ranks in the filesystem. The current number of active MDS daemons is less than this value.

CephFilesystemInsufficientStandby (mds)

Ceph filesystem standby daemons too few

The minimum number of standby daemons required by standby_count_wanted is less than the current number of standby daemons. Adjust the standby count or increase the number of MDS daemons.

CephFilesystemFailureNoStandby (mds)

MDS daemon failed, no further standby available

An MDS daemon has failed, leaving only one active rank and no available standby. Investigate the cause of the failure or add a standby MDS.

CephFilesystemReadOnly (mds)

CephFS filesystem in read only mode due to write error(s)

The filesystem has switched to READ ONLY due to an unexpected error when writing to the metadata pool. Either analyze the output from the MDS daemon admin socket, or escalate to support.

CephMgrModuleCrash (mgr)

A manager module has recently crashed

One or more mgr modules have crashed and have yet to be acknowledged by an administrator. A crashed module may impact functionality within the cluster. Use the ‘ceph crash’ command to determine which module has failed, and archive it to acknowledge the failure.

CephPGsDamaged (pgs)

Placement group damaged, manual intervention needed

During data consistency checks (scrub), at least one PG has been flagged as being damaged or inconsistent. Check to see which PG is affected, and attempt a manual repair if necessary. To list problematic placement groups, use ‘rados list-inconsistent-pg <pool_name>’. To repair PGs use the ‘ceph pg repair <pg_num>’ command.

CephPGRecoveryAtRisk (pgs)

OSDs are too full for recovery

Data redundancy is at risk since one or more OSDs are at or above the ‘full’ threshold. Add more capacity to the cluster, restore down/out OSDs, or delete unwanted data.

CephPGUnavilableBlockingIO (pgs)

PG is unavailable, blocking I/O

Data availability is reduced, impacting the cluster’s ability to service I/O. One or more placement groups (PGs) are in a state that blocks I/O.

CephPGBackfillAtRisk (pgs)

Backfill operations are blocked due to lack of free space

Data redundancy may be at risk due to lack of free space within the cluster. One or more OSDs have reached the ‘backfillfull’ threshold. Add more capacity, or delete unwanted data.

CephPGNotScrubbed (pgs)

Placement group(s) have not been scrubbed

One or more PGs have not been scrubbed recently. Scrubs check metadata integrity, protecting against bit-rot. They check that metadata is consistent across data replicas. When PGs miss their scrub interval, it may indicate that the scrub window is too small, or PGs were not in a ‘clean’ state during the scrub window. You can manually initiate a scrub with ‘ceph pg scrub <pg_id>’.

CephPGsHighPerOSD (pgs)

Placement groups per OSD is too high

The number of placement groups per OSD is too high (exceeds the mon_max_pg_per_osd setting).
Check that the pg_autoscaler has not been disabled for any pools with ‘ceph osd pool autoscale-status’, and that the profile selected is appropriate. You may also adjust the target_size_ratio of a pool to guide the autoscaler based on the expected relative size of the pool (‘ceph osd pool set cephfs.cephfs.meta target_size_ratio .1’) or set the pg_autoscaler mode to ‘warn’ and adjust pg_num appropriately for one or more pools.

CephPGNotDeepScrubbed (pgs)

Placement group(s) have not been deep scrubbed

One or more PGs have not been deep scrubbed recently. Deep scrubs protect against bit-rot. They compare data replicas to ensure consistency. When PGs miss their deep scrub interval, it may indicate that the window is too small or PGs were not in a ‘clean’ state during the deep-scrub window.

CephPoolBackfillFull (pools)

Free space in a pool is too low for recovery/backfill

A pool is approaching the near full threshold, which will prevent recovery/backfill operations from completing. Consider adding more capacity.

CephPoolFull (pools)

Pool is full - writes are blocked

A pool has reached its MAX quota, or OSDs supporting the pool have reached the FULL threshold. Until this is resolved, writes to the pool will be blocked. Increase the pool’s quota, or add capacity to the cluster first and then increase the pool’s quota (e.g. ceph osd pool set-quota <pool_name> max_bytes <bytes>).

CephPoolNearFull (pools)

One or more Ceph pools are nearly full

A pool has exceeded the warning (percent full) threshold, or OSDs supporting the pool have reached the NEARFULL threshold. Writes may continue, but you are at risk of the pool going read-only if more capacity isn’t made available. Determine the affected pool with ‘ceph df detail’, looking at QUOTA BYTES and STORED. Increase the pool’s quota, or add capacity to the cluster first and then increase the pool’s quota (e.g. ceph osd pool set-quota <pool_name> max_bytes <bytes>). Also ensure that the balancer is active.

CephadmUpgradeFailed (cephadm)

Ceph version upgrade has failed

The cephadm cluster upgrade process has failed. The cluster remains in an undetermined state. Please review the cephadm logs to understand the nature of the issue.

CephadmDaemonFailed (cephadm)

A Ceph daemon managed by cephadm is down

A daemon managed by cephadm is no longer active. Determine which daemon is down with ‘ceph health detail’. You may start daemons with ‘ceph orch daemon start <daemon_id>’.

CephadmPaused (cephadm)

Orchestration tasks via cephadm are PAUSED

Cluster management has been paused manually. This prevents the orchestrator from performing service management and reconciliation. If this is not intentional, resume cephadm operations with ‘ceph orch resume’.

CephObjectMissing (rados)

Object(s) marked UNFOUND

The latest version of a RADOS object cannot be found, even though all OSDs are up. I/O requests for this object from clients will block (hang). Resolving this issue may require the object to be manually rolled back to a prior version and verified.

CephDaemonCrash (generic)

One or more Ceph daemons have crashed, and are pending acknowledgement

One or more daemons have crashed recently, and need to be acknowledged. This notification ensures that software crashes do not go unseen. To acknowledge a crash, use the ‘ceph crash archive <crash_id>’ command.

CephPGsInactive (pgs)

One or more placement groups are inactive

One or more PGs have been inactive for more than 5 minutes in the affected pool. Inactive placement groups are not able to serve read/write requests.

CephPGImbalance (osd)

PGs are not balanced across OSDs

An OSD’s PG count deviates by more than 30% from the average PG count across the cluster.

CephOSDDownHigh (osd)

More than 10% of OSDs are down

More than 10% of the cluster’s OSDs are down. The alert lists the affected OSDs and the hosts they run on.

CephOSDFlapping (osd)

Network issues are causing OSDs to flap (mark each other down)

An OSD was repeatedly marked down and back up (once a minute for 5 minutes). This may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster network, or the public network if no cluster network is deployed. Check the network stats on the listed host(s).

CephPGsUnclean (pgs)

One or more placement groups are marked unclean

One or more PGs have been unclean for more than 15 minutes in the affected pool. Unclean PGs have not recovered from a previous failure.

CephSlowOps (healthchecks)

OSD operations are slow to complete

OSD requests are taking too long to process (osd_op_complaint_time exceeded)
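
A minimal sketch of this check using the ceph_healthcheck_slow_ops gauge documented earlier (the shipped rule’s ‘for’ duration may differ):

    ceph_healthcheck_slow_ops > 0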

CephPoolGrowthWarning (pools)

Pool growth rate may soon exceed capacity

A pool is on track to be full in less than 5 days, assuming the average fill-up rate of the past 48 hours.
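
The ‘full in less than 5 days’ projection maps naturally onto PromQL’s predict_linear over the 48-hour history mentioned above. A hedged sketch, assuming ceph_pool_percent_used is expressed as a percentage and treating 95% as the ‘full’ threshold:

    predict_linear(ceph_pool_percent_used[2d], 5 * 24 * 3600) >= 95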

CephMonDown (mon)

One or more monitors down

One or more monitors are down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. The alert lists the affected monitors and the hosts they run on.

CephHealthError (cluster health)

Ceph is in the ERROR state

The cluster state has been HEALTH_ERROR for more than 5 minutes. Please check ‘ceph health detail’ for more information.

CephHealthWarning (cluster health)

Ceph is in the WARNING state

The cluster state has been HEALTH_WARN for more than 15 minutes. Please check ‘ceph health detail’ for more information.

CephRGWMultisitePollError

Unsuccessful Replication Log Request Errors Threshold Exceeded

Unsuccessful replication log request errors threshold has been exceeded. The threshold is defined as 2 errors per 15min

CephRGWMultisitePollErrorCritical

Critical: Unsuccessful Replication Log Request Errors Threshold Exceeded

Critical: Unsuccessful replication log request errors threshold has been exceeded. The threshold is defined as 50 errors per 15min

CephRGWMultisiteFetchError

Unsuccessful Object Replications from Source Zone Threshold Exceeded

Unsuccessful Object Replications from source zone threshold has been exceeded. The threshold is defined as 2 errors per 15min

CephRGWMultisiteFetchErrorCritical

Critical: Unsuccessful Object Replications from Source Zone Threshold Exceeded

Critical: Unsuccessful Object Replications from source zone threshold has been exceeded. The threshold is defined as 50 errors per 15min

CephRGWMultisitePollLatency

Poll Request Latency Threshold Exceeded

Latency for poll request threshold exceeded. The threshold is defined as 600s latency per 15min
