Device Mapper Multipathing

Device mapper multipathing (DM-Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths. This chapter provides a summary of the features of DM-Multipath that are new for the initial release of Ubuntu Server 12.04. Following that, this chapter provides a high-level overview of DM-Multipath and its components, as well as an overview of DM-Multipath setup.

New and Changed Features for Ubuntu Server 12.04

Migrated from multipath-0.4.8 to multipath-0.4.9

Migration from 0.4.8

The priority checkers are no longer run as standalone binaries, but as shared libraries. The key value name for this feature has also changed slightly: copy the value of the attribute named prio_callout into a new prio attribute, changing the argument to the bare name of the priority checker; a system path is no longer necessary. Example conversion:

device {
        vendor "NEC"
        product "DISK ARRAY"
        # legacy 0.4.8 attribute: callout binary plus device path
        prio_callout mpath_prio_alua /dev/%n
        # equivalent 0.4.9 attribute: checker name only
        prio    alua
}

See the table Priority Checker Conversion below for a complete listing.

v0.4.8                                      v0.4.9
prio_callout mpath_prio_emc /dev/%n         prio emc
prio_callout mpath_prio_alua /dev/%n        prio alua
prio_callout mpath_prio_netapp /dev/%n      prio netapp
prio_callout mpath_prio_rdac /dev/%n        prio rdac
prio_callout mpath_prio_hp_sw /dev/%n       prio hp_sw
prio_callout mpath_prio_hds_modular %b      prio hds

Priority Checker Conversion

Since the multipath config file parser essentially parses all key/value pairs it finds and then makes use of them, it is safe for both prio_callout and prio to coexist, and it is recommended that you insert the prio attribute before beginning the migration. Afterwards, you can safely delete the legacy prio_callout attribute without interrupting service.

Overview

DM-Multipath can be used to provide:

  • Redundancy. DM-Multipath can provide failover in an active/passive configuration. In an active/passive configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable, switch, or controller) fails, DM-Multipath switches to an alternate path.

  • Improved Performance. DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and dynamically re-balance the load. The sketch after this list illustrates the two modes.
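Which of the two modes applies is controlled by the path_grouping_policy attribute in multipath.conf. The following is a minimal sketch rather than tuning advice for any particular array: failover groups one path per priority group (active/passive), while multibus places all paths in a single group (active/active):

defaults {
        # "failover": one path per priority group (active/passive)
        # "multibus": all paths in one priority group (active/active round-robin)
        path_grouping_policy failover
}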

Storage Array Overview

By default, DM-Multipath includes built-in support for the most common storage arrays. The supported devices can be found in the multipath.conf.defaults file. If your storage array supports DM-Multipath but is not configured by default in this file, you may need to add it to the DM-Multipath configuration file, multipath.conf; an example entry follows. For information on the DM-Multipath configuration file, see the section The DM-Multipath Configuration File. Some storage arrays require special handling of I/O errors and path switching; these require separate hardware handler kernel modules.
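As an illustration only, a new entry in multipath.conf might take the following shape; the vendor and product strings and the attribute values here are placeholders, not settings for any real hardware:

devices {
        device {
                vendor  "EXAMPLE"
                product "ARRAY"
                path_grouping_policy multibus
                prio    alua
        }
}

The vendor and product values must match the inquiry strings reported by the array, and the remaining attributes override the compiled-in defaults for devices that match.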

DM-Multipath Components

The DM-Multipath package consists of the following components:

  • dm_multipath kernel module: reroutes I/O and supports failover for paths and path groups.

  • multipath command: lists and configures multipath devices.

  • multipathd daemon: monitors paths; as paths fail and come back, it may initiate path group switches. It also allows interactive changes to multipath devices, and it must be restarted for changes to the multipath.conf file to take effect.

  • kpartx command: creates device mapper devices for the partitions on a device.

DM-Multipath Setup Overview

DM-Multipath includes compiled-in default settings that are suitable for common multipath configurations. Setting up DM-Multipath is often a simple procedure. The basic procedure for configuring your system with DM-Multipath is as follows; a command sketch follows the list:

  1. Install the multipath-tools and multipath-tools-boot packages

  2. Create an empty config file called /etc/multipath.conf

  3. If necessary, edit the multipath.conf configuration file to modify default values and save the updated file.

  4. Start the multipath daemon

  5. Update the initial ramdisk
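As a minimal sketch, the five steps correspond to the following commands in order, reusing the service and initramfs invocations shown later in this section (editor stands in for the text editor of your choice):

# apt-get install multipath-tools multipath-tools-boot
# touch /etc/multipath.conf
# editor /etc/multipath.conf
# systemctl start multipath-tools.service
# update-initramfs -u -k all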

For detailed setup instructions for multipath configuration, see the section Setting Up DM-Multipath.

Multipath Devices

Without DM-Multipath, each path from a server node to a storage controller is treated by the system as a separate device, even when the I/O path connects the same server node to the same storage controller. DM-Multipath provides a way of organizing the I/O paths logically, by creating a single multipath device on top of the underlying devices.

Multipath Device Identifiers

Each multipath device has a World Wide Identifier (WWID), which is guaranteed to be globally unique and unchanging. By default, the name of a multipath device is set to its WWID. Alternatively, you can set the user_friendly_names option in the multipath configuration file, which causes DM-Multipath to use a node-unique alias of the form mpathn as the name.

For example, a node with two HBAs attached to a storage controller with two ports via a single unzoned FC switch sees four devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. DM-Multipath creates a single device with a unique WWID that reroutes I/O to those four underlying devices according to the multipath configuration. When the user_friendly_names configuration option is set to yes, the name of the multipath device is set to mpathn.

When new devices are brought under the control of DM-Multipath, they may be seen in two different places under the /dev directory: /dev/mapper/mpathn and /dev/dm-n.

  • The devices in /dev/mapper are created early in the boot process. Use these devices to access the multipathed devices, for example when creating logical volumes.

  • Any devices of the form /dev/dm-n are for internal use only and should never be used.

For information on the multipath configuration defaults, including the user_friendly_names configuration option, see the section Configuration File Defaults. You can also set the name of a multipath device to a name of your choosing by using the alias option in the multipaths section of the multipath configuration file, as illustrated below. For information on the multipaths section of the multipath configuration file, see the section Multipaths Device Configuration Attributes.
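For illustration, an alias entry takes the following shape; the WWID shown is a made-up placeholder, and yellow is an arbitrary example name:

multipaths {
        multipath {
                wwid   360000000000000000e00000000000001
                alias  yellow
        }
}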

Consistent Multipath Device Names in a Cluster

When the user_friendly_names configuration option is set to yes, the name of the multipath device is unique to a node, but it is not guaranteed to be the same on all nodes using the multipath device. Similarly, if you set the alias option for a device in the multipaths section of the /etc/multipath.conf configuration file, the name is not automatically consistent across all nodes in the cluster. This should not cause any difficulties if you use LVM to create logical devices from the multipath device, but if you require that your multipath device names be consistent on every node, it is recommended that you leave the user_friendly_names option set to no and that you not configure aliases for the devices. By default, if you do not set user_friendly_names to yes or configure an alias for a device, the device name will be the WWID for the device, which is always the same.

If you want the system-defined user-friendly names to be consistent across all nodes in the cluster, however, you can follow this procedure:

  1. Set up all of the multipath devices on one machine.

  2. Disable all of your multipath devices on your other machines by running the following commands:

    # systemctl stop multipath-tools.service
    # multipath -F
    
  3. Copy the /etc/multipath/bindings file from the first machine to all the other machines in the cluster.

  4. Re-enable the multipathd daemon on all the other machines in the cluster by running the following command:

    # systemctl start multipath-tools.service
    

If you add a new device, you will need to repeat this process.

Similarly, if you configure an alias for a device that you would like to be consistent across the nodes in the cluster, you should ensure that the /etc/multipath.conf file is the same for each node in the cluster by following the same procedure:

  1. Configure the aliases for the multipath devices in the multipath.conf file on one machine.

  2. Disable all of your multipath devices on your other machines by running the following commands:

    # systemctl stop multipath-tools.service
    # multipath -F
    
  3. Copy the /etc/multipath.conf file from the first machine to all the other machines in the cluster.

  4. Re-enable the multipathd daemon on all the other machines in the cluster by running the following command:

    # systemctl start multipath-tools.service
    

When you add a new device, you will need to repeat this process.

Multipath Device Attributes

In addition to the user_friendly_names and alias options, a multipath device has numerous attributes. You can modify these attributes for a specific multipath device by creating an entry for that device in the multipaths section of the multipath configuration file. For information on the multipaths section of the multipath configuration file, see the section Multipaths Device Configuration Attributes.

Multipath Devices in Logical Volumes

After creating multipath devices, you can use the multipath device names just as you would use a physical device name when creating an LVM physical volume. For example, if /dev/mapper/mpatha is the name of a multipath device, the following command will mark /dev/mapper/mpatha as a physical volume.

# pvcreate /dev/mapper/mpatha

You can use the resulting LVM physical device when you create an LVM volume group just as you would use any other LVM physical device.
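For instance, building on the physical volume created above, a volume group and a logical volume could be created as follows; the names vg01 and lv01 and the 10G size are arbitrary examples:

# vgcreate vg01 /dev/mapper/mpatha
# lvcreate -n lv01 -L 10G vg01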

Note

If you attempt to create an LVM physical volume on a whole device on which you have configured partitions, the pvcreate command will fail.

When you create an LVM logical volume that uses active/passive multipath arrays as the underlying physical devices, you should include filters in lvm.conf to exclude the disks that underlie the multipath devices. If these devices are not filtered, then whenever LVM scans a passive path, the array may automatically switch the active path to the passive path on receiving the I/O, causing multipath to fail over and fail back. For active/passive arrays that require a command to make the passive path active, LVM prints a warning message when this occurs. To filter all SCSI devices in the LVM configuration file (lvm.conf), include the following filter in the devices section of the file.

filter = [ "r/block/", "r/disk/", "r/sd.*/", "a/.*/" ]
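In context, the devices section of the file would look as follows; the comment is an explanatory addition, not part of the stock file:

devices {
        # reject the underlying SCSI paths; the final "a/.*/" accepts
        # everything that remains, including the /dev/mapper devices
        filter = [ "r/block/", "r/disk/", "r/sd.*/", "a/.*/" ]
}

Filter rules are applied in order, and the first matching pattern wins: the three r (reject) patterns hide the raw paths, and a/.*/ accepts everything else.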

After updating /etc/lvm/lvm.conf, it is necessary to rebuild the initrd so that the updated file is copied there, where the filter matters the most during boot. Perform:

# update-initramfs -u -k all

Note

Every time either /etc/lvm/lvm.conf or /etc/multipath.conf is updated, the initrd should be rebuilt to reflect these changes. This is imperative when blacklists and filters are necessary to maintain a stable storage configuration.
