IMPORTANT: If you plan to use multipathing with software RAIDs, configure multipathing for the
devices first so that a multipath storage object is created for each device, then build the RAID on
top of those multipath objects.
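For example, after multipathing is configured you might verify the multipath maps and then build the RAID on the multipathed devices rather than on the underlying /dev/sd* paths. The following is only a command-line sketch (whether you create the RAID with EVMS or with mdadm, the principle is the same); the device IDs shown are placeholders:

   # List the multipath maps that DM-MPIO has created
   multipath -ll

   # Create a software RAID 1 on two multipathed devices (hypothetical IDs)
   mdadm --create /dev/md0 --level=1 --raid-devices=2 \
     /dev/disk/by-id/scsi-EXAMPLE-ID-1 /dev/disk/by-id/scsi-EXAMPLE-ID-2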
5.1.2 Benefits of Multipathing
Multipathing can help provide fault tolerance for the connection between the server and its storage
devices, typically in a SAN configuration. When multipathing is configured and running, it
automatically isolates and identifies device connection failures, and reroutes I/O to alternate
connections. A previously failed path is automatically reinstated when it becomes healthy again.
Typical connection problems involve faulty adapters, cables, or controllers. When you configure
multipath I/O for a device, the multipath driver monitors the active connection between devices.
When it detects I/O errors, the multipath driver fails over to a designated secondary path. When the
primary path recovers, control is automatically returned to the primary connection.
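The failover and failback behavior described above is controlled by settings in /etc/multipath.conf. The following is only an illustrative sketch of the kind of defaults involved; the values that are appropriate vary by storage subsystem, so check the settings recommended for your array:

   defaults {
       # Return I/O to the primary path as soon as it becomes healthy again
       failback              immediate
       # Keep paths in separate priority groups so I/O fails over instead of load balancing
       path_grouping_policy  failover
       # How often (in seconds) path health is checked
       polling_interval      5
   }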
5.1.3 Guidelines for Multipathing
Multipathing is available only under the following conditions:
• Multiple physical paths must exist between host bus adapters in the server and host bus
  controllers for the storage device.
• Device partitioning (disk carving) and hardware RAID configuration should be completed
  before configuring multipathing. If you change the partitioning on the running system, Device
  Mapper Multipath I/O (DM-MPIO) does not automatically detect and reflect these changes. It
  must be reinitialized, which usually requires a reboot.
• For software RAID devices, configure multipathing before creating the software RAID
  devices, because multipathing runs underneath the software RAID.
• In the initial release of SUSE Linux Enterprise Server 10, DM-MPIO is not available for the
  boot partition, because the boot loader cannot handle multipath I/O. Therefore, we recommend
  that you set up a separate boot (/boot) partition when using multipathing. This has been
  resolved in the latest package updates.
• The storage subsystem you use on the multipathed device must support multipathing. Most
  storage subsystems should work; however, they might require an appropriate entry in the
  DEVICE variable in the /etc/multipath.conf file. For a list of supported storage subsystems
  that allow multiple paths to be detected automatically, see “10.1 Supported Hardware” in the
  SUSE Linux Enterprise Server 10 Administration Guide (http://www.novell.com/documentation/sles10/sles_admin/data/sec_mpio_supported_hw.html).
• When configuring devices for multipathing, use the device names in the /dev/disk/by-id
  directory instead of the default device names (such as /dev/sd*), because the
  /dev/disk/by-id names persist across reboots. A brief example follows this list.
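As a brief illustration of the last guideline, the persistent names under /dev/disk/by-id can be used wherever you would otherwise give a /dev/sd* name. The device ID shown here is hypothetical:

   # The same disk, referenced two ways:
   #   non-persistent kernel name:  /dev/sdb
   #   persistent by-id name:       /dev/disk/by-id/scsi-EXAMPLE-ID   (hypothetical ID)

   # Example: create a file system on the persistent name instead of /dev/sdb
   mkfs.ext3 /dev/disk/by-id/scsi-EXAMPLE-ID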
5.1.4 Device Mapper
The default settings in mdadm.conf (and lvm.conf) do not work properly with multipathed
devices. By default, both md and LVM2 scan only physical devices and ignore symbolic links and
device-mapper devices.
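One common way to address this, sketched here under the assumption that all multipathed devices are referenced by their /dev/disk/by-id names, is to point md and LVM2 at those names explicitly; adjust the patterns to your own setup:

   # /etc/mdadm.conf -- scan the persistent by-id names instead of /dev/sd*
   DEVICE /dev/disk/by-id/*

   # /etc/lvm/lvm.conf -- accept by-id names, reject everything else
   filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]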