Managing Software RAIDs 6 and 10 with mdadm
The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus
0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O
capability.
Table 7-2   Nested RAID Levels

RAID Level: 10 (1+0)
Description: RAID 0 (stripe) built with RAID 1 (mirror) arrays
Performance and Fault Tolerance: RAID 1+0 provides high levels of I/O performance, data redundancy, and disk fault tolerance. Because each member device in the RAID 0 is mirrored individually, multiple disk failures can be tolerated and data remains available as long as the disks that fail are in different mirrors. You can optionally configure a spare for each underlying mirrored array, or configure a spare to serve a spare group that serves all mirrors (a configuration sketch follows this table).

RAID Level: 10 (0+1)
Description: RAID 1 (mirror) built with RAID 0 (stripe) arrays
Performance and Fault Tolerance: RAID 0+1 provides high levels of I/O performance and data redundancy, but slightly less fault tolerance than a 1+0. If multiple disks fail on one side of the mirror, the other mirror is still available. However, if disks are lost concurrently on both sides of the mirror, all data is lost.
Although this solution offers less disk fault tolerance than a 1+0 solution, if you need to perform maintenance or maintain the mirror at a different site, you can take an entire side of the mirror offline and still have a fully functional storage device. Also, if you lose the connection between the two sites, either site operates independently of the other. That is not true if you stripe the mirrored segments, because the mirrors are managed at a lower level.
If a device fails, the mirror on that side fails because RAID 1 is not fault-tolerant. Create a new RAID 0 to replace the failed side, then resynchronize the mirrors.
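The shared-spare option mentioned for 1+0 can be expressed in /etc/mdadm.conf. The following is a minimal sketch, not taken from this guide's procedure: the partitions /dev/sdb1 through /dev/sdf1, the component mirrors /dev/md0 and /dev/md1, and the spare-group name are hypothetical placeholders.

# Hypothetical /etc/mdadm.conf entries. Both RAID 1 members of the nested
# RAID 0 share one spare group, so a hot spare attached to either mirror can
# be moved by the mdadm monitor to whichever mirror loses a disk.
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1 spare-group=nested10
ARRAY /dev/md1 devices=/dev/sdd1,/dev/sde1,/dev/sdf1 spare-group=nested10

With these entries in place, run the monitor so that spares are moved automatically when a mirror degrades:

mdadm --monitor --scan --daemonise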
7.2.2 Creating Nested RAID 10 (1+0) with mdadm
A nested RAID 1+0 is built by creating two or more RAID 1 (mirror) devices, then using them as
component devices in a RAID 0.
IMPORTANT: If you need to manage multiple connections to the devices, you must configure multipath I/O before configuring the RAID devices. For information, see Chapter 5, “Managing Multipath I/O for Devices,” on page 41.

The following procedure uses the device names shown in the following table. Make sure to substitute the names of your own devices.
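As a minimal sketch of that procedure, assuming hypothetical partitions /dev/sdb1 through /dev/sde1 on four different disks, two mirrors /dev/md0 and /dev/md1, and a final striped device /dev/md2 (substitute your own device names and chunk size):

# Create the two RAID 1 (mirror) component devices
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

# Stripe the two mirrors together as a RAID 0 to form the nested RAID 1+0
mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1

# Verify that the arrays assembled and are syncing or clean
cat /proc/mdstat

The resulting /dev/md2 device can then be formatted and mounted like any other block device.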