Managing Software RAIDs 6 and 10 with mdadm
The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus
0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O
capability.
Table 7-2   Nested RAID Levels

RAID Level:  10 (1+0)
Description:  RAID 0 (stripe) built with RAID 1 (mirror) arrays
Performance and Fault Tolerance:  RAID 1+0 provides high levels of I/O performance, data
redundancy, and disk fault tolerance. Because each member device in the RAID 0 is mirrored
individually, multiple disk failures can be tolerated and data remains available as long as the
disks that fail are in different mirrors. You can optionally configure a spare for each underlying
mirrored array, or configure a spare to serve a spare group that serves all mirrors (see the
/etc/mdadm.conf sketch that follows this table).

RAID Level:  10 (0+1)
Description:  RAID 1 (mirror) built with RAID 0 (stripe) arrays
Performance and Fault Tolerance:  RAID 0+1 provides high levels of I/O performance and data
redundancy, but slightly less fault tolerance than 1+0. If multiple disks fail on one side of the
mirror, the other mirror remains available. However, if disks are lost concurrently on both sides
of the mirror, all data is lost.
This solution offers less disk fault tolerance than a 1+0 solution, but if you need to perform
maintenance or maintain the mirror at a different site, you can take an entire side of the mirror
offline and still have a fully functional storage device. Also, if you lose the connection between
the two sites, either site operates independently of the other. That is not true if you stripe the
mirrored segments (as in 1+0), because the mirrors are managed at a lower level.
If a device fails, the RAID 0 array on that side of the mirror fails because RAID 0 is not
fault-tolerant. Create a new RAID 0 to replace the failed side, then resynchronize the mirrors
(see the command sketch that follows this table).
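The shared-spare option mentioned for 1+0 above can be expressed in /etc/mdadm.conf. A minimal
sketch, assuming two mirrors /dev/md0 and /dev/md1 and placeholder UUIDs that you would take
from the output of mdadm --detail:

  # Both mirrors join the spare group "mirrors", so a spare attached to
  # either array can be moved to whichever array loses a device.
  ARRAY /dev/md0 UUID=<uuid-of-md0> spare-group=mirrors
  ARRAY /dev/md1 UUID=<uuid-of-md1> spare-group=mirrors

The spare is moved only while mdadm is running in monitor mode (for example, mdadm --monitor
--scan).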
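Replacing a failed side of a 0+1, as described above, might look like the following sketch. It
assumes /dev/md2 is the RAID 1 built from the stripes /dev/md0 and /dev/md1, that /dev/md0 has
failed, and that /dev/sdf1 is a hypothetical replacement partition:

  # Drop the failed stripe out of the mirror and stop it.
  mdadm /dev/md2 --fail /dev/md0 --remove /dev/md0
  mdadm --stop /dev/md0

  # Re-create the RAID 0 side with the replacement disk, then add it back;
  # the mirror resynchronizes the new side from the surviving stripe.
  mdadm --create /dev/md0 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdf1
  mdadm /dev/md2 --add /dev/md0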
7.2.2 Creating Nested RAID 10 (1+0) with mdadm

A nested RAID 1+0 is built by creating two or more RAID 1 (mirror) devices, then using them as
component devices in a RAID 0.

IMPORTANT: If you need to manage multiple connections to the devices, you must configure
multipath I/O before configuring the RAID devices. For information, see Chapter 5, "Managing
Multipath I/O for Devices," on page 43.

The procedure in this section uses the device names shown in the following table. Make sure to
modify the device names with the names of your own devices.
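A minimal sketch of the procedure, assuming four raw partitions /dev/sdb1 through /dev/sde1 and
the array names /dev/md0, /dev/md1, and /dev/md2 (substitute the names of your own devices):

  # Create two RAID 1 (mirror) arrays from two partitions each.
  mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

  # Stripe the two mirrors together as the RAID 0 top level (64 KB chunk size).
  mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1

  # Verify the nested array before creating a file system on /dev/md2.
  cat /proc/mdstat
  mdadm --detail /dev/md2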