• RAID 1: mirroring without parity or striping
RAID 1 uses mirroring so that data written to one drive is simultaneously written to another drive. This is
good for small databases or other applications that require small capacity but complete data redundancy.
RAID 1 provides fault tolerance from disk errors or failures and continues to operate as long as at least
one drive in the mirrored set is functioning. With appropriate operating system support, there can be
increased read performance and only a minimal write performance reduction.
RAID 1 requires a minimum of two hard disk drives.
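The mirroring behavior described above can be sketched as follows (a toy model for illustration, not actual controller firmware; the `Raid1` class and its method names are hypothetical):

```python
# Toy model of RAID 1 mirroring: every write goes to both drives;
# a read can be served by either mirror, so the array keeps working
# as long as at least one drive in the mirrored set survives.

class Raid1:
    def __init__(self):
        self.drives = [dict(), dict()]   # two mirrored "drives"
        self.failed = [False, False]

    def write(self, block, data):
        # The same data is written simultaneously to each surviving mirror.
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:
                drive[block] = data

    def read(self, block):
        # Serve the read from any surviving mirror.
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:
                return drive[block]
        raise IOError("both mirrors failed: array offline")
```

Because both mirrors hold identical data, marking one drive as failed does not affect reads, which is the fault tolerance described above.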
• RAID 5: block-level striping with distributed parity
RAID 5 uses disk striping with parity data distributed across all drives to provide high data throughput, especially for small random-access workloads. Because the parity is distributed along with the data, the array can continue to operate with any single drive missing: the failed drive must be replaced, but a single drive failure does not destroy the array. After a drive fails, subsequent read operations are reconstructed from the distributed parity, so the failure is masked from the end user. The array is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive; a second drive failure during that window results in data loss. A single drive failure also reduces the performance of the entire set until the failed drive has been replaced and rebuilt.
RAID 5 requires a minimum of three hard disk drives.
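The parity that makes this reconstruction possible is a byte-wise XOR across the data blocks in a stripe: XOR-ing the surviving blocks together yields the missing one. A minimal sketch in Python (illustrative only; `make_stripe` and `rebuild` are hypothetical helper names, and real controllers rotate the parity block across drives rather than keeping it last):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def make_stripe(data_blocks):
    """Return the data blocks plus their XOR parity block."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def rebuild(stripe, failed_index):
    """Recompute the block lost on the failed drive from the survivors.

    XOR is self-inverse, so XOR-ing every surviving block (data and
    parity alike) reproduces exactly the missing block.
    """
    survivors = [b for i, b in enumerate(stripe) if i != failed_index]
    return xor_blocks(survivors)
```

This also shows why reads degrade after a failure: each read of the failed drive's data requires reading every surviving drive in the stripe.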
• RAID 6: block-level striping with double distributed parity
RAID 6 uses disk striping with two independent parity blocks per stripe, distributed across all drives. A RAID 6 virtual drive can survive the loss of any two drives without losing data. A RAID 6 drive group is similar to a RAID 5 drive group: blocks of data and parity information are written across all drives, and the parity information is used to recover the data if one or two drives fail in the drive group.
RAID 6 requires a minimum of three hard disk drives.
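Surviving two simultaneous failures requires the second parity to be independent of the first, which is why it cannot be a plain XOR. The sketch below follows the common P/Q scheme over the Galois field GF(2^8) as an assumed model for illustration; the function names and the byte-at-a-time scope are mine, and real controllers compute this per block with optimized table lookups:

```python
# Illustrative P/Q dual parity on single bytes. P is plain XOR; Q weights
# each data byte by g^i (generator g = 2) in GF(2^8), so P and Q give two
# independent equations and any two erased bytes can be solved for.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the 0x11D reduction polynomial."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^254 = a^(-1) in GF(2^8)

def pq_parity(data):
    """Compute the P (XOR) and Q (weighted) parity bytes for one stripe."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow(2, i), d)
    return p, q

def recover_two(data, x, y, p, q):
    """Recover the erased bytes at positions x < y from P and Q.

    Only the surviving positions of `data` are read; x and y are
    treated as erased.
    """
    px = qx = 0
    for i, d in enumerate(data):
        if i in (x, y):
            continue
        px ^= d
        qx ^= gf_mul(gf_pow(2, i), d)
    pxy, qxy = p ^ px, q ^ qx     # d_x ^ d_y  and  g^x*d_x ^ g^y*d_y
    gyx = gf_pow(2, y - x)
    denom = gf_inv(gyx ^ 1)
    dx = gf_mul(denom, gf_mul(gyx, pxy) ^ gf_mul(gf_inv(gf_pow(2, x)), qxy))
    dy = dx ^ pxy
    return dx, dy
```

A single failure is handled as in RAID 5 using P alone; recovering two failures requires solving the pair of equations, as `recover_two` does.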
• RAID 10: a combination of RAID 0 and RAID 1
RAID 10 consists of data striped across mirrored spans. A RAID 10 drive group is a spanned drive group that creates a striped set from a series of mirrored drives. RAID 10 allows a maximum of eight spans. Each span must contain an even number of drives, and the mirrored RAID 1 virtual drives must all use the same stripe size. RAID 10 provides high data throughput and complete data redundancy, but uses a larger number of spans.
RAID 10 requires a minimum of four hard disk drives, and the total number of drives must be even, for example, six or eight hard disk drives.
For detailed information about RAID, refer to “Introduction to RAID” in the MegaRAID SAS Software User Guide on the documentation DVD that comes with your storage product.
Configuring RAID using the Lenovo ThinkServer Deployment Manager program
Deployment Manager simplifies the process of configuring supported RAID. The help system for the program
can be accessed directly from the program interface.
Deployment Manager has the following features for RAID configuration:
• Works with all supported RAID controllers
• Automatically detects hardware and lists all supported RAID configurations
• Configures one or more disk arrays per controller depending on the number of drives attached to the
controller and the RAID level selected
• Supports hot-spare drives
• Creates a RAID response file that can be used to configure RAID controllers on similarly configured Lenovo storage products