Draft Document for Review March 28, 2011 12:24 pm
IBM System Storage DS3500: Introduction and Implementation Guide
as well on RAID 5 logical drives, because parity data must be recalculated for each write
operation and then written to the appropriate drive in the array.
Use write caching on RAID 5 arrays, because each RAID 5 write is not complete until at least
two read I/Os (one data, one parity) and two write I/Os (one data, one parity) have occurred.
This write performance penalty can be mitigated by using battery-backed write cache. RAID 5
arrays with caching can perform as well as any other RAID level, and with some workloads,
the striping effect can provide better performance than RAID 1/10.
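The two reads and two writes behind each small RAID 5 write follow from the standard parity
update identity: new parity = old parity XOR old data XOR new data. The sketch below is
illustrative only (hypothetical helper names, not DS3500 firmware); it shows why the old data
and old parity blocks must be read before the new data and new parity can be written.

```python
# Sketch of a RAID 5 read-modify-write parity update.
# Illustrative only -- hypothetical helpers, not DS3500 firmware code.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def small_write_parity(old_data: bytes, old_parity: bytes,
                       new_data: bytes) -> bytes:
    """Return the new parity for a single-block RAID 5 update.

    A small write costs four disk I/Os:
      1. read old data      2. read old parity
      3. write new data     4. write new parity
    """
    # new_parity = old_parity XOR old_data XOR new_data
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)

# Demonstration with a 3-data-drive stripe: parity = d0 ^ d1 ^ d2.
d0, d1, d2 = b"\x01", b"\x02", b"\x04"
parity = xor_blocks(xor_blocks(d0, d1), d2)
new_d1 = b"\x0f"
new_parity = small_write_parity(d1, parity, new_d1)
# Verify against recomputing parity from scratch over all data drives:
assert new_parity == xor_blocks(xor_blocks(d0, new_d1), d2)
```

The point of the identity is that the controller never has to read the untouched data drives
for a small write; the trade-off is the four I/Os counted above.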
Applications that require high-throughput sequential write I/Os are one example of such a
workload. In this situation, a RAID 5 array can be configured to perform only one additional
parity write for a large write I/O by using “full stripe writes” (also known as “full stride
writes”), compared to the two writes per data drive (the drive itself and its mirror) that a
RAID 1 array needs for each write I/O.
You must configure the RAID 5 array with a certain number of physical drives to take
advantage of full stripe writes. Figure 3-17 illustrates this for a RAID 5 array with 8 total
drives (7 data + 1 parity).
Figure 3-17 Table illustrating the potential performance advantages of RAID 5 full stripe writes
Column A lists the number of drives that are being written to. Column Y is the number of write
I/Os for a RAID 1 array and is always twice the value of A. Columns B, C, D, and E contain the
numbers of read data/parity and write data/parity I/Os required for the number of drives being
written to. Notice that when all seven data drives are written, no read I/Os are required for
the RAID 5 array, because the full stripe is written at once. This substantially reduces the
total number of I/Os (column X) required for each write operation.
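The per-row I/O counts in Figure 3-17 can be reproduced with a short calculation. The sketch
below assumes (as the figure's values suggest) that the controller chooses whichever of the
two classic parity-update methods needs fewer I/Os; the function name is hypothetical.

```python
# Reproduce the Figure 3-17 I/O counts for an 8-drive RAID 5 array
# (7 data + 1 parity). Illustrative sketch, not controller firmware.

def raid5_write_ios(drives_written: int, data_drives: int = 7):
    """Return (B, C, D, E): read data, read parity, write data,
    and write parity I/O counts for one RAID 5 write.

    Two methods are possible; assume the cheaper one is used:
      - read-modify-write: read old data + old parity,
        then write new data + new parity
      - reconstruct-write: read the untouched data drives,
        then write new data + new parity
    """
    rmw = (drives_written, 1, drives_written, 1)
    reconstruct = (data_drives - drives_written, 0, drives_written, 1)
    # Prefer read-modify-write on ties, matching the figure's rows.
    return rmw if sum(rmw) <= sum(reconstruct) else reconstruct

for a in range(1, 8):
    b, c, d, e = raid5_write_ios(a)
    x = b + c + d + e          # total RAID 5 I/Os (column X)
    y = 2 * a                  # total RAID 1 I/Os (column Y, data + mirror)
    print(a, b, c, d, e, x, y)
```

For a full stripe write (seven data drives), the result is 0 reads, 7 data writes, and 1
parity write (8 I/Os total), versus 14 I/Os for RAID 1, which is the advantage the text
describes.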
This decrease in overhead read operations during full stripe writes is the advantage you are
looking for. When implementing this type of layout, be careful to ensure that your data
pattern does not change in a way that decreases its effectiveness. This layout can work well
in a large sequential write environment. However, because of the small segment size, reads
might suffer, so mixed I/O environments might not fare as well; testing is worthwhile if your
workload is write-heavy.
When the IBM DS system storage detects that it is receiving contiguous full stripe writes, it
switches internally to an even faster write method known as Fast Stripe Write Through. With
this method, the DS system storage uses the disk as the mirror device for the cache write,
which shortens the write process. This method can increase throughput by as much as 30% on
the system storage. However, it requires that the I/O pattern meets the following rules:
The values from Figure 3-17 are, row by row:

A: drives  B: RAID 5   C: RAID 5    D: RAID 5   E: RAID 5     X: total RAID 5  Y: total RAID 1
written    read data   read parity  write data  write parity  I/Os (B+C+D+E)   I/Os (A x 2)
                I/Os        I/Os         I/Os        I/Os
1          1           1            1           1             4                2
2          2           1            2           1             6                4
3          3           1            3           1             8                6
4          3           0            4           1             8                8
5          2           0            5           1             8                10
6          1           0            6           1             8                12
7          0           0            7           1             8                14