Peer Set Utilization
Each file's data is spread across multiple peer sets, and the cluster automatically distributes the data for different files throughout the cluster's peer sets. Metadata for files and directories is independently distributed among different peer sets using a hash algorithm for optimum performance and protection.
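As a rough mental model, hash-based placement can be pictured with a short sketch. This is only an illustration; the actual RAINcloudOS hash algorithm is internal, and the function name and peer set numbering below are assumptions made for the example:

```python
import hashlib

def metadata_peer_set(path: str, num_peer_sets: int) -> int:
    """Illustrative only: map a file or directory path to a peer set by
    hashing the path and reducing it modulo the number of peer sets.
    RAINcloudOS's real placement algorithm is not published here."""
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_peer_sets

# Example: metadata for different paths spreads across the peer sets.
for p in ["/share1/projects", "/share1/projects/report.doc", "/share2/media"]:
    print(p, "-> peer set", metadata_peer_set(p, num_peer_sets=4))
```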
Peer Set Basics
New drives are initially configured automatically as spare drives. Subsequently, if enough spare drives exist on different nodes to construct new peer sets while still satisfying the spare count setting, the SnapScale automatically creates new peer sets and expands cluster storage space.
Drives in a cluster do not all need to have the same capacity, but drives in a given peer set
should have the same capacity or space is wasted on the larger drives.
The following points must be observed regarding the drives used in the cluster:
• The drives in a cluster must all be the same type (such as SAS) and the same rotational speed.
• The storage capacity of a peer set is limited by the smallest-capacity drive in the peer set (see the sketch after this list).
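The capacity rule in the second point is easy to quantify. The sketch below (function name and units are illustrative, not part of RAINcloudOS) computes the usable capacity of a peer set and the space wasted on larger members:

```python
def peer_set_capacity(drive_sizes_gb):
    """A peer set holds a full replica of its data on every member drive,
    so usable capacity is bounded by the smallest member; any extra space
    on larger members is wasted."""
    usable = min(drive_sizes_gb)
    wasted = sum(size - usable for size in drive_sizes_gb)
    return usable, wasted

# Example: a 2x peer set mixing a 4000 GB drive with a 2000 GB drive.
usable, wasted = peer_set_capacity([4000, 2000])
print(f"usable: {usable} GB, wasted: {wasted} GB")
# usable: 2000 GB, wasted: 2000 GB
```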
If a peer set drive fails, RAINcloudOS continues to serve data reads and writes for that peer set from another member of the peer set as long as the peer set is not offline. Clients currently using data on the peer set continue to operate as-is.
Data Replication Count
The Data Replication Count is an administrator-specified, cluster-wide count of the degree of data redundancy on the cluster. It can be either 2x or 3x and determines the number of drives (2 or 3) that make up each peer set.
Hot Spares
Each node can have a number of hot spares available in the event of a drive failure. The total number of hot spares for the cluster is user-selectable, and a suggested number of hot spares for various node sizes is provided. If a peer set member drive fails, data from a healthy drive of that peer set on another node is resynced onto an available spare on any node that does not hold another active member of that peer set, and the spare then becomes a member of that peer set.
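The placement constraint in this repair path (the replacement spare must sit on a node with no other active member of the peer set) can be sketched as a simple filter. The data structures here are assumptions for illustration, not the actual repair logic:

```python
def eligible_spares(spares, member_nodes):
    """Illustrative filter: a spare may replace a failed peer set member
    only if its node holds no other active member of that peer set, so no
    two members of a peer set ever share a node."""
    return [s for s in spares if s["node"] not in member_nodes]

# Example: the peer set's surviving members live on node1 and node3.
spares = [{"id": "d7", "node": "node1"},
          {"id": "d9", "node": "node2"},
          {"id": "d4", "node": "node3"}]
print(eligible_spares(spares, member_nodes={"node1", "node3"}))
# Only the spare on node2 qualifies.
```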
Drives added to nodes, either as new additions or as replacements for failed drives, are automatically configured as spares. If enough spares exist across different nodes to satisfy both the Data Replication Count and the spare count setting, the cluster automatically creates a new peer set out of the available spare drives.
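This auto-creation rule can be sketched the same way: a new peer set forms only if spares on enough distinct nodes can supply the Data Replication Count while the configured spare count is still met afterward. The function name and the cluster-wide interpretation of the spare count below are assumptions for illustration:

```python
def can_create_peer_set(spares_per_node, replication_count, spare_count_setting):
    """Illustrative check: a new peer set needs one spare on each of
    `replication_count` distinct nodes, and enough spares must remain
    afterward to honor the cluster's configured spare count."""
    total_spares = sum(spares_per_node.values())
    nodes_with_spares = sum(1 for count in spares_per_node.values() if count > 0)
    return (nodes_with_spares >= replication_count
            and total_spares - replication_count >= spare_count_setting)

# Example: 4 spares across 3 nodes, 2x replication, 2 spares reserved.
print(can_create_peer_set({"node1": 2, "node2": 1, "node3": 1},
                          replication_count=2, spare_count_setting=2))  # True
```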
The following table describes peer set statuses, the corresponding failure types, and data availability:

Peer Set Status: Degraded – Cannot repair; spares on same node
Failure Type: The peer set cannot be repaired because the only eligible spares are located on the same node as an active member of the peer set.
Data Availability: Data is fully available for read and write.

Peer Set Status: Failed
Failure Type: All drives in the peer set have failed.
Data Availability: No availability. Contact Overland Support.

Peer Set Status: Initializing
Failure Type: The peer set is being created or initialized.
Data Availability: Data is not yet available.

Peer Set Status: Inconsistent
Failure Type: The peer set has more members than the Data Replication Count.
Data Availability: Contact Overland Support.