10400455-002
©2008-14 Overland Storage, Inc.
SnapScale/RAINcloudOS 4.1 Administrator’s Guide
1 - Overview
Peer sets are created using two or three drives (based on redundancy choices) located on
different nodes. Each peer set member has the same data and metadata as its peers.
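The peer-set placement rule above (each member drive on a different node) can be sketched as follows. This is an illustration only; the node and drive names are made up, and RAINcloudOS's actual placement logic is not documented here:

```python
from itertools import combinations

def form_peer_set(free_drives, replication_count):
    """Pick `replication_count` free drives, each on a distinct node.

    free_drives: list of (node_name, drive_name) tuples.
    Returns one valid peer set, or None if no placement is possible.
    """
    for combo in combinations(free_drives, replication_count):
        nodes = {node for node, _ in combo}
        if len(nodes) == replication_count:  # every member on a different node
            return list(combo)
    return None

drives = [("node1", "d0"), ("node1", "d1"), ("node2", "d0"), ("node3", "d0")]
peer_set = form_peer_set(drives, 3)
# The two drives on node1 cannot both join the same peer set.
```

Note that the two drives on node1 are never paired together: mirrored copies only provide redundancy if they survive the loss of a whole node.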
There are three different states for SnapScale nodes:
•  Uninitialized node – an independent node that has not yet been joined to a SnapScale cluster.
•  SnapScale node – a healthy node that is a member of a fully-configured SnapScale cluster. Both 2U nodes with up to 12 drives and 4U nodes with 36 drives are available.
•  Management node – a SnapScale node with special duties involved in managing the cluster. The Management node is selected automatically by RAINcloudOS when the cluster boots. Should that Management node fail, another currently available node is automatically chosen to become the new Management node. The Management node also hosts peer sets with metadata and data just like all other SnapScale nodes.
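The automatic failover behavior described for the Management node can be sketched as below. The election rule shown (promote any currently available node) is a deliberate simplification; the actual selection criteria used by RAINcloudOS are not documented here:

```python
def elect_management_node(nodes, current=None):
    """Keep the current Management node if it is up;
    otherwise promote another available node.

    nodes: dict mapping node name -> True if the node is up.
    """
    available = [name for name, up in nodes.items() if up]
    if current in available:
        return current
    return available[0] if available else None

cluster = {"node1": True, "node2": True, "node3": True}
mgmt = elect_management_node(cluster)
cluster[mgmt] = False  # simulate the Management node failing
new_mgmt = elect_management_node(cluster, current=mgmt)
# a different, still-available node takes over the management duties
```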
Other key concepts include:
•  Management IP – the IP address through which the administrator accesses the Web Management Interface of the current Management node.
•  Peer set – a set of two or three disks (each on a separate node) that have mirrored data for redundancy.
•  Cluster Name – the name visible to network clients and used to connect to the cluster (similar to a server name), resolvable to node IP addresses via round-robin DNS.
•  Cluster Management Name – the hostname resolvable to the Management IP for Web Management Interface access or Snap EDR configuration.
•  Data Replication Count – an administrator-specified, cluster-wide count of the number of mirrored copies of data within the cluster. The Data Replication Count can be either “2” or “3” and determines the number of drives in a peer set.
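Because the Cluster Name resolves to node IP addresses via round-robin DNS, successive client lookups are spread across the nodes. The rotation can be sketched as follows; the IP addresses are hypothetical, not defaults of any SnapScale cluster:

```python
from itertools import cycle, islice

# Hypothetical A records behind one cluster name; a round-robin DNS
# server rotates the order of answers on successive queries.
node_ips = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def round_robin_answers(ips, queries):
    """Return the first answer a client would use for each successive query."""
    return list(islice(cycle(ips), queries))

# Six lookups are spread evenly across the three nodes.
answers = round_robin_answers(node_ips, 6)
```

This is why clients "can connect to any node": each new connection to the Cluster Name tends to land on a different node, balancing the load.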
A SnapScale cluster consists of two separate networks:
•  Client Network – used exclusively for client access. Clients can connect to any node to access data anywhere on the cluster.
•  Storage Network – an isolated network used exclusively by the cluster for inter-node communications. This includes:
   •  Heartbeat (node health/presence) sensing.
   •  Synchronization of peer set members.
   •  Data transfer between nodes to facilitate clients reading from and writing to files.
SnapScale Node Requirements
The following table details the basic requirements for cluster nodes:
Requirement                        Detailed Description
Minimum number of nodes            A SnapScale cluster must have a minimum of three (3) nodes to operate normally.
No expansion units                 A SnapScale node cannot have any expansion units attached to it.
Minimum number of disks per node   Each node must have a minimum of four disks. Additional disks can be added as needed.
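A pre-deployment sanity check against the requirements in the table above could look like the following sketch. The node representation is hypothetical; only the numeric limits come from the table:

```python
def check_cluster_requirements(nodes):
    """Validate a proposed cluster against the basic node requirements.

    nodes: list of dicts like {"disks": 4, "expansion_units": 0}.
    Returns a list of human-readable problems (empty list means OK).
    """
    problems = []
    if len(nodes) < 3:
        problems.append("a cluster needs at least 3 nodes")
    for i, node in enumerate(nodes, start=1):
        if node["disks"] < 4:
            problems.append(f"node {i} has fewer than 4 disks")
        if node["expansion_units"] > 0:
            problems.append(f"node {i} has expansion units attached")
    return problems

ok = check_cluster_requirements([{"disks": 4, "expansion_units": 0}] * 3)
bad = check_cluster_requirements([{"disks": 3, "expansion_units": 1}] * 2)
```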