The default use model of a ClusterPack cluster is that end users will submit jobs remotely through the
ClusterWare GUI or by using the ClusterWare CLI from the Management Node. Cluster administrators
generally discourage users from logging into the Compute Nodes directly. Users are encouraged to use the
Management Server for accessing files and performing routine tasks. When it is desirable to add additional
nodes for this purpose, or for more intense computational tasks such as job pre- or post-processing and
compilation, additional "head nodes" can be used. In this document, the term "head node" refers to such
user-accessible nodes that allow for interactive use. Head nodes can be included in a ClusterPack cluster
using the following approach:
- The head nodes should include an additional network card to allow the node to be accessible to the wider area network.
- Head nodes should be added to the cluster using the same approach as Compute Nodes. They can be included in the initial cluster definition or added at a later time using the '-a' option to manager_config and compute_config (a command sketch follows this list).
- Administrators may choose to close these nodes to ClusterWare jobs or to make them accessible only to particular queues. (See the ClusterWare documentation for more information.)
- It may be convenient to use the clgroup command to create groups representing the head node(s) and the remaining Compute Nodes.
- Use compute_config to configure the additional network cards to allow the head node(s) to be accessible outside of the cluster. Assign the available network cards publicly accessible IP addresses as appropriate to your local networking configuration.
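The exact command syntax varies by ClusterPack release, and the lines below are only a hypothetical sketch of the steps above: the hostnames, group names, and most arguments are illustrative assumptions (only the '-a' option is taken from this document), so verify everything against the manager_config, compute_config, and clgroup manpages before use.

# Add a head node to an existing cluster (hostname is a placeholder)
compute_config -a headnode1

# Create convenience groups for the head node(s) and the remaining
# Compute Nodes (clgroup arguments shown here are assumptions, not
# verified syntax)
clgroup -a headnodes headnode1
clgroup -a compute node001 node002

# Run compute_config again to configure the head node's additional
# network card with a publicly accessible IP address appropriate to
# the local network (192.0.2.10 would be a documentation placeholder)
compute_config headnode1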
1.3.4 Set up TCP-CONTROL
ClusterPack delivers a package to allow some control of TCP services coming into the Compute Nodes. This
package, called TCP-CONTROL, can be used to limit users from accessing the Compute Nodes directly, but it
should be used with great care due to several restrictions. TCP-CONTROL can be used to force users to run
jobs through ClusterWare Pro™ only. It accomplishes this by disabling telnet and remsh access to the
Compute Nodes from the manager. However, this will also cause several important telnet- and remsh-based
applications to fail for non-root users. The tools affected are the multi-system aware tools (clsh, clps, etc.)
and the AppRS utilities (apprs_ls, apprs_clean, etc.).
Note:
Enabling TCP-CONTROL by setting the /etc/hosts.deny file will prevent users' access to
multi-system aware tools and AppRS utilities.
By default, the TCP-CONTROL package is installed on the Compute Nodes, but is not configured to restrict
access in any way. TCP access is controlled by the settings in the /etc/hosts.allow and /etc/hosts.deny files on
each Compute Node. The /etc/hosts.deny file is initially configured with no entries, but has two comment
lines that can be uncommented to prevent users from accessing the Compute Nodes:
ALL:ALL@<Management Server name>
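Entries in both files follow the standard hosts_access(5) format, daemon_list : client_list. As a generic, hypothetical illustration only (the hostnames and the /etc/hosts.allow exception below are assumptions, not the lines shipped by ClusterPack), the deny rule and a selective allow rule might look like this:

# /etc/hosts.deny -- deny every TCP-wrapped service to any user whose
# connection originates from the Management Server (placeholder name)
ALL: ALL@mgmt01

# /etc/hosts.allow -- consulted before hosts.deny, so a specific service
# can be re-enabled from a trusted host if site policy permits
# (hypothetical entry; daemon and host names are assumptions)
telnetd: trusted-host.example.com

Because /etc/hosts.allow is checked first, an exception placed there takes precedence over the blanket deny; the note above still applies, since the multi-system aware tools and AppRS utilities depend on telnet and remsh access for non-root users.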