Summary

Introduction

I – Installations

A – Use the Kernel/Boot Roll CD for the Rocks Cluster frontend

B – Installation of the compute nodes using CD or PXE (network boot)

II – Physical Assembly

III – One of the Rolls: Bio Roll

IV – Home Directories

V – Some of the main commands

Introduction

Rocks Cluster Distribution is a Linux distribution intended for high-performance computing clusters. Rocks was initially based on Red Hat Linux; modern versions are based on CentOS, with a modified Anaconda installer that simplifies mass installation onto many computers.

Rocks includes many tools (such as MPI) that are not part of CentOS but are integral components for turning a group of computers into a cluster. In what follows we show the installation of the CentOS-based version of Rocks Cluster.

The walkthrough starts with the frontend node, then covers the compute nodes, and finishes with the main actions and the file-system layout used across the cluster.

I – Installations

A – Use the Kernel/Boot Roll CD for the Rocks Cluster frontend

For a minimal installation of a Rocks cluster frontend, you must have at least:

□  Kernel/Boot Roll CD

□  Base Roll CD

□  Web Server Roll CD

To begin the installation, insert the Kernel/Boot Roll CD into the frontend machine and reboot it. The frontend must boot from the Roll CD.

After the frontend machine restarts, you will see a screen inviting you to type “build”; once you do, the installer starts running.
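
The boot prompt looks roughly like this (the exact banner varies between Rocks versions, so treat this as an illustration):

boot: build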

If the frontend machine does not obtain a DHCP lease, screens will appear that allow us to set the IPv4 address, gateway, and name server manually. Otherwise these settings are configured automatically.

After the network settings, we see the screen that asks us to select our Rolls. Click on the “CD/DVD-based Roll” link. At the end of this stage a summary of the selected Rolls is displayed, and then the Cluster Information screen appears.

The information requested on this screen is all stored on the cluster frontend. Among the fields requested is the Fully-Qualified Host Name (for example cluster.esaip.org, the name used later for the Ganglia page); the other fields are not mandatory. Choose your hostname very carefully.

Ethernet Configuration for eth0:

At this step we configure the IP address and the netmask for eth0. This gives the frontend the possibility to connect to the outside network.

Ethernet Configuration eth1:

The next step is the same: assign an IP address and a netmask to this second interface of the network card.

Root Password:

Finally, the installer asks you to set and confirm the root password for the frontend.

B – Installation of the compute nodes using CD or PXE (network boot)

To start the installation, log in to the frontend node as root.

In the terminal window, type the following command:

# insert-ethers

This command captures DHCP requests from new machines on the network and registers them with the frontend.

From that moment the program listens for DHCP requests and displays a selection screen. For the next step, simply select “Compute” and press OK.

The next screen indicates that the frontend is waiting for a new compute node.

Once a DHCP request from the new machine reaches the frontend, a confirmation screen appears. The frontend then records the node in its database and updates its configuration files, such as /etc/hosts.
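
Once one or more nodes have been added, their registration can be checked from the frontend. A quick sketch using standard Rocks commands (the compute-0-0 style names are the Rocks defaults):

# rocks list host

# grep compute /etc/hosts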

II – Physical Assembly

The physical assembly consists of a frontend and a number of compute nodes on the same private network; the frontend, with its two network interfaces configured above, is also connected to the outside network.

III – One of the Rolls: Bio Roll

Bioinformatics is the use of techniques from applied mathematics, informatics, statistics, and computer science to solve biological problems.

The following procedure will install the Roll, and after the server reboots the Roll should be fully installed and configured.

$ su - root

# rocks add roll bio.iso

# rocks enable roll bio
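
Depending on the Rocks version, the documented procedure also rebuilds the distribution and reboots the frontend before the Roll is fully applied. A typical sequence, assuming the standard /export/rocks/install path, is:

# cd /export/rocks/install

# rocks create distro

# rocks run roll bio | bash

# init 6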

Test the Bio Roll:

# service gmond restart

# cd /opt/bio/hmmer/tutorial/
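
From the tutorial directory, one of the HMMER programs can then be run on the bundled example files. The file names below are illustrative only, since they depend on the HMMER version shipped with the Roll:

# hmmsearch globin.hmm globins45.fa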

IV – Home Directories

Home directories in Rocks work like this:

1.  /usr/sbin/useradd creates the home directory in /export/home/$USER (based on the settings in /etc/default/useradd)

2.  rocks sync users adjusts all home directories that are listed as /export/home as follows:

·  edit /etc/passwd, replacing /export/home/ with /home/

·  add a line to /etc/auto.home pointing to the existing directory in /export/home

·  411 is updated, to propagate the changes in /etc/passwd and /etc/auto.home (an example of the resulting entries is shown after this list)
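
As an illustration only, using the toto account and the cluster.esaip.org host name that appear elsewhere in this document (the UID, GID, and shell are placeholders), the resulting entries look roughly like this:

/etc/passwd:    toto:x:500:500::/home/toto:/bin/bash

/etc/auto.home: toto    cluster.esaip.org:/export/home/toto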

In the default Rocks configuration, /home/ is an automount directory. By default, directories in an automount directory are not present until an attempt is made to access them, at which point they are (usually NFS) mounted.

This means you CANNOT create a directory in /home/ manually! The contents of /home/ are under autofs control.

To "see" the directory, it's not enough to do a ls /home as that only accesses the /home directory itself, not its contents. To see the contents, you must ls /home/username.
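
For example, with the toto account created in section V, the directory does not show up in a plain listing of /home, but accessing it directly triggers the automount:

# ls /home

# ls /home/toto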

V – Some of the main commands

To distribute account information to the whole cluster:

rocks-user-sync

Run a command on all compute nodes:

rocks run host compute "pstree"

The pstree command displays the processes as a tree, making their parent/child relationships visible.

The ps command gives a list of active processes

rocks run host "ps"

The uptime command displays, on a single line, the current time, how long the system has been running, the number of users currently logged in, and the load averages.

rocks run host "uptime"

After logging in to the frontend node as root, you can carry out administrative work.

To create a user account:

useradd toto

To set a password for the new account and enable it:

passwd toto
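
A minimal end-to-end sequence, assuming the rocks sync users command from section IV (or rocks-user-sync, depending on the Rocks version) is used to propagate the account, would be:

# useradd toto

# passwd toto

# rocks sync users

# ls /home/toto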

Cluster state via the web interface (Ganglia):

http://cluster.esaip.org/ganglia/

If the monitoring page does not display correctly, optionally restart the Ganglia service (on the frontend and on the compute nodes).
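
One possible way to restart Ganglia everywhere, reusing commands already shown in this document (gmond is the monitoring daemon; depending on the setup the frontend may also run gmetad):

# service gmond restart

# rocks run host compute "service gmond restart"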