OSCAR

Our coverage of Linux-based cluster distributions continues this month with OSCAR, the Open Source Cluster Application Resources software bundle available free from the Open Cluster Group.
Back in March, “Extreme Linux” (available online at http://www.linux-mag.com/2005-03/extreme_01.html) showed you how to build a (very) low cost cluster. The four-node, diskful cluster — which cost only about $1,600 — consisted of modest 2.3 GHz Intel Celeron processors interconnected via 100 Mbps (fast) Ethernet. While that system isn’t particularly powerful, it is suitable for testing clustering technologies and scalability of parallel software applications. Such a small cluster is a dandy place to do software development before moving an application to a bigger system.
Since March, this column has focused on free Linux-based cluster distributions that can run on such a cluster. These cluster distributions tend to include combinations of individual packages (many of which have been previously described in this column!) and typically run on systems consisting of a few processors to a few thousand processors. Some of the software highlighted recently comprised full cluster distributions with their own custom kernels, while other offerings were package collections that install on top of well-known Linux distributions.
April’s column (http://www.linux-mag.com/2005-04/extreme_01.html) discussed the Clustermatic Linux distribution, a collection of packages (including BProc) designed for fast and efficient clusters with minimal operating system software. May’s column presented the Rocks Cluster Distribution from the San Diego Supercomputer Center (SDSC). Rocks comes in many flavors for a variety of computational needs.
This month, let’s dive into OSCAR, the Open Source Cluster Application Resources software bundle designed solely for high-performance cluster computing.

What is OSCAR?

OSCAR is the first project by the Open Cluster Group (http://www.openclustergroup.org/), an informal group dedicated to making clusters practical for high-performance computing (HPC). The members hail from public and private organizations, including Bald Guy Software, Canada’s Michael Smith Genome Sciences Centre, Indiana University, Intel, Louisiana Tech University, Revolution Linux, and Oak Ridge National Laboratory. Membership is open to any interested party, and by-laws for the group are located on the web site.
The OSCAR software bundle consists of core, included, and third party packages (RPMs) that are loaded onto a supported Linux distribution. OSCAR version 4.1 — very recently released on 22 April 2005 — supports Red Hat Linux 9 (x86), Red Hat Enterprise Linux 3 (Update 2 or Update 3 only on x86 and IA64), Fedora Core 2 (x86), and Mandrake Linux 10.0 (x86).
An OSCAR server node must consist of an i586 or above, at least one Ethernet interface (two if the server will also be connected to another network), 4 GB of free disk space (2 GB under / and 2 GB under /var), and a fresh installation of one of the supported Linux distributions (listed above) with no updates applied.
The compute nodes (called OSCAR client nodes) must each consist of an i586 or above, a hard disk (2 GB or larger), an Ethernet interface, and a floppy disk or a PXE-enabled BIOS. Client nodes must all be of the same architecture (x86 or IA64), and they must be able to run the same Linux distribution that was installed on the server node.
The OSCAR distribution packages are available on SourceForge at http://oscar.sourceforge.net/, and they come in three different flavors: Regular, Extra Crispy, and Secret Sauce. The Regular distribution includes all the basic packages needed to install and operate an OSCAR cluster. The Extra Crispy distribution includes all the packages in the Regular distribution plus the source code packages (SRPMs). The Secret Sauce distribution includes only the SRPMs for users who later decide they want to add them to their existing Regular installation. Since SRPMs are not needed in order to install and operate an OSCAR cluster, most people will download and install the Regular distribution.
A list of packages contained in OSCAR is shown in Table One.
TABLE ONE: Packages included in Open Source Cluster Application Resources (OSCAR) software bundle
Package: Description
APItest: A software testing framework
base: OSCAR base installation
C3: Cluster Command and Control toolkit
disable services: Disables mail services, Kudzu, slocate, and makewhatis on client nodes
Ganglia: A distributed monitoring system for clusters and grids
kernel_picker: A Perl script that installs a chosen kernel into an OSCAR image
LAM/MPI: The LAM Message Passing Interface libraries
loghost: Configures the syslog server
Maui: Scheduler for use with OpenPBS or Torque
MPICH: The Message Passing Interface implementation from Argonne National Laboratory
mta config: Configures mail services
networking: Caching nameserver to serve clients
NTP: Network Time Protocol
oda: OSCAR database (MySQL)
OPIUM: OSCAR password installer and user management tool
perl Qt: Qt bindings for Perl
pfilter: Firewall (packet filtering) compiler
PVM: Parallel Virtual Machine libraries
SIS: System Installation Suite
sync_files: OSCAR-ized file synchronization system
switcher: Environment switcher
Torque: Batch queuing system

Installing OSCAR

Installing OSCAR is pretty straightforward, but since OSCAR is designed to work with a variety of Linux distributions, error or warning messages are often generated during installation. Fortunately, the OSCAR distribution comes with a detailed installation manual as well as a Quick Install guide that lists many of the distribution-specific issues you may encounter during installation. A user’s guide is also provided describing how to configure and use each of the packages in the OSCAR suite.
OSCAR installs a “compatibility” Python 2 RPM to resolve portability issues across supported Linux distributions. Such compatibility problems arise when installation software uses scripting languages that have version dependencies.
The first step in installing OSCAR is to load a supported Linux distribution onto the machine that will be the server node. Only the distributions listed above are supported by OSCAR. Distribution updates should not be installed prior to loading OSCAR since they may interfere with the OSCAR installation software, which expects an out-of-the-box system. Updates can be loaded onto the server node after OSCAR has been installed.
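Before going any further, it's worth confirming exactly which release is on the server. A quick check, assuming a Red Hat-style release file (the output shown is simply what a Fedora Core 2 system would report):
[root@head root]# cat /etc/redhat-release
Fedora Core release 2 (Tettnang)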
When laying out the partitions on the server node, keep in mind that OSCAR requires about 2 GB of free space for storing RPMs in /tftpboot/rpm/ and about 2 GB of free space to store system images in /var/lib/systemimager/. When installing a new server, you should allow for 4 GB in both the / (if it contains /tftpboot/) and the /var filesystems.
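Before you begin, df can confirm how much room is actually free in the filesystems OSCAR will use (the mount points on your system may differ):
[root@head root]# df -h / /var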
Next, download a copy of the OSCAR distribution from your favorite mirror and unpack the tarball on the newly installed server. As root, configure OSCAR and install the software as follows (for the Regular distribution):
[root@head root]# wget http://easynews.dl.sourceforge.net/sourceforge/oscar/oscar-4.1.tar.gz
[root@head root]# tar xzf oscar-4.1.tar.gz
[root@head root]# cd oscar-4.1
[root@head oscar-4.1]# ./configure
[root@head oscar-4.1]# make install
By default, OSCAR is installed in /opt/oscar/, but this path may be changed using ./configure --prefix=ALT-DIR.
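For example, to put everything under a hypothetical alternate location such as /usr/local/oscar instead:
[root@head oscar-4.1]# ./configure --prefix=/usr/local/oscar
[root@head oscar-4.1]# make install
If you do relocate it, remember to cd to that directory rather than /opt/oscar when running install_cluster later.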
After ensuring that your Ethernet interface is up and running with an appropriate IP address for use within the cluster network, it’s necessary to copy all of the RPMs for the chosen Linux distribution to /tftpboot/rpm/ on the server node. For each Linux CD, locate the directory containing the RPMs and copy them into /tftpboot/rpm/ on the server’s disk.
For example, for Fedora Core 2, insert the first disk and do the following:
[root@head root]# mkdir -p /tftpboot/rpm
[root@head root]# cp -p /mnt/cdrom/Fedora/RPMS/*.rpm /tftpboot/rpm/
[root@head root]# eject cdrom
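If you'd rather not type those copy commands once per disc, the whole round can be scripted. The following is only a sketch; it assumes the standard four Fedora Core 2 binary CDs and a drive that mounts at /mnt/cdrom:
for disc in 1 2 3 4; do
    echo "Insert Fedora Core 2 disc $disc and press Enter"; read junk
    mount /mnt/cdrom                                   # skip if the disc automounts
    cp -p /mnt/cdrom/Fedora/RPMS/*.rpm /tftpboot/rpm/  # copy this disc's RPMs
    eject cdrom                                        # unmount and eject for the next disc
done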
Either way, make sure the RPMs from every Fedora Core 2 CD end up in /tftpboot/rpm/. Next, run the OSCAR installer as follows:
[root@head root]# cd /opt/oscar
[root@head oscar]# ./install_cluster eth0
In the first command, change to whatever directory you installed OSCAR in. In the second, eth0 is the private Ethernet interface for the cluster network; if some other device serves the private network, substitute it for eth0.
The OSCAR installer first copies OSCAR RPMs to /tftpboot/rpm/ and then installs some packages (packman, depman, depman-updaterpms, packman-rpm, and update-rpms) to verify that the RPMs copied to /tftpboot/rpm/ are correct and complete. Additional packages (including Perl modules and MySQL) are installed and configured for use on the server node. Then MySQL database tables are created and populated with information about the available OSCAR packages.
The process continues by installing server core RPMs and restarting affected services. Once all of the prerequisites are complete, the OSCAR Installation Wizard, shown in Figure One, automatically launches. A series of steps must now be completed in the order they are presented to complete the installation of the OSCAR cluster. Each step has a corresponding Help… button that displays a message box describing the step to be completed.
FIGURE ONE: The OSCAR Installation Wizard



*Step 0, Download Additional OSCAR Packages, is optional. Click this button to launch the OSCAR Package Downloader (OPD), which allows you to retrieve additional packages over the Internet for use on the cluster. At this time, only the optional OpenPBS package is listed as available. (Since Torque is installed instead, there’s no need to download OpenPBS.)
*Step 1, Select OSCAR Packages To Install, is also optional. By default, all of the included packages are destined to be installed on client nodes.
*Step 2, Configure Selected OSCAR Packages, is an optional step that allows for cluster-specific configurations of individual packages. The installed version of MPI (LAM/MPI vs. MPICH) can be chosen, the name of the cluster used by Ganglia can be specified, alternative kernels can be specified, the NTP server can be provided, and Torque can be configured in this step.
*Step 3, Install OSCAR Server Packages, is required. It automatically re-runs install_server to ensure that all core RPMs are already installed on the server node and also installs and configures all other needed packages as specified in Steps 1 and 2. Once complete, a message box stating “Successfully installed OSCAR server” is displayed.
*Step 4, Building OSCAR Client Image, is a required step. It opens a dialog window as shown in Figure Two. In most cases, the default values are fine. However, the “Disk Partition File” should be verified to make sure it is appropriate for the client nodes. The disk type is located at the end of the filename. The “Disk Partition File” and the “Package File” can be changed as appropriate to provide the desired disk layout on client nodes and to ensure that only the desired packages are installed.
The “IP Assignment Method” may also need to be changed and the “Post Install Action” should be carefully considered. If the desired action upon successful installation is to reboot, you should make sure the BIOS on each client node is set to boot from the local hard drive before attempting to boot over the network. If the node boots over the network before checking the local hard drive, it will continuously re-install the System Installation Suite image instead of running the previously installed operating system.
FIGURE TWO: The Build OSCAR Client Image dialog



Click “Build Image” to construct the image to be used on the client nodes. A green progress bar at the bottom of the window indicates the status of the image construction. A pop-up window will indicate when the installation image has been successfully created.
*Step 5, Define OSCAR Clients, can be run multiple times to define the client nodes. The “Image Name” should specify the name of the image from the previous step. The “Domain Name” should contain the client IP domain name; this field must have a value. The “Base Name” is the first part of the client hostname; it should not contain an underscore or period character. The “Number of Hosts” listed must be greater than zero.
The “Starting Number” specifies the index appended to the Base Name for each client. “Padding” specifies the number of digits to zero-pad in the hostname. The “Starting IP” lists the IP address of the first client node. The “Subnet Mask” specifies the netmask for all clients. The “Default Gateway” specifies the default route for all clients.
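As a purely hypothetical example, a Base Name of oscarnode, a Starting Number of 1, Padding of 0, a Starting IP of 192.168.0.101, a Subnet Mask of 255.255.255.0, and a Number of Hosts of 4 would define clients oscarnode1 through oscarnode4 at addresses 192.168.0.101 through 192.168.0.104.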
When this information is correct, click Add Clients to create client database entries. Once all client sets have been defined, click Close to close the dialog and continue with the next step.
*Step 6, Setup Networking, is a required step in which the MAC address of each client is specified. In this step, the DHCP server is started on the server node. MACs can be collected automatically by choosing the correct option and turning on, one at a time, client nodes that are configured to boot over the network. Alternatively, MACs can be imported from a file, and they can be exported to a file at any time while they are being automatically collected.
At the bottom of the “Setup Networking” window, buttons are available for either building an autoinstall floppy or for setting up network boot for client nodes. In addition, multicasting can be enabled (if desired) at this step. Multicasting can now be used to push files to client nodes (using Flamethrower instead of rsync), so it should be enabled if the network switch supports it. Click Close to complete this step.
At this point, all client nodes should be booted via the network or via floppy. They will automatically be installed and configured as cluster nodes from the server. Once a node has finished loading, it should be rebooted from its hard drive.
*When all nodes are up and operating, Step 7, Complete Cluster Setup, should be selected. This step runs the final installation configuration scripts for each software package and performs cleanup and initialization functions. The terminal window displays status information as this step progresses.
*Step 8, Test Cluster Setup, can now be used to perform some basic tests of the cluster’s services. If one or more tests fail, it may indicate a problem with the installation or with one or more client nodes in the cluster. If all tests pass, your cluster is set up and ready for use!
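Beyond the wizard's built-in tests, it's easy to poke at the cluster by hand. One minimal sanity check, assuming the C3 tools installed by OSCAR are on your path, is to run a trivial command on every client at once:
[root@head root]# cexec uname -n
[root@head root]# cexec date
Each client should answer with its hostname and the current (NTP-synchronized) time; any node that fails to respond is worth investigating before starting real work.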
At a later time, client nodes may be added or deleted by re-running install_cluster (specifying the desired interface as usual) and using the Add or Delete OSCAR Clients buttons near the bottom of the Wizard window. In addition, individual packages may be installed or uninstalled at a later date using the Install/Uninstall OSCAR Packages button.
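In other words, the same command used for the initial installation brings the Wizard back up later (again assuming the default /opt/oscar location and eth0 as the private interface):
[root@head root]# cd /opt/oscar
[root@head oscar]# ./install_cluster eth0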

Try OSCAR Today!

OSCAR is a complete set of software packages supporting high-performance computing on clusters of Linux boxes. It sits on top of a variety of standard Linux distributions, so you and your staff can choose the one you like best. While all of the packages can be downloaded, installed, and configured separately, OSCAR provides an easy-to-use GUI that simplifies installation and configuration of these disparate packages.
While some configuration choices are limited, OSCAR makes setting up a cluster with a standard configuration quick and easy. Try it out for yourself. You may never go back to building and installing individual packages for clusters again!

Forrest Hoffman is a computer modeling and simulation researcher at Oak Ridge National Laboratory. He can be reached at forrest@climate.ornl.gov.
