Building a (Very) Low Cost Cluster

You don’t need a ton of cash to build a cluster. For about $1,600, you can even build one for use at home.
Businesses and organizations are often reluctant to invest in new technologies, particularly if the investment is perceived to be large or if the technology is sufficiently different from the organization’s established technology. But such fears can limit the adoption of Linux and cluster computing innovations, even when those initiatives might save considerable time and money.
Fortunately, getting started or just experimenting with Linux and clusters can be done on a very small budget. Whether the goal is redundant, highly-available enterprise applications or high-performance computing (HPC), you can start small and grow the system as needed. And with the right applications or models, you can get as much performance as you can afford.
But for an organization with no ready means of launching a new information technology or HPC initiative, what’s the best way to get your feet wet? One way to experiment with cluster technology is to acquire a few old personal computers, hook them up, and load a Linux cluster distribution on them. While this approach has been shown to work (read about building the Stone SouperComputer at http://www.linux-mag.com/1999-05/extreme_01.html), it has some disadvantages if the hardware is insufficient or if the nodes are too different from one another in speed or capability.
Another alternative is to contact one of the many fine cluster integrators. (Some of them advertise here in the pages of Linux Magazine.) Many of these integrators are willing to work with people to design and build a cluster to meet their unique needs and budget constraints. The advantage here is that integrators can deliver a turn-key system that works right out of the box, and the vendors support the system hardware and software. While not everyone needs this much help, it can be convenient and cost effective for many organizations.
In between these two methods lies a spectrum of options for establishing an inexpensive cluster in your office or even at home. The tradeoff is purchase costs versus people costs. Most groups that cannot afford a turn-key cluster choose some new hardware from a vendor, load one of the free cluster distributions (like ROCKS, Oscar, or Clustermatic) or install high-availability applications, and then start experimenting. This keeps purchase costs low (just the cost of the hardware), but shifts some of the total cost to the expense of salaries. On the other hand, it may be advantageous to have at least some staff intimately involved in operating and maintaining future cluster deployments, so getting a few technical people interested early on can pay off in the long run.
In any case, a balance must be struck between up-front purchase costs and people costs, depending upon how accounting and cost recovery are handled within an organization. In a university setting or within the home, labor costs can largely be ignored.

How Low Can You Go?

So, with hardware prices continuing to fall and ignoring labor expenses, how cheaply can a small cluster be constructed? Let’s build a “garage cluster” for use at home. (Not everyone needs a cluster at home, but having one can be quite useful.)
First, you have to purchase the hardware. Usually, it’s best to have completely homogeneous nodes: all of the machines should have the same processors running at the same clock rate, the same amount of memory, the same network interfaces, and so on. Additionally, choosing upgradable components can provide an incremental upgrade path. Later, when you have more money to spend, upgrading processors can mean more cycles without having to change out other hardware components on the system.
The size of the cluster depends on the exact application being tested or prototyped on the machine. It only takes two nodes to make a cluster, but for computational scaling more are usually preferred. Most computational clusters have at least four or eight nodes.
Having a diskful master node with diskless compute nodes is less expensive than putting disks in every compute node. On the other hand, having disks in all nodes can expand your options. Disks distributed across nodes could be made into a parallel filesystem using PVFS2 or some other software, as the sketch below illustrates. For redundant or high-availability servers, of course, disks are required in all nodes.
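As an illustration of the parallel filesystem option, here is a minimal MPI-IO sketch in which each process writes its own one-megabyte slice of a single shared file. It is only a sketch: the mount point /mnt/pvfs2 and the file name are assumptions, and the same code works over any filesystem visible to all nodes (PVFS2, NFS, or anything else MPI-IO can reach).

/* Minimal MPI-IO sketch: each rank writes its own 1 MB slice of one
 * shared file.  The path /mnt/pvfs2/testfile is only a placeholder --
 * substitute wherever your parallel (or shared) filesystem is mounted.
 * Compile: mpicc -o slicewrite slicewrite.c
 * Run:     mpirun -np 4 ./slicewrite
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define SLICE (1024 * 1024)     /* bytes written by each process */

int main(int argc, char *argv[])
{
    int rank;
    char *buf;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    buf = malloc(SLICE);
    memset(buf, 'A' + (rank % 26), SLICE); /* fill the slice with a rank-specific byte */

    MPI_File_open(MPI_COMM_WORLD, "/mnt/pvfs2/testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* Each rank writes at its own offset, so all writes proceed in parallel */
    MPI_File_write_at(fh, (MPI_Offset)rank * SLICE, buf, SLICE, MPI_BYTE,
                      MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}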
The application also dictates the type of network or interconnect used on the cluster. For a really low cost cluster, the only options are fast or gigabit Ethernet, because the high bandwidth, low latency interconnects tend to be rather costly. However, the bandwidth and latency of Ethernet limit the scalability of some computational applications.
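It’s worth measuring what the network actually delivers before settling on a design. The simple ping-pong sketch below bounces messages of increasing size between two MPI processes and reports the average round-trip time and bandwidth. It is no substitute for a real benchmark like NetPIPE, but it gives a quick feel for the latency and bandwidth of the Ethernet connecting your nodes.

/* Minimal MPI ping-pong sketch for estimating point-to-point latency
 * and bandwidth between two nodes.
 * Compile: mpicc -o pingpong pingpong.c
 * Run:     mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS 100                /* round trips per message size */

int main(int argc, char *argv[])
{
    int rank, i, size;
    double t0, t;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Sweep message sizes from 1 byte to 1 MB */
    for (size = 1; size <= 1 << 20; size *= 4) {
        buf = malloc(size);
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            if (rank == 0) {            /* send, then wait for the echo */
                MPI_Send(buf, size, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {     /* echo the message back */
                MPI_Recv(buf, size, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        t = (MPI_Wtime() - t0) / REPS;  /* average round-trip time */
        if (rank == 0)
            printf("%8d bytes  %10.1f us round trip  %8.2f MB/s\n",
                   size, t * 1e6, 2.0 * size / t / 1e6);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}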
Another thing to consider is what to do if the prototype doesn’t pan out. This could influence the hardware purchase. If money is tight, it may be advantageous to build nodes that can be reused as desktops should the cluster experiment fail to produce the desired results. While failure isn’t likely, planning for contingencies demonstrates foresight and just might convince your manager to let go of a few greenbacks.

Low Cost Case Study

For my own low cost cluster, I decided to build four nodes interconnected with a small fast Ethernet switch. To maximize the utility of the cluster, each node would have its own hard drive and CD-ROM drive. That way a node could stand in for a broken desktop, if necessary. Besides, ripping CDs and encoding OGGs can be done in parallel.
In keeping with the tradition established with the Stone SouperComputer, not all the components were new. By shopping around on various websites, I managed to find a deal at Computer Geeks (http://www.compgeeks.com/) on some new motherboards with processors that were system pulls (the CPUs were removed from other systems).
While I don’t advocate purchasing used hardware over the web for business or production activities, I was willing to take some risk for an experimental cluster. I’ve purchased a variety of hardware from Computer Geeks over the years, and the company provides warranties on most items for a year if there is no manufacturer warranty provided. Again, cost was a major consideration.
Nearly all of the components for the cluster were purchased from Computer Geeks in June 2004, including new Socket 478 micro-ATX motherboards with on-board fast Ethernet interfaces, and 2.3 GHz Celeron processors. Each motherboard and processor cost $104.00 at that time.
While Celerons are limited in performance (primarily by their small cache), they are adequate for testing software and application scaling. Since Socket 478 motherboards were chosen, the Celerons could easily be replaced with Socket 478 Pentium 4s (up to 3.06 GHz with HyperThreading) at a later time. Such an upgrade would significantly improve the performance of each node without changing any other hardware.
The motherboards support up to 266 MHz double data rate (DDR) memory (PC2100), so 512 MB error correcting code (ECC) dual inline memory modules (DIMMs) were purchased for $119.00 each. This was the most expensive component. ECC memory costs a little more, but it offers peace of mind because it automatically corrects single-bit errors.
An 80 GB, 7200 RPM hard drive was purchased for every node at $84.25 each. A separate operating system could be put on the disk on each node, the drives could be used for local scratch space, or they could be ganged together into a parallel filesystem. This year, disk prices have dropped well below $1 per gigabyte and continue to drop rapidly.
Four 48X CD-ROM drives were purchased at $14.00 each. These can be used for loading systems software or when nodes are “drafted” for desktop use. For most Beowulf-style cluster applications, however, they will not be needed.
Four micro-ATX cases with power supplies were purchased at $33.00 each, and four CPU fans were purchased at $12.99 each. Choosing the micro-ATX form factor for the motherboards and cases means the nodes take up considerably less space. This is important if the cluster will be operated where space is tight or if the cluster will grow very large.
Heat extraction and air handling are also considerations. Cases may need to be separated from each other for sufficient air flow. To ensure adequate cooling, external case fans (transparent fans with blue LEDs) were mounted on the side of each case to draw heat out of the enclosure. This required drilling four holes through sheet metal; the vent slits were already cut into the side panels.
Drilling into sheet metal may be where some people draw the line. Getting your fingers cut up on the case may not be everyone’s idea of a good time. Better cases can always be had for a few dollars more; you have to decide what’s best for you. (Actually, these systems probably don’t require external cooling fans, but the blue LEDs are too cool.)
A five-port 10/100 Mbps Ethernet switch, previously purchased from NewEgg (http://www.newegg.com/) for $20, was used to interconnect the nodes. This switch then connects to my home 10/100 Mbps Ethernet network. I originally bought this switch because it came with a $20 rebate, so it actually cost $5.37 (shipping plus the stamp used for sending in the rebate form).

And Your Total Is…

The total cost of the cluster, itemized in Figure One, came to $1,601.56. This price includes everything except the power strips, wire shelves, Ethernet cables, and the Ethernet switch rebate. That’s not a bad price for a four-processor cluster with 2 GB of memory and 320 GB of disk space, nor is it a bad price for four decent Celeron desktop boxes. Some labor — including a little light drilling and filing — went into the construction of the four nodes, but combining commodity components in PCs is pretty easy these days.
Figure 1: An itemized list of components used for the low cost cluster

This low cost cluster, shown in Figure Two, now operates (part-time) under the wooden staircase in my home. The wire shelves allow for good air flow around the cases and provide a space for storing parts and tools below the systems. The handles on top of the cases make the nodes easy to move around. The cluster is used for testing out Linux distributions and software applications, and for developing parallel codes.
Figure 2: The low cost cluster
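A first sanity check on a freshly loaded cluster is a trivial parallel program in which every process reports its rank and the node it is running on; if all four hostnames show up, the nodes and the message-passing layer are working. Below is a minimal sketch (the mpirun options and the machine file name vary by MPI implementation and are only assumptions here).

/* Minimal "is the cluster alive?" check: each MPI process reports its
 * rank and the node it is running on.
 * Compile: mpicc -o hello hello.c
 * Run:     mpirun -np 4 -machinefile nodes ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, nprocs, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Get_processor_name(name, &namelen);

    printf("Process %d of %d running on %s\n", rank, nprocs, name);

    MPI_Finalize();
    return 0;
}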

This cluster is an example of the kind of system that could be acquired for testing out technologies for business or computational needs. With a $1,600 price tag, this cluster is certainly affordable for any organization. It won’t win a beauty contest and it isn’t very fast, but it is a good way to start experimenting with clustering technologies and to test the scaling of your own applications to justify further investment at a later time.

Forrest Hoffman is a computer modeling and simulation researcher at Oak Ridge National Laboratory. He can be reached at forrest@climate.ornl.gov.
