People who know me have learned that I tend to fret over some really strange things. My multi-core angst has been well documented. Another worry comes from a hobby of mine: low cost clustering. This indulgence does not cause me much distress, as I like optimizing and engineering small systems. There is, however, something I wish for when I toss and turn at night. In building small low cost clusters, my goal is to measure how much real performance one can get out of $2500 worth of commodity computing parts. And by real performance I mean running the Top500 HPL program. I know HPL sucks, but it does have that long historical archive, which helps put the performance of my little clusters that could into perspective.
I have not built a new system recently (hint to all you component vendors that want HPC glory), but my current 8-core machine can hit 53.4 HPL GFLOPS. The machine has four nodes in somewhat small, but easily stackable, cases. I use inexpensive Micro-ATX motherboards, each with one Core 2 Duo E6550 running at 2.33 GHz and 2 GB of RAM. (The motherboards can take Core 2 quads and hold up to 8 GB of RAM.) Although the motherboard has on-board GigE, I added an Intel PCIe GigE desktop network card to each node for performance reasons. My design also used two 5-port GigE switches. I use the on-board GigE network for administration and NFS, while compute traffic goes over the Intel GigE network. The switches support jumbo frames, which can help with performance in some cases. You can find pictures and detailed hardware information here.
As an aside, my current performance translates to approximately $47 per GFLOPS. I have not done the pricing, but I'll bet a quad-core version of my current machine might be possible with the recent price drops in desktop processors. Using quads would give me 16 cores next to my desk. There is even the possibility that I could have over 100 HPL GFLOPS at my disposal for about $2500. I do have concerns about whether GigE will be able to service four cores per node, but that is an engineering issue. Fortunately, Open-MX just hit its 1.0 release.
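The dollars-per-GFLOPS figure is simple division, but it is worth seeing the arithmetic. A quick sketch using the numbers above (the 100 GFLOPS quad-core figure is my speculation, not a measurement):

```python
# Cost-per-GFLOPS arithmetic for the $2500 personal cluster.
budget_dollars = 2500.0
measured_hpl_gflops = 53.4       # measured HPL result on the 8-core machine
speculative_hpl_gflops = 100.0   # hoped-for result with quad-core upgrades

print(f"today: ${budget_dollars / measured_hpl_gflops:.0f}/GFLOPS")     # ~$47
print(f"quads: ${budget_dollars / speculative_hpl_gflops:.0f}/GFLOPS")  # ~$25
```

In other words, if the quad-core upgrade delivers, the price of a delivered GFLOPS roughly halves for the same budget.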
Whenever I think about personal clustering, I think about a survey I did about five years ago. The survey simply asked readers how well their applications scaled. The thing I remember most was that 50% of the respondents could not scale beyond 64 processors. I did not have detailed data on why the applications could not scale further, though I assume it ultimately comes down to Amdahl's Law. Aside from Amdahl's Law, I would assume that the only other thing hindering scalability is network performance, but let's just assume that my original survey is within the ballpark.
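Amdahl's Law by itself can explain a ceiling around 64 processors. A minimal sketch (the 2% serial fraction is an illustrative assumption, not something from the survey):

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Amdahl's Law: S(N) = 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With just 2% serial code, processors past ~64 buy very little:
for n in (16, 64, 256):
    print(f"{n:4d} procs -> speedup {amdahl_speedup(0.02, n):.1f}")
# 16 procs  -> speedup 12.3
# 64 procs  -> speedup 28.3
# 256 procs -> speedup 42.0
```

Going from 64 to 256 processors quadruples the hardware for less than a 50% gain in speedup, which is why queueing up for a big shared machine often makes little sense for such codes.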
If 64 or fewer cores is the scalability threshold for most people, the desk-side cluster may be a very interesting proposition. Power and cooling issues notwithstanding, it is very possible to build a four-node wire shelf cluster that could put 32 cores literally within arm's reach (using dual-socket motherboards and quad-core processors). No need to wait in the queue for your programs to run. Astute readers may also note that a single 8-core workstation may work for many users as well. If it works for you, why not. In addition, a small system would make a great educational tool.
Besides calculating the local weather forecast under your desk, there is one other important use for a desk-side cluster: software development. In this case, both systems and application software could be developed without the need to carve out a piece of a working cluster. I am a firm believer that software gets written for the hardware that sits in front of people. If enough people have 16 or 32 cores next to their desks, then there is a good chance someone is going to try something new and different. That killer app is out there. And, finally, a desk-side cluster has what we don't like to talk about, but need just the same: reset switches.
At this point you have to be asking, "What is the problem, Eadline? You seem to have all the cheapo FLOPS you need." Sigh. Indeed I do. But what I don't have is a really cool case for my cluster. I am still limited to wire shelves, cheap cases, plug strips, and cable ties. The end result is something that is bigger than it needs to be. I would prefer to have a personal cluster under my desk, but the contraption I have now is just not practical.
There are now gamer cases, server cases, desktop cases, desk-side cases, mini-towers, mid-towers, full towers, and the list goes on. In addition, there are those who modify cases in some truly bizarre ways. What I have not found is a case that holds multiple ATX motherboards. Of course, there are some other issues, like power supplies, wire management, switches, and cooling, but none of these are insurmountable problems. I would assume a cluster case would cost double or triple what a good single-motherboard case would cost because the volumes would be low. A high-priced case would certainly skew my dollars-per-GFLOPS ratio, but such is the cost of convenience.
I realize there are desk-side systems that pack server boards into a small area, but these usually carry a rather hefty price tag. Remember, I'm talking about the value-based cluster (i.e., commodity cheap clusters). In my estimation, there would be a market for such a cluster and case. Whether it would be used for education, software development, or real HPC computing, a small desk-side cluster in a single case would have some real utility.
When discussing HPC, the one topic that often comes up is the "hold backs" that limit HPC usage. Maybe all it takes is a couple of pieces of steel bent the right way to launch HPC into the mainstream. What would you do with 16 cores under your desk? These are the kinds of things I lose sleep over.