What Are You Going To Do With Your FLOPS?

FLOPS are dirt cheap right now. How excited should you really be though?

It is now possible to buy a system capable of 15 Giga Floating Point Operations Per Second (GFLOPS) for about $750. That works out to $50/GFLOPS. If you doubt this number, price out a minimal Intel Core 2 Duo (E6550) system with 1GB of RAM, a 250GB hard drive, and a DVD drive, then load Linux on it. Run the HPL benchmark using the Goto BLAS libraries and you should see at least 15 GFLOPS for your investment, an astounding amount of power for so little cash.
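As a quick sanity check, the price/performance arithmetic above can be expressed in a few lines of Python (both figures are the ones quoted in the text):

```python
# Back-of-the-envelope check of the price/performance claim above.
# Figures from the article: a ~$750 Core 2 Duo system sustaining
# ~15 GFLOPS on the HPL benchmark.
system_cost_usd = 750.0
hpl_gflops = 15.0

dollars_per_gflop = system_cost_usd / hpl_gflops
print(f"${dollars_per_gflop:.0f}/GFLOPS")  # prints "$50/GFLOPS"
```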

Some perspective may help in understanding this HPC bargain. Had you been able to build this machine in 1993, it would have placed 7th on the Top500 list. Right behind it would have been the 512-processor Intel Paragon at Oak Ridge National Laboratory (Intel i860 processors running at 50 MHz). Want to guess how much the Paragon must have cost back then? My guess would be the high seven figures, maybe more.

Let’s talk about power as well. While I have no numbers for the power usage of the Intel Paragon, I have measured a Core 2 Duo running HPL. While running the benchmark, my little system used a measly 6.7 Watts per GFLOPS. The Paragon’s power consumption was easily much higher than this number.
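For what it’s worth, the 6.7 W/GFLOPS figure implies a total draw of roughly 100 W for the whole system under HPL load, which is plausible for a Core 2 Duo desktop. A quick sketch:

```python
# Implied total system power from the efficiency figure in the text:
# 6.7 Watts per GFLOPS at a sustained 15 GFLOPS.
watts_per_gflop = 6.7
hpl_gflops = 15.0

total_watts = watts_per_gflop * hpl_gflops
print(f"~{total_watts:.1f} W under HPL load")  # prints "~100.5 W under HPL load"
```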

Of course, if you are going to build a real HPC cluster, the costs will be a bit higher, as “server level” hardware is more expensive than that sold for the low-cost desktop market. The price/performance gains are just as startling, however. FLOPS, it seems, are cheap, and the market knows it. According to IDC, the annual growth rate (AGR) for HPC cluster systems is expected to exceed ten percent for the next 3-5 years.

This is great news for all the HPC mavens out there. A good question to ask is, “What will we do with all these FLOPS?” In years past, a machine with the power of my little desktop would have been used to find oil, fold proteins, design jets, or perform other cool HPC feats.

Before we start jumping up and down with FLOPS of joy, however, I would like to refine this question a bit. It is really more important to ask, “What will we do with all the parallel FLOPS?” Ah, that parallel word. It conjures up clusters and, now, multi-core. Parallel changes things. No longer can we expect increased performance from clock ramping of a single processor (single core). We have to bite the parallel bullet. And biting the bullet implies pain.

Fortunately, for those in the HPC world, much of the parallel pain has passed, or is at least tolerable. Many of the important applications now run on clusters (which are distributed memory parallel computers) and will also run in some capacity on multi-core systems (which are shared memory parallel computers, often referred to as Symmetric Multi-Processing (SMP) systems). Running on clusters of SMP systems is the next challenge. Once it is solved to a satisfactory level, we can continue to throw FLOPS at the same problems we have in the past. The multi-core approach does break the old model of a single CPU/memory node communicating over a fast interconnect, so there will need to be some non-trivial adjustments.
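To make the distributed-memory idea concrete, here is a toy sketch using Python’s multiprocessing module (purely illustrative, not real MPI code): each worker process owns a private copy of its chunk of the data and sends its result back explicitly, much as ranks in a cluster job would.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Runs in its own process: no memory is shared with the parent,
    # so the chunk arrives by explicit message passing, much as an
    # MPI rank would receive its portion of the problem.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Decompose the problem four ways, one piece per worker.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    assert total == sum(data)
```

On an SMP (multi-core) node the same decomposition could instead use threads over one shared copy of the data; on a cluster the chunks travel over the interconnect, which is part of why the network matters more as core counts per node grow.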

For the HPC practitioner, more parallel FLOPS usually means a faster time to solution (i.e., more solutions) and/or a better quality of solution. As the efficiency and scalability of parallel jobs may change for better or worse with multi-core, cluster design will get a bit more complex than in the past. As islands of SMP power-houses start to replace two-socket/single-core systems, old assumptions will need to be tested. Attention to the interconnection network will become more important than in the past, when Gigabit Ethernet and a switch served the basic two-core node. InfiniBand and 10GigE solutions from companies like Cisco will be needed to service the requirements of large multi-core nodes in the future.

What about everyone else? Will the pain of parallel prevent mainstream use of HPC clusters? As there do not seem to be any viable alternatives, parallel approaches will slow but not prevent growth into the mainstream. Indeed, tools to help with the migration are available from Intel, IBM, and others. There is plenty of room for innovation in the FLOPS pond. There is no argument that more FLOPS can help everything from industrial competitiveness to education, but delivering the FLOPS is going to take some effort on several fronts.

While I could have tried to sell you a rosy picture of a multi-core cluster FLOPS nirvana, I chose to keep things a bit more pragmatic. There is no doubt we have our work cut out for us. I believe we will meet these challenges and open the FLOPS floodgates to one and all. Our goal at Today’s HPC Clusters is to help you get there. We plan on helping you navigate the HPC landscape, make good decisions, and get the most out of the low-cost parallel FLOPS that seem to be quite literally landing on our desks and in our clusters.

Comments on "What Are You Going To Do With Your FLOPS?"


How about “over 26 Gflops of measured performance for $1256.00”?

“Its dimensions are just 11″ x 12″ x 17″, making it small enough to fit on one’s desktop or in a suitcase, and it is built from 4 microATX motherboards” and 4 PSUs. http://www.calvin.edu/~adams/research/microwulf/design/

“As of Aug 1, 2007, Microwulf can be built for $1256, improving its price/performance ratio to less than $48/Gflop. See the Cluster Monkey article for the details.” http://www.clustermonkey.net//content/view/211/1/

It is getting difficult to write an article for publication without the technology blowing past your publish date with a complete new generation of increased capability!


“Many of the important applications now run on clusters (which are distributed memory parallel computers)”

Can you give examples ?



