SC05: Recess for Cluster Geeks

Off to see the wizards… of high-performance computing.
My normal daily routine consists of pecking away at my laptop while listening to the orchestral harmonies of nearby, whirring cluster fans. But once a year, I, like many of the high-performance computing faithful, make a pilgrimage to some distant convention hall to attend the high holy days of high-tech hobnobbing, the annual Supercomputing show. The most recent show, Supercomputing 2005 (SC05, http://sc05.supercomputing.org), was held in Seattle, Washington back in November.
Supercomputing is always rewarding, although it can be a bit of a brutal affair for an average cluster geek like myself. For example, I am forced to talk to people using an archaic form called the spoken word. Luckily, there are several HOWTOs available.
For those who have never been to the event, it is the largest pile of high-performance computers, storage, and networking collected anywhere in the world. There are four high holy days, and there's always something to see, someone to meet, or some beer to drink. That leaves little time for sleep and other niceties. Wisely, Supercomputing is always held the week before the United States' Thanksgiving holiday. A week of rest and relaxation (and overeating) is almost always needed after the event.

Big News?

From my discussions with people, there didn’t seem to be any really big news this year — at least news that those in the trenches care about anyway. There was plenty of news in any case. Check out http://www.clustermonkey.net/content/view/74/40 and http://scalability.org, two SC05 blogs that tally news, announcements, and observations.
For me, the biggest news was the size and scope of the show. There were 9,700 attendees (up from 6,500 in 2004); 3,443 participants in the technical program; a half-terabit of network capacity; and 265 exhibitors. I have to believe that cluster computing has helped expand the HPC market to such a scale.
I thought it would be interesting to mention a few of the things I found noteworthy at the show. Before anyone feels slighted, though, remember that I was one person with two and a half days at the show. I tried to talk to as many people and companies as possible, but the layout of the convention center made it difficult to get around at times. There were two main rooms separated by what was known as the “Microsoft Gauntlet.” There was also an upstairs display area that I never quite found.

Clusters are Disruptive

I managed to attend the IDC breakfast during the show. Not only did I get a free breakfast, but plenty of marketing information as well. One conclusion presented at the breakfast was that clusters are a disruptive technology. According to the IDC data, cluster sales have grown 49 percent in the last two years, while capability systems (traditional supercomputers) are down 29 percent over the same period. In 2004, clusters accounted for one third of the market; today they account for over half.
According to IDC, clusters exceeded their most optimistic projections. The really interesting news is that clusters have caused growth, not decline, in the market. In other words, as cheaper systems replace expensive systems, the market has only increased in size, presumably due to the lower cost of entry.
The other term that was mentioned was dark clusters. Like the theoretical "dark matter", dark clusters cannot be seen by the standard IDC metrics. These are clusters that are built from component servers and therefore not counted as clusters shipped from a vendor. I'd be willing to wager that dark clusters, if they could be counted, would push the market share to at least 75 percent.
For those who like to do math, the HPC market is pegged at $7.25 billion in size. IDC also predicts that the category with the biggest growth will be the under-$50,000 technical workgroup category. Personal cluster, anyone?
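For those inclined to check my wager, here is a back-of-the-envelope sketch. The $7.25 billion market size is the IDC figure above; the 55 percent counted share is my own assumption (IDC says only "over one half"), so treat the numbers as illustrative.

```python
# Back-of-the-envelope check of the "at least 75 percent" wager.
# Assumptions (not IDC figures): counted cluster share of 55%.

market = 7.25          # total HPC market, $ billions (IDC figure)
counted_share = 0.55   # assumed: "over one half" of the market
target_share = 0.75    # the wagered share once dark clusters are counted

counted = market * counted_share

# Dark clusters add revenue to both the cluster total and the whole market.
# Solve (counted + dark) / (market + dark) = target_share for dark:
dark = (target_share * market - counted) / (1 - target_share)

print(f"Counted cluster revenue: ${counted:.2f}B")
print(f"Dark-cluster revenue needed for a 75% share: ${dark:.2f}B")
```

Under these assumptions, it would take roughly $5.8 billion of uncounted, component-built clusters to reach 75 percent — a big number, but not an absurd one given how many clusters are assembled from plain servers.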

The Top 500

No SC review would be complete without a mention of the Top 500. The list again shows a major shake-up of the top ten systems, but IBM’s Blue Gene still holds the top spot. (The complete list can be found at http://www.top500.org/lists/2005/11/o). By the way, only three of the top ten systems are traditional clusters. Not to worry though, clusters accounted for 72 percent of the top 500 computers ranked by running the HPL benchmark.
If you look beyond the whole "fastest computers in the world" misnomer, you can find some excellent historical data (1993 to present) on the Top 500 site. For this reason alone, the Top 500 should continue. Besides, when everyone gets bored, a good Top 500 "flame fest" always brightens the day.

Microsoft and the “Open” Word

As many of you may know, Microsoft has stuck its toe in the HPC pond (again). Bill Gates was even the keynote speaker. As I mentioned, Microsoft had booths on both sides of the large atrium connecting the two main exhibit halls. It was a great location, although, in my opinion, they were missing the standard "Big-Ass-Cluster" (BAC) that is so predominant at SC. However, Microsoft did, to the company's credit, have some clusters running Windows, and a good sprinkling of ISVs could be found in their booth.
Now the curious part for me was the use of the word Open. It was all over Microsoft’s literature, booth, and even the free beta version of Windows 2003 Computer Cluster Edition. It was odd to say the least. I have no idea what Microsoft’s use of the word “Open” was supposed to mean. Did it mean their cluster tools were going to be open source or based on open standards? Who knows.

AMD is Driving the Hardware Boat

I did have occasion to talk separately with Björn Anderson, Director of HPC and Grid Systems for Sun, and with Douglas O'Flaherty, Division Manager, HPC Commercial Segment for AMD. If you're using an x86 cluster, chances are that you're using Opterons, and for good reason. AMD has been working their plan and delivering on their promises. If you think dual-cores will change the way you compute, think again. What about 32 Opteron cores in the box sitting next to your desk? Sun quite wisely got on board with AMD two years ago and has announced the fastest system in Japan, expected to achieve 100 teraflops using 10,480 Opterons living in racks of Sun Fire servers.

Here and There

Here are some other interesting things I noticed:
*IBM was talking about the Cell processor. Just before SC05, the company announced the availability of a software simulator for the Cell. You may have to wait for the PlayStation 3 to get your hands on cheap hardware, but in the meantime, you can start coding for the Cell.
*I spent some time with Colin Hunter, President and CEO of Orion Multisystems. We talked a bit about Orion Multisystems's personal supercomputer and single-system-image cluster software. Orion is a pioneer in this area of personal clusters. They can put a 96-CPU, desk-side system into your cubicle without the need for extra power or cooling.
*There was a Beowulf Bash on Wednesday night hosted by Penguin, Scyld, AMD and HPCwire. I had a chance to touch base with Thomas Sterling about his new position at Louisiana State University (LSU). It turns out that the day after his announcement to move to LSU, the hurricane machine went into overdrive. Fortunately, all is well and I expect once Thomas gets settled in, you’ll hear more good things from LSU.

Software Evaporation and Taming the Warewulf

Personally, I am bewildered by the current state of presentations at a computer conference. The lecturer runs PowerPoint, talking to rows of people who are surfing the web and reading email. (Admit it, you do it, too.) The gravitas of a conference is gone, and that's the reason I like "Birds of a Feather" (BOF) sessions: there is normally more audience interaction.
The most interesting BOF was one titled "The Evaporation of the HPC Applications Market." The BOF included an all-star panel (Dr. Stanley Ahalt, Executive Director, Ohio Supercomputer Center (moderator); Paul Bemis, Vice President Marketing, Fluent, Inc.; Dr. Al Geist, Corporate Fellow, Oak Ridge National Laboratory; Thomas Lange, Director, Corporate R&D Modeling & Simulation, The Procter & Gamble Company; Loren Miller, Director, IT Research, Development & Engineering, The Goodyear Tire & Rubber Company; and Dr. Reza Sadeghi, Vice President, Solver Development, MSC.Software) that addressed a serious issue in the HPC market highlighted by the Council on Competitiveness/DARPA study of independent software vendors (ISVs; see the study online at http://www.compete.org/pdf/HPC_Software_Survey.pdf and the November 2005 "Cluster Rant" column, available online at http://www.linux-mag.com/2005-11/cluster_01.html). The discussion among the panel was refreshing and candid. Aside from learning that the HPC community needs a new way to do HPC software, I also learned that the main customer pushing Fluent computational fluid dynamics (CFD) technology is Formula One racing. I also learned that Formula One is the most popular spectator sport in the world, and has one of the highest paid athletes (Michael Schumacher is tied with Tiger Woods at $80 million). Next time someone asks what you can do with a cluster, take them for a drive and explain that with a properly designed car shape, you could be making eight digits a year.
Another interesting BOF was the "Warewulf Cluster Toolkit Users" meeting (Warewulf can be found at http://www.warewulf-cluster.org/cgi-bin/trac.cgi). I attended because I think Warewulf has enormous potential with HPC clusters. I was pleasantly surprised to see over thirty people in attendance. The BOF was held in the same room as the Bill Gates keynote, which I found to be an interesting contrast. The day before, the room had been full for a somewhat generic presentation; on this day, a small group of people, with sleeves rolled up, was talking about actually solving some core issues of HPC cluster administration.
My advice: if you want the 50,000-foot view, go to industry keynotes. If you want to check your email, go listen to a paper. If you want to know what is going to move HPC forward, attend the BOFs.

What’s Next on the Agenda?

Speaking of moving HPC forward, I want to close with a shameless plug.
At the beginning of 2005, as some of you may know, I started something called The Cluster Agenda Initiative. The Cluster Agenda is intended to be a road map of where we have been (best practices) and where we are going (challenges) with cluster technology and practice. Some community members believe it's time the Initiative got organized. I'll be writing more about the Agenda next time, but until then, take a look at it (http://agenda.clustermonkey.net) and contribute if you feel the need. After all, it is an Open initiative.

Douglas Eadline can be reached at deadline@basement-supercomputing.com.
