
HPC Reflections: SC09 in Portland, OR

The big show is over and now all that remains is to make sense of it all

Arriving home from the annual SC show has become rather routine. My wife assumes I will be “out of sorts” until after Thanksgiving, my daughter usually wants to inspect my t-shirt haul, and I just want to lie on the sofa and watch some mindless television. As it is now the first week of December I am basically recovered from my sojourn to Portland, although I find myself wanting to remain on the sofa.

I don’t know if busy is the right word to describe my week. Perhaps over-allocated is a better description. Let’s start with my last night in Portland (Thursday Nov. 19th). It has become somewhat of a tradition that fellow cluster geek and Linux Magazine writer Jeff Layton and I have dinner on this night. Our conversation usually goes something like:

Doug: I’m exhausted.

Jeff: So am I.

Silence

Doug: What was I saying?

Jeff: Something about car exhaust.

Doug: Oh yeah, so I was …

We did manage to make some sense eventually. At one point, I made the rather random comment, “You know, with all that application-driven dynamic provisioning stuff now offered by Platform and Cluster Resources (now Adaptive Computing), we are basically done (with perfecting the clustering model).” Jeff asked for a little clarification. I continued, “Well, from a user’s perspective, they are now in the driver’s seat. Issues that the community used to fret over are gone. For instance, getting codes to compile for a specific OS. Now you can just automatically provision the nodes you need with the OS you need. It is almost as if the application is bundled with its entire run-time environment, which gets loaded dynamically at run time. It is kind of like what grid was supposed to be, but simplified the way cloud is. The OS, interconnect, and file system have now become schedulable resources.” Jeff replied that it was an interesting way of thinking about it. I’m not sure I made a lot of sense in any case.


I had the whole plane ride home to think about that discussion. I decided that I need to write a longer treatment of the topic, as it is not easily explained in two or three paragraphs. I have called this Cluster 3.0: dynamic provisioning.
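To make the “schedulable OS” idea a little more concrete, here is a minimal toy sketch in Python. It is not any vendor’s API; the `Node`, `Job`, and `schedule` names are all hypothetical. The point is simply that when no node already runs the image a job needs, an idle node gets re-provisioned rather than the job being turned away:

```python
# Toy illustration of treating the OS image as a schedulable resource,
# alongside cores. Hypothetical names throughout; not a real scheduler.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cores: int
    os_image: str   # image currently provisioned on the node

@dataclass
class Job:
    name: str
    cores: int
    os_image: str   # image the application was built against

def schedule(job, nodes):
    """Pick a node with enough cores; re-provision its OS if needed.

    Returns (node, reprovisioned) or (None, False) if nothing fits.
    """
    # Prefer a node that already runs the requested image.
    for node in nodes:
        if node.cores >= job.cores and node.os_image == job.os_image:
            return node, False
    # Otherwise take any big-enough node and load the image it needs.
    for node in nodes:
        if node.cores >= job.cores:
            node.os_image = job.os_image  # the dynamic provisioning step
            return node, True
    return None, False

nodes = [Node("n001", 8, "centos-5.4"), Node("n002", 8, "sles-11")]
job = Job("fluent-run", 8, "sles-11")
print(schedule(job, nodes))  # n002 already runs sles-11, no re-provision
```

Real systems obviously do far more (image staging, reboot costs, fairness), but the structural shift is the same: the OS moves from a fixed property of the cluster to just another term in the job request.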


Jumping back to the beginning of the week, the Beowulf Bash was a big success. A big thank you to our sponsors. The event has become one of the highlights of the show. As Tom Sterling remarked, “It is a special vendor-neutral, community-driven event where you can meet people and talk about HPC.” In case you missed it, there are pictures over at Inside HPC. There will be some video posted real soon.

Speaking of video, I was in front of the camera again this year. I managed to get a pile of interviews and some other (cough, cough) commentary. I should mention that these are not the boring kind of public relations interviews. I pretty much show up with my trusty cameraman, Vien Hong, and start asking questions. There are no scripted questions or retakes. If it looks awkward and real — it is. We should have the first batch loaded next week. I’ll supply some back story and links to the videos in my column. I’m still wondering if the gladiator movie comment was out of line. Running around doing video and helping out in the Appro booth made for some busy days. Plus I was showing my Limulus system across the aisle.

In terms of new technology, I think GP-GPUs have made a mark on the market. There was quite a lot of activity in this area. Between NVidia’s Fermi and ATI’s FireStream, the HPC market is in for some big changes. Heterogeneous computing is the new buzzword. Recent news about IBM Deep Computing dumping the Cell processor confirms that it is now a two-pony race (NVidia and AMD/ATI) for the best SPMD co-processor. Over on the x86 side of the market there was news of, get this, more cores.

Another piece of news was the SDSC Gordon Cluster. While the forthcoming cluster can crunch numbers like everyone else, it is designed specifically for data-mining operations. For the first time, a major supercomputing center is not touting where its cluster will land on the Top500 (although SDSC has provided estimates), but rather how well it will do in IOPS (Input/Output Operations per Second). Gordon will use a large amount of flash storage (get it? If you don’t, ask an old person about Flash Gordon). I’ll have more on this topic real soon. I consider it a new breed of cluster which will become an important tool in data-heavy HPC.
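For readers less familiar with the metric, the FLOPS-versus-IOPS distinction can be sketched with a toy contrast in Python. This is purely illustrative and in no way a model of Gordon’s hardware: the floating-point loop stands in for a compute-bound workload, while the random small reads from a buffer stand in for the many tiny accesses a data-mining job makes to storage:

```python
# Illustrative only: a FLOPS-style count (floating-point operations)
# versus an IOPS-style count (small random accesses). The in-memory
# buffer is a stand-in for a flash device; function names are made up.

import random

def count_flops(n):
    """Do n multiply-adds and return (ops completed, accumulator)."""
    x = 1.0000001
    acc = 0.0
    for _ in range(n):
        acc += x * x      # one multiply-add per iteration
    return n, acc

def count_iops(buf, n, block=4096):
    """Do n random block-aligned-ish reads from buf; return (ops, sum)."""
    limit = len(buf) - block
    total = 0
    for _ in range(n):
        off = random.randrange(limit)
        total += buf[off]  # touch the first byte of each "block"
    return n, total

buf = bytearray(64 * 1024 * 1024)  # 64 MB stand-in for a flash device
print(count_flops(100_000)[0], "floating-point ops")
print(count_iops(buf, 100_000)[0], "random-read ops")
```

Timing either loop gives ops per second; the point is that a machine can be mediocre at the first loop and spectacular at the second, which is exactly the trade Gordon is making with flash.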

One other piece of news was the debut of my single-case Limulus Machine (four microATX motherboards in one case with a single power supply). It arrived in more pieces than when I shipped it, but I was able to repair it on Monday before the show started. As I am not one to single out any one company, I’ll be coy about it and mention that the shipper’s name rhymes with FedEx. If you hop over to the project page in the link above, you can see some pictures and view copies of the slides that were running at the show. I’ll have more detailed pictures in the coming weeks. Overall the response was very positive. Most people were surprised at how quiet it was and how little power it used (I had my Kill-a-Watt meter attached on one of the days). A special thanks to Jess Cannata for helping out.

And finally, my SC09 Twittering was kind of boring. Looking at the posts, I can’t figure out why anyone would be interested in me searching for coffee. On the other hand, shooting around URLs seems like a good use for Twitter. I am inching closer to my 250-follower goal. The current tally is 183. That and a cup of coffee will get me, let’s see, a cup of coffee.
