The Big Show

I am standing in the middle of SC07 (Supercomputing 2007). SC07 is THE HPC event of the year. 318 exhibitors have made the trek to Reno, Nevada for SC07, and scores of attendees are here for the week-long conference. If you’re attending the show, you’re probably not reading this because you’re either completely exhausted, back-logged, depressed from losing your money at the casinos, still trying to get your shampoo back from the TSA agent at the airport, or some combination of the above.

In any case, I thought I might try my hand at real-time blogging. As many of you know, I write quite a bit about clusters, but not in the real-time sense. I normally try to write clever and {insight/incite}ful articles with some take-away for the reader. Not this week. I’m blogging, baby. Plus, Linux Magazine editor-in-chief Joe ‘Zonker’ Brockmeier just came by and asked me when he can expect this week’s column…

11:00 a.m. I’m on the trade show floor standing in a vendor’s booth (Appro) as part of my duties as a “booth geek”. When not trying to invent TSA-safe shampoo, I like to test and benchmark new technology. When Appro asked me to test some new Harpertown processors and write a white paper, I jumped at the chance.

For Harpertown, the news is good. My results show a much improved quad-core processor from Intel. How much improvement, you ask? How about overall improvements of at least 40% on my 16-core MPI runs (two 8-core servers with InfiniBand). I’ll have more to say about this in the weeks to come, but let me just say, I was late finishing the paper because the test results were so good that I had to run the benchmarks twice just to be sure I did not make a mistake.
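I can’t reproduce the white paper’s benchmark suite here, but if you are curious what a timed 16-rank MPI run looks like in the abstract, here is a bare-bones sketch in C. The ring exchange, message size, and iteration count are my own illustrative choices, not the actual tests from the paper.

/* Minimal timed MPI ring exchange -- an illustrative sketch only,
 * not the white-paper benchmark. Message size and iteration count
 * are arbitrary examples.
 * Build:  mpicc -O2 ring_time.c -o ring_time
 * Run:    mpirun -np 16 -hostfile hosts ./ring_time   (two 8-core nodes)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size, i, next, prev;
    const int iters = 100;             /* example iteration count */
    const int count = 1 << 20;         /* 1M doubles (8 MB) per message */
    double *sendbuf, *recvbuf, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendbuf = malloc(count * sizeof(double));
    recvbuf = malloc(count * sizeof(double));
    memset(sendbuf, 0, count * sizeof(double));

    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        /* pass buffers around the ring; this exercises the interconnect */
        MPI_Sendrecv(sendbuf, count, MPI_DOUBLE, next, 0,
                     recvbuf, count, MPI_DOUBLE, prev, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d ranks, %d iterations: %.3f seconds\n", size, iters, t1 - t0);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}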

11:30 a.m. Here’s something new: the entire trade show just went dark for a few seconds. Systems are rebooting all over the show floor. The SCinet wireless is down. Heavens! No Internet in this big room full of computer geeks!

Speaking of no Internet, the hotels used for SC shows often have bandwidth issues when the hordes of laptop-toting computer jocks show up and take down the hotel network at least once a day. The problem is that attendees have grown used to fast, ubiquitous networking. At the show itself, though, they have SCinet. No, not the one from the Terminator movies, but close.

Every year at SC they build one of the most powerful networks in the world. SCinet serves as a way for show exhibitors — including government labs, academia, and vendors — to demonstrate the advanced computing resources from their home institutions and elsewhere by supporting Supercomputing and grid computing applications.

SCinet is designed and built entirely by volunteers from universities, government, and industry. SCinet brings multiple 10-gigabit per second (Gbps) circuits to the show floor. To put that aggregate bandwidth in perspective, you could download two DVD movies in about one second.
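If you want to check that claim yourself, the back-of-the-envelope arithmetic is simple. Here is a quick sketch, assuming single-layer 4.7 GB DVDs and, purely as an example, eight of those 10 Gbps circuits aggregated together; the real circuit count at the show may differ.

/* Back-of-the-envelope check of the "two DVDs in one second" claim.
 * The 4.7 GB DVD size and the eight-circuit aggregate are assumptions
 * for illustration, not official SCinet numbers.
 */
#include <stdio.h>

int main(void)
{
    double dvd_bytes = 4.7e9;                        /* one single-layer DVD */
    double circuits  = 8.0;                          /* example: eight links */
    double link_bps  = 10.0e9;                       /* 10 gigabits per second */
    double agg_Bps   = circuits * link_bps / 8.0;    /* aggregate bytes/second */

    double seconds = 2.0 * dvd_bytes / agg_Bps;
    printf("Two DVDs (%.1f GB) over %.0f x 10 Gbps links: about %.2f seconds\n",
           2.0 * dvd_bytes / 1e9, circuits, seconds);
    return 0;
}

Run it and you get roughly one second, which is why the two-DVD line holds up once a handful of those circuits are bonded together.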

SCinet has three major components. First, according to the Web page, “it provides a high-performance production-quality network with direct wide area connectivity which enables attendees and exhibitors to connect to the Internet and other networks around the world.” In other words, free and fast WiFi all week. Additionally, SCinet includes a show-wide Open InfiniBand (OpenIB) network. Perhaps more impressive than a bunch of words is the control center in the picture.

12:30 p.m. Wireless is back up. Time for a break, and it will be nice to get away from the server fans for a while. Off to grab a quick lunch. Convention food… well, that is another blog altogether.

1:30 p.m. I have been talking about my Intel Harpertown (sorry, Xeon 5400) white paper with several people. Speaking of multi-core, we just launched the Multi-core Cookbook. Take a look and learn to cook the multi-core way.

3:30 p.m. I have a little time, so I think I’ll check the new Top500 results. And this year’s winner is, surprise, IBM BlueGene, coming in at 596 TeraFLOPS! That is 596 times 10 to the 12th power floating-point operations per second, or the equivalent of 50-60 thousand desktop machines all working on the same problem. In the case of BlueGene, there are 212,992 PowerPC 440 (700 MHz) processors chugging away under the hood. If you are wondering why the processors are only 700 MHz, just remember that more Hz means more heat. Using a large number of slower but cooler processors can have advantages.
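For the skeptical, the desktop comparison is easy to reproduce. The 10 GFLOPS per desktop figure below is my own ballpark assumption for a circa-2007 machine, not a measured number.

/* Rough sanity check on the desktop comparison.
 * The ~10 GFLOPS per-desktop figure is an assumed ballpark for 2007.
 */
#include <stdio.h>

int main(void)
{
    double bluegene_flops = 596e12;   /* 596 TeraFLOPS */
    double desktop_flops  = 10e9;     /* assumed ~10 GFLOPS per desktop */

    printf("BlueGene is roughly %.0f thousand desktops\n",
           bluegene_flops / desktop_flops / 1000.0);
    return 0;
}

That works out to just under 60 thousand desktops, which is where the 50-60 thousand range comes from.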

6:00 p.m. The first full day of the show is winding down. So am I. Last night I attended the sort-of-annual LECCIBG. For an East-coaster, you can only take so many of those late (2 a.m.) events on the west side of the country. I must soldier on, however. Tonight is the Beowulf Bash, a yearly gathering of the early Beowulf alumni. It is also a great place to associate a face with an email address. The crack Linux Magazine HPC media team will be there collecting interviews and candid footage. Stay tuned to Today’s HPC Clusters for the videos in the coming weeks.

So, blogging is kind of cool. I managed to capture a fraction of the SC experience. I wish I had more time to write about the other thoughts I had today. Like TSA-brand shampoo: a safe alternative for problem hair. I need some sleep.
