A Second Smattering of SC09 Videos

This week we hear from Intel, Numascale, and Mellanox, plus some thoughts on single-node HPC solutions.

Most major cities have symbols or icons that identify them. For instance, Philadelphia has the Liberty Bell and New York has the Empire State Building. Portland has the majestic Mount Hood as its moniker. Of course, back east we don’t call those things mountains; we use their proper name — a volcano. Mount Hood is considered dormant, but in reality it has an estimated 3-7% chance of erupting in the next thirty years, which, by the way, is about 2.5 million times more likely than winning the Powerball lottery. (A Powerball ticket has a 1 in 80 million chance of winning.) The USGS characterizes it as “potentially active.” Let’s keep that in mind next time we have the SuperComputing conference in Portland. Next year we head to New Orleans.

The videos for this week are from Intel, Numascale, and Mellanox. As I watched these videos, I realized something. I did not quite smack myself on the forehead, but I think we are approaching a fork in the road. If you read my Small HPC article, you may recall that multi-core is dramatically increasing core densities in a single box. It will soon be almost standard to get 16 cores per box. Using four-socket motherboards, 24 and 32 cores will be possible as well. Why are these numbers important? If you look at survey data that reports scalability limits of applications, you find that over 55% of HPC users use fewer than 32 cores per run. As I mentioned previously, pretty soon half the HPC users will be able to get the number of cores they need with a single motherboard. I believe this will change things. And, with the advent of SMP scaling methodologies (both hardware and software), it is conceivable that virtually all of the HPC market could run on commodity shared-memory platforms. Performance is another issue, but in theory, a large amount of HPC may never leave a single server.

With that in mind, listen carefully to the following videos. The first is an interview with Richard Dracott, general manager of Intel’s High Performance Computing Group, in which we learn about the upcoming Intel EX (8-core Xeon) and a higher-clocked six-core variant for the HPC market. The unnamed six-core version is actually a good use of 8-core processors that have two flat tires. The video has much more, including the role HPC plays in Intel’s development cycle and the Ct language.

While gallivanting around the show floor, I sometimes run into people I know and grab an “on the spot” interview. In the video below, I meet Jim Cownie, and after ten years learn how to pronounce his name correctly.

Speaking of scalable SMP systems, I had a chance to talk with Einar Rustad from Numascale about their plug-and-play HTX card that transforms AMD servers into a single shared-memory computer. An HTX port is needed, of course, but think of it as extending the HyperTransport bus off the motherboard. I’ll let Einar explain further.

My final interview this week is with Mellanox. As you know, Mellanox is a leading provider of InfiniBand technology, and they continue to push the performance envelope with new technologies. This year they announced CORE-Direct, which provides hardware-assisted collectives for MPI programs. For example, a typical MPI program may have stages where data are broadcast, all nodes synchronize, and data are collected across all nodes. By offloading the collective communication, ConnectX-2 adapters help to reduce communication time and CPU cycles. I’ll let Sanjay from Mellanox explain further:

I’ll put a bookmark in the video parade for this week. There are plenty more to come. Stay tuned for the high-tech French fry machine and the T-shirt hustle. I can promise that these two topics have virtually nothing to do with HPC, which is why you want to watch them.
