This week I’ll be presenting the last of the SC09 videos. I have to apologize as we seem to have misplaced the interview with Joe Landman and his amazingly fast storage devices. I’ll keep looking and hopefully it will turn up.
My final stop this week is with Appro International. As many of you know, Appro is a cluster vendor that deals exclusively in HPC. They have been around for a while and seem to be very good at delivering integrated solutions that work. In case you missed it, last month I wrote about the San Diego Supercomputer Center (SDSC) Gordon cluster to be built by Appro. I believe the work they are doing on data-intensive computing is important because it opens up a new class of solutions to scientists and researchers. I’ll let Mike Norman of SDSC explain the hardware design:
“We decided to push the envelope and have come up with a design that gives us 32 supernodes. Each supernode consists of 32 HPC nodes, each capable of 240 GFLOPS with 64 GigaBytes (GB) of RAM. A supernode also incorporates 2 I/O nodes, each with 4 TeraBytes (TB) of flash memory (using SSDs). When tied together by virtual shared memory (using ScaleMP), each of the system’s 32 supernodes has the potential of 7.7 TFLOPS of compute power and 10 TB of memory (2 TB of DRAM and 8 TB of flash memory). We will also be using dual rail QDR InfiniBand to connect the nodes.”
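The quoted figures add up, and it is worth seeing how. Here is a quick back-of-the-envelope check in Python (the variable names are mine, not SDSC's; only the numbers come from the quote above):

```python
# Per-supernode figures from the Gordon design quote
compute_nodes = 32        # HPC nodes per supernode
gflops_per_node = 240     # peak GFLOPS per node
ram_per_node_gb = 64      # GB of DRAM per node
io_nodes = 2              # I/O nodes per supernode
flash_per_io_node_tb = 4  # TB of SSD flash per I/O node

tflops = compute_nodes * gflops_per_node / 1000   # aggregate compute
dram_tb = compute_nodes * ram_per_node_gb / 1024  # aggregate DRAM
flash_tb = io_nodes * flash_per_io_node_tb        # aggregate flash
total_memory_tb = dram_tb + flash_tb              # DRAM + flash seen by ScaleMP

print(f"{tflops} TFLOPS, {dram_tb} TB DRAM + {flash_tb} TB flash "
      f"= {total_memory_tb} TB")
```

The compute total comes out to 7.68 TFLOPS, which is the quoted "7.7 TFLOPS" after rounding, and the memory total is exactly the 2 TB of DRAM plus 8 TB of flash for 10 TB per supernode.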
You can learn more from the video. And don’t miss the guy walking in front of the camera toward the end. We had people politely redirecting traffic around the interview, but this guy pushed right through it. Thanks, “pushy HPC want-to-be guy” in the red shirt. We’ll be watching for you next year.
While in the Appro booth I managed to corner Vice President John Lee. He is the guy in charge of designing all of Appro’s hardware. In this interview, he talks about how they have integrated NVidia GPUs into their blade products. In one sense it amazes me how fast GP-GPU technology is gaining ground in HPC. Appro and other vendors would not be building these products if there were no customer demand. I’ll let John describe the NVidia solutions Appro has to offer.
The final video from the Appro booth is about 10 GigE. I’m not exactly sure who is giving the talk because mostly all I see is the back of his head. In the talk he references a 10 GigE white paper. The paper provides some recipes and background for putting together a 10 GigE cluster. (It may also be helpful in understanding some of the cabling issues.) I have previously written about why I think 10 GigE is going to have a place in HPC. And, as I state every time I talk about the issue, “I don’t think it is going to be a winner-take-all 10 GigE vs. InfiniBand battle. Both have their advantages, and the needs of the customer dictate what is the best solution. Remember, HPC is about choice and maximizing price-to-performance for a particular application set.” Let’s see what the guy in the video has to say.
And now, my patient readers, we come to the last video. At the end of each SC I tend to wax philosophical about the HPC industry and what it all really means in the end. I don’t want to give away my insights just yet, so you will have to watch the video.
Thanks, Lara, for being a good sport. Next week I’ll be back talking about other HPC issues. I should mention that I have my four-node Limulus cluster up and running Perceus in the new case (using a single power supply). I’ll have lots to write about as I test and benchmark this system. I’m also still trying to keep tweeting on Twitter. Now that I have 192 followers, I feel like I should be tweeting more. I mean, I can just imagine 192 people waiting at their computers saying “Come on, Eadline, tweet something.” Or not.