SC10 in New Orleans is two weeks away and I can feel the sense of overwhelm starting already. In my typical fashion, I have too much to do and too little time. I have a few things to cover this week before the avalanche of press releases and interviews comes my way.
The SC10 HPC community gathering, a.k.a. the Beowulf Bash, is official. We’ve got banners, a web page, and the collector’s edition invitation. Spread the word: we are on the boat. There is a short interview with organizer Lara Kisielewska over at insideHPC in case you are wondering what all the fuss is about. Plus, in my previous column, I went on about the importance of the whole community thing, so I won’t repeat myself. This year we made sure there would be places for quiet conversations in case you are not into the music thing. And, when I say “we,” I mean the community of sponsors who made it all happen: Penguin Computing, AMD, Adaptive Computing, Aeon Computing, ClusterMonkey.net, Kove (previously Econnectix), insideHPC, Intersect360 Research, Numascale, QLogic, SICORP, Terascala, Versant, SuperMicro, and Xand Marketing.
Moving on, I will be camping out in booth number 1731. The booth is part of SICORP, but on my section of carpet I will not be shilling for my host. Although, I understand they will be giving me a sofa, so I have to say they are a good bunch of people, and if you need someone who knows what they are doing in HPC, call them. I’ll be there to talk about small hardware and big software.
Let’s start with the small hardware. I will have my Limulus Machine with me again this year. For those who missed it last year, picture one case, one power supply, ten cores (soon to be eighteen), quietly working next to your desk. I’ll be showing some cool stuff, like automatic power control of the three worker nodes via Sun Grid Engine. The interesting thing about this type of system is that it runs the same software as the big clusters. I have been told by some sysadmins that they could use something like this to test new software or try some things outside of the server room. Educators and software developers also like the idea of a real cluster in a small footprint.
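The column does not spell out how the Sun Grid Engine power control works, but the usual approach is a small script that watches the queue and wakes or powers down worker nodes accordingly (for example, via wake-on-LAN and a shutdown hook). Here is a minimal, hypothetical sketch of just the decision logic; the node names, the busy/idle/off states, and the one-node-per-pending-job policy are my own assumptions for illustration, not details of the actual Limulus software.

```python
def power_actions(pending_jobs, node_states):
    """Decide which worker nodes to wake and which to power down.

    pending_jobs -- number of jobs waiting in the queue
                    (e.g., counted from scheduler status output)
    node_states  -- dict mapping node name to "busy", "idle", or "off"

    Returns (nodes_to_wake, nodes_to_sleep). A real daemon would act on
    these lists with wake-on-LAN packets and remote shutdown commands.
    """
    off = sorted(n for n, s in node_states.items() if s == "off")
    idle = sorted(n for n, s in node_states.items() if s == "idle")
    if pending_jobs > 0:
        # Jobs are waiting: wake just enough powered-off nodes
        # to cover the backlog, and leave idle nodes up.
        return off[:pending_jobs], []
    # Queue is empty: any idle node is a candidate for power-down.
    return [], idle
```

With two jobs queued and nodes `n1` and `n2` powered off, the sketch would wake both; with an empty queue it would instead flag idle nodes for shutdown. In practice the policy would also want a grace period so nodes are not cycled on every check.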
For those who wanted to buy one of these systems last year: they were not available. This year there may be an announcement about commercial availability. Note: I am not playing the “I have a secret I can’t tell until Supercomputing” game; rather, I am still working out the details with the company that will eventually build these things. I can’t give any details just yet, but if people buy these systems, I will commit to putting some of the proceeds into improving the open software that is part of the Limulus package. One of the goals for the software is an integrated, open, turn-key cluster software stack with a collection of real applications. I’ll have much more to say in a few weeks.
Moving from hardware to software, my other mission at SC10 will be to introduce and discuss the HPCTools.org site. The site, which will be operational by SC10, is sponsored by SICORP and is an idea whose time has come. A little background may help. From a software standpoint, the typical HPC cluster is a collection of software tools and applications all functioning as one system. There are countless packages and methodologies that are used to “make a cluster work.” Many of the tools, some of which are redundant, are either custom code or open source packages developed by the community. The level of effectiveness and polish, however, runs the gamut from orphaned to actively maintained projects.
I consider this situation to be one of the “hold backs” of the HPC community/market. There are several issues that could be improved. The first is awareness of tools. Very often users write their own tools because they are not aware that someone else has done exactly the same thing. There is also the maturity or robustness of a tool, which is often not apparent when searching the web. I know I have been excited about finding a project only to learn there has been no activity since 2002 and the author’s email bounces. Another issue is documentation. Open source is notorious for lacking documentation. Enough said. The final issue is one of actual support. Many of the HPC tools have been paid for with your tax dollars and as such are by definition community property. As everyone knows, there is a huge difference between production code and a tar ball of open source. There is a real need to bridge the gap between freely available and easily usable in the HPC space.
As a way to assist the community and move the market along, SICORP will be introducing a repository for open source code, system and user tools, industry-specific benchmarks, and relevant manuals and documentation under the HPCTools.org website. The repository will be totally open and act as a focal point for the vast amount of open software available for HPC. SICORP will also offer technical support through ongoing consulting, custom development, testing, monitoring, and analysis of results. Not a bad deal.
If you are strolling around the SC10 exhibit floor, stop by and let’s talk about personal HPC hardware, software tools, and “life, the universe, and everything.” That is, if you did not have too much of a good time at the Beowulf Bash. In that case, you can use the sofa.