Breakthroughs of a Pedestrian Nature

Since we are already talking about packaging, let’s consider those forgotten breakthroughs that help make it all possible.

Last week I mentioned my desire for a personal cluster case. I’ll have some more news on that front by the end of the year. While I was thinking about motherboards and cases, I also thought about some of the advances in HPC clustering that concerned packaging and not performance. These improvements are not breakthroughs by any means, but they certainly enabled the whole cluster thing. What am I talking about? Well, let’s consider the 1U case.

When I first started putting clusters together there were plenty of rack-mount cases, as long as you needed ones that were 4U (7 inches) in height. (A “U,” by the way, is about 1.75 inches. I think U stands for Rack Unit in a GNU self-referencing kind of way. It does, however, equal a vershok, an old Russian unit of length.) These 4U cases were almost like a tower case turned on its side. Slide rails were added so the entire case could be pulled out and serviced while in the rack. Most standard rack-mount chassis are 42U, so back in the day you could fit ten dual-CPU nodes in a rack. However, you needed room for a switch and maybe a network power control module of some sort, so eight dual-processor (single-core) nodes was a nice computer kind of number for each rack. Employing some higher math, that means there would be 16 cores in an entire rack. Using today’s four-socket motherboards and quad-core processors, this same number of cores can now fit into a 1U chassis — a 32-fold increase in compute density.
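For the arithmetically inclined, here is that density comparison as a minimal sketch; the node, socket, and core counts are just the ones quoted above.

```python
# Back-of-the-envelope density arithmetic from the paragraph above.
# Old school: eight 4U nodes per rack, two single-core CPUs per node.
old_cores = 8 * 2 * 1                      # 16 cores
old_rack_units = 8 * 4                     # 32U of rack space
old_density = old_cores / old_rack_units   # 0.5 cores per U

# Today: one 1U node with four sockets of quad-core processors.
new_cores = 4 * 4                          # 16 cores
new_density = new_cores / 1                # 16 cores per U

print(f"Density increase: {new_density / old_density:.0f}x")  # 32x
```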

Of course, the 1U server case is the reason for such densities. A good 1U server requires a bit of engineering. The original 4U case had nice big fans pushing air through the mostly empty case. When server cases dropped to 2U, the fans got smaller, faster, and louder. When we got to 1U, the fans got smaller, faster, louder, and increased in number. Moving air across the motherboard became very important because in a 1U case the CPU heat-sinks often do not have fans on them. The case fans are responsible for moving air across the copper heat-sinks. If everything is not designed just right, you end up cooking your parts.

The HPC market was not the biggest driver for the 1U “pizza box” server case. At the same time, web serving was becoming popular, which pushed vendors to deliver “server” computing in smaller and smaller spaces. The cluster market certainly took advantage of these advancements, but there was one other thing needed before the 1U could become the case of choice.

As server motherboards were developed, all the essential hardware was packed onto the motherboard (e.g., dual networks and video). Thus, a motherboard could be dropped into the 1U server case and be ready to run. What if you wanted to use an additional PCI card in your server? Since the standard PCI card was about 4.75 inches tall, there was no way to fit one vertically into a 1U case. Enter the “bus bender.” This little piece of hardware solved a big problem. Compute nodes could now have additional network cards installed in at least one PCI slot. Fortunately, that was all that was needed for a Gigabit Ethernet, Myrinet, QS-Net, InfiniBand, Dolphin SCI, or whatever networking card. The 1U with bus bender has become the standard unit of HPC packaging. I suppose the 1U power supply should get some credit as well.

There is something a bit more subtle about my obvious 1U case, power supply, and bus bender revolution. The fact that the customer can decide what goes in the box is perhaps the greatest breakthrough in HPC packaging. Recall that in the past, the supercomputer was a big black box that ran Fortran. You had little choice about what was inside. And if you did not need the additional cache-sniffing, vector-tweaking hardware, you still bought it anyway. Similar things have been said about blades for HPC as well. I have heard more than once, “Blades are nice, but they have a lot of stuff I really don’t need. I can buy more 1U nodes for the same price.”

Depending on where you buy your “pizza boxes,” the choice of the innards can be totally up to you. The bigger companies often offer fewer choices, while smaller outfits can build almost whatever you want. Buying what you need and nothing else is a relatively new idea in HPC. Want 1024 cores in 64 nodes with a single-port IB card and no hard drives? No problem. How about 512 cores, each node with 4 GB of RAM and a 500 GB hard drive? Sure thing. The empty 1U case lets you tailor the system to your applications.

Are there any other packaging breakthroughs ahead? I’m not sure. Recently I had a chance to get up close and personal with an IBM iDataPlex and found some of the design ideas almost too obvious, but curiously absent elsewhere in the market. Of course I have a few ideas of my own.

One thing I have always wondered about is using the bottom of a 1U (or any rack-mount case) as the top of the case below it. Essentially, in a rack you have two sheets of metal, the top of one case and the bottom of the next, right next to each other. There is a small gap between the nodes that seems to serve no purpose. What if the cases were engineered to fit together so that the bottom of one case becomes the top of the case below it (except for the top node)? In a 42U rack chassis that could save 41 large pieces of sheet metal.

Here is another idea. I have often thought about the thermodynamics of racks and servers. A traditional rack chassis is cool in the front and hot in the back. From a thermodynamic standpoint, a difference in temperature means you can get some work done. One way to harness that temperature difference is a device known as a Stirling engine. A Stirling engine can extract useful work (like turning a fan blade) from even minute temperature differences. I doubt there is enough energy to turn every fan, and there is that Second Law issue as well, but there may be enough to reduce some of the power load. Indeed, the interesting thing about a Stirling engine is that the bigger the temperature difference, the faster a fan blade can spin. I am happy to see this idea actually being tested by MSI.
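To put a rough number on that Second Law issue, the Carnot limit caps how much of the heat flow any engine, Stirling or otherwise, can turn into work. A minimal sketch, assuming a made-up 20°C cold aisle and 40°C exhaust (illustrative numbers, not measurements from any particular rack); real small-ΔT Stirling engines recover only a fraction of even this limit, which squares with the suspicion above that it could offset some, but not all, of the fan load.

```python
# Carnot limit for a rack's front-to-back temperature difference.
# The 20 C intake and 40 C exhaust figures are assumptions for
# illustration only, not measurements.

def carnot_efficiency(t_cold_c: float, t_hot_c: float) -> float:
    """Maximum fraction of heat flow that any heat engine
    (a Stirling engine included) can convert to work."""
    t_cold_k = t_cold_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

print(f"{carnot_efficiency(20.0, 40.0):.1%}")  # about 6.4% at best
```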

My final idea is really a desktop application. I have always wanted to connect a CPU heat sink to the top of a tower case. The top of the case would have a round area that radiates heat, perfect for placing a coffee mug or the occasional muffin, thus keeping my morning beverage and snack at a warm, tasty temperature. As things tend to collect on any flat surface within an arm’s length of where I sit, this could be a bad idea in the long run. For now, however, such ideas will only be flights of fancy as I work on much more pressing issues — like how to fit four motherboards in a single case without tripping breakers or prompting a visit from the local fire department.
