
Breakthroughs of the Pedestrian Nature

Since we are already talking about packaging, let's consider those forgotten breakthroughs that help make it all possible.

Last week I mentioned my desire for a personal cluster case. I’ll have some more news on that front by the end of the year. While I was thinking about motherboards and cases, I also thought about some of the advances in HPC clustering that concerned packaging and not performance. These improvements are not breakthroughs by any means, but they certainly enabled the whole cluster thing. What am I talking about? Well, let’s consider the 1U case.

When I first started putting clusters together, there were plenty of rack-mount cases, as long as you needed ones that were 4U (7 inches) in height. (A “U,” by the way, is 1.75 inches. I think “U” stands for Rack Unit in a GNU self-referencing kind of way. It does, however, equal a vershok, an old Russian unit of length.) These 4U cases were almost like a tower case turned on its side. Slide rails were added so the entire case could be pulled out and serviced while in the rack. Most standard rack-mount chassis are 42U, which means that, back in the day, you could fit ten dual-CPU nodes in a rack. However, you needed room for a switch and maybe a network power control module of some sort, so eight dual-processor (single-core) nodes was a nice computer kind of number for each rack. Employing some higher math, that means there would be 16 cores in an entire rack. Using today’s four-socket motherboards and quad-core processors, this same number of cores can now fit into a 1U chassis, a 32-fold increase in compute density.
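If you want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The node, socket, and core counts are simply the ones from the paragraph above; nothing else is assumed.

    # Back-of-the-envelope check of the density claim above.
    # Numbers from the paragraph: eight usable 4U nodes per 42U rack
    # "back in the day", each with two single-core CPUs, versus a single
    # 1U node today with four quad-core sockets.

    U_PER_NODE_OLD = 4           # 4U chassis
    NODES_OLD = 8                # leave room for a switch and power control
    CORES_PER_NODE_OLD = 2 * 1   # two sockets, single core each

    U_PER_NODE_NEW = 1           # 1U chassis
    CORES_PER_NODE_NEW = 4 * 4   # four sockets, quad core each

    cores_per_u_old = (NODES_OLD * CORES_PER_NODE_OLD) / (NODES_OLD * U_PER_NODE_OLD)
    cores_per_u_new = CORES_PER_NODE_NEW / U_PER_NODE_NEW

    print(f"old: {cores_per_u_old:.1f} cores/U, new: {cores_per_u_new:.1f} cores/U")
    print(f"density increase: {cores_per_u_new / cores_per_u_old:.0f}x")   # 32x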

Of course, the 1U server case is the reason for such densities. A good 1U server requires a bit of engineering. The original 4U case had nice big fans pushing air through the mostly empty case. When server cases dropped to 2U, the fans got smaller, faster, and louder. When we got to 1U, the fans got smaller, faster, louder, and increased in number. Moving air across the motherboard became very important because in a 1U case the CPU heat-sinks often do not have fans on them. The case fans are responsible for moving air across the copper heat-sinks. If everything is not designed just right, you end up cooking your parts.

The HPC market was not the biggest driver for the 1U “pizza box” server case. At the same time, the web server was becoming popular and pushing vendors to deliver “server” computing in smaller and smaller spaces. The cluster market certainly took advantage of these advancements, but there was one other thing needed before the 1U could become the case of choice.

As server motherboards were developed, all the essential hardware was packed onto the motherboard (i.e., dual network ports and video). Thus, a motherboard could be dropped into a 1U server case and be ready to run. But what if you wanted to use an additional PCI card in your server? Since the standard PCI card was about 4.75 inches tall, there was no way to fit it vertically into a 1U case. Enter the “bus bender.” This little piece of hardware solved a big problem. Compute nodes could now have an additional network card installed in at least one PCI slot, and fortunately that was all that was needed for a Gigabit Ethernet, Myrinet, QS-Net, InfiniBand, Dolphin SCI, or whatever networking card. The 1U case with a bus bender has become the standard unit of HPC packaging. I suppose the 1U power supply should get some credit as well.

There is something a bit more subtle about my obvious 1U case, power supply, and bus bender revolution. The fact that the customer can decide what goes in the box is perhaps the greatest breakthrough in HPC packaging. Recall that in the past, the supercomputer was a big black box that ran Fortran. You had little choice about what was inside, and if you did not need the additional cache-sniffing, vector-tweaking hardware, you still bought it anyway. Similar things have been said about blades for HPC as well. I have heard more than once, “Blades are nice, but they have a lot of stuff I really don’t need. I can buy more 1U nodes for the same price.”

Depending on where you buy your “pizza boxes,” the choice of innards can be totally up to you. The bigger companies often offer fewer choices, while smaller outfits will build almost whatever you want. Buying what you need and nothing else is a relatively new idea in HPC. Want 1024 cores in 64 nodes, each with a single-port IB card and no hard drive? No problem. How about 512 cores, with 4 GBytes of RAM and a 500 GB hard drive in each node? Sure thing. The empty 1U case lets you optimize the system for your applications.

Are there any other packaging breakthroughs ahead? I’m not sure. Recently I had a chance to get up close and personal with an IBM iDataPlex and found some of the design ideas almost too obvious, but curiously absent elsewhere in the market. Of course I have a few ideas of my own.

One thing I always wondered about was using the bottom of a 1U (or any rack-mount) case as the top of the case below it. Essentially, in a rack you have two sheets of metal, the top of one case and the bottom of the next, sitting right next to each other. There is a small gap between the nodes that seems to serve no purpose. What if the cases were engineered to fit together so that the bottom of one case becomes the top of the case below it (except for the top node, which keeps its own lid)? In a 42U rack chassis that could save 41 large pieces of sheet metal.

Here is another idea. I have often thought about the thermodynamics of racks and servers. A traditional rack chassis is cool in the front and hot in the back. From a thermodynamic standpoint, a difference in temperature means you can get some work done. One way to harness that temperature difference is a device known as a Stirling engine. A Stirling engine can extract useful work (like turning a fan blade) from even minute temperature differences. I doubt there is enough energy to turn every fan, and there is that Second Law issue as well, but there may be enough to reduce some of the power load. Indeed, the interesting thing about a Stirling engine is that the bigger the temperature difference, the faster a fan blade can spin. I am happy to see this idea actually being tested by MSI.
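To see what that Second Law issue looks like in numbers, here is a small Python sketch of the Carnot limit, the best any heat engine can possibly do between two temperatures. The inlet and exhaust temperatures below are made-up example values, not measurements from any real rack.

    # Rough upper bound on what a heat engine (Stirling or otherwise) could
    # recover from the front-to-back temperature difference of a rack.
    # The two temperatures are invented example values.

    T_COLD_C = 20.0   # cold aisle, front of the rack (degrees C)
    T_HOT_C = 40.0    # hot aisle, back of the rack (degrees C)

    t_cold_k = T_COLD_C + 273.15
    t_hot_k = T_HOT_C + 273.15

    # Carnot efficiency: the fraction of the heat flow that could, at best,
    # be turned into work such as spinning a fan blade.
    carnot = 1.0 - t_cold_k / t_hot_k
    print(f"Carnot limit: {carnot:.1%}")   # about 6% for these temperatures

    # A real Stirling engine at such a small delta-T recovers only a fraction
    # of this limit, which is why it might help spin a fan blade but will
    # never power the rack.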

My final idea is really a desktop application. I have always wanted to connect a CPU heat-sink to the top of a tower case. The top of the case would have a round area that radiates heat, a place for a coffee mug or the occasional muffin, thus keeping my morning beverage and snack at a warm, tasty temperature. As things tend to collect on any flat surface within arm's length of where I sit, this could be a bad idea in the long run. For now, however, such ideas will only be flights of fancy as I work on much more pressing issues, like how to fit four motherboards in a single case without tripping breakers or prompting a visit from the local fire department.

Comments on "Breakthroughs of the Pedestrian Nature"

dmpase

Maybe this is where you are going with this topic, but while 1U cases have an element of flexibility, as you point out, they are thermally quite inefficient. The problem, also as you alluded to, is forcing air across the critical components. Energy is proportional to the square of the velocity, and the smaller cases require air to flow at much higher velocities. By ganging cases together you can use larger, more efficient blowers or fans and save a significant amount of energy. This is one of the major effects that is exploited by various blade designs and IBM’s iDataPlex.
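To put that scaling in rough numbers: pushing the same volume of air through a smaller opening requires a higher velocity, and the kinetic energy carried per unit volume of the moving air grows with the square of that velocity. A small Python sketch, where the opening areas and flow rate are invented purely for illustration:

    # Why small cases cost more fan energy: the same volumetric flow through
    # a smaller opening needs a higher air velocity, and the kinetic energy
    # per unit volume of moving air is 0.5 * rho * v**2.
    # The areas and flow rate below are invented example values.

    RHO_AIR = 1.2   # kg/m^3, air at roughly room temperature
    FLOW = 0.05     # m^3/s of cooling air a node needs (assumed)

    for label, area_m2 in [("4U-ish opening", 0.0100), ("1U-ish opening", 0.0025)]:
        velocity = FLOW / area_m2                  # m/s needed for the same flow
        ke_per_m3 = 0.5 * RHO_AIR * velocity**2    # J per m^3 of air moved
        print(f"{label}: v = {velocity:5.1f} m/s, {ke_per_m3:6.1f} J/m^3")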

jc2it

I think that air-cooled PCs are fine, but inefficient. In a rack situation, if you could use industry-standard push-to-connect fittings and control the liquid in such a way as not to get everything all messy, I think you could set up a quiet and cool rack-mount unit.

If the servers were all hooked to a central liquid cooling unit with a common collection and delivery system, it would be easier to use that Stirling technology. You would have a hot collection side and a cool delivery side, with the servers at one end and the cooling/refrigeration source at the other.

I think with a bit of engineering a liquid cooling standard could be developed and used in many server rooms/data centers/server closets.

gcreager

First things first: Back when the Earth was young and recreation was watching the crust cool, the designation I recall was “1-RU” or even “One Rack Unit”. After a while we all got lazy and went to “1-unit”, then even lazier, and started calling ‘em “1u”. Less to say, and less to write/type. A couple of thoughts come to mind.

When I was designing a small satellite some time ago, heat dissipation was a considerable issue. In microgravity, one must accomplish cooling solely via conductive transfer and what comes close to black-body radiative dissipation. The satellite design was an external aluminum cube formed by layers, each of which held the electronics boards. Each board was multi-layer, but the first layer deposited was heavy-gauge copper used as a combination common bus and cooling bus, to which all active devices were thermally coupled. Note that this took some careful work, as some active devices had to be selected so that their heat-transfer element could be bonded to a common electrical connection. This thermal transfer bus was thermally connected to the spaceframe structure to transfer heat to the greater thermal mass of the spaceframe.

Taking Doug’s thought here a bit further, thermally bonding to the top plate of the case and becoming a little creative in its subsequent cooling (liquid or airflow) could result in better thermal management. However, this isn’t a trivial design exercise (although it might be accomplished by the interested student).

I would like to see a “standard” case that could accept some number (4? 8? 16?) motherboards and use large fans plus some form of liquid cooling to promote a thermal-neutral airflow. It could get interesting to achieve 1u density as things like power cabling would have to be engineered so one could make all the connections… I suspect this could be accomplished more readily than the pizza box design: I agree that the more you restrict flow by decreasing the space in a case, the more air you have to move across the components to accomplish adequate cooling. That said, Microsoft and Google had some very interesting results allowing clusters to reside at ambient temperatures, but I couldn’t do that in Texas most of the time.
