The Servtainer Has Arrived

What happens when large scale forces a rethink of the traditional solution? The shipping container has become the new supercomputer case.

When I was a young buck, I worked in a commercial bakery the summer between college and graduate school. This place could sure turn out the bread. It was highly mechanized and ran, as far as I could tell, very efficiently. The first day on the job, I asked one of the workers about all the large, empty, vat-like things along the wall. When I say large, I mean about five feet wide by ten feet long by about three feet high, stainless steel behemoths on wheels. The worker looked at me and said, “That’s where they mix the dough. You don’t think we make this like your mommy makes it at home, do you, college boy?” I just nodded and went back to pushing my broom.

Scale is an interesting thing. Often the scale of a problem determines the best way to solve it. For instance, Frank Castanza building several PCs a week in his garage is quite different from Dell pushing out 750,000 PCs a week. Neither method is cost effective at the opposite end of the scale. Which brings me to cluster scaling and the Google approach.

It is no secret that Google has a lot of servers. Granted, they are not folding proteins or simulating car crashes, but they are managing a large number of systems. Recently the design of the Google server was revealed to the general public (you can view a video as well). Not that there was anything fancy to reveal; it is a rather low-tech solution — aluminum tray, motherboard, hard drives, some Velcro, and a backup battery. Perhaps the most remarkable thing about it is how un-remarkable it is. And that is the point. If you are going to put up a lot of servers at minimal cost, you need to focus on the essentials. There is no fancy 1U case (the units sit in a 2U-high tray), no redundant power supply, no array of LEDs, no slide rails, and no fan banks. The server case is essentially “open”: there is no “top” other than the bottom of the server above it in the rack.

Which brings me to the most important point. These servers were not designed for the typical computing center. They were designed to live in a modified shipping container. Each container is a standard 1AAA shipping container packed with 1,160 servers. Indeed, the container and the servers function as one unit. I have coined the term servtainer for this type of super-node. Note that I consider this a 4,640-core compute node (9,280 cores if quad-core processors are used). The end of the article talks about the container, but a better description is in the video called “Google container data center tour.” It is important to note that the servtainer is a sub-component of a larger system. It has standard inputs (electric, water, network), can be easily moved (replaced), and can be easily replicated.
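If you want to check my arithmetic on those core counts, here is a quick back-of-the-envelope sketch. The two-sockets-per-server figure is my own assumption (it is what makes the published 4,640 number work out), not something stated in the video.

```python
# Back-of-the-envelope core count for one servtainer.
# Assumption (mine): two CPU sockets per server tray.
SERVERS_PER_CONTAINER = 1160   # from the Google container tour
SOCKETS_PER_SERVER = 2         # assumed two-socket boards

for cores_per_cpu, label in [(2, "dual-core"), (4, "quad-core")]:
    total = SERVERS_PER_CONTAINER * SOCKETS_PER_SERVER * cores_per_cpu
    print(f"{label}: {total:,} cores per servtainer")

# Output:
#   dual-core: 4,640 cores per servtainer
#   quad-core: 9,280 cores per servtainer
```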

The reported power efficiencies of the Google data centers are due in part to a highly controlled servtainer environment. Since the case for the 1,160 servers is actually the container, Google can exercise a large measure of control over the internal environment. For instance, they keep a very firm boundary between hot and cold air. The other aspect of the servtainer is that humans can safely enter and work inside. There is also a master power control switch (an “on-off” button) for the servtainer.
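For the curious, the usual yardstick for this kind of efficiency is PUE (Power Usage Effectiveness): total facility power divided by the power that actually reaches the IT gear, with numbers closer to 1.0 being better. The sketch below is illustrative only; the wattage figures are hypothetical placeholders of my own choosing, not Google’s numbers.

```python
# Illustrative PUE (Power Usage Effectiveness) calculation.
# PUE = total facility power / IT equipment power; closer to 1.0 is better.
# All figures below are hypothetical placeholders, not measured values.

def pue(it_power_kw: float, cooling_kw: float, distribution_loss_kw: float) -> float:
    """Return the PUE given a simple power breakdown in kilowatts."""
    total_facility_kw = it_power_kw + cooling_kw + distribution_loss_kw
    return total_facility_kw / it_power_kw

# Example: 1,160 servers at an assumed 250 W each = 290 kW of IT load.
it_load_kw = 1160 * 0.250
print(f"PUE: {pue(it_load_kw, cooling_kw=50.0, distribution_loss_kw=15.0):.2f}")
# PUE: 1.22
```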

As data center space and cost come at more of a premium, the servtainer approach may become more popular in HPC as well. I have seen designs where containers can be stacked to build large multi-level data centers almost anywhere (provided the power and cooling are available). This approach reduces the cost of traditional data center expansion or construction. As node counts continue to increase, the servtainer may be the new “cluster super-node.” Granted, there is a certain scale at which this approach becomes economical; it is not a small-scale solution. Below that scale, a standard rack-mount solution (1U servers, blades, etc.) makes sense.

One final note. There is a certain low-cost aesthetic to the servtainer. Back in the day, the organizational supercomputer often lived in a “fish bowl” environment (i.e., placed where many workers and visitors could see it). These systems were even designed to look good in the fish bowl — blinky lights and all. Imagine an organization that just spent $10 million on a few servtainers explaining their investment to visitors or directors: “Those containers in the back parking lot next to the trash dumpsters are running our HPC applications as we speak. They are not pretty, and they are cold and dark inside, but they get the job done. After all, who was it that said the container is the cluster?”

PS I’m now on Twitter. I’ll try to post interesting news and links rather than what kind of peanut butter I use — Jif, if anyone is interested. But I don’t eat it much anymore.
