
Doug Meets The iDataPlex

Travel with Linux Magazine's HPC editor as he finally learns why everyone is fussing over the IBM iDataPlex.

When the name “iDataPlex” was first mentioned to me, I thought to myself:

“Hmmm … That sounds like some kind of a store at the mall …”

(And yes, before you even say it — I already know that I need to get out more …)

But anyway, after receiving confirmation that IBM’s iDataPlex is actually the hottest (or more accurately, the coolest — but more on that later) new “clustering system” on the block, I decided that I needed to find out just what the buzz is all about.

Now, the term “clustering system” may strike you as having kind of an odd ring to it, but that is exactly what the iDataPlex actually is. A clustering system. In fact, it is more than that. The iDataPlex is a remarkably well engineered solution for HPC cluster computing. But before I get too far ahead of myself, let’s just back up a little bit …

Back in September, when I first heard about the iDataPlex I slid right over to Google to do a little research, which led me to an IBM product web page. Looking it over, I have to admit, I was still not sure what was so special about this thing. It seemed like just another set of nice looking servers in a rack — been there done that.

However, that was before I attended the High Performance on Wall Street conference in NYC at the end of the month. Heading over to the IBM booth, I found a large graphic of the iDataPlex and a real node on display. A nice man started running through a list of product specifications with me, but I still did not quite get it. What the heck is so special about this thing?

TIME FOR A FIELD TRIP

There is an old saying in this business that goes something like this: “If you don’t know what something is, then make a few calls and head over to IBM at 590 Madison in NYC and see for yourself.” Sure enough, a few phone calls later I had an appointment in the Big Apple. As an added bonus, I could even ask some questions.

When I arrived at IBM, I was greeted by Dave Weber and David Franko. Dave Weber is the program director of the Wall Street Center of Excellence, Worldwide Client Centers for IBM. David Franko is the Worldwide Business Development Manager for iDataPlex.

“Ah-ha!” I thought, “this guy HAS to know what this thing is.”

First things first, the name: “i” is for internet, “Data” is for Data Center, and “Plex” is for large scale.

OK. But, Large Scale Internet Data Center?

“That’s not HPC,” I thought.

“That’s Google. eBay. YouTube. That kind of stuff. I’m an HPC guy. I hope I’m not in the wrong meeting …”

But they had great coffee, so I kept listening to what they had to say.

And as David Franko walked through an explanation of the origins of the iDataPlex, the big picture began to come into focus …

Originally, the iDataPlex had been designed with Web 2.0 applications in mind. However, IBM soon came to learn that the iDataPlex design fit the needs of their Financial and High Performance Computing (HPC) customers extremely well.

“OK, wait a minute …” I thought.

“HPC. Now THERE is a familiar word. If IBM found that the HPC jocks in the Financial sector were really excited about the iDataPlex, then there must really be something here …”

Now, as most of you probably know, HPC jocks don’t exactly have the same priorities as “normal” techie folks. No. Their shopping list usually looks something like this:

  1. They want to see as many cores as physically possible crammed into the smallest possible space.
  2. They want the fastest performance/lowest power usage for their dollar.

    As a corollary to that, they don’t want anything built into their compute nodes that does not directly contribute to better performance. The list of unwanted extras usually includes redundancy, management interfaces, and specialized non-standard parts.

    And

  3. They want every system they buy to be rolled into their lab or computing center ready to go. “Plug and play” on the largest scale possible.

Now, with a priority list like that, you can expect that these guys are going to like pushing their systems to the outer edge of the computing envelope. And at the outer edge of any envelope, things have a tendency to break. But real HPC jocks are ready for that, and they expect it.

Many HPC applications can tolerate hardware failures by design. As a matter of fact, HPC jocks almost expect hardware to fail due to the sheer number of servers they often use.

Of course, they don’t like it when a job that has been running for two weeks dies, but that is why they design their applications so that checkpoint files get written out every hour or so. That way, you can always restart from the latest checkpoint.
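If you have never lived with long-running jobs, here is a minimal sketch of the idea in Python. The file name, the interval, and the compute_step() callback are all hypothetical stand-ins; real HPC codes typically roll their own checkpoint format or use a library, but the restart logic looks much the same.

    import os
    import pickle
    import time

    CHECKPOINT = "job.ckpt"   # hypothetical checkpoint file name
    INTERVAL = 3600           # write a checkpoint roughly every hour

    def run(total_steps, compute_step, initial_state):
        # Resume from the latest checkpoint if one exists.
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                step, state = pickle.load(f)
        else:
            step, state = 0, initial_state

        last_save = time.time()
        while step < total_steps:
            state = compute_step(step, state)   # one unit of real work
            step += 1
            if time.time() - last_save >= INTERVAL:
                # Persist progress so a crash only costs about an hour of work.
                with open(CHECKPOINT, "wb") as f:
                    pickle.dump((step, state), f)
                last_save = time.time()
        return state

A production version would write to a temporary file and rename it into place, so a crash in the middle of a write cannot corrupt the last good checkpoint.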

In many cases, HPC jocks live with and manage failure because they would rather put their budget into cores instead of redundancy.

Were my friends at IBM trying to tell me that the iDataPlex was designed to meet the needs of HPC jocks the world over?

YOU PUT THEM IN THE WRONG WAY

“OK!” I thought, as I took a deep slug of coffee.

“What did those financial HPC jocks see here that I haven’t seen yet? I still wanna know what makes this thing so special.”

And just then, I saw some guy put the server nodes into the iDataPlex rack the wrong way.

“Should I say something?” I thought.

“No, I’ll just be polite and listen. Then I’ll drop the boom and let them know that no one else does it this way. I’ll let them down easy.”

Then David Franko showed me a picture of the 2U case he was inserting into the chassis. It was shorter than a standard 2U case, and only about 15 inches deep.

“That’s odd,” I thought, “it looks like you could fit a motherboard and hard drive on that, and that’s about it.”

But then again, an HPC system doesn’t really need any more hardware than that, does it? Maybe a high performance network card (10GigE or IB), but nothing else.

And hey! Look at that! Since the cases were shorter, they could turn the rack chassis sideways and create more room for servers. So for a minute, I thought this thing might turn out to be an 84U system vs. the standard 42U.

“Well, guys, you almost have something there,” I said to myself, “but a 2U server? Come on. That is so yesterday. You need a 1U server.”

Just then, as I was about to hit these guys with a clue stick, Franko showed me the inside of a server.

And wow! Look at that! The thing had two 1U server trays, a shared power supply and shared fans! And, get this, it was not an IBM motherboard! Each tray had an Asus motherboard and was holding two quad-core Intel processors.

In addition, the back of the motherboard faced the front of the case. That is, all the interface connections came out of the front of the cases. There was room for a single hard drive.

“Wow, that’s cool! This thing is starting to make some real sense!” I drowned my thoughts out with another gulp of coffee.

I mean, this was obviously not your typical 1U server node. Indeed, it was almost as if someone had asked a group of HPC users to design a node. I was beginning to see a method to their madness. Somebody really put a lot of thought into this thing.

My highly trained mathematical brain kicked in at this point. If they really have the ability to put one motherboard in 1U then they just doubled their potential compute density. Clever.

The super-coolness of the chassis was just beginning to sink in when we started to take a closer look at the 2U node case. Oh, but maybe I had better back up here for a second …

With the iDataPlex, there are a variety of servers which can be installed in the rack chassis. A 2U compute server is shown in Figure One. Note how each tray slides out and has one hard drive and room for an extra PCIe card.

Figure One: Front view of iDataPlex 2U chassis with removable 1U trays

The compute servers are available in two versions. The power-optimized version offers two dual- or quad-core Intel Xeon (Bensley) processors, up to 64 GBytes of RAM (once 8 GB DIMMs are available), a 1333 MHz FSB, and an 8x PCIe slot.

The high-performance node offers two quad-core Intel Xeon (Stoakley) processors, up to 128 GBytes of RAM, a 1600 MHz FSB, and a 16x PCIe slot. Both versions have dual GigE ports, a single hard drive, and a BMC controller port.

There are other types of 2U chassis as well: I/O-rich nodes (extra PCIe slots) and storage-rich nodes with extra drive bays for either SAS or SATA drives.

There is even a 3U storage chassis for those in need of heavy duty storage. Similar to the compute nodes, each node has all connections and drive access on the front of the chassis.

As shown in Figure Two, the back of the chassis has a removable fan tray and a power connection (hidden from view).

Figure Two: Rear view of iDataPlex 2U chassis showing fan tray and power supply

The fans are interesting. Most 1U cases have eight or more small (and noisy) fans which use, on average, 88 Watts of power per server. The iDataPlex uses four 13 Watt fans which draw a total of 33 Watts per server. That is a savings of 55 Watts per server.

Why such a large difference? Since the enclosure is shorter, the distance the air must be moved is shorter, which allows more efficient fans to be used. And it does not stop there.

The power supply has a similar story. Since the power supply is shared and highly efficient, it also saves on the power budget.
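To put those fan numbers in perspective, here is the back-of-the-envelope arithmetic using the per-server figures quoted above. The 84-servers-per-rack count is my own assumption of a fully populated rack (more on the rack layout below).

    typical_1u_fan_watts = 88   # typical small 1U fans, per server (figure quoted above)
    idataplex_fan_watts = 33    # iDataPlex shared fans, per server (figure quoted above)
    servers_per_rack = 84       # assumes every 1U tray position in the rack is filled

    savings_per_server = typical_1u_fan_watts - idataplex_fan_watts   # 55 W
    savings_per_rack = savings_per_server * servers_per_rack          # 4,620 W
    print(savings_per_server, "W per server,", savings_per_rack, "W per rack")

That is roughly 4.6 kW of fan power per rack that never has to be purchased, and never has to be cooled.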

Looking closer at the back of the 2U chassis, I noticed there were no cable connections. There was a simple power connection, but not the usual morass of cables found in the back of a standard 1U server rack. Having seen many messy (and even a few orderly) backs of clusters, this image struck me as odd. I kept asking myself:

“Where are the cables?”

Then I remembered that the cabling is around front where you can work with everything in one place.

“This thing is going to take some getting used to, but man, what a great idea.”

Did I say that out loud this time? I needed to refill my coffee …

A SOMEWHAT CHILLING EFFECT

Like most clusters, the back of an iDataPlex is pretty much a space heater. Since there are no wires or cables obstructing the airflow, it really is like a big space heater producing a steady flow of hot air. But as it turns out, the clever folks at IBM came up with a way to do something about that too.

They added “The door”.

That is, the iDataPlex has a water cooled rear door heat exchanger option. This feature means you don’t need a data center environment for an iDataPlex. As long as you have chilled water, you can install one of these systems anywhere.

Not only that, the rear door heat exchanger can remove more heat than the iDataPlex creates (up to 100,000 BTUs).
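If BTUs don’t mean much to you, a quick conversion helps. I am assuming the 100,000 BTU figure is per hour, which is how heat exchanger capacity is normally quoted:

    btu_per_hour = 100_000
    watts_per_btu_per_hour = 0.29307107             # standard conversion factor
    cooling_kw = btu_per_hour * watts_per_btu_per_hour / 1000
    print(f"{cooling_kw:.1f} kW")                   # roughly 29.3 kW of heat removal

Per the claim above, that is more than the rack itself produces.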

Think about this for a minute.

You know the old closet in your lab that has the frog with two heads in it?

We all know you would love to turn it into your own private genomic data center, because the campus computing center is too full, right? So, just run some chilled water, and presto! You now have a mini-data center! Put the jar with the frog in the bookcase next to your desk.

At this point I realized that the last of the coffee was gone, but I was so into the iDataPlex that I didn’t care.

But there was still one more big HPC jock issue that was nagging at me.

How hard is it to put one of these things together?

Is this something that needs to be staged and tested at the customer’s site? That kind of thing can add weeks, and literally mountains of cardboard, to the installation overhead.

However, just as I was about to stump them with my final “So, how is the customer supposed to put all this together?” question, they mentioned that iDataPlex systems arrive at the customer’s site pre-built, tested (using xCAT clustering software, by the way), and ready to run.

“Darn.” I thought, “I haven’t even gotten one good zinger in here during this whole presentation.”

But then again, I didn’t care that I couldn’t hit them with a “gotcha”, because I was just so impressed with the design of this thing.

In fact, it’s worth looking at from a few more angles, so let’s do that for a second.

Figure Three gives you a perspective on the front of the iDataPlex.

Figure Three: Front view of the iDataPlex (note the vertical switches).

The rack chassis will hold standard 19-inch components such as switches and RAID arrays (remember, at about 24 inches it is not as deep as a standard rack). It has 84U of space for servers because the rack is basically turned sideways (double the standard 42U rack).

Another thing to notice is the clever use of side space for vertical 1U components (in most cases switches). The use of vertical space adds another 16U to the iDataPlex rack, for a total of 100U of usable space.
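To see why HPC jocks perk up at those numbers, consider a rough core count for a fully loaded rack. This is my own back-of-the-envelope estimate, assuming all 84 horizontal slots hold 1U compute trays, each with two quad-core processors as described above:

    server_slots = 84        # horizontal 1U tray positions in one iDataPlex rack
    sockets_per_tray = 2     # two Intel Xeon processors per 1U tray
    cores_per_socket = 4     # quad-core parts

    cores_per_rack = server_slots * sockets_per_tray * cores_per_socket
    print(cores_per_rack)    # 672 cores in a single rack footprint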

The particular iDataPlex they showed me was wired using GigE, but of course you could add a 10GigE or InfiniBand card to the 1U node trays.

Unfortunately, I forgot to ask about cable management with those types of interconnects, as the cables they require are much thicker than standard GigE cables.

Rats! Maybe, just maybe, I could have stumped them.

I doubt it, however, because these guys just kept going, pointing out details like fan holes, fan louvers for when you remove a tray, power feeds, and so on. It seems like they thought of everything.

Hmmm …

What about custom paint on the door? I’ll bet they couldn’t paint a frog with two heads on the doors. Yeah! Well, I’ll have to stump them with that on my next visit, because I was about to miss the last bus home.

PER-PLEXED NO MORE

One of the reasons I don’t like writing product reviews is that at the end you often have to give a rating to whatever it is you just reviewed. The whole point of the rating is to allow the reader to compare one system to another.

But in this case, I have a problem.

The problem is that there is nothing else like an iDataPlex. I suppose you could compare it to a standard 1U rack solution, but that seems so unfair.

Never at a loss for words, however, I do have a few comments.

These days the “guts” of HPC systems are pretty much the same. The processor, chipset, motherboard and interconnect each have their strong points, but all vendors start with basically the same commodity components.

I have spent enough time with my hands inside clusters to know that there is more to HPC than just getting the commodity guts right. And the iDataPlex is a great example of a system that does everything exactly the way you would want it to. Not only the guts, but also everything beyond the guts.

It is a well thought out solution, and even if it was originally designed with the Web 2.0 crowd in mind, we’ll just have to send them a thank-you note, because it seems the iDataPlex was designed to give the HPC jocks out there exactly what they have been asking for as well.

Indeed, after thinking about it, much of the design is just good engineering, nothing overly remarkable or even breakthrough in nature. Perhaps the most remarkable thing about the iDataPlex is that nobody has thought to combine all of these great ideas into a single system like this until now.

Of course, in HPC everything depends on price-to-performance. I don’t have any hard numbers, but I am told the initial cost of an iDataPlex is often less than that of a comparable 1U solution. (Remember, the commodity guts of both are the same.)

Besides the basic cost of the system, the iDataPlex lowers TCO (Total Cost of Ownership) because of the reduced power and cooling costs it provides.

And don’t forget, the rear door heat exchanger allows you to put an iDataPlex almost anywhere you can run chilled water, which is often easier (and definitely cheaper) than installing chilled air.

So, there you have it.

My time with the iDataPlex up for now, I climbed onto the bus and sank into an empty seat for the ride home.

And as we drove past a bustling shopping center in the suburbs of New Jersey, I found myself drifting back to my initial impression of that name again — iDataPlex …

And that’s when it hit me! I realized that there was still one thing they had not thought to do.

Instead of calling it the “iDataPlex for Web 2.0”, they should have just called it the “iDataPlex for HPC 2.0!”

Finally, someone got it right.
