Five Myths About Blade Servers

Since their debut in March 2001, blade servers have generated a ton of interest from enterprise IT departments. And a slew of misconceptions. We separate myth from reality.

One of the problems with the IT industry is that what you know now is likely to be outdated by tomorrow. Facts become dated quickly, and what's true today may be a misconception within a few months. Blade servers are a case in point: they've come a long way since their debut in March 2001.

When RLX shipped the first systems branded as blade servers in 2001 (just a few years ago, but an eon in IT years), they were an answer to the density problem of packing a bunch of systems into racks. But they had problems of their own, and many of the perceived problems with blade servers are based on old information. So, let's take a look at where blade systems stand today.

Myth 1: Blades Aren’t Good for I/O Intensive Tasks

When blade servers made their debut, they came with underpowered hard drives that tended to be less than adequate for I/O intensive tasks. That meant that blades were well suited for tasks that required a lot of processing power, but not so hot for tasks that required a lot of memory and/or heavy disk access.

However, things have changed with greater choices in external storage. By choosing a storage area network (SAN) connected via Gigabit or 10 Gigabit Ethernet, or via Fibre Channel, you can have excellent storage performance with room to grow. Jim Kemp of Coraid says, "The advent of 10GigE is well suited to providing the bandwidth needed for storage, and AoE [ATA over Ethernet] storage fits this need nicely."

The bottom line: Blade systems aren’t limited to tiny, slow disks anymore. They can have the same speed and storage capacity as other systems, so long as they’re configured correctly.

Myth 2: Blades Don’t Have as Much Memory

Another common complaint about blades is that they suffer from limited memory expansion. This was true, just a few years ago. However, the current crop of blades can hold their own when it comes to RAM.

For example, HP's BL460c and IBM's HS21 x86 servers can pack up to 32GB of RAM and two Intel quad-core Xeons in one blade. Not too shabby for a system that slides into a slot in an enclosure.

Myth 3: Blades Cost More

So, blades can have the same amount of RAM and decent storage, but it’ll still cost me an arm and a leg, right?

While blades may have a higher initial price tag, you have to look at what you're getting for your dollar. Scott Tease, IBM's worldwide manager of BladeCenter systems, says that when you factor in the reduced costs of cabling (which is part of the enclosure) and management, blades save you money.

Tease also notes that you can use a single login to access up to 14 systems in a blade enclosure, a serious management advantage that you won't get from 1U servers.

More importantly, Tease also says that you’ll see cost savings from cooling and power that you won’t see from 1U systems. Given the increased need to reduce costs related to power and cooling, it’s definitely worth comparing blades to other servers to see which systems will cost the least to power.

Myth 4: Blades Equal Lock-in

One of the strongest objections to blades is that you can only buy from a single vendor. Once you choose a vendor, you're stuck buying add-ons and new blades from that vendor, which isn't ideal for most IT managers. With stand-alone servers, by contrast, you can pick and choose: you can have a 42U rack full of 1U servers from 42 different vendors, if you're willing to buy white box 1U systems.

Tease admits that it's unlikely you'd be able to buy a Dell blade and put it into one of IBM's BladeCenter enclosures. But he says this is primarily not about anti-competitive behavior; rather, vendors have different target markets and don't always agree on how to configure systems to meet customer needs. So blades and enclosures from major vendors aren't likely to become compatible in the near future.

However, Tease did say that IBM works with third parties like Cisco and Brocade to enable vendors to develop add-ons that work with IBM BladeCenter enclosures. IBM isn’t alone in this, as HP and other vendors also have third party switches and other add-ons that allow organizations to choose the solutions that are right for them.

Kemp also says that blade customers might worry about lock-in, but “this isn’t much different than any hardware purchase.”

The bottom line on lock-in: plan carefully when designing a system from the ground up, and favor enclosures that support third-party switches and add-ons.

Myth 5: Organizations Buy Blades for Density

Blades got attention at first because they were a way for organizations to squeeze more systems into less space. Rack space is expensive, and many organizations were looking for ways to shove as much compute power into the smallest space possible.

But a lot has happened since blade systems were introduced. For one thing, virtualization has solved many of the problems related to putting a bunch of systems in limited space. Rather than having a different machine for each task, we can now have beefier machines that run a bunch of virtual guests and eke the last bit of performance out of our hardware.

So, many IT folks are left with the idea that blade systems are slim, but underpowered. However, when compared with their 1U counterparts, the truth is that modern blade systems can pack in the RAM and have I/O to spare when configured with external storage.

In fact, many systems support running blades diskless, which allows blades to be stateless, yet another advantage of blade technology.
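To make the diskless, stateless idea concrete, here's a minimal sketch of a network-boot (PXE) configuration in which every blade loads the same kernel and mounts its root filesystem over NFS, so no state lives on the blade itself. The server address and paths here are hypothetical, and real deployments vary by vendor:

```
# /tftpboot/pxelinux.cfg/default -- hypothetical PXELINUX config for diskless blades
DEFAULT blade

LABEL blade
    KERNEL vmlinuz
    # No local disk: root filesystem is mounted read-only over NFS,
    # and the blade gets its network identity from DHCP at boot time.
    APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.1.10:/exports/blade-root ip=dhcp
```

Because the blade's identity comes from the network rather than a local disk, replacing failed or outdated hardware is just a matter of slotting in a new blade that boots the same image.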

Want to upgrade that old single-core system to a quad-core Xeon with more RAM? No problem. Power down the blade, swap in the new one with the same network parameters, and you get a boost in speed without the corresponding headache of configuring a new server.

Of course, blade systems aren’t the solution to every problem — but they’ve come a long way in a short time, and it’s worth evaluating all the options before just ordering a new batch of 1U systems that will be obsolete in a few years.
