
Five Myths About Blade Servers

Since their debut in March 2001, blade servers have generated a ton of interest from enterprise IT departments. And a slew of misconceptions. We separate fact from fiction.

One of the problems with the IT industry is that what you know now is likely to be outdated by tomorrow. Facts become dated quickly, and what’s true today may be a misconception within a few months. Blade servers, which have come a long way since their debut in March 2001, are a good example.

When RLX shipped the first systems branded as blade servers in 2001 (just a few years ago, but an eon in IT years), they were an answer to the density problem of packing a bunch of systems into racks, but they had shortcomings too. Unfortunately, many of the perceived problems with blade servers are based on old information. So, let’s take a look at where blade systems stand today.

Myth 1: Blades Aren’t Good for I/O Intensive Tasks

When blade servers made their debut, they came with underpowered hard drives that tended to be less than adequate for I/O intensive tasks. That meant that blades were well suited for tasks that required a lot of processing power, but not so hot for tasks that required a lot of memory and/or heavy disk access.

However, things have changed with greater choices in external storage. By choosing a storage area network (SAN) connected via 1Gb or 10Gb Ethernet or Fibre Channel, you can have excellent storage options with room to grow. Jim Kemp of Coraid says that “The advent of 10GigE is well suited to providing the bandwidth needed for storage, and AoE [ATA over Ethernet] storage fits this need nicely.”

The bottom line: Blade systems aren’t limited to tiny, slow disks anymore. They can have the same speed and storage capacity as other systems, so long as they’re configured correctly.
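
To put the bandwidth point in perspective, here is a rough sketch comparing raw interconnect line rates with a single local disk. It ignores protocol and encoding overhead, and the 80MB/s local-disk figure is an illustrative assumption, not a benchmark:

    # Raw line rates of common blade interconnects versus one local disk.
    # Overheads (encoding, protocol, contention) are ignored, and the
    # 80 MB/s local-disk figure is an assumption for illustration only.

    links_gbps = {
        "1Gb Ethernet": 1,
        "4Gb Fibre Channel": 4,
        "10Gb Ethernet": 10,
    }
    local_disk_mb_s = 80  # assumed sustained throughput of a single drive

    for name, gbps in links_gbps.items():
        mb_s = gbps * 1000 / 8  # gigabits per second -> megabytes per second
        ratio = mb_s / local_disk_mb_s
        print(f"{name}: ~{mb_s:.0f} MB/s raw, about {ratio:.0f}x one local disk")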

Myth 2: Blades Don’t Have as Much Memory

Another common complaint about blades is that they suffer from limited memory expansion. This was true just a few years ago. However, the current crop of blades can hold their own when it comes to RAM.

For example, HP’s BL460c and IBM’s HS21 x86 servers can pack up to 32GB of RAM and two Intel quad-core Xeons in one blade. Not too shabby for a system that will fit in the palm of your hand.

Myth 3: Blades Cost More

So, blades can have the same amount of RAM and decent storage, but it’ll still cost me an arm and a leg, right?

While blades may have a higher initial price tag, you have to look at what you’re getting for your dollar. Scott Tease, IBM’s worldwide manager of BladeCenter systems, says that when you factor in the reduced costs of cabling (since cabling is part of the enclosure) and of management, you’re saving money with blades.

Tease notes that you can use a single login to access up to 14 systems in a blade enclosure, a serious management advantage that you won’t get from 1U servers.

More importantly, Tease also says that you’ll see cost savings from cooling and power that you won’t see from 1U systems. Given the increased need to reduce costs related to power and cooling, it’s definitely worth comparing blades to other servers to see which systems will cost the least to power.
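
To make that comparison concrete, here is a back-of-the-envelope sketch. The wattage figures, cooling overhead, and electricity rate are hypothetical placeholders, not vendor numbers; plug in measured values for your own gear:

    # Back-of-the-envelope annual power-and-cooling cost comparison between
    # a fully loaded blade enclosure and an equivalent stack of 1U servers.
    # All wattages, the cooling overhead, and the electricity rate are
    # assumptions for illustration -- substitute your own measured numbers.

    KWH_RATE = 0.10          # $/kWh, assumed
    HOURS_PER_YEAR = 24 * 365
    COOLING_OVERHEAD = 0.5   # assume 0.5 W of cooling per 1 W of IT load

    def yearly_cost(watts):
        # Annual power + cooling cost for a given IT load in watts.
        kwh = watts * (1 + COOLING_OVERHEAD) * HOURS_PER_YEAR / 1000
        return kwh * KWH_RATE

    blade_enclosure_watts = 4500   # 14 blades sharing supplies and fans (assumed)
    one_u_stack_watts = 14 * 400   # 14 standalone 1U servers (assumed)

    print(f"Blade enclosure: ${yearly_cost(blade_enclosure_watts):,.0f} per year")
    print(f"14 x 1U servers: ${yearly_cost(one_u_stack_watts):,.0f} per year")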

Myth 4: Blades Equal Lock-in

One of the strongest objections to blades is that you can only buy from a single vendor. Once you choose a vendor, you’re stuck buying add-ons and new blades from that vendor, which isn’t ideal for most IT managers. Conversely, with stand-alone servers, you’re able to pick and choose — you can have a 42U rack full of 1U servers from 42 different vendors if you’re willing to buy white box 1U systems.

Tease admits that it’s unlikely that you’d be able to buy a Dell blade and put it into one of IBM’s BladeCenter enclosures. He says this is primarily not about anti-competitiveness, but about the fact that vendors have different target markets and don’t always agree on how to configure systems to meet customer needs. So blades and enclosures from major vendors aren’t likely to be compatible in the near future.

However, Tease did say that IBM works with third parties like Cisco and Brocade to develop add-ons that work with IBM BladeCenter enclosures. IBM isn’t alone in this; HP and other vendors also offer third-party switches and other add-ons that allow organizations to choose the solutions that are right for them.

Kemp also says that blade customers might worry about lock-in, but “this isn’t much different than any hardware purchase.”

The bottom line on lock-in: weigh the whole ecosystem of enclosures, switches, and third-party add-ons when designing a system from the ground up, just as you would with any other major hardware purchase.

Myth 5: Organizations Buy Blades for Density

Blades got attention at first because they were a way for organizations to squeeze more systems into less space. Rack space is expensive, and many organizations were looking for ways to shove as much compute power into the smallest space possible.

But a lot has happened since blade systems were introduced. For one thing, virtualization has solved a lot of the problems related to putting a bunch of systems in limited space. Rather than having different machines for each task, we can now have beefier machines that run a bunch of virtual guests and really eke out the last bit of performance from our hardware.

So, many IT folks are left with the idea that blade systems are slim, but underpowered. However, when compared with their 1U counterparts, the truth is that modern blade systems can pack in the RAM and have I/O to spare when configured with external storage.

In fact, many systems support running blades diskless, which allows them to be stateless, yet another advantage of blade technology.

Want to upgrade that old single-core system with a quad-core Xeon system with more RAM? No problem. Just power down the blade, swap in the new one with the same network parameters and you’ve gotten a boost in speed without the corresponding headache of configuring a new server.

Of course, blade systems aren’t the solution to every problem — but they’ve come a long way in a short time, and it’s worth evaluating all the options before just ordering a new batch of 1U systems that will be obsolete in a few years.

Comments on "Five Myths About Blade Servers"

thecolombian

Blades Cost More

On the cooling and power side, blades do cost more if you do not own your own data center. If you rent space from one of the largest data center providers, they will charge you the same amount of money for power regardless of whether your servers can run on lower power when not fully used. In an outsourced data center you pay full price for the maximum capacity of a 120V/20A circuit and the corresponding cooling that is associated with that power. That is:

Amps * Volts = Watts
Watts * hours = watthours
watthours * 3.41214148 = Btu
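
For example, here is that arithmetic as a quick sketch, assuming the circuit is billed at full capacity around the clock for a 30-day month:

    # Quick sketch of the billing arithmetic above for a 120V/20A circuit.
    # The 30-day (720-hour) billing month is an assumption for illustration.

    VOLTS = 120
    AMPS = 20
    HOURS = 30 * 24                  # one billing month, assumed
    BTU_PER_WATT_HOUR = 3.41214148

    watts = AMPS * VOLTS             # Amps * Volts = Watts
    watt_hours = watts * HOURS       # Watts * hours = watt-hours
    btu = watt_hours * BTU_PER_WATT_HOUR

    print(f"Billed capacity: {watts} W")
    print(f"Energy billed per month: {watt_hours / 1000:.0f} kWh")
    print(f"Cooling load billed per month: {btu:,.0f} BTU")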

gcreager

Our experience with blades has been that while they’re useful for HPC applications, they tend to concentrate heat discharge, making good thermal management a significant factor in planning and operations. Doing it over, I’d rather concentrate my systems in a multinode 1u chassis with more even rear-discharge air than a blade system with forced vertical exhaust.

srikanth.cheruku

I think manufacturers know that blades are not that great, but they are just trying their luck by advertising too much.

jskondel

We currently have 21 Dell blades with RHEL 5 and the VMware ESX suite installed. Now we have over 110 VMware guests running and growing, ranging from Windows Server to Linux to FreeBSD.

srikanth.cheruku

How much did you pay for the VMware ESX license?

elittle

I have worked with the HP blade systems in conjunction with an HP EVA 4400. This has been a winning combination for me and my company. I can get more out of a blade system running ESX using boot from SAN than I ever could with a bunch of 1U servers. I think that this is the way to go, and the ROI is there.

bongsf

I never liked the idea of having blades + VMware/virtualization as a fit-all-your-60-servers-in-6U solution. Even though it sounded and looked cool, there are just too many drawbacks. Let’s start with the hardware. As mentioned in the article, and contrary to what was written, I/O is STILL a bottleneck. Just look at IBM’s blades, where you are limited to two QLogic FC controllers per blade. This is already an indication that they were never meant for high I/O demand. Remember, we can squeeze 8 cores/32GB of RAM into a blade, but we get a mere 8Gbps total for disk access. That doesn’t even factor in the flexibility of SAN design. As for the network interfaces, all the blades are connected to a six-port Nortel switch. You really need to design the network carefully, segregating the bandwidth by configuring the switch for all the blades, so they don’t all end up squeezing through a mere 1Gbps uplink!
As for virtualization, it’s really about saving money on hardware and sending it to VMware. Look at the pricing of ESX, and it’s RIDICULOUSLY expensive. Sometimes I just think there are too many trend followers dumping their money into a non-feasible solution. Where’s the ease of management when every VM is running on its own? And what’s the big deal about hardware management cost? Do we expect to see a faulty component on every individual server every day? And that’s not counting the case where one blade burns out and 20 VMs die at ONCE! When a 1U dies, ONE machine dies.

stoggy

blade management sucks, i hate those webpages.

give me ssh or give me death!

shanedawg

stoggy, you can do a lot on HP c-Class with the command line. For example, you can upgrade the firmware on 16 blades with a couple of lines of code.

As for the article above, it’s mostly quoted marketing BS. Blades saving you money by having fewer cables? Give me a break.

zwing

LMAO at:

blades saving you money by having fewer cables.

Blades are not for everyone since they require additional planning (network, storage, cooling, etc.).

Take us, for example: we’re a small data center, and instead of having 60 servers ranging from 1U all the way up to 4U, we bought an HP B-class enclosure with 4 blades and a couple of redundant Aberdeen SANs.

This occupies less than 1 rack!!!

Sure, the network configuration was a bit tricky, but everything works out better than before. Now we have HA where it wasn’t possible before due to the space limitations of the site.

As far as ESX goes, we are using ESXi and it’s more than enough for our needs.

So blades for us did work out well enough, and we still have room to grow on that chassis.
