
Bang for the Buck

With the new year right around the corner, it's worth thinking about where you can get the biggest bang for your buck--quite literally. In a lot of organizations, budgeting is a funny exercise that requires you to "use it or lose it" at the end of the year while also having surprisingly detailed plans for next year's money.

The vast majority of IT folks I know are not fond of the budgeting process. That’s probably putting it very mildly. In a field like ours, it’s incredibly difficult to know what you’re going to need nearly a year from now. Requirements and technology both have a way of changing faster than we’d really like. Even the impressive technological leaps that appear from time to time result in a lot more short-term hassle than we anticipate. That’s even more true for early adopters, who get to find the bugs and support nightmares that make second generation products so much better.

But instead of ranting on about how budgeting for IT projects and ongoing support/infrastructure should be done, let’s take a very pragmatic view. Assume you want to get the most bang for your buck either right now (before the end of the year) or later next year (after the next generation of hardware appears, pricing improves on today’s high-end gear, and you actually have money again).

My aim here is to look at improvements to the typical LAMP or LAMP-like stack. For each component, I’ll consider where an upgrade can help now as well as where the trends are pointing for 6-9 months from now.

CPU

If your servers are aging and your CPU-bound application servers or analytic databases could use a speed boost, now may be a great time to swap in some new Nehalem-based machines (the Xeon 5500 series). Doing so is a great way to modernize your servers with replacements that use less power, can accommodate more DRAM, and provide noticeably higher single- and multi-threaded performance at the same power utilization as previous generations. Modern quad-core Nehalem processors use Hyper-threading to provide 8 virtual CPU cores (as far as Linux is concerned). Replacing an older generation dual-CPU, dual-core AMD box with a modern dual-CPU, quad-core Intel Nehalem machine means going from 4 cores to 16 logical cores. Combined with the increased memory sizes that are supported on the newer machine, you can load a lot more concurrent work onto a single box that draws less power.
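If you want to see exactly what Linux thinks it has to work with, a quick sketch like the following distinguishes logical CPUs (which include Hyper-threading siblings) from physical cores by reading /proc/cpuinfo:

```python
import os

# Logical CPUs as Linux sees them (physical cores x Hyper-threading siblings)
logical = os.cpu_count()
print("logical CPUs:", logical)

# Count distinct (socket, core) pairs to find the number of real physical
# cores; on a Hyper-threaded box this will be half the logical count.
physical = set()
phys_id = None
for line in open("/proc/cpuinfo"):
    key, _, val = line.partition(":")
    key = key.strip()
    if key == "physical id":
        phys_id = val.strip()
    elif key == "core id":
        physical.add((phys_id, val.strip()))

# Some virtualized /proc/cpuinfo files omit the id fields; fall back to logical
print("physical cores:", len(physical) or logical)
```

On a dual-socket quad-core Nehalem box with Hyper-threading enabled, this reports 16 logical CPUs and 8 physical cores.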

Looking ahead to next year, Intel is expected to release six-core (“Gulftown”) and eight-core (“Beckton”) Nehalem-based CPUs. That means two things if you’re willing to wait. First is the obvious: you’ll get even more bang for the buck in the form of higher density and concurrency (up to 32 “cores” in a dual-processor machine with Hyper-threading), not to mention any incremental improvements Intel throws in along the way. Second, the release of the next generation always pushes the price of the existing generation down even further. That means you can expect to see some very affordable quad-core machines on the market.

If you’re thinking about getting into virtualization to better consolidate your services next year, you’re going to find some incredibly powerful CPUs waiting for you. If you’re planning to experiment or do more with MapReduce style batch processing using Hadoop or similar tools, you’ll be amazed at what’s possible with all those cores.

RAM

Memory prices have been fairly reasonable lately. And since it’s common to procure servers with less than maxed-out memory configurations, it’s a good time to think about whether adding memory anywhere makes sense. On fairly memory-intensive applications, such as memcached, Redis, or any number of distributed in-memory key/value stores, you often see fairly low CPU utilization while data may be LRU-ing out faster than you’d like. Or maybe you simply wish to grow your caching tier but don’t like the prospect of having to administer even more servers (assuming you have room for them in the first place).
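The way to tell whether memcached is evicting data faster than you’d like is to watch its `evictions` counter. Here’s a minimal sketch of parsing the output of the `stats` command; the numbers below are a canned sample for illustration, while in practice you’d fetch live counters with something like `echo stats | nc host 11211`:

```python
def parse_stats(text):
    """Parse memcached 'stats' output into a dict of counter name -> int."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            try:
                stats[parts[1]] = int(parts[2])
            except ValueError:
                pass  # skip non-numeric values such as the version string
    return stats

# Canned sample output for illustration
sample = """\
STAT get_hits 941000
STAT get_misses 59000
STAT evictions 12000
STAT limit_maxbytes 17179869184"""

s = parse_stats(sample)
hit_rate = 100.0 * s["get_hits"] / (s["get_hits"] + s["get_misses"])
print("hit rate: %.1f%%, evictions: %d" % (hit_rate, s["evictions"]))
# A steadily climbing evictions counter means the cache is full and LRU-ing
# data out; more RAM per node (or more nodes) would raise the hit rate.
```

If the hit rate is sagging while evictions climb, adding memory to the existing boxes is usually cheaper than adding nodes.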

So in the short term, look around and see what your configurations look like. Do you have a handful of 16GB boxes that you could easily double to 32GB to allow for some more breathing room? If so, that may be an easy sell with a few thousand dollars left in the budget.

Looking deeper into next year, your best bet on memory is to pair a substantial upgrade with any new server purchases you make. I won’t be at all surprised if it’s common to see servers supporting at least 128GB and likely 256GB of RAM. Sure, that new capacity comes at a price (often a very steep one), but you have the less expensive option of populating it with less dense memory and waiting a year or so in the hopes that the really high-density chips come down substantially in price. It doesn’t always work out that way, but the odds are generally in your favor.

Storage: Hard Disks and SSDs

Probably the single biggest performance boosting change in the last few years has been the introduction of solid-state storage, which I’ve written about several times already. For about three times the cost of a traditional hard disk, you can get something that’s easily 10 times faster in applications that are primarily bottlenecked by seek time: databases, mail servers, and even day-to-day desktop or laptop computing. And not only do you see very dramatic changes in performance, solid state disks use less power than their mechanical counterparts and have no moving parts to worry about.
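A quick way to see how seek-bound a device really is before and after a swap is a small random-read microbenchmark. This sketch uses a small scratch file so it’s self-contained; for a real disk-vs-SSD comparison you’d point it at a file much larger than RAM (or open with O_DIRECT) so the page cache can’t hide the seeks:

```python
import os, random, tempfile, time

def avg_read_latency_us(path, reads=200, block=4096):
    """Average time for a random 4 KB read from `path`, in microseconds."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.time()
        for _ in range(reads):
            os.lseek(fd, random.randrange(max(size - block, 1)), os.SEEK_SET)
            os.read(fd, block)
        return (time.time() - start) / reads * 1e6
    finally:
        os.close(fd)

# Small scratch file purely for illustration; the page cache will serve most
# of these reads, so expect unrealistically low numbers here.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4 * 1024 * 1024))
path = f.name
print("avg random read: %.1f us" % avg_read_latency_us(path))
os.remove(path)
```

On an uncached dataset, a 7200 RPM disk typically lands in the several-millisecond range per read while an SSD is well under a millisecond, which is the whole story for seek-bound workloads.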

If you’ve got a bit of money left at the end of the year and want to give your larger-than-RAM database servers a real performance boost, consider looking at some solid state storage. Intel’s “Extreme” line of SSDs has proven to be very popular in demanding environments. They’re reliable, fast, and very well supported.

The biggest downside of today’s solid state storage is capacity (or density). You really can’t go beyond a few hundred gigabytes per drive with today’s technology. So for very large datasets, you’ll end up needing to RAID a fair number of SSDs together. There is hope on the horizon, though. I’ve spoken to people from several companies that work with flash memory manufacturers and they all assure me that technology under development now will significantly close that gap while also boosting performance even more. They’re expecting the next generation of storage devices to appear in the second half of 2010. Plus, newer RAID controllers are getting smarter about being able to take advantage of the unique characteristics of SSDs too.

If you’re in an environment that’s more likely to use a Network Attached Storage (NAS) or Storage Area Network (SAN) solution, there’s good news for next year too. The work of companies like Sun (with products like their 7000 series) has proven the value of hybrid flash and traditional storage configurations in larger environments as well. That combined with the maturing of the solid state storage market has prompted virtually all the major NAS and SAN companies to add solid state offerings to their product lines as well.

So the bottom line is that if you haven’t added solid state storage to your environment this year for cost or capacity reasons, late next year should have a very compelling selection of products available.

Networking: 10 Gigabit Networking

It’s common to see gigabit networking gear in just about any data center these days. And even though the next generation, 10 gigabit networking, has existed for a while, the price for making that jump has been very high. If you’ve been waiting for the price to come down before upgrading, now may be a good time to look again. The prices have improved a fair amount.

But what’s more likely is that you’re not seeing a compelling reason to upgrade. In a lot of deployments, standard gigabit ethernet provides enough bandwidth to handle bursts of activity while delivering very good performance for most workloads. Part of the problem is that when looking at networking gear, it’s all too common to focus on bandwidth: how much data can you move per unit time? But the bottleneck for a lot of applications, such as cache servers and NFS servers, is actually latency: how long does it take to move a single packet from machine A to machine B?

When it comes to latency, 10 gigabit ethernet provides some very real benefits. Along with the 10x jump in bandwidth, the time to serialize a packet onto the wire drops by the same factor, so latency-sensitive applications like memcached can respond noticeably faster with nothing more than a network card swap. And, of course, the prices of 10 gigabit gear continue to fall as the volume sold increases.
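Before deciding whether a link upgrade will pay off, it’s worth measuring the round-trip time your application actually sees. This sketch times small request/response round trips against a tiny echo server; the server here runs on loopback purely so the example is self-contained, and you’d point `measure_rtt_us()` at a real service across the link you care about:

```python
import socket, threading, time

def measure_rtt_us(host, port, rounds=500):
    """Average request/response round trip to an echo service, in microseconds."""
    cli = socket.create_connection((host, port))
    cli.sendall(b"x")  # warm-up round, not timed
    cli.recv(64)
    start = time.time()
    for _ in range(rounds):
        cli.sendall(b"x")
        cli.recv(64)
    cli.close()
    return (time.time() - start) / rounds * 1e6

# Tiny echo server on loopback, for illustration only
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    while True:
        data = conn.recv(64)
        if not data:
            break
        conn.sendall(data)
    conn.close()

threading.Thread(target=serve, daemon=True).start()
print("loopback RTT: %.1f us" % measure_rtt_us(*srv.getsockname()))
```

If your measured round trips to a memcached node are dominated by network time rather than server time, that’s the signal that faster NICs will actually show up in application response times.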

Given the current pricing and availability, I’d look at doing only very selective upgrades today: focus on the places where you have real latency bottlenecks. A year from now, however, the pricing may have improved enough that it’s worth opting for 10 gigabit networking on new server orders.
