Are the days of IT's ultra-geeks numbered with the introduction of virtual data centers and cloud computing?
Out with the old and in with the new, or so the story goes. New technologies call for new strategies, new thinking and new ways of dealing with old problems. Yes, that is the definition of a paradigm shift, and that’s exactly what we’re experiencing in today’s data centers: a shift from the traditional physical server, with its failure-prone parts, to servers made up entirely of electrons. Virtualization and cloud computing are here to stay, and each of them removes the need for any capacity planning for services rendered on those technologies.
Do you agree?
I hope not.
Capacity planning often makes the difference between $2 million well spent and $2 million wasted on a project’s infrastructure.
There’s almost nothing worse than wasted computing power: gaggles of systems sitting there 97% idle, 10% of their disk used and all those gigs of RAM unfilled. That’s the sound of wasted capacity and wasted money. Capacity planning keeps this waste to a minimum. It is defined loosely as the science of calculating the expected computing power needed to perform a specific job or function. The word calculating is the operative one in that definition. Capacity planners perform complex calculations based on their experience, hardware technical specifications, similar or comparable loads on other systems and service load expectations.
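The kind of calculation involved can be as simple as sizing a cluster against a projected peak load. A minimal sketch follows; every figure in it (request rates, per-server throughput, headroom and growth factors) is a hypothetical placeholder, not a measured or vendor-specified value.

```python
import math

def servers_needed(peak_requests_per_sec, per_server_capacity,
                   target_utilization=0.70, growth_factor=1.25):
    """Size a cluster for peak load, leaving headroom and planning for growth.

    All parameters are illustrative assumptions: target_utilization keeps
    each box below 70% busy, growth_factor plans for 25% load growth.
    """
    effective = per_server_capacity * target_utilization  # usable capacity per server
    projected = peak_requests_per_sec * growth_factor     # load we plan for
    return math.ceil(projected / effective)

# Hypothetical example: 4,000 req/s peak, 500 req/s per server.
print(servers_needed(4000, 500))  # 15
```

The point is not the arithmetic itself but that the answer is calculated from stated assumptions rather than guessed, so it can be revisited when the assumptions change.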
Capacity planners also concern themselves with power consumption, network capacity and performance of all hardware, software and user loads on the systems in question.
In projects where no capacity planning exists, hardware is either grossly underutilized as previously described or there is a scramble to add capacity to an underpowered solution. Both scenarios are costly and result in reduced service to customers.
Ongoing performance monitoring with tools such as sar, iostat or vmstat is useful in providing performance snapshots but is not predictive. Orca is an example of a predictive tool that provides numerical and graphical performance trends, allowing you to make decisions based upon hourly, daily, weekly, monthly and quarterly data.
This data provides the capacity planner with enough information to make decisions far enough in advance of a failure to avert service loss.
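The predictive idea behind such trend tools can be sketched in a few lines: fit a straight line to utilization samples and estimate when the trend will cross a capacity threshold. This is a minimal illustration of the technique, not how Orca itself works, and the sample data is invented.

```python
def days_until_threshold(samples, threshold=85.0):
    """Least-squares linear fit over daily utilization samples.

    Returns the estimated number of days after the last sample until the
    fitted trend line reaches `threshold`, or None if utilization is flat
    or falling. Samples are assumed to be one per day, oldest first.
    """
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # no upward trend to extrapolate
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope  # day index where line hits threshold
    return max(0.0, crossing - (n - 1))        # days beyond the last sample

# Invented data: CPU utilization creeping up about one point per day from 60%.
print(days_until_threshold([60, 61, 62, 63, 64]))  # 21.0
```

Twenty-one days of warning is exactly the kind of lead time that lets a planner act before, rather than after, a service degrades.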
The Virtual Bottleneck
So, what do capacity planning and performance monitoring have to do with virtualization? Many virtualization projects arise out of a need to save money on hardware through server consolidation. Server consolidation targets underutilized systems, whose workloads can therefore be combined onto virtual machines and virtual hosts. Without performance monitoring and capacity calculations, these projects would fail, leaving virtualization to the ivory towers of academia.
So, are underutilized systems the only ones that are considered for virtualization? Certainly not. Even systems that are reaching their resource limits are excellent candidates for virtualization. Why? There is a tendency to think of virtual machines as being very different from physical machines. They are different in form but not in function, and that’s where one needs to experience a paradigm shift. Think of a computer, physical or virtual, as CPU power, memory, disk space and network connectivity: in essence, a resource or set of resources available for workload deployment. A system that has reached its end of life, aged beyond a reasonably priced service contract or exceeded its capacity needs its workload moved to a virtual format.
The system that is over capacity should have its workload split into multiple virtual machines. The way to know if the system is over capacity is to monitor its performance over time, bringing us back full circle to the concept of capacity planning.
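Splitting a workload this way quickly becomes a resource-fitting exercise: how many VM-sized slices fit on one virtual host, and how many hosts the full workload needs. The sketch below illustrates that arithmetic with made-up resource figures; real numbers would come from the monitoring data discussed above.

```python
import math

# Hypothetical host and VM sizes, for illustration only.
host = {"cpu_cores": 32, "ram_gb": 256}
vm_size = {"cpu_cores": 4, "ram_gb": 24}   # one slice of the split workload
workload_slices = 6                        # workload split into 6 VMs

# A host can hold only as many VMs as its scarcest resource allows.
vms_per_host = min(host[k] // vm_size[k] for k in vm_size)
hosts_needed = math.ceil(workload_slices / vms_per_host)

print(vms_per_host, hosts_needed)  # 8 VMs fit per host; 1 host suffices
```

Note the `min` over resources: a host with spare RAM but exhausted CPU cores is still full, which is why planners track every resource, not just the obvious one.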
Capacity planning is a preliminary and an ongoing task in any data center for all resources, whether they’re physical or virtual. Remember that workloads use resources. Forget the concept of distinct computers and physical resources. Use your resources wisely by tuning in to proper resource allocation with a little planning. The idea of saving money isn’t obsolete but wasting it is.
Kenneth Hess is a Linux evangelist and freelance technical writer on a variety of open source topics including Linux, SQL, databases, and web services. Ken can be reached via his website at http://www.kenhess.com. Practical Virtualization Solutions by Kenneth Hess and Amy Newman is available now.