Virtualization Through Thick and Thin

Thin provisioning sounds like a good idea in theory, and sometimes it is, but when it goes bad, it really goes bad.

Back in the good old physical-server days, you bought a server, added disks to it and, when you came close to filling those disks, either added more or replaced them with larger ones. Times have changed in the virtual world. You can still provision a static disk (thick provisioning), which is typically much too large for the workload and any reasonable amount of growth, but you do it to prevent that middle-of-the-night ‘disk is full’ call. With thin provisioning, you don’t have to worry about that call anymore. Or do you?

Thin provisioning is an option presented to you during the virtual machine’s disk-creation step. The “thin” option is a dynamically expanding disk: initially, the virtual disk uses only as much space as needed, then expands as necessary during the life of the virtual machine. And what’s even better than that, you ask? You can convert from a static disk to a dynamic (thin) disk even while your virtual machine is running. This sounds like a dream come true, doesn’t it? It is. But if you aren’t vigilant, it can be your worst nightmare, a fact you won’t find in any marketing literature.

Thin Theory

The theory behind thin provisioning is solid. All too often, system administrators over-provision virtual machine disk space, which results in hundreds of gigabytes of unused and, therefore, wasted space. Thin provisioning comes to the rescue with the solution of using only what’s necessary. And, to its credit, thin-provisioned systems use up to 80% less space than their thick-provisioned counterparts. An extreme savings indeed.

One downside to this incredible disk- and money-saving technique is, in VMware’s own words: “Thin provisioning can lead to oversubscription.”* Oversubscription is sort of the whole point of thin provisioning, isn’t it? Your desire is to cram more virtual machines onto a virtual infrastructure than you really have disk resources to support. That is the definition of oversubscription. And it sounds a lot like memory overcommitment: another great idea hatched by someone trying to squeeze one more ounce of capacity from an already stressed resource pool.
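To see how quickly oversubscription sneaks up on you, consider a quick back-of-the-envelope check. The capacities and VM counts here are hypothetical, but the math is the whole story: add the provisioned sizes, divide by what the SAN actually holds.

```python
# Back-of-the-envelope oversubscription check (all numbers are hypothetical).
physical_gb = 1000                        # what the SAN actually holds
provisioned_gb = [100] * 10 + [100] * 6   # ten production VMs plus six new thin ones

total_provisioned = sum(provisioned_gb)
ratio = total_provisioned / physical_gb

# prints: Provisioned 1600GB on a 1000GB SAN -> 1.6x oversubscribed
print(f"Provisioned {total_provisioned}GB on a {physical_gb}GB SAN "
      f"-> {ratio:.1f}x oversubscribed")
```

Anything over 1.0x means you’re betting that your guests never fill the disks they think they have.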

The other major downside to thin provisioning is performance. Dynamically expanding disks are not as nimble as their static brethren; they can’t be, due to the nature of current disk-layout technology. Performance problems arise from fragmentation of these dynamically expanding files. Remember that a virtual disk is, underneath it all, a set of files, and when those files become fragmented, you take a performance hit.

Other performance problems arise from having too many disk I/O-intensive applications running in your virtual machines. Think about how writing to multiple places on your disk at once might affect performance. Logging is perhaps the major culprit in these bottlenecks.

Thick vs. Thin

Let’s say that you’ve provisioned ten new virtual machines that consume 100GB of disk space each. You’ve used 1TB of space with those ten systems. Is that really what you want to do? How much SAN space do you have available for more virtual machines? If your SAN is 1TB, you’re out of space and can’t provision new virtual machines. You’re done until you purchase more SAN capacity. So, start your ten virtual machines and be happy forever. Or is there something you can do about it?

You realize that your shiny new vSphere 4 allows you to create thin-provisioned disks or convert thick ones to thin. You convert all ten of your newly created virtual machines’ disks from thick to thin. You now have over 400GB of free SAN space available. Look at all the space you wasted by using thick provisioning.
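The space math behind that scenario is worth spelling out. The per-VM actual usage below is an assumption (about 55GB of real data per guest, which is what would leave you a bit over 400GB free); everything else comes straight from the scenario above.

```python
# Thick vs. thin space math for the ten-VM, 1TB SAN scenario.
san_gb = 1000
vm_count = 10
provisioned_per_vm_gb = 100   # thick: the full 100GB is reserved up front
actual_use_per_vm_gb = 55     # thin: only real data consumes space (assumed figure)

thick_free = san_gb - vm_count * provisioned_per_vm_gb
thin_free = san_gb - vm_count * actual_use_per_vm_gb

print(f"Free with thick provisioning: {thick_free}GB")   # the SAN is completely full
print(f"Free after converting to thin: {thin_free}GB")   # hundreds of GB reclaimed
```

Note that nothing about the guests changed; the “reclaimed” space exists only as long as the guests don’t grow into the 100GB they were promised.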

Now, you have space for several more thin-provisioned virtual machines. You create two test systems and four new staging systems and still have almost 300GB of free SAN space. You’re giddy with excitement.

Finally, you create a virtual machine to provide a software repository for you and your team. Additionally, you create a file server for the Marketing department’s presentations, graphics and other multimedia files.

You’re bursting with pride at the prospect of solving everyone’s problems with your newfound thin-provisioning magic.

Achieving Balance

How can something so good go so wrong, you ask? When the Marketing department starts filling its system with files and your support cohorts store too many of their favorite ISOs, music files, gold images and network-available software, your world isn’t so rosy. In fact, it might turn any color except rose at this point. You’ve run out of space and there’s nowhere to put more files.

You also notice that the performance of your production systems has slowed to a crawl. There’s no room for expansion for the systems that really need it. You’ve also errantly mixed production, test and user systems on the same infrastructure.

One solution to the problems of thin provisioning is to create statically sized virtual disks for your logs and other write-intensive application data stores. The other is to figure out which of your virtual machines would benefit from thin provisioning and which ones would be hurt by it. Thick and thin provisioning is not an all-or-nothing proposition; you can mix the two with good results.
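One way to keep that mix-and-match decision honest is to write it down as an explicit rule rather than deciding VM by VM from memory. The sketch below encodes the rule of thumb from this section; the inputs and the rule itself are illustrative, not official VMware guidance.

```python
# Illustrative provisioning policy, following this article's rule of thumb:
# write-heavy workloads and log volumes get thick (static) disks, everything
# else gets thin. The categories are assumptions, not VMware guidance.
def choose_provisioning(write_intensive: bool, is_log_store: bool) -> str:
    """Return 'thick' for write-heavy or log volumes, 'thin' otherwise."""
    if write_intensive or is_log_store:
        return "thick"   # static sizing avoids expand-on-write fragmentation
    return "thin"        # mostly idle data benefits from the space savings

print(choose_provisioning(write_intensive=True, is_log_store=False))   # thick
print(choose_provisioning(write_intensive=False, is_log_store=True))   # thick
print(choose_provisioning(write_intensive=False, is_log_store=False))  # thin
```

The point isn’t the code; it’s that the thick-or-thin call should be a deliberate, per-workload decision you can defend, not a default you clicked through.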

You should also separate your environments. Never place volatile systems (Test, Development, User) on the same infrastructure as your production ones. Such configurations cause instability that is too risky for production.

There are advantages and disadvantages to any technology strategy. Thin provisioning isn’t the exception to that rule. Careful planning and smart provisioning will help you achieve the disk savings you’re looking for without sacrificing performance.

* VMware whitepaper titled, “VMware vStorage Thin Provisioning.”
