Question: Hyper-V, VMware, Citrix -- do any of them have it right? Answer: Only if you could combine all three into a single product.
Taking several virtualization products for test drives lately prompted me to develop my virtual wish list for the ultimate Hypervisor software. To be fair, each of the big three (Microsoft, VMware and Citrix) is drawing nearer to my ideal Hypervisor with its newest version, but they all still fall short of perfection. This wish list gives them something to strive toward in their quest for the ultimate virtualization software.
When I say that all of the major virtualization software vendors fall short of perfection, I mean that there is no single Hypervisor for all seasons. Citrix is great for cloud computing, VMware is awesome for server virtualization and for providing virtually zero downtime, and Microsoft's Hyper-V offers the best in Exchange and SQL Server virtualization. But where is the Hypervisor that does it all? Sure, they all claim to, but in reality (the data center), they don't.
Fault Tolerance and High Availability
VMware's vSphere, as discussed in VMware vSphere: Out of the Box and into the Clouds, has the right idea here, and in my Ultimate Hypervisor (UHV), high availability is at the top of the list. I would prefer that the fault tolerance algorithm work with two, three, four, or more host systems. The kind of fault tolerance or high availability needed isn't marketing hype or hearsay; it's the real thing.
Each host system is aware of the other host systems and their virtual machines (VMs). Not only should the host systems enjoy fault tolerance but so should the VMs. Hosts would keep track of other host heartbeats and VM heartbeats. For example, you have two host systems running 8 VMs each. The two hosts would monitor VM heartbeats and, should a VM fail on one host, the other host would pick up that workload so that its services are never in an offline state. That’s true fault tolerance and high availability.
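The two-host scenario above can be sketched in code. This is a hypothetical illustration, not any vendor's actual failover mechanism; the host names, VM names, and timeout value are my own assumptions.

```python
import time

# Illustrative sketch of heartbeat-based VM failover between peer hosts.
# The 15-second timeout is an assumed value, not from any real product.
HEARTBEAT_TIMEOUT = 15  # seconds without a heartbeat before a VM is declared failed

class Host:
    def __init__(self, name):
        self.name = name
        self.vms = {}  # vm_name -> timestamp of last heartbeat received

    def record_heartbeat(self, vm_name, now):
        self.vms[vm_name] = now

    def failed_vms(self, now):
        """Return the VMs whose last heartbeat is older than the timeout."""
        return [vm for vm, last in self.vms.items()
                if now - last > HEARTBEAT_TIMEOUT]

    def take_over(self, vm_name, from_host):
        """Absorb a failed VM from the peer host so its services stay online."""
        del from_host.vms[vm_name]
        self.vms[vm_name] = time.time()

def monitor(host_a, host_b, now):
    """Each host watches its peer's VM heartbeats and picks up failures."""
    for watcher, peer in ((host_a, host_b), (host_b, host_a)):
        for vm in peer.failed_vms(now):
            watcher.take_over(vm, peer)
```

With two hosts registered, a VM whose heartbeat goes stale on one host is restarted on the other, which is the "true fault tolerance" behavior described above.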
Superior Disk I/O
My second major wish is for superior disk I/O. I want to be able to virtualize any workload, even those that require heavy disk reads and writes. Why should I go to the expense of a Storage Area Network (SAN) when any disk should work? I should be able to use local disks, Network Attached Storage (NAS), or SAN with excellent I/O.
This aspect is where XenServer shines. It has unsurpassed disk I/O for VMs of almost any type. XenServer also has an added advantage in that it can install on and use any type of disk. Restricting one's software to the use of SCSI disks only is cutting one's own throat, but this is covered under my Expanded Hardware Compatibility wish.
A Common Virtual Machine Format
When you create a VM in VMware ESX and then you decide to switch to XenServer, do you want to recreate all those VMs from scratch? It might be possible by faking a P2V migration to convert from ESX VMs to Xen, but wouldn't it be easier to just have a common format to export to and import from? And, yes, I know about the Open Virtual Machine Format (OVF) initiative and Project Kensho, but it needs to be implemented in all virtualization products, not just proposed and kicked around or implemented as a third-party application.
My wish is for the integrated ability to export to and import from OVF definitions so that VMs are Hypervisor agnostic.
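To make the idea concrete, here is a sketch of reading a hypervisor-agnostic descriptor. The XML below is a deliberately simplified OVF-style envelope (a real OVF descriptor also carries References, DiskSection, NetworkSection, and hardware items), and the VM names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# OVF 1.x envelope namespace (DMTF).
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

# Simplified, illustrative descriptor -- not a complete, valid OVF package.
SAMPLE_OVF = f"""<?xml version="1.0"?>
<Envelope xmlns="{OVF_NS}" xmlns:ovf="{OVF_NS}">
  <VirtualSystem ovf:id="web-server-01">
    <Name>web-server-01</Name>
  </VirtualSystem>
  <VirtualSystem ovf:id="db-server-01">
    <Name>db-server-01</Name>
  </VirtualSystem>
</Envelope>"""

def list_virtual_systems(ovf_xml):
    """Return the ovf:id of every VirtualSystem in the descriptor."""
    root = ET.fromstring(ovf_xml)
    return [vs.get(f"{{{OVF_NS}}}id")
            for vs in root.findall(f"{{{OVF_NS}}}VirtualSystem")]
```

Any hypervisor that could parse and emit this one format would let VMs move between ESX, XenServer, and Hyper-V without a rebuild.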
True Automated Workload Motion
The wish for true automated workload motion might prove to be the most difficult of all to implement. When I say "true" automated workload motion, I mean that the Hypervisor would have some built-in intelligence to it for monitoring hosted workloads. This intelligence would come with the ability to provide dynamic feedback on the total system workload (all virtual hosts and VMs), idle resources, and capacity.
Each virtual host donates all of its non-operating system resources to the virtual machine resource "pool" where all VM workloads exist. VMs pull resources, as needed, from the resource pool and return them when finished or when needs subside. This feature is similar to Distributed Resource Scheduling (DRS) in VMware's Virtual Infrastructure products.
The difference is that instead of redistributing workloads to different hosts, in my wishful scenario, the workloads would exist in the pool and not on specific hosts. The "motion" part is only concerned with resources in and resources out of the pool, not with any individual host.
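The donate/pull/return cycle described above can be sketched as a simple shared pool. This is a toy model of the idea, not DRS or any shipping feature; the CPU and memory figures are arbitrary.

```python
# Illustrative sketch: hosts donate resources to one shared pool and VM
# workloads draw from the pool rather than being pinned to a specific host.

class ResourcePool:
    def __init__(self):
        self.free_cpus = 0
        self.free_mem_gb = 0

    def donate(self, cpus, mem_gb):
        """A host contributes its non-OS resources to the pool."""
        self.free_cpus += cpus
        self.free_mem_gb += mem_gb

    def allocate(self, cpus, mem_gb):
        """A VM pulls resources from the pool, if capacity allows."""
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            return False
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        return True

    def release(self, cpus, mem_gb):
        """A VM returns resources when its needs subside."""
        self.free_cpus += cpus
        self.free_mem_gb += mem_gb
```

Note that a VM can be granted more resources than any single host donated, which is exactly what makes the pool, rather than the host, the unit of motion.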
Vendor-Agnostic Management
I want my management software to be vendor-agnostic. I want to be able to manage my UHV, XenServer resources, VMware vSphere resources, Hyper-V resources, KVM resources and even Solaris Zone resources from a single management application. An application like this would serve hosting environments, large corporations, cloud vendors and smaller companies with multi-vendor solutions.
To create the application in this vendor agnostic fashion, you would purchase modules for the base application to manage your environments. For example, you have Hyper-V, Solaris Zones and XenServer running in your Data Center. Purchase the base application and the modules for Hyper-V, Zones and XenServer. It would be sort of an Eclipse for Virtualization.
If host and VM management were left to third-party vendors, virtualization software developers could focus their efforts on creating something closer to the UHV. All management shouldn't be left to a third party, though; I'm not that short-sighted. There should be minimal tools provided to create, delete, edit and do some housekeeping tasks, but for elegant tasks, a third-party application with modules makes a lot of sense to me.
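The "Eclipse for virtualization" module model might look something like this. The module classes, their names, and the VM lists they return are all assumptions for illustration, not any vendor's real API.

```python
# Hypothetical sketch of a vendor-agnostic console that loads one module
# per hypervisor, in the spirit of a base application plus purchased modules.

class HypervisorModule:
    """Interface every vendor module would implement."""
    def list_vms(self):
        raise NotImplementedError

class XenServerModule(HypervisorModule):
    def list_vms(self):
        return ["xen-vm-01", "xen-vm-02"]   # stand-in for a real API call

class HyperVModule(HypervisorModule):
    def list_vms(self):
        return ["hyperv-vm-01"]             # stand-in for a real API call

class ManagementConsole:
    def __init__(self):
        self.modules = {}

    def register(self, name, module):
        """Plug in a purchased vendor module."""
        self.modules[name] = module

    def all_vms(self):
        """One inventory view across every registered hypervisor."""
        return {name: mod.list_vms() for name, mod in self.modules.items()}
```

Adding support for Solaris Zones or KVM would then mean registering one more module, with no change to the console itself.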
Expanded Hardware Compatibility
I’ve been extremely frustrated in trying to install VMware’s ESX and ESXi on my hardware. The same goes for Hyper-V. Virtualization is supposed to save money not make you spend a bundle trying to use it. Vendors need to expand their hardware compatibility lists (HCLs) to include SATA, IDE, SCSI and all controller types. They should also allow you to install on hardware that isn’t necessarily virtualization-enhanced. There are plenty of multi-core systems capable of handling virtual workloads without the special Intel-VT and AMD-V extensions.
Commodity hardware and repurposed hardware are fine examples of systems that could be used for handling virtual workloads. If I want to convert my 20 desktops and 5 physical server systems to virtual machines, why should I have to spend $50,000 or more to do so? For that much money, I can refresh my hardware every three to five years and deal with the same old break/fix model that I’ve dealt with for years.
Give us a break in these stressful economic times with a Hypervisor that runs on anything.
I won't hold my breath waiting for my UHV, but it's nice to ponder it while I wait for current virtualization-compatible hardware prices to fall into my budget range. If vendors want uptake of their products, they'll make them more accessible in price, in compatibility and in useful features. Let's see someone create a virtualization product for SMBs. I think desktop-level and enterprise-level virtualization are well covered.