Being technical, you’re probably eager to jump into the “how” of virtualization. But let’s take a step back for a minute and look at just “what” this technology is.
I don’t know anything about Linux. “Why then,” you may be asking, “am I reading this?” This is, after all, Linux Magazine, and you probably kind of expect the contributors to know a little something about, well, Linux.
I’m not writing this column because I know Linux. I’m writing this column because I know virtualization, which, when deployed on a desktop or server, lets me enjoy Linux and all that it has to offer without giving up other OSes (and more importantly, their applications) that I depend on every day to get my job done.
This column, which focuses on the “what” of virtualization, is the first in a series that will explore the different technologies and methodologies that populate the industry, best practices for deploying virtualization in desktop and server environments, how to effectively integrate real and virtual resources into a single, smoothly running IT infrastructure, and of course, how to manage it all without wanting to blow your brains out.
What is virtualization?
Amit Singh, author of Kernelthread.com and an all-around fount of knowledge for all things virtual, defines virtualization as “a framework or methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others.”
In English, this means that via virtualization, you can make one computer work as multiple computers, thereby allowing you to run multiple disparate OSes or multiple instances of the same OS, at the same time, on the same box, without rebooting.
Although it’s received a lot of press recently, virtualization as a concept has been around almost as long as computers themselves. In fact, computer scientists were deploying basic forms of virtualization on mainframes as early as the 1960s via a technology called “time sharing.” Since then, virtualization has gradually moved from mainframes to servers to desktops for both consumers and enterprises.
You’ll find several popular approaches to virtualization, some of which are “true” virtualization, and some of which aren’t. We’ll talk about the short list of technologies that you’ll need to know.
Hardware Virtualization

In a hardware virtualization solution, guest OSes run in completely isolated, independent virtual machines that perform exactly like stand-alone computers. Each virtual machine works with its own processor, RAM, floppy and CD drives, I/O devices, keyboard, mouse and hard disk — everything a physical computer contains. This provides a tremendous advantage to IT infrastructures, as it means that multiple disparate OSes can run simultaneously on a single machine.
The drawbacks to this approach include a slight-to-moderate performance hit compared to native operation, and the fact that OSes running in virtual machines must work natively on the underlying chipset. For example, a virtualization solution running on an x86 chipset can only run x86 operating systems.
A technology that is increasingly important to hardware virtualization is the hypervisor, which in layman’s terms, is a thin layer of software that sits directly on the hardware layer to control critical hardware resources.
Hypervisor technology, which is offered by several vendors including XenSource, VMware, and my company’s Parallels business unit, enables all operating systems to work at the same layer (as opposed to a non-hypervisor solution, in which guest virtual machines run on top of a primary OS). This means that each virtual machine’s virtual hardware resources can connect directly to the host machine’s hardware resources by making “hypercalls” to the hypervisor, rather than having to “tunnel” through the primary OS.
The hypervisor approach can deliver better virtual machine stability, isolation, and performance than a non-hypervisor solution. Hypervisor technology also enables the use of processor-level technologies such as Intel VT (Intel Virtualization Technology) and AMD SVM (AMD Secure Virtual Machine), which offload a substantial amount of the heavy lifting from the virtualization layer to the processor itself, meaning that guest OSes can run much as they would on native hardware. With a hypervisor in place and Intel VT or AMD SVM present, virtual machine performance and stability improve dramatically.
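On Linux, you can see whether your processor advertises these extensions by looking for the vmx (Intel VT) or svm (AMD SVM) flags in /proc/cpuinfo. A minimal sketch; note that the flag only reports hardware capability, and the feature may still be disabled in the BIOS:

```shell
#!/bin/sh
# Look for hardware virtualization flags in the kernel's CPU report.
# "vmx" = Intel VT, "svm" = AMD SVM. Absence of the flag can also
# mean the feature exists but is switched off in the BIOS.
if grep -qE '^flags.*\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "Hardware virtualization extensions present"
else
    echo "No vmx/svm flags found"
fi
```

The same check is what most virtualization installers perform before offering to enable hardware-assisted mode.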
Paravirtualization

Paravirtualization takes a different approach from hardware virtualization. Here, virtual machines are presented with a software interface that is similar, but not identical to, the physical machine’s hardware resources. Rather than fully replicating the underlying hardware, the virtualization layer exposes a slightly modified interface, and guest OSes are adapted to call into that layer directly instead of executing privileged instructions against the real hardware.
This approach can provide extremely high-performing virtual machines, but it has some critical drawbacks. The most significant is that, in most cases, guest OSes must be ported (that is, modified for compatibility) to run within the paravirtualized environment, which means most “out of the box” operating systems won’t be compatible with the solution. The open-source Xen hypervisor is an excellent example of a virtualization tool that uses paravirtualization.
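On a running Linux guest you can often tell whether you’re inside a hypervisor such as Xen, because the kernel exposes it through sysfs. A small sketch, assuming sysfs is mounted at /sys; on bare metal the file simply doesn’t exist:

```shell
#!/bin/sh
# Xen guests (and some other hypervisors on newer kernels) expose
# /sys/hypervisor/type; on physical hardware the file is absent.
if [ -r /sys/hypervisor/type ]; then
    echo "Running under hypervisor: $(cat /sys/hypervisor/type)"
else
    echo "No hypervisor detected via sysfs"
fi
```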
OS Virtualization

A third approach, OS virtualization, doesn’t virtualize hardware at all. Instead, a single operating system is divided into multiple isolated virtual environments (“containers”) that run simultaneously, in direct contact with the real hardware and with no additional processing layers (such as a hypervisor) in between, so overhead is almost zero. With a single OS, critical memory resources are used more efficiently than in any other virtualization technology, enabling up to 10x the number of virtualized environments you could get using hardware virtualization on the same server. However, all virtual environments must run the same operating system.
In a large server deployment, this supposed limitation can actually be an advantage, particularly by eliminating or reducing “OS sprawl,” which can seriously complicate datacenter operations and waste thousands of dollars on unused licenses. Sun employs this method via its Solaris Containers, and my company, SWsoft, works in this field via our Virtuozzo product for Windows and Linux and via our sponsorship of the OpenVZ open-source virtualization project.
Emulation

Although not a true form of virtualization, emulation is often lumped into the category since it provides a similar end result to hardware virtualization. In a pure emulation solution, a complete hardware environment is created in software. For example, GuestPC and Microsoft’s Virtual PC for Mac both emulate an x86 chipset on a PowerPC Mac, thereby allowing the Mac to run Windows and OS X simultaneously. The upside is that any hardware scheme can be created, regardless of the host machine’s configuration. The downside is that pure emulation solutions are very complex and, as a result, deliver poor performance.
The Value of Virtual
The real value of virtualization lies in drastically reduced costs. It eliminates the need to equip each employee with a separate PC for each required OS and application set (in the case of desktop computing), and the need to dedicate each server in the datacenter to a single OS, even though most enterprises run on some mix of Windows and Linux and even a “maxed out” single-OS box rarely comes close to using 100% of its hardware resources. For both server and desktop virtualization, there are a number of potential benefits:
- You can get more out of every dollar you put into your hardware by actually using the hardware you paid for more efficiently. Better yet, eliminate some of your hardware spending altogether.
- Go “green.” By consolidating multiple physical servers, you can save not only on hardware costs, but also lower your company’s energy footprint by reducing operating and cooling costs. A dense solution like OS virtualization is a good choice here.
- “I need, I want, I need, I want…” Now you can give each employee access to multiple machines from a single workstation. Key for developers and testers who work cross-platform.
- System admins rejoice! Test critical new software and patches on disposable virtual machines or virtual environments before deploying them to “real” hardware. No more “guess and check” with mission-critical systems.
- Run legacy OSes such as DOS, OS/2, and eComStation without supporting obsolete real hardware. For this, you’ll need hardware virtualization’s ability to run disparate operating systems simultaneously.
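To put rough numbers on the consolidation argument above: if each physical server sits mostly idle, simple arithmetic shows how many boxes one virtualized host can absorb. The figures below are illustrative assumptions, not measurements from any real deployment:

```shell
#!/bin/sh
# Back-of-the-envelope server consolidation estimate.
# All numbers are hypothetical; substitute your own measurements.
servers=20          # physical servers today
avg_util=15         # average CPU utilization per server, percent
target_util=60      # utilization ceiling per virtualized host, percent

# How many of today's workloads fit on one virtualized host?
per_host=$(( target_util / avg_util ))

# Hosts needed, rounding up.
hosts=$(( (servers + per_host - 1) / per_host ))

echo "Consolidate $servers servers onto $hosts virtualized hosts ($per_host workloads each)"
```

With these sample figures the script reports 20 servers collapsing onto 5 hosts, which is where the hardware, power, and cooling savings in the list above come from.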
Given the many different solutions to choose from, each with its own long list of advantages, drawbacks, and uses, the next logical question is “Which one is best for my business?” Conveniently enough, that’s the topic of the next installment, where we’ll look at common failure points of the traditional “one machine for one OS” physical IT infrastructure and how these virtualization solutions can help alleviate them.
is the director of corporate communications for SWsoft. Have a question or comment about what you read here or what you'd like to see discussed in future articles? Email me at firstname.lastname@example.org!