User Mode Linux (UML) is open source software that allows you to run Linux in a "virtual machine" on top of a physical Linux box. This opens up some pretty powerful possibilities...
Virtual machines have long been a staple in the mainframe world. They offer the ability to partition the resources of a large machine between a large number of users in such a way that those users can’t interfere with one another. Each user gets a virtual machine running a separate operating system with a certain amount of resources assigned to it. Getting more memory, disks, or processors is a matter of changing a configuration, which is far easier than buying and physically installing the equivalent hardware.
This partitioning of a large machine into a number of virtual machines also has security advantages. If a virtual machine gets compromised, only that virtual machine’s resources are at risk. The users of other virtual machines are completely isolated from it, and their data is not in any danger.
These advantages have recently become available to the PC user. Two PC hardware emulators, VMware and its open-source counterpart, Plex86, implement virtual machines by emulating a physical PC in software. This allows them to boot an operating system and run whatever they want under it.
This article describes a third virtual machine implementation that has a fundamentally different design than VMware or Plex86. User-Mode Linux (UML) is a port of the Linux kernel to itself. That is, it considers the Linux system call interface to be a platform, just as the Intel x86 and Alpha architectures are platforms, and it is a port of Linux to that platform.
Booting UML requires a kernel executable and a filesystem to boot it on. Disks inside UML are files in the host filesystem, so the root filesystem is going to be contained inside a file that typically contains an ext2 filesystem image. You can download both the kernel executable and the root filesystem file from the UML SourceForge project page, (http://sourceforge.net/projects/user-mode-linux) or from the UML ftp mirror in the Netherlands (ftp.nl.linux.org:/pub/uml). At both sites there are RPM and Debian packages that contain the kernel executable plus a number of other useful items. Download and install the package and then download one of the many available filesystems and uncompress it.
Now you’re set to go.
If the filesystem file is named root_fs, then UML will boot with the command:
% ./linux
If the filesystem is in a different file, then you need to associate that file with the boot device, which is normally ubd0. For example, you might end up booting UML with something like the following command:
% ./linux ubd0=root_fs_slackware_7_0
You can make more devices available to UML by associating them with host files. The following example (the host file names are illustrative) provides a swap device, another filesystem, and the host CD-ROM and floppy drives:
% ./linux ubd0=root_fs_slackware_7_0 ubd1=swap_fs ubd2=usr_local_fs ubd3=/dev/cdrom ubd4=/dev/fd0
Normally, these filesystems would have entries in /etc/fstab and be mounted automatically, but you can also mount them by hand just as you would on a physical machine:
% mount /dev/ubd/2 /usr/local
% mount /dev/ubd/3 /mnt/cdrom
% mount /dev/ubd/4 /mnt/floppy
The ubd devices change their names inside the virtual machine because of how the driver registers itself with Linux’s devfs. The driver creates the /dev/ubd directory and an entry in it for each configured block device, so the root device is ubd0 on the UML command line and /dev/ubd/0 inside the virtual machine. (If you are unfamiliar with devfs, or if you would just like more information about it, check out the May 2000 Gearheads column located at http://www.linux-mag.com/2000-05/gear_01.html.)
You can easily create your own UML filesystem file. First, create an empty file on the host with a command like:
% dd if=/dev/zero of=my_filesystem bs=$((1024*1024)) seek=100 count=1
This will create a 100 MB empty file that takes up almost no space on the host until data is actually written into it. Change the 100 if you want a different size.
Now boot UML with that file assigned to a block device:
% ./linux ubd1=my_filesystem
After it boots, and you log in as root, you can create a filesystem on it and populate it just as you would on a physical machine:
% mke2fs /dev/ubd/1
% mount /dev/ubd/1 /mnt
Once it’s available as a filesystem, you can copy data into it over the network, or if the data is on the host, you can mount the host directory inside the virtual machine and copy it directly. You can also create the file on the host by creating the empty file as described above and then, as root, creating the filesystem and loopback mounting it:
# mke2fs -F my_filesystem
# mount my_filesystem /mnt -o loop
Now you can copy files into it and unmount it when you're done:
# umount /mnt
At this point, you can assign it to a device, boot UML, log in, and mount it.
Once you’ve got the hang of booting UML, you might want to access the files on the host without packaging them in a filesystem-in-a-file. This can be done with the hostfs virtual filesystem, which turns file accesses directly into file accesses on the host without going through the ubd block driver. You can mount the host /usr onto a UML directory like this:
% mount none /mnt/host -t hostfs -o /usr
Now, the host /usr is available at /mnt/host.
You may notice that, even though you’re logged in as root on the virtual machine, any files you create through hostfs are owned by whatever user you were on the host when you ran UML (and that should be your normal login — there is no reason to run UML as root). This is because the host kernel knows nothing about the virtual machine considering you to be root. All it knows is that files are being created by this process that is running as a normal user. So, when those files are created, they are owned by that user.
At this point, you may wish to start fully exploiting the existence of a new machine and network it with the host machine and any other machines that are available on your net. There are two network drivers available. One is based on SLIP, and the other is an Ethernet driver that can use the host Ethernet device as well as the ethertap and TUN/TAP interfaces. The Ethernet driver is technically superior, but I will use the SLIP driver in this example because it's easier to set up.
If, by extreme coincidence, your local network happens to be 192.168.0.x and the address 192.168.0.253 is unused, then you won't need to do anything special when you boot. Otherwise, you'll need to give UML an unused address on your network on its command line when you boot.
Log in and configure the umn device with another unused address as follows:
% ifconfig umn address hw ether hex-address
hex-address is the same as address except it's disguised as a six-byte Ethernet address. You convert each byte of address into hex, separate the bytes with colons, and put two zero bytes on the end. So, 192.168.0.254 would become c0:a8:0:fe:0:0, and the ifconfig command would then look like this:
% ifconfig umn 192.168.0.254 hw ether c0:a8:0:fe:0:0
Now your virtual machine is on the network. You can telnet, ssh, and rsh in and out of it. If it is running Apache, you can even use it as a Web server.
If you want to allow logins to a virtual machine from the network without actually enabling its network, you can attach its serial lines to a host port by adding this option while booting UML:
ssl=port:9000
This will make the pool of serial lines accessible at port 9000 on the host. To connect to one, all you have to do is telnet to that port:
% telnet host 9000
and if there are gettys running on all the serial lines, you’ll be able to log in on it.
To complete the transformation of your virtual machine into a full-blown Linux box, you will, of course, need to run an X server on it. (Isn’t this always the acid test?) It turns out that there is a little-known — but very useful — X server called Xnest, which serves perfectly as a local X server for UML. It is an X server that uses another X server rather than a video card for its display.
You will need to have Xnest installed in the virtual machine to run it; you will also need to have the network running in UML so that Xnest can communicate with the host X server.
Once the two machines are talking, you can give the virtual machine permission to display to your X server by running xhost on the host machine (assuming UML has the IP address 192.168.0.254):
% xhost +192.168.0.254
and then, inside the virtual machine, running Xnest with its DISPLAY variable set to the host X server:
% DISPLAY=host:0 Xnest :1 &
Of course, you'll need a window manager like fvwm, which provides a fairly useful desktop environment by default:
% DISPLAY=:1 fvwm &
Notice that fvwm's DISPLAY variable is set to the virtual machine's local X server. As far as X clients are concerned, the virtual machine now has its own video card that they can display on.
In the previous example we ran Xnest as an X client on the host X server. If you want to run X clients from inside the virtual machine, and have them display on your normal X server, set your DISPLAY variable to point at the host:
% export DISPLAY=host:0
% xload &
% xterm &
Building and Testing New Kernels
Now that you have this Linux virtual machine at your disposal, what can you do with it? There are a number of possibilities. If you’re a bleeding-edge kind of person and want to try the latest kernels or latest version of your favorite distribution, UML is perfect.
If you want to try the latest version of the kernel, you can boot the UML port of it. There are packages containing the latest kernels and versions of UML on the download sites mentioned earlier.
If you prefer to build your kernels from source, this will soon be possible. An effort is under way to include UML as a separate architecture in the main kernel source tree. Once it's complete, you should be able to build a UML kernel in the same way you build a kernel for other platforms. This may even be a reality by the time this article sees print. If so, you should be able to build UML by downloading the kernel sources and running:
% make menuconfig ARCH=um
and choosing the configuration options you want:
% make dep ARCH=um
% make linux ARCH=um
Defining ARCH=um is required to tell the build process that you want a UML kernel rather than a native kernel. You can define it in your environment if you don’t want to put it on the make command line.
Even if UML is not yet part of the standard kernel source tree, a patch will be available from the download sites referenced earlier. Uncompress it and apply it just like any other patch:
% cd top of kernel source tree
% patch -p1 < uml-patch
Then build it according to the directions above.
Now you can boot up the UML port of that kernel and test it without any risk to your files or data. The only data that it can damage is the data that you explicitly give it access to. Booting UML is also a lot more convenient than rebooting a physical machine on a new kernel.
Trying out new distributions is also easy. You don’t need to install it and hope that everything still works afterwards. You can install it into a file and boot UML from it. Then you can make sure that everything you rely on works as expected and install it on a physical machine after you have become comfortable with it.
UML is also great for things such as trying out new network services that you are afraid might foul up your physical network. In this case, you can run the service on a virtual network between a set of virtual machines, which can be completely isolated from the physical network. Again, you can move it from the virtual network to your physical network once you’re comfortable doing it.
If you’re concerned about network security, you may not be completely happy with the security of the services that you’re running. Services like sendmail and named have a disproportionate number of weak points that can be exploited. You can protect yourself against any new vulnerabilities by running these services inside a virtual machine. If someone manages to break in through one of these services, they’ve only managed to break into a virtual machine, which can be easily replaced, and they have no access to your data on the host.
The Future of UML
At this point, UML creates the illusion of a fully functional uniprocessor virtual machine that closely resembles a physical Linux machine. However, work currently under way is going to change this.
The hostfs filesystem currently allows a UML user to mount host directories inside the virtual machine. The plan is to allow hostfs to mount other items as well. The first objective is to mount remote directories via an ssh or rsh connection. This is essentially the same result that can be achieved with nfs, except it won't require root access on either side.
The next step will be to allow hostfs to mount other host resources, such as databases, inside the virtual machine, which would allow database queries to be performed through the filesystem. A further step would be to mount both a host directory and a database containing the same data at the same time. Writes would go simultaneously to the host directory and the database, keeping the two in sync, so database queries could still be performed without requiring the data to be stored solely in the database.
A further possibility would be to mount many nearly identical remote directories at the same place in the virtual machine. Again, performing a write to the filesystem, such as an installation of a new package, would go out to all of the filesystems at once, making this a convenient way to keep a number of machines up to date with each other.
A generalization of this is to represent other external resources as Linux primitives inside UML. For example, a collection of outside servers could be represented inside a virtual machine as a single process, so that killing it would kill all of the external server processes. Fully exploiting this idea could lead to UML being used to construct specialized environments from outside resources in such a way that those resources are represented as Linux primitives and can be manipulated by all the usual Linux tools, with those actions having arbitrary side effects on the outside world.
A second major area of future development work is getting SMP emulation to work. This will allow a UML virtual machine to be configured to have more processors than its host. Since the virtual processors are threads running on the host, SMP emulation is done by time-sharing them on the smaller number of host processors. This is primarily useful for kernel development. It will allow developers who lack SMP hardware to develop and test SMP kernel code, making it more likely that the code will be SMP-safe from the start. It will also allow scalability experiments to be done with the kernel that go beyond the available hardware. If the emulation is authentic enough, it will mean that the kernel will be ready for big iron even before that hardware is actually available.
Having SMP emulation working will set the stage for the next step, which will be to make a single UML instance run across multiple hosts. It will do this by partitioning its physical memory across those hosts and faulting pages from one host to another as needed. From inside, this will appear to be a single machine and it will not be obvious that it is spread out over multiple physical machines. So, the resources of the hosts will be essentially combined into a single machine, making UML into a type of clustering technology as well.
Initially, the performance of this kind of cluster will be terrible because there are data structures inside the kernel that are accessed frequently from all processors. This data is constantly going to be copied from one node to another, and the nodes are going to spend much of their time waiting for it. However, the ongoing work to support NUMA machines is relevant here because a UML cluster is essentially an extreme form of NUMA, with no global memory and extremely expensive access to the local memory of other processors.
So the UML-based clustering technology is still a bit off, as it would require strong NUMA support in order to perform well. However, it’s likely that UML would greatly assist the NUMA development effort, since it would allow more developers access to virtual NUMA hardware.
The Best Is Yet To Come
UML was originally written to simplify kernel development and debugging. While it has been very good for that, kernel development is just the first of many uses for it. Many people have started experimenting with UML as a virtual hosting mechanism; others are looking at using it as a jail for untrusted users and services.
This is great, but many of the most exciting applications of UML lie in its undeveloped potential. Creating artificial, specialized environments inside virtual machines and clustering are potentially revolutionary applications; you can be sure the UML team is hard at work, making them a reality. It’s safe to say that you’ll be seeing much more of UML in the future.
At this point, the most common use for UML is as a kernel development tool. UML offers two main advantages that make it extremely useful for kernel development — it allows for a shortened development cycle and it reduces the hardware requirements for kernel development.
When the kernel crashes while a developer is working with a physical machine, it is necessary to reboot the machine on a known-to-be-stable kernel so that a fixed test kernel can be copied in; another reboot on the new kernel is then needed before debugging can continue. With UML, the first reboot is not necessary. Also, UML can boot much more quickly than a physical machine, mostly because there’s no wait while the BIOS decides to boot the kernel. An average UML environment can boot to a login prompt in about 15 seconds. A stripped-down boot can be up and running a lot more quickly than that.
UML’s other major advantage is that it eliminates the need for a separate test machine. This is particularly nice when you’re away from home. With UML, you can do kernel development anywhere you can take a laptop. For example, last summer, Paul “Rusty” Russell (the author of the kernel’s netfilter module) was on a tour of Europe and North America for a number of weeks. During that time, he submitted netfilter patches to Linus that had been developed and debugged under UML.
Aside from the benefits it offers kernel developers, UML is also generating major interest among hosting providers. UML allows hosting providers to give an entire machine, root access and all, to a customer without needing to rackmount a new box. With this much interest from the Web hosting community this early, it seems likely that this will become a major use for UML.
UML is not the only virtual machine to become available recently. VMware is another example, and Plex86, which is an open-source counterpart of VMware, is still another. VMware emulates a physical PC to the satisfaction of the operating system and everything running under the OS. So, while UML is a special version of the Linux kernel that runs directly on Linux, VMware is an emulator that runs on Linux (and Windows) that can run the native Intel Linux and Windows kernels.
These two different basic designs lead to different tradeoffs — both have their strengths and weaknesses. Since VMware is a hardware emulator, it can, in principle, run any OS that runs on Intel hardware. However, it would require a major rewrite to make it emulate a different hardware platform. So, VMware is specific to IA32 hardware — but it can do anything that a PC can do, which is what allows it to run Windows under Linux.
Meanwhile, UML isn’t tied to a specific type of hardware, so it can be ported to Linux and other operating systems running on other platforms. However, it will never be able to be any guest operating system besides Linux.
So, the disadvantage of not being able to be anything other than Linux is balanced by much more flexibility in the platform that UML can provide. This article has looked at some aspects of this, such as being able to provide access to the host filesystem and other host resources. In principle, UML can take any resource on the host or anywhere on the network and make it available inside the virtual machine in any form that makes sense.
UML is a port of the Linux kernel to a new platform. This new platform is unusual in that it is not a new piece of hardware. Rather, it is a software platform defined by the Linux system call interface. In this sense, UML is a port of Linux to itself.
The Linux kernel, like all portable software, is divided into a platform-independent piece, which is the bulk of the kernel and is the same on all platforms, and a platform-dependent piece (the arch layer, in the parlance of the kernel), which is much smaller and is generally written from scratch for a new port.
The arch layer implements all of the low-level functionality that requires direct involvement with the hardware. It includes things like entering and leaving kernel mode, switching contexts between processes, changing memory protections, and creating and destroying threads.
Thus, implementing UML involves writing a new arch layer, which implements all of that functionality in terms of Linux system calls. The most important technique that is used to do this is system call interception. Since UML runs the same binaries as the host kernel, and those binaries are going to make system calls in the same way as on the host, UML needs to be able to intercept them, prevent them from executing in the host kernel, and execute them in its own context instead. This is done by using the Linux system call tracing mechanism. This mechanism allows one process to intercept and modify the system calls of another.
UML has one thread (the tracing thread) that traces almost all of the other threads. When a UML process makes a system call, the tracing thread reads it from the process registers, changes it to be a call to getpid() (effectively nullifying it), and forces the process to run in the UML kernel code to actually do the system call. At this point, the tracing thread stops intercepting the thread’s system calls, which allows the thread to perform system calls in the host kernel.
Turning off system call interception when a thread is running kernel code is how UML implements a privileged kernel mode.
When a thread finishes a system call and is ready to return a value back to the process, it signals the tracing thread that it wants to return to user mode. The tracing thread assigns the system call return value to the appropriate register, restores the process register values, and continues the thread with system call tracing turned back on.
The next most important piece of the port is the virtual memory emulation. An important job of the kernel is to maintain a separate memory context for each process, making it impossible for one process to access memory belonging to another. Native kernels accomplish this by allocating physical pages and doing hardware magic to map them into the appropriate location in a process virtual memory. UML emulates this first by creating a file that is the same size as the physical memory that UML has been told it has, and then by mapping this file as a whole into an area of its virtual memory that will be treated as its "physical" memory. When a process needs memory to be allocated, pages will be allocated from this area, and the corresponding pages in the file will be mapped (with mmap()) into the process's virtual memory.
UML must also emulate hardware faults and device interrupts. The most important fault that needs to be emulated is a page fault, which happens whenever a process does an invalid memory access. In UML this generates a SIGSEGV; the handler does the necessary checking to see if the access is valid and a new page needs to be mapped into the process (if it’s invalid, the process is sent the SIGSEGV). Device interrupts are generally emulated with SIGIO on the file descriptor used to communicate with the virtual device. The timer is implemented by requesting a SIGVTALRM timer from the host kernel.
SIGSEGV, SIGIO, and SIGVTALRM are the equivalent of hardware traps. Just as processes on a physical Linux machine aren’t affected by hardware traps unless the kernel converts them into Linux signals, these signals don’t affect any UML processes unless the UML kernel converts them into Linux signals. UML installs its own handlers for these signals, which run in UML kernel mode.
When a process installs a signal handler, it simply results in data structures being set up to record that fact. It does not result in UML establishing a signal handler on the host. When a signal is delivered to a Linux process, either by being sent from another process or being generated by the kernel itself, it is queued in the process task structure. The kernel occasionally will check for signals that need to be delivered and then will deliver them by invoking the appropriate signal handler.
So, processes inside UML are completely insulated from whatever signals may be delivered to UML on the host. The UML kernel will handle those signals itself and may or may not decide to pass them along to one of its own processes.
Since Linux is a multi-user OS, UML must implement separate processes and be able to context switch between them. It does so by creating a process on the host for each of its own processes. Context switching is then mostly a matter of stopping the process that’s being switched out and continuing the process that’s being started again. The host kernel actually does most of the work on behalf of UML.
These mechanisms, coupled with a few less substantial techniques, make it possible to emulate a complete hardware platform on top of Linux.
Jeff Dike is an MIT escapee and a refugee from DEC’s Unix Engineering group. He can be reached at firstname.lastname@example.org.