Graphics into the kernel?

Whether the graphics subsystem should be part of the operating system’s kernel is highly controversial, and popular opinion seems to change over time. For example, a certain other major OS has had graphics in, and then out of, the kernel several times now.

Those who have been tracking the Linux kernel for a long time will remember the General Graphics Interface (GGI) controversy way back, for example. Part of the controversy is that some of the things a graphics subsystem needs to do are really complex in nature, and thus not so suitable for running in kernel space.

Today in Linux, the graphics happen mostly in userspace, with kernel modules (agpgart and DRM) to do some security checks. This is about to change, with Jesse Barnes from Intel and Dave Airlie from Red Hat putting a “graphics card driver” into the kernel.

Understanding the Controversy

To understand the controversy, let’s take a trip back to Computer Science 201 and look at the fundamental tasks of an operating system kernel:

  • Resource management, including sharing and arbitration.

  • Abstracting hardware differences from the rest of the system (this implies having device drivers).

  • Enforcing the security policy.

The first element, resource management, is frankly a big mess right now for graphics in Linux. Both X and the kernel talk to the hardware, with no coordination or exclusion between them. Both X and the kernel have a lot of detailed knowledge about PCI (both AGP and PCI Express are part of the PCI family of technologies).

The result is that every time PCI gains new functionality, both X and the kernel have to be updated at the same time. This should have happened in the past with technologies such as PCI hotplug, PCI domains, and PCI MMIO configuration space access; the sad thing is that in practice this hardly ever happened, if at all. The end result is that distribution vendors and hardware manufacturers have been putting Band-Aids on the system to keep it going, sort of.

Another problematic area is that of the so-called quirks: sometimes buggy hardware is released and software needs to work around it; such a workaround is called a quirk. Some of these quirks are of the type “the hardware claims to have 16MB of memory, but really has 32MB and will also use 32MB of address space”; others are even worse. In the current software stack, X will try to work around this without telling the kernel it changed the system resources. While this is just one example, and thankfully not a very typical one, the sum of these makes for a very messy and fragile system.
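
As an illustration, here is a minimal sketch of what such a workaround looks like as an in-kernel PCI quirk, in the style of the kernel’s real quirk mechanism (drivers/pci/quirks.c); the vendor ID, device ID, and the specific fix-up are hypothetical:

    #include <linux/pci.h>

    /* Hypothetical fix-up for a card that reports a 16MB BAR but
     * actually decodes 32MB of address space. */
    static void examplegfx_mem_quirk(struct pci_dev *dev)
    {
            struct resource *r = &dev->resource[0];

            /* Grow the region before the kernel assigns addresses
             * around it, so nothing else lands in the decoded range. */
            r->end = r->start + (32UL * 1024 * 1024) - 1;
            dev_info(&dev->dev, "applied hypothetical video memory quirk\n");
    }
    DECLARE_PCI_FIXUP_HEADER(0x1234, 0x5678, examplegfx_mem_quirk);

Done this way, the workaround is applied once, centrally, and every other user of the kernel’s resource tracking sees the corrected picture.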

The kernel’s framebuffer driver is another weak spot. It drives the same hardware as X does, and while the framebuffer driver uses the kernel’s resource management infrastructure, that doesn’t help much with arbitration of the hardware resources, since X doesn’t use that infrastructure. Again the result is a fragile situation that sometimes works and sometimes doesn’t.
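
For reference, the bookkeeping that X sidesteps looks roughly like this; request_mem_region() is the real <linux/ioport.h> interface, while the driver name and region are hypothetical:

    #include <linux/ioport.h>

    /* A driver that plays by the rules claims its MMIO aperture before
     * touching it; the claim fails if another driver owns the range. */
    static int examplefb_claim_aperture(resource_size_t base,
                                        resource_size_t len)
    {
            if (!request_mem_region(base, len, "examplefb"))
                    return -EBUSY;
            return 0;
    }

X, by contrast, simply maps the hardware through /dev/mem without consulting this bookkeeping, so conflicts with the framebuffer driver go undetected.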

For me the conclusion is clear: there’s a legitimate case for putting the resource management side of the graphics driver into a central place in the kernel and resolving all these conflicts once and for all. So, now that we agree there’s a sensible reason to put at least some part of the graphics driver into the kernel, it’s a valid question to ask how much more makes sense.

How Much is Enough?

Barnes and Airlie are drawing the line at what is called mode setting, the part of the graphics driver that tells the hardware to go to a certain resolution and refresh rate. It’s important to realize that this is different from mode selection, which is the part of the driver that decides which hardware settings are appropriate for the machine’s capabilities and the user’s selection. Mode selection can be a highly complex operation involving many factors, while mode setting is a much simpler operation: it comes down to putting a series of numbers (which are calculated in the mode selection code) into specific hardware registers. In the past, the mode selection and mode setting code was seen as one big whole, and a lot of the controversy around the graphics-in-kernel issue centered around this “all or nothing” situation.
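
To make the split concrete, here is a sketch of what mode setting boils down to once mode selection has produced the numbers; the struct layout and register offsets are hypothetical, not any real card’s:

    #include <linux/types.h>
    #include <linux/io.h>

    /* Timings computed by userspace mode selection (EDID parsing,
     * policy, user preference) and handed to the kernel. */
    struct video_mode {
            u32 hdisplay, hsync_start, hsync_end, htotal;
            u32 vdisplay, vsync_start, vsync_end, vtotal;
            u32 pixel_clock_khz;
    };

    /* Mode setting: mechanically write the precomputed numbers into
     * the CRTC registers. Simple enough to live in the kernel. */
    static void set_mode(void __iomem *mmio, const struct video_mode *m)
    {
            writel(m->htotal,          mmio + 0x00); /* hypothetical offsets */
            writel(m->hdisplay,        mmio + 0x04);
            writel(m->hsync_start,     mmio + 0x08);
            writel(m->hsync_end,       mmio + 0x0c);
            writel(m->vtotal,          mmio + 0x10);
            writel(m->vdisplay,        mmio + 0x14);
            writel(m->vsync_start,     mmio + 0x18);
            writel(m->vsync_end,       mmio + 0x1c);
            writel(m->pixel_clock_khz, mmio + 0x20);
    }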

I consider the arguments for putting mode setting in the kernel driver while having mode selection remain in userspace convincing. After resuming from suspend-to-RAM, the BIOS has left the video mode in an undefined state; some BIOSes actually restore text mode, others just don’t set anything at all and leave the screen blank. The only way to get the screen to work again is for “something” to reprogram the video card to the same mode it had just before suspending. The mode-setting code in the kernel driver is clearly the right place to do this: doing critical suspend/resume actions in userspace is a tricky proposition with many nasty issues.
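
Sketching how that falls out naturally once the registers are owned by the kernel, using the hypothetical set_mode() from above, the driver just replays its saved state from its resume hook:

    #include <linux/device.h>

    struct examplegfx {
            void __iomem *mmio;
            struct video_mode current_mode; /* last mode we programmed */
    };

    static int examplegfx_resume(struct device *dev)
    {
            struct examplegfx *gfx = dev_get_drvdata(dev);

            /* The BIOS may have left the hardware in an undefined
             * state; reprogram the mode we were in before suspend. */
            set_mode(gfx->mmio, &gfx->current_mode);
            return 0;
    }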

A second argument is that the kernel can now, in the event of a kernel oops while in graphics mode, switch back to text mode and display the oops to the user. Without mode setting in the kernel, the user just sees his screen freeze and his keyboard LEDs blink, and can’t get any debug information off his system at all to help kernel developers find the problem.
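
One plausible way for a driver to hook into this, sketched with the kernel’s real panic notifier chain but the hypothetical names from the earlier sketches:

    #include <linux/kernel.h>
    #include <linux/notifier.h>

    static struct examplegfx *panic_gfx;    /* set at driver load */
    static struct video_mode console_mode;  /* known-good console mode */

    /* Force a mode the console code can draw in, so the oops text
     * actually reaches the screen. */
    static int examplegfx_panic(struct notifier_block *nb,
                                unsigned long event, void *unused)
    {
            set_mode(panic_gfx->mmio, &console_mode);
            return NOTIFY_DONE;
    }

    static struct notifier_block examplegfx_panic_nb = {
            .notifier_call = examplegfx_panic,
    };

    /* At driver initialization:
     *         atomic_notifier_chain_register(&panic_notifier_list,
     *                                        &examplegfx_panic_nb);
     */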

So, in the new kernel graphics driver, the driver knows one mode at startup (the mode the hardware is already in, usually text mode) and can be informed of the hardware settings for other modes by X or another userspace program. Switching modes then just becomes a matter of telling the driver which of the previously communicated settings should become active now. This is state which is easy to save during suspend and recreate during resume, at least for some graphics cards.
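
From userspace, the flow looks roughly like the sketch below, written against the libdrm kernel-mode-setting API that eventually grew out of this work (error handling omitted and connector choice simplified):

    #include <fcntl.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
            int fd = open("/dev/dri/card0", O_RDWR);
            drmModeRes *res = drmModeGetResources(fd);
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[0]);

            /* Mode selection already happened (here: trust the monitor's
             * preferred mode); hand the numbers to the kernel and switch. */
            drmModeSetCrtc(fd, res->crtcs[0], 0 /* framebuffer handle */, 0, 0,
                           &res->connectors[0], 1, &conn->modes[0]);

            drmModeFreeConnector(conn);
            drmModeFreeResources(res);
            return 0;
    }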

A positive side effect of moving the resource management and mode setting into the kernel is that it should now, in theory, be possible to run the X server as an unprivileged user rather than as root (since the direct hardware accesses for which X needed root privileges are now done in the kernel). It’s not quite clear yet when, or if, X will make this step, but the security-conscious among us will be waiting in anticipation for this change.

Comments on “Graphics into the kernel?”


I was thinking, is this the same thing MS did to its OS? Then, whenever the graphics break, the OS is toasted. I would think that making both the X server and the kernel work together better would be the route to go, but I am not an expert in this field.


I would have to agree with alvaro on that…


I think there must be separation of graphics handling, which is a complicated and separate area.

An interface to access the hardware directly should be developed, something like what a hypervisor does for virtual machines.



IMHO the graphics should stay out of the kernel, just to prevent situations where the system hangs because of it.

I’m not very familiar with what happens right now in the graphics system, but when my X hangs (very rarely, although I have activated the composite manager) I just restart it with a simple Ctrl+Alt+Del and it just works again. Plain and simple.


“when my X hangs … I just restart it with a simple Ctrl+Alt+Del and it just works again.”

I think you mean Ctrl-Alt-Backspace? In any case, I rarely encounter a situation any more where an X freeze can be fixed by simply restarting X from the keyboard. Usually the keyboard focus is gone and I end up having to log in to the machine remotely to kill X, which is a royal PITA. Despite a very generic install of Ubuntu 7.04, OpenOffice 2.x regularly freezes the X server on my machine, necessitating a remote restart. There is absolutely no circumstance in which a user application should be able to take down the server — period.

There is no question in my mind that a lot of this insanity would simply go away if we “Render Unto Caesar What Is Caesar’s”, namely let the kernel manage the hardware like it is supposed to do, given the barest minimum definition of what any kernel, even a microkernel, is supposed to do. To that end, I completely agree with the writer of this article that the X/linux situation is an unholy mess that needs to be straightened out. In my experience linux is an order of magnitude more stable than Windows; linux + X is an order of magnitude LESS stable — that is simply intolerable.


When X freezes, most of the time I’m unable to kill the X server because the keyboard gets locked. I agree with the fact that having to log in remotely to kill X is a pain… and honestly… I prefer to hit the reset button like any normal user would instead of playing the SSH guy. That’s kind of unacceptable.


Wanting to keep the kernel pure and clean is intellectually very appealing. If we had remained in a text-based world, I would be totally onboard with keeping any kind of graphics stuff out of the kernel. But outside of headless servers, you just have to deal with graphics issues every day; X is here to stay. Given that basic truth, I am very ready to accept into kernel space the limited and simpler slice of the graphics pie that Arjan describes. When X freezes, we truly are hosed; there’s no direct way to slap the system back to its senses. I accept this smaller intrusion into the kernel as the best way to deal with a constant headache.


…If this became standardized, maybe some trick from Intel would put a little corner of hardware either on the die (fat chance) or in the chipset to make the graphics-in-the-kernel method callable with even less resource allocation inside the kernel, maybe just a few flags? I am no hardware guy, as must be obvious to the chip guys right now, but the concept seems reasonable enough to me.


From my unix knowledge, I’d say that the kernel is the place for the drivers. So it should at least provide a general way to talk to devices. On unix level, there are character devices like tapes and the console (/dev/tty). I’d say a graphical display compares to a character display as a disk-device compares to a tape-device. Hence for the kernel, a graphics device should be a block-device. That makes the frame-buffer device a kernel device.

However, the way X11 is designed and the way it behaves, I can even say that X11 is a kernel on its own. X11 can use the framebuffer device; it can also use the device as it is provided by the kernel. However, it does not need to: look at vnc on unix; the binary xvnc has nothing to do with the local hardware display.

On the other hand, what should the protocol be between the kernel’s device and the X11 server? What was wrong with PostScript as it was used with SunOS? It might just be too much overhead (and/or too slow). Or should we use the X11 protocol itself?

Then the X11 server could/should be the device driver in the kernel, and the devices could be /dev/${DISPLAY}, similar to /dev/tty for character devices.


The graphics system must stay separate from the kernel to ensure that the kernel doesn’t crash. I disagree with jjourard’s statement that graphics is here to stay. Most of my Linux boxes are not running X Windows by default. I use startx when I want to use X Windows, then shut it down when I am done to save on resources and reduce attack exposure. I even shut it down on my desktop because it frees up resources that X Windows doesn’t always free up on its own. Can’t the graphics hardware/software interface be executed in a separate kernel process, isolated from the actual kernel? I think that the graphics hardware/software interface should be virtualized in such a way that video hardware vendors can drop in OpenGL drivers with direct access to the underlying hardware without having to compile their drivers into the actual kernel. Maybe, this way, OpenGL won’t be left on the sidelines, and the graphics sub-system can become more stable as well, while the underlying kernel stays stable.


