Kernel Konfiguration

We thoroughly demystify Linux's ultimate rite of passage. It's not as daunting as it once was.

Once thought of as the exclusive domain of Linux gurus, the act of compiling your own Linux kernel remains a kind of rite of
passage into Linuxdom. I remember the thrill of my first kernel compilation, back in the days of Linux 0.99.14. Things have changed since then, and though compiling the kernel is still not a simple task, it is becoming easier. And if you really want to get the most out of Linux, it’s worth the effort.

Compiling your own kernel is an especially good idea if you’re running an old 486 or 386 machine with a small amount of RAM. These old computers really suffer when running a noncustomized kernel. Kernel recompilation is also essential if you run an SMP (Symmetric Multiprocessor) computer, since SMP support must be enabled at compile time. This article will cover Intel x86 kernel recompilation under Linux 2.2. Version 2.2 is what you’ll want to use in the foreseeable future on any recent personal computer.

First some caveats. Building a customized kernel can lead to trouble. Make a mistake during configuration and you may find yourself unable to boot. Also, you may find that a hard disk running a customized kernel might not boot if you connect it to a different host computer. This may happen because your new machine has different hardware peripherals, or it may be that the kernel as you’ve compiled it simply doesn’t work with the new CPU.

What is a Kernel?

Figure 1: The different parts of the Linux kernel.

The kernel gets its name from being the innermost portion of the operating system that handles hardware management. Its job is to hide the computer’s hardware from applications by providing a common interface for them, so that they can run independently of whatever hardware is installed and however it works. Figure 1 shows the main building blocks that make up the kernel.

Since the kernel is the only program allowed to deal with your computer’s hardware, the kernel’s configuration shapes what the operating system and the software that runs on top of it can and cannot do with your computer. Kernel configuration determines things like what types of network-interface cards you can use and what types of filesystems you can access (like DOS and FAT32 partitions). It also determines which networking protocols will be available to you (like TCP/IP or AppleTalk).

So the task of kernel configuration is something of a balancing act between system size and system features. Since kernel code (and kernel data) constantly resides in memory, keeping the kernel small frees up more RAM for your applications. With today’s over-equipped PCs, this is not a grave concern, but when you’re dealing with a more resource-constrained environment — an embedded system, for example — careful configuration is in order.

A 2.2 kernel running on a conventional PC can be 1 MB to 2 MB in size. It can extend to 3 MB or 4 MB if you add SCSI and a few extra devices.

Kernel Modules

Fortunately, you are not forced to choose between permanently linking in or leaving out kernel features — you can also build parts of the kernel as loadable modules. A module is, technically, an object file that can be linked to a running kernel at runtime. Less technically, it is code implementing kernel-level features that can be added to a running system only when you need them, and discarded later when they are no longer needed.

A number of system features are available as modules. Figure 1 indicates just some of the areas of Linux's kernel that can be modularized. One of the great advantages of modules is that you can always compile and run a new module without having to reboot the entire system.

An interesting feature of kernel modules is Linux's ability to load them automatically when they are needed by applications and unload them when they are no longer needed. This feature is called kmod, and you most likely want to enable it during configuration. During the configuration process, you'll be asked several questions about which features to support or discard. Most of these options are also available as modules. That's how, for example, the Linux distributors are able to deliver a system that works with most hardware without bumping the kernel up to an unmanageable size.
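kmod requires module support itself to be enabled. As a point of reference, the relevant lines in a 2.2 .config file look like this (an illustrative excerpt, not a complete configuration):

CONFIG_MODULES=y
CONFIG_KMOD=y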

Nowadays a lot of working code is distributed by third parties and made available as loadable modules. If you want to run an experimental filesystem or a device driver that is not part of the official distribution, you’ll usually just want to compile them as modules and load them into the running kernel, rather than patching your kernel sources, recompiling, and rebooting the system.
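For example, hand-loading a freshly compiled module and removing it later takes just a couple of commands (the module name mydriver.o is hypothetical; the tools come with the modutils package):

insmod ./mydriver.o   # link the object file into the running kernel
lsmod                 # show loaded modules and their use counts
rmmod mydriver        # unload the module once it is no longer in use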

Getting Ready

Before you can start compiling your own kernels, you’ll need some infrastructure in place.

First of all, you'll need source code to feed to the compiler. Every distribution includes the complete source for the kernel it installs, since this is required by the kernel's license (the GNU General Public License, or GPL).

The source for version 2.2.11 is in a file that is usually called linux-2.2.11.tar.gz. Depending on the distribution you are using, you may find that you have a repackaged kernel source with the .deb or .rpm suffix. The sources may not be installed on your system by default, but you’ll find them on the CD-ROM.

If you have no source available — this can happen if you, say, get a reduced distribution from a magazine — you can always download it from ftp.kernel.org.
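Any FTP client will do; with wget, for instance, fetching a 2.2-series tarball might look like this (the exact mirror path can vary):

wget ftp://ftp.kernel.org/pub/linux/kernel/v2.2/linux-2.2.11.tar.gz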

Then you’ll need the development tools. If your system doesn’t have at least the gcc, binutils, bin86, and gmake packages installed you won’t be able to build a kernel. You’ll also need the bash shell and awk, but they are standard to all major Linux distributions. Basically, you can only rebuild your kernel if you are running a complete system.

Finally, you’ll need a good combination of processing power and free time. My Pentium II takes seven minutes to compile my own kernel configuration with its modules; my 486 takes two hours longer, and if I happened to try this on my 386 with 2 MB of system memory, I’d need to write off a week or two. You’ll need some spare disk space on your system as well: After compilation my linux-2.2.11 directory is slightly more than 70 MB.

Input Files

No matter which configurator you run, your responses get saved as input files for both make and the C source files. The configuration used by make is saved to .config in the top-level source directory (the directory where you extracted the source files, usually /usr/src/linux); the information used by C code is saved to autoconf.h in the include/linux directory.

The input files for the configurators are:

.config. This current configuration will be used as input the next time that you want to configure the system, so the default replies to the questions will match your previous choices.

arch/i386/defconfig includes the default configuration, and is used when no .config exists. I suspect the file represents Linus’s own configuration. The default for non-PC architectures is still called defconfig but is found in a different arch directory.

arch/i386/config.in includes the questions asked by the configurators. It pulls in other files (usually called Config.in) from directories that are common to several hardware architectures.
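To give a flavor of the .config format, here is an illustrative excerpt: enabled features appear as assignments (=y, or =m when built as a module), while disabled ones are kept as comments:

CONFIG_M686=y
CONFIG_EXPERIMENTAL=y
# CONFIG_APM is not set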

The most efficient way to configure your kernel is to use make xconfig the first time you are at it. Later, in order to make changes, you can then simply remove items from .config and run make oldconfig to be asked only a few questions. For example, you could enable network multicast in a kernel that was already configured and compiled by simply removing the CONFIG_IP_MULTICAST line and answering affirmatively to make oldconfig, as sketched below.
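The whole round trip is short; a minimal sketch, assuming your sources live in /usr/src/linux:

cd /usr/src/linux
grep -v CONFIG_IP_MULTICAST .config > .config.new
mv .config.new .config
make oldconfig    # prompts only for the option that is now missing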

Configuring and Compiling

The first step is decompressing the sources: By invoking tar xvzf linux-2.2.11.tar.gz or any equivalent command, you'll extract a directory called linux that includes the whole source tree. This applies to the pristine source, which is what you'll have most of the time; if you are using the kernel as repackaged by your distribution, the details may vary, and you'll find the source already available in the /usr/src directory, either as a linux-2.2.x directory or as a .tar.gz file, depending on the distribution you are using.

Once the source package is uncompressed, you’ll find further information about the compiling process in the README file included. There are basically four commands you’ll need to use to compile the kernel.

1. The make config command runs the interactive configurator. It will ask whether each OS feature should be compiled into the kernel, created as a module, or simply discarded. The number of questions you'll be asked ranges from about 50 to a few hundred, depending on the options you choose. Every question comes with associated help information, and the whole help file is more than 500 K. If this massive Q&A session seems daunting, have no fear. There are alternatives to this step, as explained under A Simpler make config below.

2. The next step is make depend, which instructs the system to check which files depend on which other files. C-language source files refer to (or "include") header files. This step ensures that every file that includes a header is recompiled whenever that header is modified, thus guaranteeing a correct recompilation after you re-run make config. In fact, the step is so important that make depend will be automatically invoked when you perform step three.

3. The command make bzImage performs the actual compilation, building a bootable image file.

4. Finally, make modules will compile the parts that you requested to build as loadable modules. You'll then need to run make modules_install to install the modules in the default modules directory (inside /lib/modules). If you are only compiling modules for the kernel you are already running (and not necessarily compiling a whole new kernel), you also need to call depmod -a. The full sequence is summarized below.
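Put together, a typical build session looks like this when run from the top-level source directory (pick whichever configurator you prefer in the first step):

make xconfig           # or: make config / make menuconfig
make depend            # recompute file dependencies
make bzImage           # compile the bootable kernel image
make modules           # compile the loadable modules
make modules_install   # install them under /lib/modules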

A Simpler make config

Configuration is the most important step because it requires you to make important decisions about the structure of your system. But the huge number of kernel features has made the standard make config impractical for many people. Two alternatives and a shortcut have been introduced to try to ease the pain.

First Alternative The make menuconfig command uses a text-based menu interface to expedite the kernel option-selection process. Although menuconfig includes its own help screens, it's really most useful to those who are already familiar with the standard make config interface. No special program is needed to run menuconfig because it is included in the kernel source itself; the first step performed by make menuconfig is compiling the menuconfig program.

Figure 2: Make xconfig is the friendly configuration tool.

Second Alternative The make xconfig command presents a graphical interface to the kernel configuration questions. It is definitely the friendliest tool, but you'll need the X Window System and the Tcl/Tk libraries installed on your system for it to work, and you'll need to be running in graphic mode. Both X and Tcl/Tk come with all major Linux distributions. Figures 2 and 3 offer a look at the kind of options that make xconfig provides.

Shortcut The make oldconfig command works exactly like the make config command but it only asks questions that were not answered last time you configured. This is very useful when you upgrade to a newer kernel version and don’t want to restate your answers to all the questions.

Figure 3: A closer look at make xconfig.

Booting your Kernel

When the compilation has finished, you'll find a bootable image somewhere in the directory tree. The file will be either arch/i386/boot/zImage or arch/i386/boot/bzImage, depending on how you run make. In our example, we will get bzImage. The bootable image is left inside the arch subdirectory because it is strictly architecture-specific.

The zImage file (zipped image) is a self-extracting compressed file that gets loaded into your computer's low memory (the first 640 KB of RAM) and is uncompressed to high memory after the system is put into protected mode. A zImage file bigger than about half a megabyte cannot be booted because it doesn't fit into low memory. The bzImage is a "big zImage" in that it can be larger than the normal zImage: it gets loaded directly into high memory with a special BIOS call, so there are no special limits on its size (as long as it fits into the memory of your computer). This is why, in general, you will want to create a bzImage file, as we showed in our original example.

In order to boot your newly compiled kernel, you need to arrange for the BIOS to find it. This can be done in two ways: either by dumping it to a floppy (cat bzImage > /dev/fd0; see the rdev man page if you have problems at boot time), or by handing the image to Lilo, the standard Linux loader. The Lilo method is better, since it is much faster at boot time.

Lilo is configured by the text file /etc/lilo.conf and can offer a choice of several kernels when you boot. To add a new image you just need to add a few lines (a stanza) to the file describing the new image.

One important thing to remember about Lilo is that it builds a table directly on the hard drive that describes where the kernel is placed on disk. Therefore, whenever you replace the kernel image, you need to rerun /sbin/lilo as root to update that table. Since your new custom kernel might not work, it’s important always to leave a backup of your last working kernel image available and configured in lilo.conf. That way you can revert to it when things go wrong.
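In practice, installing a freshly built kernel boils down to three steps; the destination file name below is just an illustrative convention:

cp arch/i386/boot/bzImage /boot/bzImage-2.2.11   # copy the new image
vi /etc/lilo.conf                # add a stanza pointing to the new image
/sbin/lilo                       # as root, rebuild Lilo's boot table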

Listing One shows a sample /etc/lilo.conf file. Yours will be different, but you don’t need to worry about the details here. In order to add another bootable image, you need only to copy the image stanza to create another boot choice, pointing to your new compiled image and featuring a unique label.

Staying Current

In addition to tailoring the kernel to the specific computer running it, one sensible reason to recompile the kernel is to keep pace with the latest and greatest OS updates. Since most serious security problems are fixed at the kernel level, and since most exciting new technologies are integrated with the kernel, keeping the kernel up to date is a good and interesting practice.

When you download an "official" kernel tar file from the Internet, you might find that it is slightly different from the one used by your distribution. Most distributions don't use the official, unmodified kernel sources. They apply patches that they feel are useful for the user. Since the GPL requires that all derived programs fully document the changes they make, you will be able to reapply these patches to the new kernel source you downloaded. Debian, for example, shipped a 2.0.36 kernel with both the bigphysarea and the serial-console patches applied; the exact location of the patches was specified in the README.debian file, so anyone could apply the same changes to a newer kernel (for example, to the 2.0.37 kernel).

Dealing with patches is not a trivial process, but much of the time you don’t actually need the distribution-specific patches the vendors provide. The downside is that you won’t be able to upgrade your kernel by simply applying a patch to the source tree delivered by your distribution; the first time you upgrade your distribution’s kernel you’ll need to get the entire official tarball, either by pulling it off of a distribution CD (distribution vendors usually include the official kernel source in addition to their own modified version) or by downloading it from FTP. After you have a pristine source tree, you can easily proceed with the official incremental kernel patches.

Once you have a pristine source tree, the procedure for upgrading the kernel is straightforward. If you upgraded the sources by patching, you just need to type make oldconfig; make depend bzImage modules modules_install or its equivalent. If you untarred a new source tree, on the other hand, you'll need to recover the previous configuration by copying the .config file from the previous source directory to the new source tree and then invoke make as described. If you have no working .config handy, you'll need to go through the lengthy configuration phase. A sketch of the patch route appears below.
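For the patch route, a complete session might look like this (a sketch assuming the source tree lives in /usr/src/linux and the incremental patch was downloaded into /usr/src):

cd /usr/src/linux
gzip -cd ../patch-2.2.11.gz | patch -p1      # apply the incremental patch
make oldconfig                               # answer only the new questions
make depend bzImage modules modules_install  # rebuild and install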

Kernel compilation is not as necessary as it once was. But having the tools and the knowledge at your disposal makes your system far more flexible and enables you to stay current with the latest Linux developments without relying on a Linux distributor to do the work for you. It’s all about having the full power of the OS in your corner.

Listing One: A Sample /etc/lilo.conf File

# LILO configuration file
# global section: boot from the MBR
# and delay 50 tenths of a second
boot = /dev/hda
delay = 50

# First image, the custom one
image = /zImage.2.2.10
root = /dev/hda1
label = Linux

# Then, the default kernel we started with
image = /boot/vmlinux-2.0.36
root = /dev/hda1
label = debian

Alessandro Rubini used to be a programmer, but he’s turning from writing Free Software to advocating it. He can be reached at rubini@prosa.it.
