One of the nice things about being a programmer is that you get to work in a simplified world. Computers are designed and built to do exactly what you tell them to do (even if it’s not what you mean), and to do the same thing over and over again, yielding nice, reproducible results. The reason programs are so well behaved is that they normally execute sequentially. One step follows another, and nothing happens in between, so your data will stay the same as it was when you last touched it. Your program’s flow becomes obvious and any bugs that you may have are reproducible.
Unfortunately, in the real world, most events are unpredictable, data usually comes in randomly, and users generally don’t want to wait for programs to get around to paying attention to their needs. Event-driven programming, used for most GUI development, is one way of handling these situations. Programs block until something interesting happens (say a mouse moving or a key being pressed), handle that action, and go back to waiting for the next thing to happen.
While this method is quite reasonable for programs that don’t have to do much work to handle a single event, it becomes much more difficult when the program has a large amount of work to do, since it constantly needs to interrupt its processing to see if there are other events that require handling. Unix provides a restricted method called signals for dealing with asynchronous events. The easiest way of thinking about a signal is as an event that can occur at any time in your program and that causes one of five things to happen:
1. Nothing (the signal is ignored).
2. The program is terminated by the kernel as soon as the signal is sent.
3. The program is terminated, and a core dump is left in the process’s current directory.
4. The program is stopped but can be resumed later.
5. The kernel transfers control of your program to another function — known as a signal handler — and transfers control back to the place the program was interrupted when the function returns (the signal is caught).
If this description sounds like catching an interrupt in a device driver, it should: Interrupts, hardware exceptions, and signals are all quite similar. Signals can be sent to a particular program by the kernel or by another program. There are quite a few events that will cause the kernel to send a signal:
1. If a program tries to read from or write to a piece of memory that doesn’t exist, or that the program doesn’t have permission to access, a SIGSEGV (segmentation fault) is sent.
2. When a process terminates, its parent is sent a SIGCLD.
3. When a user who is running on a terminal presses the suspend key (normally ^Z), every process currently in the foreground is sent SIGTSTP (terminal stop).
4. When a terminal is closed or the user logs out of a terminal, every process running on that terminal is sent a SIGHUP (hang up).
These are just a few of the many cases where the kernel will send a signal to a process; for a complete list of signals, you can check out the signal man page in section 7 (man 7 signal).
There are two things that make signal handling one of the most difficult parts of the Linux API: their asynchronous occurrence and legacy API definitions. Since signals can come at any time, programs that catch signals have to be very careful how they respond. Because it can be quite difficult to test a program against all the different ways signals can arrive, the design of the signal interface is quite important.
Before we get into the signal API, I’ll go over a couple of the different possibilities for how signal handlers should work. Assume that our program is in the middle of a tight loop when the signal handler gets invoked by the kernel, something like this:
chptr = str;
while (*chptr != '\0')
    chptr++;    /* a signal handler gets invoked here */
return chptr - str;
This is nothing more than an implementation of strlen(). I’ve marked with a comment where the signal handler gets called, for example’s sake. There is no guarantee that that’s where it really happens; the signal handler could get invoked anywhere, even in the middle of the chptr++ statement, since that statement probably takes more than one machine instruction to execute.
There are a few things about how signals should be caught that are pretty easy to agree on:
1. After a signal is caught, the program should resume as if nothing had happened.
2. The signal handler should be able to modify global data for the program.
The Gray Area
Programs generally don’t know they’ve been interrupted by a signal, but if the signal handler modifies some part of the program’s memory, the main instruction flow of the program can look for those changes. Here are a couple of things that aren’t so clear:
1. What happens if the program receives a signal while in a signal handler for a different signal (say it receives a SIGCLD while handling SIGTSTP)?
2. What happens if a program receives a signal before it’s finished handling a previous delivery of the same signal (two SIGCLDs arrive very rapidly)?
3. If the program is blocked on a system call (so that it’s stuck in the kernel), when is the signal delivered?
4. What if two signals are delivered in such rapid succession that the kernel hasn’t gotten around to running the signal handler for the first signal when the second signal is sent?
The only one of these questions that has an easy answer is the last one. If the two signals delivered in rapid succession are the same, only one of the signals is delivered. If the two signals are different, the signals are not queued, and there is no guarantee of the order in which those signals are delivered.
The answers to the other three questions depend on how the signal handler is registered with the kernel. Remember when I said that legacy API definitions make handling signals difficult? It’s because the signal() system call, which is the original (and, unfortunately, still the most-used) method of registering signal handlers, will answer those first three questions differently in different operating system implementations.
Even under Linux, the implementation of signal() changed a bit between the libc5 and glibc libraries. The lesson you should take home from all of this is simple: Don’t use signal() ever. Period. I’ll talk a bit about its variations here though, since a ton of legacy code (and older books) knows only about signal().
On the face of it, signal() looks like a perfectly harmless system call.
typedef void (*signal_handler_type)(int signo);
signal_handler_type signal(int signum, signal_handler_type handler);
The first line declares a pointer to a function taking a single int argument and returning nothing (void); this is the classical prototype for a signal handler. The signal() system call expects a signal (such as SIGHUP) and a pointer to the signal handler to use. It returns the old signal handler that was registered for that signal.
There are two special signal handlers predefined, SIG_DFL and SIG_IGN. SIG_DFL restores the default behavior for that signal, and SIG_IGN tells the kernel to ignore that signal. There are only two signals that cannot be ignored or caught: SIGKILL (which terminates the process; this is the 9 in kill -9) and SIGSTOP (which stops the process, but allows it to be restarted later). signal() is actually handy for setting a signal to SIG_DFL or SIG_IGN; its poor specification doesn’t matter in those cases.
Let’s look at how signal() answers the questions I posed about signal handling. Every implementation of signal() allows other signals to be caught while running a signal handler for a different signal. I’ll talk later about why this isn’t necessarily a good thing, but at least signal() is consistent on this point. It has three different options for what to do if a signal is received and the signal handler for that signal is already running:
1. Go ahead and reinvoke the signal handler. This is almost never a good idea, because most signal handlers are not written to handle two instances of the same signal at the same time.
2. Hold off on delivery of the second signal until the signal handler has finished handling the first signal.
3. Right before running the signal handler the first time, reset the signal handler for the signal to SIG_DFL. This lets the signal handler reregister itself right away if it wants to handle new receptions of that signal, or reregister itself as the very last thing it does if it prefers not to be run simultaneously.
While #3 may seem like a good idea, in reality it’s a pretty rotten thing to do. Horrendous, really. The main problem with this is the window of time that exists between the kernel resetting the signal handler to SIG_DFL and the signal handler getting a chance to reregister itself. If the second instance of the signal is received within that window of time, then the default behavior will occur, and there is absolutely nothing the application can do about it. That’s not a very good way of getting consistent behavior. Needless to say, having different signal() implementations on different platforms makes registering a signal handler via signal() a difficult chore at best.
When Red Hat ported its Linux distribution to glibc, one of the least rewarding parts of the work was rewriting signal handlers not to use signal() anymore, since existing signal handlers that were registered with signal() stopped working properly.
Slow and Fast System Calls
There is still one question about signals that I’ve asked but haven’t discussed yet: What happens when the program is blocked in a system call, and a signal is delivered? Generally — very generally — signals won’t be delivered in the middle of system calls.
There are two different types of system calls, slow and fast. Fast system calls are those that are guaranteed to return, while slow system calls can keep blocking.
For example, stat() is a fast system call because it always returns very quickly, while select() is a slow system call because there is no way of knowing when data is going to show up on a pipe or socket you want to read. Some system calls can be either slow or fast; for example, read() on a regular file is fast, while read() on a pipe is slow.
The kernel will never deliver a signal while a fast system call is being executed, but on the other hand it will deliver a signal while a slow system call is being run. This behavior actually makes perfect sense, since signals are meant to be handled by the application very quickly, and not delivering that SIGKILL to an application that is waiting for data to arrive over the network would make kill -9 a much less useful club.
Not delivering signals in the middle of fast system calls makes implementing those system calls and the programs that use them simpler, and there is really no downside, since those system calls finish quickly anyway. The only problem with this is that some fast calls aren’t fast, such as reading from a normal file on a networked filesystem. This can take a long time, but signals still aren’t delivered while that read() is waiting to complete, because it’s considered a “fast” system call. This is unfortunate; it makes killing processes blocked on NFS transactions impossible until the NFS filesystem times out. This is why kill -9 doesn’t always work.
The last system call oddity I want to cover is how the system call responds after a signal has been delivered in the middle of it. When this happens there are two choices:
1. Return with an error, and set errno to a value that indicates a signal interrupted the action (say, EINTR).
2. Continue running the system call pretending nothing happened.
Unfortunately, there really is no right choice in this scenario. Both behaviors can be correct, and allowing only one of them makes writing programs more difficult than it really needs to be. The signal() system call never really decided how to behave (it depends on the operating system you use), which is another reason never to use it for real work. The POSIX (Portable Operating System Interface for Unix) signal API lets the program choose which behavior it likes best.
In next month’s column, I’ll delve into the details of the POSIX signal API. Hopefully, this month’s Compile Time will have provided you with enough background to understand the complexity of the API and the motivation behind all of the flexibility that it provides.
Erik Troan is a developer for Red Hat Software and co-author of Linux Application Development. He can be reached at email@example.com.