Apache 1.3 is already the world's most popular Web server. Now Apache 2.0 is here, and it's more powerful than ever. We take a look under the hood to find out what's new in 2.0.
For nearly three years now, the Apache Software Foundation has been working on Apache 2.0, and if it isn’t in general release by the time you read this, it should be soon. For a product that’s been in the works for as long as Apache 2.0 has, you’d expect it to be vastly different from Apache 1.3, the workhorse we’ve all come to know and love. So when you fire up your new copy of Apache 2.0 and discover that you only have to make a few minor configuration changes to get the whole thing up and running, you might think the server hasn’t changed very much. That’s likely to be good news to the countless administrators out there who just don’t have the time to undertake a major software upgrade. Yet while this approach — drop it in, tweak it a little, and go — will probably work, take a closer look or you might miss what Apache 2.0 is all about.
The problems in 1.3 are largely side-effects of its evolution. Apache has been ported to virtually every platform there is, but the porting was done a platform or two at a time, with hacks inserted wherever necessary to keep each new platform happy. Its modular architecture gives the administrator a good deal of flexibility, but modules aren’t really able to work together. And Apache 1.3 uses a separate process to handle each connection, which makes it extremely reliable but limits its scalability.
Apache 2.0's main goals have been to address these and other problems. The first thing you will notice when you unpack the distribution is that 1.3's Apache Autoconf-style Interface (APACI) has been abandoned, as the configuration system and the directory tree have been completely reorganized. You will have to make a few changes to your configuration, due in large part to the introduction of Multi-Processing Modules (MPMs), which allow the connection-handling scheme to be user-selectable rather than fixed at one process per connection. Developers will doubtless encounter the Apache Portable Runtime (APR), the new portability library, and the API changes that come with it. Thanks to the new I/O filtering system, you can now stack modules on top of one another, allowing them to interact in ways never before possible. In this article, we’ll explore the changes under the hood that make Apache 2.0 what it is and the changes you’ll have to make to your own Apache modules to make them work with 2.0.
COMPILING THE SERVER
Primarily due to the headache of maintaining a portable configuration tool, the Apache group dropped APACI and switched to the ubiquitous GNU Autoconf. Anyone who has ever configured and compiled any package has almost certainly used the ./configure scripts produced by Autoconf. Watch out, though — Apache isn’t one of those packages for which you can just run ./configure with no parameters and expect it to be set up the way you want it. If you do, you’ll end up with a minimalist set of modules compiled in. Of course, this probably isn’t news to you if you’re accustomed to APACI. The two are similar after all. A quick look over the list of parameters to ./configure should get you going — some of the most important ones are listed in the Configuring an Apache Build sidebar.
Configuring an Apache Build
The parameters to the ./configure script for Apache 2.0 can be separated into three broad categories: directory structure, modules, and features.
The two most important parameters for configuring how you want the directories for installation laid out are --prefix and --enable-layout.
You want to start with --enable-layout. In the root of the Apache 2.0 source tree, there should be a file named config.layout. In it, there are many layout styles to choose from. For example, to use the GNU directory layout, you would do:

./configure --enable-layout=GNU
Sometimes one of the predefined layouts is close to what you want, but not quite. If you liked the Apache directory layout but wanted everything to install under www instead of /usr/local, you could type:
./configure --enable-layout=Apache --prefix=/www
You can customize things further than that, but this should suffice for most users.
Another important set of parameters of the ./configure script relates to which Apache modules to include (see Table One).
Most of what’s left involves which features to include in your new Apache build. For instance, one of the new features of Apache is the Multi-Processing Modules that implement different process creation strategies for handling requests. You get to choose from prefork, threaded, worker, perchild, and a bunch of others. To build a preforking Apache, you’d type:

./configure --with-mpm=prefork
PORTABILITY (THE CLEAN WAY)
One of the earliest changes that was made in the development of 2.0 was the creation of APR, the Apache Portable Runtime. APR is a library built upon the realization that, while different platforms can vary wildly in the features they provide and what must be done to use them, there are really only a handful of features needed to write most portable programs, and most operating systems provide them in some form.
Any program that interacts with the operating system will want to do things like file and network I/O, process and thread management, memory management, and synchronization. APR takes all of these features of the underlying platform and wraps them up in a consistent interface, making all operating systems look the same as far as the program is concerned. The benefits to Apache are obvious; it allows all platforms to use common code within Apache, making Apache equally reliable on all platforms and generally improving the readability of the code. Virtually all that must be done to port Apache to a new platform now is port APR to that platform. And since APR is a separate library, any project can now take advantage of Apache’s portability and contribute back to it.
The success of this approach is apparent in the incredibly diverse set of platforms to which APR has already been ported. As of this writing, the list includes many flavors of Unix, most versions of Windows, OS/2, and BeOS. NetWare support is also under way.
Portability of this sort often means a sacrifice in some other area like performance. APR has managed to skirt this issue, fortunately, by hiding its internals from the program using it and by providing only those features that are common to a large number of platforms. Using the native API and data structures of operating systems tends to be significantly faster than using the “compatibility” libraries those OSes provide. Because each implementation of APR uses native interfaces and data structures, no APR platform has to make sacrifices to fit into the mold of another.
APR borrowed many of its features from the old Apache API. Some functionality has simply been moved from Apache to APR, so as you port modules to Apache 2.0 you’ll surely encounter a few functions whose names have changed. In some cases, this simply means their prefix changed from ap_ to apr_. In an attempt to provide more intuitive names, some function names have changed completely; ap_pfopen is now apr_file_open, for example. For help finding the new names, look in the “compat” headers, ap_compat.h, apr_compat.h, and apu_compat.h, which list functions whose names have changed but which are otherwise the same. Not all functions were merely renamed, however; some functions’ prototypes have changed as well, typically to accommodate the new APR data types and to account for the fact that pools now manage all subsystems of APR. In addition to managing your memory, files, and sockets with a pool, you can manage locks, MMAPs, etc. Migrating everything to pools necessitated the prototypes of some functions having to change a bit. To find these changed functions, you’ll actually have to flip through the API documentation for APR; the documentation is relatively comprehensive and is being expanded all the time. It’s available online at http://apr.apache.org/.
Table One: Modules for Apache 2.0
|(+) mod_env|| Set environment variables for CGI/SSI scripts|
|(+) mod_setenvif|| Set environment variables based on HTTP headers|
|(-) mod_unique_id|| Generate unique identifiers for requests|
|(+) mod_dir|| Directory and directory default file handling|
|(+) mod_autoindex|| Automated directory index file generation|
|Access Control and Authentication|
|(+) mod_access|| Access Control (user, host, network)|
|(+) mod_auth|| HTTP Basic Authentication (user, passwd)|
|(-) mod_auth_dbm|| HTTP Basic Authentication via Unix NDBM files|
|(-) mod_auth_db|| HTTP Basic Authentication via Berkeley-DB files|
|(-) mod_auth_anon|| HTTP Basic Authentication for Anonymous-style users|
|(-) mod_digest|| HTTP Digest Authentication|
|(-) mod_headers|| Arbitrary HTTP response headers (configured)|
|(-) mod_cern_meta|| Arbitrary HTTP response headers (CERN-style files)|
|(-) mod_expires|| Expires HTTP responses|
|(+) mod_asis|| Raw HTTP responses|
|Content Type Decisions|
|(+) mod_mime|| Content type/encoding determination (configured)|
|(-) mod_mime_magic|| Content type/encoding determination (automatic)|
|(+) mod_negotiation|| Content selection based on the HTTP Accept* headers|
|(-) mod_file_cache || Caching of open handles to frequently served pages|
|(+) mod_include|| Server Side Includes (SSI) support |
|(+) mod_cgi|| Common Gateway Interface (CGI) support|
|(+) mod_cgid|| Common Gateway Interface (CGI) support for multi-threaded MPMs|
|(+) mod_actions|| Map CGI scripts to act as internal 'handlers'|
|Internal Content Handlers|
|(+) mod_status|| Content handler for server run-time status|
|(-) mod_info|| Content handler for server configuration summary|
|(+) mod_log_config|| Customizable logging of requests|
|(-) mod_usertrack || Logging of user click-trails via HTTP Cookies|
|(-) mod_dav|| WebDAV (RFC 2518) support for Apache|
|(-) mod_dav_fs|| mod_dav backend to managing filesystem content|
|(-) mod_ssl || SSL/TLS encryption support|
|(+) mod_imap|| Server-side Image Map support|
|(-) mod_proxy|| Caching Proxy Module (HTTP, HTTPS, FTP)|
|(-) mod_so|| Dynamic Shared Object (DSO) bootstrapping|
|(-) mod_example|| Apache API demonstration (developers only)|
|(+) mod_alias|| Simple URL translation and redirection|
|(-) mod_rewrite|| Advanced URL translation and redirection|
|(+) mod_userdir|| Selection of resource directories by username|
|(-) mod_spelling|| Correction of misspelled URLs|
|(-) mod_vhost_alias|| Dynamically configured mass virtual hosting|
If you never really used the command-line parameters to APACI and stayed with the old-style Configuration.tmpl, this will be a slightly bigger change for you. After you’ve run ./configure once, though, you’ll find a handy script called config.nice that will let you rerun ./configure with all the same parameters you used the last time.
Another big change is that the directory tree for the source code has been rearranged. In 1.3, almost all of the modules were lumped together into src/modules/standard/, mainly because of 1.3's tight focus on being a Web server. With 2.0, the actual protocol spoken by the server is abstracted out so that HTTP becomes just another module and Apache becomes a server framework. This allows Apache to handle other protocols, such as POP3, when given the appropriate protocol modules. Since some modules (like authentication modules) might work under multiple protocols, it is useful to have modules split out into separate directories based on their purpose. So under the modules/ directory, you will now find modules grouped into subdirectories by purpose.
Apache 1.3's process-per-request model is robust in that a crash in some module only affects a single connection at a time, but it falls short in scalability. Because a separate copy of the server process handles each request, a relatively small number of requests can eat up a relatively large amount of system resources, particularly memory.
One of the primary goals for Apache 2.0 was to enable the server to be multi-threaded, thereby alleviating many of 1.3's scalability problems. It was quickly realized, however, that different systems have different needs in this arena, so the entire connection-handling mechanism was abstracted out. Administrators now have several of these Multi-Processing Modules (MPMs) from which to choose. Some scale better on “big iron” high-end SMP servers; others are better suited to uniprocessor machines. Some MPMs are tailored to a particular platform to take advantage of low-level process-management or service-management features of the OS that just don’t quite fit well into APR (see Figure One).
On Unix platforms, Apache provides several options for the administrator. There is one MPM, prefork, that duplicates the behavior of 1.3. A number of others use threads to handle requests; those differ from each other mainly in the way they manage the threads and in the choice of a constant or variable number of these worker threads in each process. It’s also quite likely that third-party vendors will write custom MPMs for Apache 2.0 to implement proprietary performance enhancements.
The various multi-threaded MPMs are still in a bit of flux, so it’s possible that the description given here is already somewhat outdated. As of this writing, the multi-threaded MPMs for Unix are threaded, worker, and perchild. The idea behind threaded is that each child process starts up a fixed number of worker threads, and all of those threads listen to the socket for incoming connections. Unfortunately, this design can cause problems with graceful shutdowns, since the threads must be awakened by an incoming connection before they can realize they need to terminate.
To remedy this problem, the worker MPM was designed to replace threaded. The two are similar in many respects, except that in worker there is only one listener thread per process. The listener places incoming connections in a queue to be handled by worker threads as they become available. The perchild MPM throws away all preconceived notions of how connections should be handled. It uses a fixed number of processes with a variable number of threads per process to handle connections.
The interesting thing is that each process can be assigned to particular virtual hosts and to different user and group IDs. That means it’s finally possible to completely protect the private data of one virtual host from another, not just for CGI scripts but for PHP and SSI pages and other module-generated responses as well. That’s sure to be a useful feature for ISPs and other hosting services.
Perhaps the most recent development in Apache 2.0's architecture is the filtered I/O system, which appeared as recently as 2.0a6. The introduction of filters to Apache has had a huge impact on the way the server operates and how modules interact; one module now has the ability to modify the data generated by another module on the fly. A filter might parse outbound data for Server Side Includes, or it might compress or encrypt that data, for example. These filters can be stacked on top of one another, and the decision of which filters to insert in the stack can happen at request time.
In order to make filters work, a data-management system that would allow data to flow efficiently from one module to the next was needed. “Bucket brigades” were created to meet this requirement. Each bucket represents some chunk of the data; buckets are then strung together into a list called a brigade. Individual buckets in a brigade can be split up, copied, rearranged, inserted, and deleted, without ever copying their contents around in memory.
Buckets come in various types, where each type represents a different source of data (a file, a block of data in memory, etc.). Filters need to know only how to manipulate a bucket; the actual source of the data is hidden and managed internally by the bucket. When a filter is called, it gets passed the brigade that was the output of the previous filter. It then manipulates that brigade in some way, perhaps by inserting or deleting some buckets. When it’s finished, it simply passes the resulting brigade on to the next filter in the stack.
There is a slight difference between input and output filters, however. The input filter stack (see Figure Two) operates in a “pull” mode — when a module wants some data from the input stack, it calls ap_get_brigade(). The input filter stack has the “core input filter” at its lowest level — that’s the only level that is really aware of the presence of a socket. When the core input filter returns, it passes along a brigade containing whatever data was currently available to be read from that socket. There is an HTTP filter that handles the details of HTTP, separating request headers from the request body and managing other protocol details. Other filters might be before or after the HTTP filter; if SSL were in use, the SSL filter would be between the core and the HTTP filter, for example.
The output filter stack operates in “push” mode (see Figure Three). As content is generated, it is pushed down the output stack. Each filter calls ap_pass_brigade() when it is finished, which hands that brigade off to the next filter in line. As with the input filters, a typical output filter stack will contain a number of HTTP-related filters to handle details like calculating the content length; it might also contain an SSL filter to encrypt the response. The last filter in the stack is the “core output filter,” which is responsible for dumping the data out to the network in the most efficient manner possible.
Because multiple modules can now manipulate the data stream, not only has a large chunk of redundant code been removed from the server, but a whole new realm of interesting configurations has become possible. As hinted above, any page generated by the server can now be parsed for Server Side Include (SSI) tags by mod_include. Previously, mod_include needed to know how to serve files from disk just to handle static SSI pages; dynamically generated ones just weren’t feasible. Today, mod_include doesn’t even see a difference between the two. Interesting performance optimizations are now possible as well, such as using the MMapFile directive with server-parsed documents. Again, this simply couldn’t be done before, because the caching module and the SSI module couldn’t both participate in the request-handling process.
The filtering system allows modules to manipulate data on its way in and out of the server, yet filters are intended to be independent of one another. Allowing modules to truly communicate with one another and to pass work back and forth in arbitrary ways was quite a headache in 1.3. To fill this void, two extra mechanisms were added to 2.0 — hooks and optional functions.
A hook represents an event that occurs in the course of handling a connection; these hooks are typically declared by the server core or by the HTTP module. Modules can register to participate in various hooks, indicating that they wish to have callback functions run during those stages of request processing. The declarer of the hook calls all of the registered callbacks at the designated time. In Apache 1.3, all of this was handled by the module structure, which is a static structure telling Apache which function, if any, a module wishes to register for each stage. One problem with the static struct was that it was difficult to add extra stages without breaking existing modules. There is already a large number of hooks (almost 30), but more can be added without the worry of breaking older modules; at this point, only a few essential functions remain in the module structure. Another problem with the module structure solved by hooks is that in 1.3, each set of callbacks had to be given the same order: the order in which the modules were loaded into the server. Now the set of callback functions associated with each hook can be ordered independently, and the modules can take care of this ordering on their own without the intervention of the administrator.
To allow even closer interaction between two modules that may or may not both be loaded into the server at the same time, optional functions were also added. Optional functions allow a module to provide a function for use by another module and provide a way for that other module to skip over the function gracefully if it isn’t available. A module registers its optional functions with the core. If another module is interested in one of those functions, it checks with the core to see if the function has been registered, retrieving a pointer to it if so.
The benefits of this system are nicely illustrated in mod_include, which handles all of its SSI tags through an optional function. Any module that wishes to define a new SSI tag simply registers the tag and a function to handle that tag with mod_include by looking up mod_include's optional function for tag registration. So any module can now define its own SSI tags without having to re-implement the parsing engine, a level of cooperation that was not feasible in version 1.3.
While at first glance Apache 2.0 bears a strong resemblance to 1.3, it’s just a front; the internals have gone through a major overhaul. But what Apache 2.0 probably won’t give you is an overwhelmingly huge performance boost. People are already starting to throw out ideas for things that would be nice to have in versions beyond 2.0; more tightly tuned performance is certainly a goal for 2.1.
Great strides were made for 2.0 in the last few months, and early numbers show that it at least rivals 1.3 in performance, but the main benefits will be in scalability and flexibility. More radical changes, such as an asynchronous, event-based request-handling model, have also been proposed, though changes of that magnitude are likely more distant than 2.1.
In the meantime, Apache.org has been running 2.0 pre-release builds in production for quite some time, and it is doing well. So give Apache 2.0 a whirl. Even if you’re leery of “.0” versions, remember this — the more people who run it, the sooner the bugs will be found!
Cliff Woolley is a graduate student in computer science and a developer on the Apache HTTP Server and Apache Portable Runtime projects. He can be reached at email@example.com