An inside look at how Apache Software Foundation projects produce code
In the second installment of his series, Apache Software Foundation (ASF) co-founder Ken Coar describes the rules that all ASF projects must abide by — rules that are fundamental to the “Apache Way.”
As the Apache Software Foundation (ASF) likes to say, “Community is more important than code.” Indeed, the role of the ASF is to provide a framework that facilitates communication and mutual respect among developers. The creation of code simply takes place within that framework. It’s the “Apache Way.”
There is a definite attitude and mindset surrounding activities at the Apache Software Foundation. Although individual projects can have their own specific rules and guidelines, there are some things that are part of the Apache Way that are essentially required of all ASF projects. These requirements exist because they provide some sort of uniformity across the entire Foundation. Some affect the interface between the ASF and the outside world, and some have to do with the internal operations of the ASF and its projects.
One License for All
One of the mandated outward-looking requirements is that all code developed by an ASF project must be licensed and distributed under the Apache Software License (available online at http://www.apache.org/licenses/LICENSE-2.0.html). A single, mandated license ensures that all pieces of Apache code, from whatever packages, are distributed under the same rules.
Another requirement is that all committers (those developers who have access to the master repositories of sources) must execute a Contributor License Agreement (CLA, http://www.apache.org/licenses/icla.txt) and file it with the ASF secretary. This is intended to guarantee that all code changes committed are appropriately licensed to the Apache organization for inclusion, modification, and redistribution.
As a matter of fact, access to the repositories is largely granted by the ASF’s Infrastructure Group, which is composed of the Apache contributors who have volunteered to keep the systems running. The infrastructure folks won’t grant access until they see confirmation that the CLA form has been received and filed.
One of the inward-facing requirements has to do with how decisions are made. This is actually at the very heart of the Apache Way, and if you ask someone, “What is the Apache Way?” there’s an excellent chance they might respond with a description of the decision-making mechanism.
Almost all decisions are reached by voting in email. This might sound like a hopelessly heavyweight and cumbersome process, but in fact, it generally moves very quickly and smoothly. (Of course, there are the occasions in which someone puts a monolithic or extremely controversial matter up for a vote, but those tend to be uncommon.)
In a fashion only geeks may appreciate, all decision makers express their vote in numeric form:
* +1 means, “I’m in favour of this.”
* –1 is either a veto or a vote against the proposal.
* Anything else is essentially an abstention with an opinion attached.
Because of the distributed and volunteer nature of the community, votes typically have a minimum duration that allows everyone in every timezone a chance to review the issues. How the results are determined depends on the kind of decision being made. These fall into two categories: technical and non-technical.
Non-technical votes are decided by simple majority. Typical majority votes include such things as deciding when to release a new version, granting someone commit access, adding a new subproject, or some other non-technical matter.
Technical decisions have to do with such things as whether changes to the code (or to documentation) are acceptable or not. A technical vote has special requirements: to pass, there must be at least three +1 votes and no –1 votes. In this case, a –1 vote is referred to as a veto and requires technical justification. In Apache terms, this is called a “consensus vote.”
The technical voting mechanism is the part that is unusual to the Apache organization. If a matter can’t garner at least three people in favor of adopting it, the consensus is that it’s probably a waste of time.
For example, a typical email thread might look something like this:
From: Julius
Subject: [VOTE] krantz-fripping patch
Rasputin’s patch for fripping the krantz looks pretty good. I’ve tested it and it applies cleanly and works as described. I’m +1 on committing this.
From: Nikolai
Subject: Re: [VOTE] krantz-fripping patch
Looks good to me. +1
From: Victoria
Subject: Re: [VOTE] krantz-fripping patch
I thought the krantz was supposed to be unfripped by design? -0 if that’s right, or +1 if I got that wrong and it’s *supposed* to be fripped.
According to the voting rules, the patch would be accepted and incorporated if those were the only votes cast, because at least three favored it (“plus-one’d”) and no-one vetoed it.
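The two counting rules can be sketched in a few lines of Python. The function names and vote strings here are purely illustrative — the ASF has no such tooling; tallies are made by hand from the mail thread:

```python
def passes_consensus(votes):
    """Consensus rule for technical decisions: at least three
    +1 votes and no vetoes (-1)."""
    return votes.count("+1") >= 3 and "-1" not in votes

def passes_majority(votes):
    """Simple-majority rule for non-technical decisions."""
    return votes.count("+1") > votes.count("-1")
```

Note the asymmetry of the consensus rule: a thread full of +1 votes still fails the moment a single –1 appears, no matter how many votes in favor accompany it.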
A veto is a very powerful tool, because it is unappealable. There are only three ways to deal with a veto: abandon the issue and let the veto stand; convince the vetoer to withdraw it; or address the goal through some other mechanism that won’t incur a veto.
Because of its power, a veto is not supposed to be exercised frivolously. Every veto needs to be accompanied by a technical justification explaining what’s wrong with the item and why the veto is appropriate.
Technical justification can be a fairly subjective thing: what a person exercising a veto thinks represents justification might not seem a strong enough argument to the rest of the community. Even in cases such as that, however, the power of the veto is not overruled. Instead, social and peer factors come into play, with each side trying to convince the other.
In pathological situations, developers may try to ignore vetoes or veto each other’s alternatives “just because,” but the Apache culture comes down pretty hard on the participants if such a scenario develops. “Hard” can mean anything from public censure or reprimand, to actual revocation of commit access.
None of this happens in a vacuum. Sometimes these discussions are resolved amicably, and sometimes they result in considerable acrimony and lasting bad feelings. Taking the bad with the good is an accepted part of the process, and maintaining the feeling of camaraderie and teamwork in the face of arguments and “flame wars” is one of the challenges that the Apache perspective is dedicated to meeting.
Casting a Vote, Lobbying for a Change
The question of who is eligible to vote is a critical piece of this. Who may vote is different for each group, and sometimes for different activities within a group.
For instance, votes on whether someone should be granted commit access might be limited to the project’s management committee (for a description of the structure of projects within ASF, see the first article in this series at http://www.linuxmagazine.com/2005-01/), or perhaps to the full set of existing committers for the project. The choice of a logo or mascot for the project may be something on which everyone, developer or user, is welcome to express an opinion.
Unfortunately, the “who can vote” criteria are not always clearly described. There have been situations in which votes were cast by people who weren’t entitled to do so, and it may or may not have been explained to them after the fact — after their votes were ignored. Chalk it up to the stereotypical social clumsiness of the culture, if you like.
In the original days of the Apache HTTP Server project, back when it was called the “Apache Group,” every patch was submitted to the mailing list for discussion and voting. Most submitters found they had to become their own advocates, often reminding people to vote on their changes. A +1 meant, “I have applied this patch and tested it, and found it ‘good.’” A list of outstanding issues and patches was maintained by one of the developers, who would update it with any votes or new patches and send it out to the list about once a week.
This method of applying patches became known as “Review-Then-Commit,” or RTC (arr-tee-see). Its strength is that no changes make it into the code without having been tested. Its weakness is that it tends to slow down progress, since each change needs to be approved.
The flip side to RTC is “Commit-Then-Review” (CTR), which involves what you might expect: changes can be committed to the sources at any time and stay there by default unless someone asserts a retroactive veto. The strength of CTR is that development can move quite rapidly, pretty much as fast as the developers can churn out code. Its weakness is that inadequately tested code can be committed.
Both RTC and CTR have social ramifications as well. Review-Then-Commit can strengthen the feeling of community, since people need to work together in a quid pro quo arrangement to commit changes. Everyone needs to at least look at others’ submissions in order to have their own reviewed (and potentially approved) in turn.
The very deliberation it imposes can cause friction, however, with some people feeling they’re being held back. Whether that’s true or not, the perception can be formed if one’s submissions don’t get voted on. It also can be problematic in small communities, where its requirements can cripple development if one or two people are unavailable and the quorum of three can’t be achieved without them.
The CTR model, on the other hand, can be seen as sacrificing community involvement for the sake of rewards on a more personal and individual level. Use of the CTR model indicates a high degree of trust among the peer participants; one of the “individual rewards” is the trust implicit in being allowed to contribute without prior review.
It is possible for both models to be used simultaneously in the same project. For instance, the HTTP Server Project has occasionally adopted guidelines under which it is appropriate to submit bug fixes and small changes under the CTR model, while more sweeping and potentially controversial changes require prior examination and therefore follow the RTC method, with each being adopted only after discussion and at least three people approving the change — and no one vetoing it.
The Review-Then-Commit model ensures that there is ongoing discussion, since it requires interaction for progress to be made. It might be viewed as being better suited for areas in which a lot of people are working together on a specific concept and potentially overlapping each other. If every developer is working on something distinct from every other developer, having to examine code that’s completely unrelated to your own work might become a distraction or an irritant. The CTR approach may be better suited in such cases.
One thing that may appear odd — and has actually caused confusion — is that releases are not managed as technical issues. In other words, a pending release cannot be stopped in its tracks by a veto. In fact, in most cases it can’t even be stopped by a consensus vote. This is because the release process is under the absolute control of the release manager, who is solely responsible for setting the freeze and release dates and saying whether the release will actually happen at all.
Obviously the release manager must be sensitive to the will of the developers, but he or she has the final say. It can be an extremely harrowing role, so a particular individual may fill it only once in a while. Sometimes, someone has a particular knack for it, and acts as release manager for more than one release. On the other hand, many people never do it at all.
The Release Process
When there’s general agreement that it’s time for a release, the release manager (always a volunteer) determines the dates and informs the developers, who are supposed to stop committing changes according to the schedule. However, a few years ago, I codified a phenomenon well-known in the community: “selection of a release date triggers a surge of development.” In short, there are always changes being made right up to the last minute.
At the critical point, the release manager essentially “freezes” the codebase, updates the version number from an “under development” one to a “released” one, tags the sources (labels the versions that are being used to create the release), “rolls” the release (creates the packages), changes the version number back to an “under development” one, and “thaws” the sources.
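In miniature, the sequence might look like the following Python sketch. The VERSION file, the httpd-style package name, and the roll_release helper are all invented for illustration; the real process is carried out by hand with version-control and packaging tools:

```python
import tarfile
from pathlib import Path

def roll_release(srcdir: Path, version: str) -> Path:
    """Stamp the release version, package the sources, and restore
    the development version: freeze, roll, and thaw in miniature."""
    ver_file = srcdir / "VERSION"
    dev_version = ver_file.read_text().strip()       # e.g. "2.1.4-dev"
    ver_file.write_text(version + "\n")              # stamp the "released" number
    # (here the real process would tag the sources in version control)
    tarball = srcdir.parent / f"httpd-{version}.tar.gz"
    with tarfile.open(tarball, "w:gz") as tar:       # "roll" the package
        tar.add(srcdir, arcname=f"httpd-{version}")
    ver_file.write_text(dev_version + "\n")          # back to "under development"
    return tarball
```

In the real process, of course, the post-release version would be bumped to the next “under development” number rather than simply restored.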
The packages are signed by the people who built them, so that they can be validated. This consists of generating both an anonymous MD5 checksum and a PGP or GPG signature for each file. The MD5 checksum allows someone who doesn’t have access to the encryption tools to verify that what she has downloaded is what is on the distribution site (basically, that the checksums match). The cryptographically strong PGP or GPG signature is vastly preferred as a means of verification, because it not only demonstrates that the file you’ve downloaded is the same one as is on the distribution site, but also that it’s the same as the one built by the specific individual who packaged it.
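The checksum side of this verification amounts to recomputing the digest locally and comparing it against the published value, as in this Python sketch (the md5_matches helper is invented for illustration; on the command line, md5sum -c plays this role, and gpg --verify handles the signature):

```python
import hashlib
from pathlib import Path

def md5_matches(path: Path, expected_hex: str) -> bool:
    """Recompute a downloaded file's MD5 digest and compare it with
    the checksum published on the distribution site."""
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    return digest == expected_hex.lower()
```

The comparison proves only that the download was not corrupted or altered relative to what is on the site; it is the PGP/GPG signature that ties the package to the individual who built it.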
At this point the release is said to have been “rolled.” The next step is to put the packages in a restricted download area and let volunteer testers download them and test the release. If the testers don’t find any problems, the packages are moved to the normal download area.
After waiting 24 hours (so that worldwide mirrors have a chance to pick up the new packages), the release is formally announced and the release manager relaxes into the serenity of a nervous collapse.
(This is specifically focused on how the Apache HTTP Server project handles its releases. Other projects may and probably do use other processes.)
If a serious bug is discovered just before a release is made public, particularly a security-related bug, there might be agitation to cancel or block the release, and in the excitement someone might mention vetoing it. (It’s really surprising how often these “eleventh hour” issues crop up.) However, according to the Apache guidelines, a veto is not an option, so the alternative is to convince the release manager that the issue is serious enough for her or him to freeze or cancel the release.
Depending on the state of the release process (see the sidebar), interrupting it may or may not be invisible to the public. If the sources haven’t yet been “tagged” (that is, labelled as “these are the versions used to create release X.Y.Z”) it may be completely transparent. The fix, when one is developed, will be incorporated and the release process can recommence.
If the source files have been tagged, but the packages either not yet built or not yet made available for public download, what will happen is that the release will be aborted, the version number that was going to be used will be marked “never released,” and the next release number will be used when the release process is restarted. Almost the same thing happens if the problem is discovered after the packages have been made available for download: the files are taken off the web site, an announcement is made, and the work proceeds with the next release number.
Due to the way in which packages are tested and distributed, it’s possible that it may take up to a couple of days to yank packages that have already been made public, since the worldwide mirrors will still have the files for a while.
Peeking Under the Derby
Given the theory of the Apache Way, let’s take a look at a practical application. Let’s look in on the Derby project, which was introduced in the last article and is currently moving through the Apache Incubator on its way to becoming a part of the Apache Software Foundation’s extensive list of packages. Part of the Incubator’s function is to educate new participants in the Apache Way, so looking at how a project in the midst of the process is coming along is more than germane.
The Derby code (and developers) entered the Incubator in August 2004. It will remain in the Incubator as a podling until it’s clear that the software files are compatible with Apache requirements, that the developer community is discussing things and making decisions in the prescribed manner, and that the community is healthy and open.
Since Derby entered the Incubator it has made significant strides. Not only has development continued, but the team actually put together a release in December 2004. In addition, during the process of validating the source code and making sure that licences and title were properly adjusted, the developers were key in discovering a shortcoming of the way the Apache Software Foundation handled such things — which led to a change in procedure Apache-wide. Both parties (the Derby podling and the ASF) have benefited from the association so far. Derby’s been attracting public involvement from users and developers, and the Foundation has corrected a procedural deficiency.
Since the initial set of committers for Derby was the team that worked on the code at IBM, diversifying the committer base is a particular concern. To graduate, the podling must have a broad enough committer base and enough development momentum to continue being viable after it has left the oversight structure of the Incubator and becomes essentially self-governing. Of somewhat lesser concern is that the committers be sufficiently diverse that the project can survive the eventual moving-on of the initial people. This is a low-key concern common to all projects, so its lack won’t keep Derby from graduating.
Podlings in the Incubator can release packages. In fact, it’s encouraged, since distributing software is a big part of what the ASF does and all projects should have a clear grasp on the guidelines and mechanisms.
However, until a podling actually graduates from the Incubator none of the packages it distributes are permitted to be labelled official products of the Apache organization. When and if it graduates, of course, this restriction disappears.
Podlings are permitted to proclaim their packages as coming from a project that is in the Apache Incubator. This is worth doing, since it tells users something meaningful about the software and the developers: namely, that the Apache Software Foundation considers the project of sufficient importance and quality to consider adding it to the ASF portfolio.
Another limitation on podling releases is that they aren’t included in the worldwide mirror system. Packages released while in the Incubator are only available from the main Apache site itself. If it turns out that there’s a problem with a release, it can be yanked or corrected and have the change immediately visible and effective — there’s no delay implicit in the change having to propagate to all the systems in the mirror program.
Keeping the Faith
The processes and guidelines which the Apache Software Foundation uses in developing its software are partly a result of deliberate design and partly a result of evolution, tweaked through trial and error. Some of them may seem like common sense and some may even seem weird, but they work surprisingly well.
The various development communities are healthy and active, the software’s quality is high, and pretty much everyone — developers and users — is happy.
Next time we’ll take an in-depth look at the Apache Incubator.
Ken Coar is a founder of the Apache Software Foundation.