Doing Security the Open Source Way

As a cryptography and computer security expert, I have never understood the current fuss about
the open source software movement. In the cryptography world, we consider open source necessary for
good security; we have for decades. Public security is always more secure than proprietary security.
It’s true for cryptographic algorithms, security protocols, and security source code. Open source
isn’t just a business model; it’s smart engineering practice.

Open Source Cryptography

Cryptography has been espousing open source ideals for decades, although we call it “using
public algorithms and protocols.” The idea is simple: cryptography is hard to do right, and the only
way to know if something was done right is to be able to examine it.

This is vital in cryptography, because security has nothing to do with functionality. You can
have two algorithms, one secure and the other insecure, and they both can work perfectly. They can
encrypt and decrypt; they can be efficient and have a pretty user interface; they can never crash.
The only way to tell good cryptography from bad, however, is to have it examined.
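
To make that concrete, here is a minimal sketch of my own (not from the original column): two
ciphers that both pass every functional test. The toy XOR cipher round-trips perfectly; the second
snippet, which assumes the third-party Python cryptography package is installed, uses a vetted
public algorithm and looks identical from the outside.

```python
# Both of these "work": they encrypt, decrypt, and never crash.
# Only expert analysis distinguishes the vetted one from the toy.
import os

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: round-trips perfectly, passes any functional
    # test, and falls to classical frequency analysis.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

msg, key = b"attack at dawn", b"secret"
assert xor_encrypt(xor_encrypt(msg, key), key) == msg  # "it works"

# A public, heavily analyzed algorithm behaves identically from the
# outside (assumes: pip install cryptography).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

aes_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)  # AES-GCM needs a fresh nonce per message
ciphertext = AESGCM(aes_key).encrypt(nonce, msg, None)
assert AESGCM(aes_key).decrypt(nonce, ciphertext, None) == msg
```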

But it doesn’t do any good to have a bunch of random people examine the code. Releasing code to
thousands of people won’t do anything to make your code more secure unless the right people get a
look at it. And the fact of the matter is that the only way to tell good cryptography from bad is to
have it examined by experts. Analyzing cryptography is hard, and there are very few people in the
world who can do it competently. Before an algorithm can really be considered secure, it needs to be
examined by many experts over the course of years.

This makes for a strong argument in favor of open source cryptographic algorithms. Since the
only way to have any confidence in an algorithm’s security is to have experts examine it, and the
only way they will spend the time necessary to adequately examine it is to allow them to publish
research papers about it, the algorithm has to be public. A proprietary algorithm, no matter who
designed it and who was paid under NDA (nondisclosure agreement) to evaluate it, will always be much
riskier than a public algorithm.

The counter-argument you sometimes hear is that secret cryptography is stronger because it is
kept safely out of sight of potential intruders, and public algorithms are riskier because they are
in plain view of anyone who might want to crack them. This sounds plausible, until you think about
it for a minute. Public algorithms are designed to be secure even though they are public; that’s how
they’re made. So there’s no risk in making them public. If an algorithm is only secure if it remains
secret, then it will only be secure until someone reverse-engineers and publishes it. A
variety of secret digital cellular telephone algorithms have been “outed” and promptly broken,
illustrating the futility of that argument.

Instead of using public algorithms, the U.S. digital cellular companies decided to create their
own proprietary cryptography. Over the past few years, several different algorithms have been made
public. (No, the cell phone industry didn’t want them made public. What generally happens is that a
cryptographer receives a confidential specification in a plain brown wrapper.) And once they have
been made public, they have been broken. Now the U.S. cellular phone industry is considering public
algorithms to replace their broken proprietary ones.

On the other hand, the popular e-mail encryption program PGP has always used public algorithms.
And none of those algorithms has ever been broken. The same is true for the various Internet
cryptographic protocols: SSL, S/MIME, IPSec, SSH, and so on.

The Best Evaluation Money Can’t Buy

Right now, the U.S. government is going through the process of choosing an encryption algorithm
to replace DES; the winner will become the AES (Advanced Encryption Standard). There are five contenders for the
standard, and before one is chosen, the world’s best cryptographers will spend thousands of hours
evaluating them. No company, no matter how rich, can afford that kind of evaluation. And since AES
is free for all uses, there’s no reason for a company to even bother creating its own standard. Open
cryptography is not only better — it’s cheaper, too.

The same reasoning that leads smart companies to use published cryptography also leads them to
use published security protocols: anyone who creates his own security protocol is either a genius or
a fool. Since there are more of the latter than the former, using published protocols is just plain
smart.
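
As a hedged illustration of what “use a published protocol” looks like in code, here is a sketch
using Python’s standard-library ssl module to layer TLS (the standardized successor to SSL) over a
socket, rather than inventing a handshake; the host name is only a placeholder.

```python
import socket
import ssl

# A published, publicly analyzed protocol with vetted defaults,
# instead of a homegrown handshake.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        # Certificate validation and cipher negotiation happened for us.
        print(tls.version())  # e.g. "TLSv1.3"
```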

Consider IPSec, the Internet IP security protocol. Beginning in 1992, it was designed in the
open by committee and was the subject of considerable public scrutiny from the start. Everyone knew
it was an important protocol and people spent a lot of effort trying to get it right. Security
technologies were proposed, broken, and then modified. Versions were codified and analyzed. The
first draft of the standard was published in 1995. Different aspects of IPSec were debated on
security merits and on performance, ease of implementation, upgradability, and use.

In November 1998, the IETF’s (Internet Engineering Task Force) IPSec committee published a slew
of RFCs (Requests for Comments) — one in a series of steps to make IPSec an Internet standard. It
is still being studied. Cryptographers at the Naval Research Laboratory recently discovered a minor
implementation flaw. The work continues in public, carried on by anyone who is interested. The result, based
on years of public analysis, is a strong protocol that is trusted by many.

Secret Flaws

On the other hand, Microsoft developed its own Point-to-Point Tunneling Protocol (PPTP) to do
much the same thing as IPSec. They invented their own authentication protocol, their own hash
functions, and their own key-generation algorithm. Every one of these items was badly flawed. They
used a known encryption algorithm, but they used it in such a way as to negate its security. They
made implementation mistakes that weakened the system even further. But since they did all this work
internally, no one knew that PPTP was weak.
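
One classic way a sound algorithm is negated by how it is used (a generic illustration of my own,
not Microsoft’s exact construction) is stream-cipher keystream reuse: encrypt two messages under
the same keystream, and XORing the ciphertexts cancels the keystream entirely.

```python
import os

keystream = os.urandom(32)  # stand-in for e.g. an RC4 output stream

def stream_encrypt(plaintext: bytes) -> bytes:
    # The cipher itself may be fine; reusing the keystream is the sin.
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

p1, p2 = b"send the attack plan now", b"cancel everything please"
c1, c2 = stream_encrypt(p1), stream_encrypt(p2)

# An attacker who never sees the keystream still learns p1 XOR p2,
# which is often enough to recover both messages.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
```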

Microsoft fielded PPTP in Windows NT and 95, and used it in their virtual private network (VPN)
products. Eventually they published their protocols, and in the summer of 1998, the company I work
for, Counterpane Systems, published a paper describing the flaws we found. Once again, public
scrutiny paid off. Microsoft quickly posted a series of fixes, which we evaluated this past summer
and found improved, though still flawed.

As with algorithms, the only way to tell a good security protocol from a broken one is to have
experts evaluate it. So if you need to use a security protocol, you’d be much smarter taking one
that has already been evaluated. You can create your own, but what are the odds of it being as
secure as one that has been evaluated over the past several years by experts?

Securing Your Code

This is the reasoning that will lead any smart security engineer to demand open source code for
anything related to security. Let’s review: Security has nothing to do with functionality.
Therefore, no amount of beta testing can ever uncover a security flaw. The only way to find security
flaws in a piece of code — such as in a cryptographic algorithm or security protocol — is for
experts to look at it and give it a proper evaluation. This is true for all code, whether it is open
source or proprietary. And you can’t just have anyone evaluate the code; you need experts in
security software to look at this stuff. You need them evaluating
it multiple times and from different angles, over the course of years. I suppose it’s possible to
hire this kind of expertise (in fact, my company provides it), but it is much cheaper and more
effective to let the community at large do this. And the best way to make that happen is to publish
the source code.

But if you want your code to be truly secure, you’ll need to do more than just publish it under
an open source license. There are two obvious caveats you should keep in mind.

First, remember that simply publishing the code does not automatically mean that people will
examine it for security flaws. Security researchers are fickle and busy people. They do not have the
time to examine every piece of source code that is published. So while opening up source code is a
good thing, it is not a guarantee of security. I could name a dozen open source security libraries
that no one has ever heard of, and no one has ever evaluated. On the other hand, the security code
in Linux has been looked at very closely by a lot of very good security engineers.

The second thing to remember is that you need to be sure that security problems are fixed
promptly whenever they are found. People will find security flaws in open source security code. This
is a good thing. There’s no reason to believe that open source code is, at the time of its writing,
more secure than proprietary code.

The point of making it open source is so that many, many people look at the code for security
flaws and find them. Quickly. These then have to be fixed. So a two-year-old piece of open source
code is likely to have far fewer security flaws than a two-year-old piece of proprietary code,
simply because so many of its flaws will have been found and fixed over that time. Security flaws
will also be discovered in proprietary code, but this will happen at a much slower rate.

Comparing the security of Linux with that of Microsoft Windows is not a particularly instructive
process. Microsoft has done such a terrible job with security that it is not really fair to take it
as a representative example of proprietary security. But comparing Linux with Sun’s Solaris, for
example, can be enlightening.

People are finding security problems in Linux faster, and they are being fixed more quickly.
The result is an operating system that, even though it has only been out a few years, is much more
robust than Solaris was at the same age.

Secure PR

One of the great benefits of the open source movement is the positive-feedback effect of
publicity. Walk into any computer superstore these days, and you’ll see an entire shelf of
Linux-based products. People buy them because Linux’s appeal is no longer limited to geeks; it’s a
useful tool for certain applications. The same feedback loop works in security: Public algorithms
and protocols gain credibility because people know them and use them, and then they become the
current buzzword. Marketing people call this mindshare. It’s not a perfect model, but hey, it’s
better than the alternative.

Bruce Schneier is the president of Counterpane Systems and the author of Applied
Cryptography. He can be reached at schneier@counterpane.com.
