In August 1991, Linus Torvalds posted this humble note on Usenet:
“I’m doing a (free) operating system (just a hobby, won’t be big and professional) for AT clones. It’s not portable and it probably [won't ever] support anything other than AT hard disks, as that’s all I have :-(.”
Little did Linus know that just over a decade later, his “hobby” — the Linux kernel — would transform the computer industry and the world. Today, the Linux kernel and the vast variety of application software built upon it power mainframes, databases, web servers, desktops, consumer electronics, and cell phones. Perhaps even more incredible, Linux remains free: free of charge, free of licensing fees, and free of encumbrance. Remarkably, even the development process that produces the kernel is “free”: an open, global meritocracy where no valuable contribution is rejected.
Linus may have been the first fan of Linux back in 1991, but in 2005, he has plenty of company: hundreds of vendors, thousands of Linux software developers, tens of thousands of system administrators, and millions of end-users.
Linux Magazine Editor-in-Chief Martin Streicher recently caught up with Linus to talk about the state of the kernel and where it’s headed.
Linux Magazine: You and others have been working on the kernel for ten years. What stands out as great achievements and significant problems?
Linus Torvalds: The technical issues tend to blur together over ten years. There have been a lot of successes and some failures, and in the end the only thing that matters about the technology is that we’ve continually improved on it.
In many ways, the biggest things have been the people — while technical details may not stand out, people do. The source of the biggest satisfaction is when the development process works, and the biggest problems have often been about how to avoid personality clashes and keeping the “social” process working well.
LM: What areas of the kernel do you feel still need significant improvement? Are those tasks on a to-do list?
Torvalds: It’s getting less and less obvious as time goes by. We used to have tons of clear areas of improvement between stable releases, where we knew some subsystem really needed a major rewrite. And I’m not saying I’m happy with it all now either, but the problematic subsystems tend to be less fundamental these days, and sometimes the “total rewrite” really is more of a “somebody should really clean this up.”
One that got some attention lately — and is getting more — is the TTY layer. And the direct rendering (graphics) stuff. But compared to some of the big upheavals we’ve sometimes had, such as page cache rewrite, directory cache handling, lock rewriting, block device queues, these things look fairly benign.
LM: Are there features in other kernels, such as Solaris or even Windows, that would be desirable to have in Linux?
Torvalds: That’s not how I end up thinking about it. I don’t compare Linux to other systems. I compare Linux to itself. That’s partly because I think that’s the right way of doing things — this is not a competition with anybody else, it’s about constant improvement — but partly it’s simply because I don’t really actually use anything but Linux.
And then sometimes there are great ideas in other systems, but the intriguing ones tend to be from the more esoteric and interesting ones. So some Plan 9 ideas have influenced certain design decisions, for example.
LM: What about some of the security features of the BSD kernels?
Torvalds: They are getting migrated into mainline kernels, and perhaps more importantly, into mainline distributions. The 2.6.x kernel already has all of the SELinux and Linux Security Modules (LSM) stuff, and Fedora is actually using it, so yes, the advanced security code is getting merged.
On the other hand, no, Linux does not have that stupid notion of having totally separate kernel development for different issues. If you want a secure BSD, you get OpenBSD; if you want a usable BSD, you get FreeBSD; and if you want BSD on other architectures, you get NetBSD. That’s just idiotic, to have different teams worry about different things.
In Linux, we aim for balanced development. We do a lot of security, because people care about it, but we don’t do it by ignoring other issues.
LM: Can you elaborate more about Plan 9’s influences?
Torvalds: Well, one of the obvious long-time influences has been the conceptual reliance on straight ASCII as a portable and readable way to show information. This is perhaps most visible in the /proc and /sys filesystems. While legacy Unix also had a /proc thing, it was more of a binary interface, while Linux is more like Plan 9, believing that ASCII interfaces are good.
The kernel also internally supports the notion of filesystem “namespaces,” which is another Plan 9-like feature. You can think of it as the ability to do per-process mount maps, where different processes actually can see a different organization of the filesystem. It’s not actually used all that much, since it’s not a feature that people are used to, but it does reflect the common belief among Linux developers that the Plan 9 people had “good taste.”
On the other hand, Plan 9 has a lot of concepts that Linux doesn’t believe in, so nobody would call Linux a Plan 9 clone. Linux is definitely a Unix clone with some ideas from Plan 9.
LM: That raises a question: What’s the most exciting, recent operating system research? Are there any big things that the kernel could realize to change computing in a substantial way?
Torvalds: Quite frankly, I think operating system research at the kernel level was done in the 1960s and 1970s, and there really isn’t much point any more. The “big things” aren’t on a kernel level.
A kernel is all about a good implementation, and giving user space a stable base, so that user space can do the “big things.” The kernel, I believe, is about getting the small details right. The devil is in the details, and the kernel is too important for people to be “visionary” about it.
The most important thing for any kernel is that it works and is stable and secure. And performance (after the “works”, “stable” and “secure”) ends up more important than for most other software projects, because if the core operating system performs badly, everything suffers. In contrast, if some other program performs badly, it really only affects that sub-system.
So this is why I’m not a big believer in “OS research” today, at least not the way I’ve seen it done. It tends to focus on visionary things rather than the down-to-earth things, and I think that’s fundamentally wrong. Operating systems are too important to be “visionary.” By all means do the visionary things, but do them on top of a stable platform that actually does what people need.
That may sound boring, but it’s not. There’s a lot of excitement in the details, and the actual implementation side.
LM: So, what lies ahead for the Linux kernel?
Torvalds: I’m actually still waiting for all the main distributions to be 2.6.x only. Many of them already are, and I’m pretty happy about the feedback, but clearly the upgrade cycle is getting longer, largely because there are more people who are perfectly satisfied with 2.4.x than there were with 2.2.x. Just another sign of the basics all being in place.
I don’t see any huge upheavals. My main worries are actually driver-related, where wireless, graphics and sound top the list right now.
LM: A Linux Magazine reader asks: do you plan to move device drivers to user space? Is this even feasible?
Torvalds: It’s not very interesting for any “core” drivers, but it tends to be useful for things that have very specific uses and aren’t very performance-critical. I think all of the USB scanner drivers end up being in user space. But even then, the core hardware access is actually in the kernel, and user space really mainly implements the “protocol” on a higher level.
But no, user space drivers aren’t very interesting for most things. Serialization issues, sharing and security tend to make them complex. Performance (context switching and hardware access costs) makes it infeasible in many cases.
The one major example of a user space driver is the X Window System, which obviously does most of its stuff in user space. But with modern graphics engines that depend on DMA and so on, and 3D, which needs low-latency client access, even there things are actually moving towards kernel driver interfaces — again with the “protocol” parts in user space.
LM: What’s your appraisal of Linux outside of the United States?
Torvalds: I find the cultural differences pretty interesting. For example, Europe, in many ways, has done more with Linux than the U.S., but at the same time, a lot of the commercial development has been more aggressive here in the U.S. I think that’s a cultural difference.
And Asia has other cultural differences in how Linux ends up being used. In particular, the language differences (and, perhaps, general developer culture differences) mean that there aren’t as many Asian developers directly involved in development that I see — much of the IPv6 code ends up being developed in Japan, but a lot of it is indirect. Which I guess is inevitable, but it can be a problem.
And many of the areas that would benefit most from an open source license (China and some parts of eastern Europe) have such rampant piracy that they have a totally different view of copyrights and “free software” than the Western world. I mean that very much in the non-Free Software Foundation meaning. In other words, “free” means “no money”. Which again raises big cultural differences even on the scale of individual programmers.
For example, since people can’t afford to honor copyrights for software anyway, these areas often don’t feel as strongly about copyright licenses as we would tend to do — they just don’t have the same meaning. And I don’t say that as a complaint, but more as an observation of how at least the current development model finds some cultural barriers a bit hard to overcome.
LM: From your perspective, where does the SCO v. IBM case stand?
Torvalds: Oh, these days I just worry about how long it drags out. I obviously always was of the opinion that there was no case, but now the painful experience has mainly just become a boring question of “How long can these liars drag out the case?”
The bright spot has been IBM obviously being very careful about it (even if it does seem painfully slow), and especially how the open source community has reacted to it, with sites like Groklaw debunking all the SCO lies and innuendos.
LM: Recently, Clay Christensen, author of The Innovator’s Dilemma, recommended that Microsoft — and implicitly others — undertake Linux development as a way to move towards smaller, more “present” forms of computers like cell phones. What’s your take?
Torvalds: I personally try to never underestimate the small devices. I have a view about computing, which is very simple: small things grow up, but big things never shrink.
And that’s actually something very fundamental. The big and impressive hardware that a lot of people think is so “sexy” from a technological angle is not what drives technology. Technology is driven by new small things that often take the most useful ideas from the big things, and eventually end up largely displacing them.
So while I enjoy seeing Linux supercomputers etc, I don’t think they are all that relevant for the future, except perhaps as a way to learn about issues that even the small devices will eventually start hitting. Something as fundamental as SMP was considered a “big iron” thing not that long ago. Now we have it on our desktops, and people are doing things like cell-phones that have two or more CPUs, one for communication, one for graphics and user interfaces.
And the thing is, as the small devices grow up, they will take over more and more of what people used to do with the big ones. For example, I don’t think that Intel’s IA64 (Itanium) architecture is any threat at all to the x86. But the ARM might be, eventually. If only because the small devices will start taking on more and more of what people used PCs for. Who knows?
Anyway, I definitely agree with Clay on the importance of small devices.
LM: A little while ago, OSDL and the Linux kernel team announced some changes to how code could be submitted for use in the kernel. How is that methodology working out?
Torvalds: I’m personally very happy with it. Not only has the patch sign-off been less contentious than I thought it might be, I actually really enjoy having the participants be better documented. While the bogus SCO claims were a big impetus for actually doing the documentation in the first place, what’s been good is to see how that documentation is actually useful.
Now, when we have a patch that turns out to have some technical problem, the developer sign-off that carried through all the way to the source control means that it’s easy to contact everybody who was involved and ask them to think about the problem that came up.
So while that hasn’t been a huge change, it’s turned out to be quite useful. And I think people also enjoy seeing everybody involved be better recognized. On the whole I think everybody is actually pretty happy about it.
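For readers unfamiliar with the mechanism: the sign-off is a one-line trailer appended to each patch, certifying the Developer’s Certificate of Origin, and a new line is added at each hand-off as the patch moves up the maintainer chain. A hypothetical example (the names and patch subject are invented):

```
From: Jane Hacker <jane@example.org>
Subject: [PATCH] ext3: fix off-by-one in directory lookup

...patch description and diff...

Signed-off-by: Jane Hacker <jane@example.org>
Signed-off-by: Sub Maintainer <maint@example.org>
```

The trail of Signed-off-by lines is what makes it easy, as Linus notes, to contact everyone who handled a patch when a problem surfaces later.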
LM: Are the kernel’s contribution processes and policies adaptable to other open source projects? If so, how?
Torvalds: I’m sure it is, but at the same time, I’m not sure it’s a “one size fits all” process, or if we even want it to be that way.
The fact is, different people work different ways, and what works for me may not work for some other maintainer or project. It’s really about a small “culture” that you build up around the project, and there’s nothing fundamentally wrong with having different cultures.
I know a lot of people like the way I do things, but I also know that other people find it confusing and prefer to have more explicit, “written down,” rules.
LM: Are there more people doing kernel development?
Torvalds: I’m pretty happy with how many people there are involved in kernel development. I do developer statistics every once in a while, and the last time I did it there were something like 900+ developers in the last year alone.
Now, most of those (about half, if I remember correctly) only did simple one-liners, but that’s actually what you want to see: a large base of people who aren’t really kernel developers, but who feel comfortable enough around the kernel that they fix something small. Most of them maybe never go any further, and that’s obviously OK, too, but I think the kernel development process is still “approachable” enough to keep our development community healthy.
That said, the kernel has obviously gotten a lot more complex over the years, so it’s something I try to keep in mind, even if I don’t actively worry about it.
LM: Back in December 2002 (see that interview at http://www.linux-mag.com/id/1231) you were hoping that AMD’s 64-bit processors would succeed, and you didn’t care much about the progress of Itanium. The Opteron and its ilk have done very well in the past two years, as has PPC. Do you agree?
Torvalds: Absolutely. I’m happy to have been right for once (well, at least it appears that way for now — things change in technology). So, yes, I still think x86-64 and PPC64 (and ARM) are more relevant than IA64, but I do want to stress that that doesn’t really mean anything.
And I want to stress again that one of the beauties of open source is that different interests can work on the same thing, and disagree about the direction things are going in, and yet still work together on the same project.
In a traditional project, if I were the project leader, I’d prioritize the things I believe in, and the rest would likely wither and die. And if I were wrong (and let’s face it, that’s definitely not unheard of), that would potentially be a disaster.
But in Linux (and open source in general), my opinion is not that relevant. I don’t have any crystal ball, and I’m perfectly happy to be totally proven wrong about IA64. And the fact is, a lot of people believe in it and end up working on the Linux port, and dammit, if I’m wrong I’ll just admit that I was wrong, and they’ll feel really superior. Good for them.
And that’s what allows technologies to flourish. There’s a lot of technologies I didn’t believe in, but people worked on them and proved me wrong. SMP is one such thing. I literally used to believe it wouldn’t be relevant to Linux, because hardware was so rare and expensive. And boy, was I wrong. Now I’m obviously a huge believer in SMP.
So I may well be totally wrong on IA64 too. If I am, I’m going to get a lot of ribbing from the IA64 people, and that’s the way it should be. I can take it. ;)
So don’t take my musings too seriously.
LM: What chip architectures do you think currently hold the most promise for Linux?
Torvalds: Well, the desktop market is certainly covered by x86, and that clearly seems to be inexorably moving towards its 64-bit cousin, x86-64. The two will co-exist for a long time, though. I personally also feel that ppc64 is interesting, and that’s actually what I run on my personal desktop (it’s a dual G5 Apple box, although it obviously runs Linux, not OS X).
But clearly, there’s a lot of other areas, where cores like ARM (in the embedded space) have a lot of traction. Linux currently supports something like twenty different architectures, so we’re ready for pretty much anything.
LM: What do you think about IBM’s POWER architecture and their latest efforts to release Linux-only POWER machines?
Torvalds: As mentioned, I use a PPC64 machine myself, at least partly because I wanted to have a more “balanced” development environment, so that not all of the major developers would be running on x86 variants.
I personally think that the “big three” core variations are x86, ARM, and PPC, and in x86 and PPC, I obviously include their 64-bit variations.
But part of open source is that what I personally think doesn’t really matter all that much. Others believe in other architectures, and it’s not that important who’s right in the end.
LM: Pretend it’s 2009. What’s different then?
Torvalds: I have a hard time planning a week or two ahead, never mind five years. I’ll probably still maintain the kernel, except I’ll have a lot more gray hair. The gray hair is likely less caused by kernel development than from my kids being pre-teen.
LM: Besides the Linux kernel, what other open source projects are especially impactful in IT?
Torvalds: Well, there’s the obvious ones: Perl, Apache, and MySQL, which are already big in IT. But at the same time, I actually think an even bigger impact may be through the desktop efforts of KDE, GNOME, and OpenOffice.org. I think the development there will have a more visible impact to “regular users” in the long run.
LM: What are the challenges to accelerated Linux adoption?
Torvalds: Oh, adoption is slow. The biggest challenges are opinions and expectations, and those take time to change. I don’t expect anything overnight. I’ve been doing Linux for thirteen years. It’s not been overnight so far, and it will continue to be a very gradual thing.
LM: Besides the Linux kernel, are you working on any pet projects, software or otherwise?
Torvalds: My other pet software project is this little C verifier called “sparse”. I’ve always been interested in compilers, and this one does a lot of what a real compiler does, except it’s used to verify type usage in the kernel (things like automatically checking that people use pointers to user space or PCI memory correctly) and to check that spinlocks and unlocks match up properly.
Other than software, I’ve been spending a few hours a week for the last month building a small playhouse for the kids in the backyard.
That still needs some finishing touches, but that was a fun project.