
Linus on Linux: The Linus Torvalds Interview Part 1

Linus reflects on 18 years of working on Linux, the developer ecosystem and his goal for Linux on the desktop.

LM: Speaking of different platforms, what computers do you have now? Any of them non-x86, or set up as a server, media player, or other special-purpose machine?

LT: I don’t tend to use a lot of computers, actually. I don’t like having a “machine room”, and my goal is to have just one primary workstation and do everything on that. And that one has been x86-based for the last few years (basically since I decided that there was no long-term desktop survival for PowerPC – when Apple switched away it became clear that the only thing that could possibly challenge x86 on the all-important desktop was ARM).

I’ve got a few other machines (mainly laptops) and there’s a couple of other machines for the family (one for Tove, one for the kids) but they are also all x86-based. I’m going to be very interested to see if I’ll grow an ARM machine this year or the next, but it will require it to be a good netbook platform, and while the potential is there, it’s never quite happened yet.

Other architectures tend to be available in form factors that I’m not that interested in (either rack-mountable and often very noisy, or some very embedded thing) so they’ve never found their way into my home as real computers.

Of course, I do have a couple of TiVos, and they run Linux, but I don’t really think of them like that. I don’t tinker with them—that’s kind of against the point—and they are just devices. And there’s the PS3, but it’s more interesting for games than to use as a computer (I’ve got faster and better-documented regular computers, thank you).

The most interesting machines I tend to have are pre-release hardware that I can’t generally talk about. For example, I had a Nehalem machine before I could talk about it, and I may or may not have another machine I can’t talk about right now.

LM: Is there anything in the pipe from hardware designers that you think will have a major impact on Linux’s architecture? Ben Woodard wonders about increasingly complicated memory hierarchies that go beyond just traditional caching and NUMA, as well as newer synchronization primitives such as hardware transactional memory.

LT: I don’t see that being very likely, and one reason is simply that Linux supports so many different platforms, and is very good at abstracting out the details so that 99% of all the kernel code doesn’t need to care too deeply about the esoterics of hardware design.

To take the example Ben brought up: transactional memory is unlikely to cause any re-architecting, simply because we would hide it in the locking primitives. We’d likely end up seeing it as a low-cost spinlock, and we might expose it as such, perhaps as a pair of primitives (“fastlock()/fastunlock()”) for short sequences of code that could fit in a transaction.

So we’d never expose transactional memory as such, because even if it were to become common, it wouldn’t be ubiquitous.

(In fact, since transactional memory is fundamentally very tied to various micro-architectural limits, if it actually does end up being a success and becoming common, I would seriously hope that even the hardware never exposes it as such, but hides it behind a higher abstraction like a ‘spinlock’ instruction with transaction failure predictors etc. But that’s a whole different discussion.)
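To make that concrete, here is a minimal user-space sketch of what hiding transactional memory behind a spinlock-style interface could look like. The fastlock()/fastunlock() names come from Linus’s answer above; everything else, including the use of Intel’s RTM intrinsics (_xbegin/_xend/_xtest) for the speculative path and a C11-atomics spinlock as the fallback, is an illustrative assumption, not how the Linux kernel actually implements its locks.

/*
 * Illustrative sketch only, not kernel code: a "fastlock" that speculates
 * with hardware transactional memory (Intel RTM) and falls back to a plain
 * spinlock when speculation is unavailable or fails.
 * Build with: gcc -std=c11 -mrtm; assumes an RTM-capable CPU (real code
 * would check CPUID first).
 */
#include <immintrin.h>   /* _xbegin(), _xend(), _xtest(), _xabort(), _XBEGIN_STARTED */
#include <stdatomic.h>

struct fastlock {
    atomic_int locked;   /* fallback spinlock: 0 = free, 1 = held */
};

static void fastlock(struct fastlock *l)
{
    /* First try to run the caller's critical section as a transaction. */
    if (_xbegin() == _XBEGIN_STARTED) {
        /* Read the fallback lock inside the transaction: if another thread
         * later takes it for real, the write conflict aborts us. */
        if (atomic_load_explicit(&l->locked, memory_order_relaxed) == 0)
            return;          /* speculate; the commit happens in fastunlock() */
        _xabort(0xff);       /* lock already held: abandon speculation */
    }
    /* Transaction aborted or never started: take the real spinlock. */
    while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
        while (atomic_load_explicit(&l->locked, memory_order_relaxed))
            ;                /* spin until the lock looks free, then retry */
}

static void fastunlock(struct fastlock *l)
{
    if (_xtest())
        _xend();             /* we were speculating: commit the transaction */
    else
        atomic_store_explicit(&l->locked, 0, memory_order_release);
}

The point is exactly the one Linus makes: callers only ever see a lock. Whether a given critical section ran as a hardware transaction or under the fallback spinlock stays hidden inside the primitive.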

We’ll see. Maybe the hardware people will surprise me with something that really makes a huge architectural difference, but I mostly doubt it.

LM: It’s been almost a year since you got David Woodhouse and Paul Gortmaker signed up as embedded maintainers. How has having developers responsible for embedded changed the kernel development process?

LT: Hmm… So I can’t say that I personally have seen any major changes in the embedded area, but I also have to admit that if everything is working well then I wouldn’t expect to see it much. It’s more the other side of the equation (the embedded developers) whom you should ask.

The problem with the embedded space was (is?) always that they’d go off and do their “own thing”, and not try to feed back their work or even talk much about their needs and their changes. And then when they were ready—often several years later—the kernel they had based their work on simply wasn’t relevant to mainstream kernel developers any more. And then the cycle would start all over again.

And there isn’t so much we can do on our side of the development—David and Paul were never meant to help me. They are there to help the embedded people learn how to interact with the development community.

And if that ever happens (happened?), then I hopefully would never notice, since by then the embedded developers would look just like any other developer.

But if you want my honest opinion, then quite frankly, I don’t think having “embedded maintainers” really ever solves the issue, and I’m actually hopeful that the whole dynamics in the embedded world will change. I think, for example, that projects like Android might be instrumental in bringing the embedded people more into the open, simply because it makes them more used to a “big picture” development model where you don’t just look at your own sandbox.

And by the way, I would like to point out that we do try to do better on “our side” of the equation too. The whole “stable” vs “development” kernel split (2.4.x vs 2.5.x) was our fault, and I’ll happily admit that we really made it much easier than it should have been for people who weren’t core kernel developers to get stuck on an irrelevant development branch.

So I don’t want to come off as just blaming the embedded people. They really have their reasons for going off on their own, and we historically made it very hard for them to be even remotely relevant to kernel development.

In other words, I am hoping that it’s now easier for an embedded developer to try to stay more closely up to date with development kernels, and that we’ll never have to see the “they are stuck at 2.2.18 and can’t update to a modern kernel because everything has changed around their code since” kind of situation again.

LM: A recent development tactic the kernel has adopted is the drivers/staging subdirectory. It holds the so-called “crap” device drivers: ones that mostly seem to work and have users, but that don’t meet the mainstream kernel’s code quality standards. Is having drivers in the kernel tree, in staging, better for getting them up to mainstream quality than waiting to bring them in until they’re cleaned up?

LT: Well, the people involved (like Greg) do seem to feel it’s a success, in that it does help get drivers into better shape.

And I have to say, I’ve personally hit a few machines where they had devices in them that didn’t have good drivers, and the staging tree had an ugly one that worked, so I was happy.

So it saves people from at least a few of the incredibly annoying out-of-tree development efforts. When a driver is out-of-tree, it’s not just that you have to fetch it separately, you have to find it first, and then it’s likely a patch against some three-month-old kernel and hasn’t been updated for the trivial interface changes in the meantime, yadda-yadda-yadda.

It’s been working from what I can tell. Do I wish we just had better drivers to begin with? Yes, along with a mountain of gold. It’s not an optimal situation, but it’s better than the alternatives.

LM: The Linux approach to fixing security-related bugs seems to be just fix them in the mainstream kernel, and if a distributor needs to put out an advisory for their vendor kernel, they do. Are users getting a more or less secure kernel that way than if the upstream kernel participated in what you called the “security circus?”

LT: Hey, I’m biased. I think it’s much better to be open and get the advantages of that (which very much includes “faster reaction times” both because it makes people more aware of things, and because that way the information can much more easily reach the right people).

And it seems to be working. The kernel is doing pretty well security-wise.

That said, anybody who really wants more security should simply try to depend on multiple layers. I think one of the biggest advantages of various virtual machine environments (be they Java or Dalvik or JavaScript or whatever) is the security layering aspects of them – they may or may not be “secure”, but they add a layer of indirection that requires more effort to overcome.

So I think we’re doing pretty well, and I obviously personally think that the Linux kernel disdain (at least from some of us ;) for that “security theater” with all the drama is a good thing and is working. But I would always suggest that regardless of how secure you think your platform is, you should always aim to have multiple layers of security there. Anybody who believes in “absolute security” at any level is just silly and stupid.
