
Linux in the Post-PC World

This year, IBM, Compaq, Dell, HP, SGI, and numerous other companies have announced they will be shipping and supporting hardware running the Linux operating system in addition to (or instead of) Microsoft Windows. We have also seen major software vendors such as Oracle, Computer Associates, SAP, Corel and others announce their intentions to make their applications available for Linux. It is no surprise that a fervent Linux vs. Windows discussion is emerging. While this certainly makes for interesting headlines, it is myopic when contrasted with the more significant changes at hand.

The traditional Linux vs. Windows debate is narrow for two reasons. First, Linux and Windows
are just operating systems; by themselves, they don’t do anything useful. Second, it frames the
debate in terms of PC operating systems only. As I’ll argue in this column, we are in the midst of
a much more significant discontinuity in computing as we enter the era of post-PC computing, and we
need to start thinking about things in a post-PC way.

The traditional client-server computing architecture of the past decade is changing before our
very eyes. Driving this metamorphosis is the Internet — an increasingly pervasive worldwide network
that can be accessed cheaply by the mass market. While the Internet has come a long way in a short
while, it is really still in its infancy. Most of the world accesses the Internet via narrowband
technologies, such as sub-100-Kbps modems that are barely fast enough to transmit plain text and
simple graphics. As broadband systems become widely deployed, we will see data transmission rates in
excess of 10 Mbps over a variety of technologies. Broadband satellite, third-generation wireless, and
fiber to the home are just a few of the high-speed options that exist. With this kind of bandwidth,
full-motion video will be readily transmitted over the Internet. We will no longer have to wait for
the graphics on a Web page to build slowly as we tap our pencils in anticipation.

Most of the world is not yet connected to this pervasive network. A need for low-cost,
high-performance Web servers is emerging as India, China, Eastern Europe, and other parts of the
world begin to get “wired” to the Internet. Linux is proving to be an optimal solution in this
environment, and the Apache/Linux combination has already overtaken NT and other operating systems
to become the market-share leader for Web servers.

On the client side, the combination of Moore’s Law and Metcalfe’s Law (the value of a network
increases as the square of the number of users on the network) is dramatically changing the way that
we will access this pervasive network and manage information. To date, the predominant form of
Internet access has been via the PC. In the near future, we will all have access to the Internet via
myriad types of devices such as Internet-capable phones (you should see the new Ericsson
Internet-ready phone that was announced at the German trade show CeBIT), auto navigation systems,
hand-held computers (such as the PalmPilot), and thousands of devices that have yet to be invented.
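
To make the parenthetical concrete, here is the back-of-the-envelope arithmetic (my own
illustration, not a figure from the article): if the value of a network with n users grows roughly
as the square of n, then doubling the user base quadruples the value of the network.

    V(n) \propto n^2, \qquad \frac{V(2n)}{V(n)} = \frac{(2n)^2}{n^2} = 4

That is why every new class of Internet-capable device makes the network dramatically, not just
incrementally, more valuable.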

So, the interesting debate is not really Windows vs. Linux, but rather how this changing
environment affects what users really care about: applications. The meaningful differences between
Linux and Windows, for example, are the applications that are available, and the devices upon which
those applications can run. (There is a third difference, which is the quality and comparative
reliability of the two systems. I will grant that Linux is way ahead in this regard, but I will
argue that there’s little benefit to having a perfectly reliable system that doesn’t run the
application you want today or the one that you will want tomorrow.) In the world of PC computing as
we know it today, everybody focuses on the applications that run on a given PC operating system. In
the post-PC computing world, they will also care about what devices those applications can run on.

There are tens of thousands of Open Source applications available that any developer, not just
the original author, is free to modify to suit his own needs. Just as important, the availability of
these applications in source code form means that they can be combined in rich, new, and interesting
ways.

Do all users want to be responsible for downloading, building, installing, and maintaining every
application they’ll ever use? Of course not, which is why Linux distributions have become so
popular: they hide the details of Open Source while offering the benefits and convenience of
pre-packaged software. But the real innovation is happening at the source code level, where the
boundaries and definitions of applications can be extended beyond what can merely be packaged at any
one point in time.

At LinuxWorld in March, the Free Software Foundation announced the availability of GNOME 1.0.
GNOME represents the creative combination of dozens of Open Source packages and libraries that were
not originally intended to be a part of a single package — they were designed to be part of any
arbitrary program. Some of these components, like guile, glib, gtk, and zlib, were developed and in
use well before the start of the GNOME project, while others were undoubtedly inspired or
accelerated by it. Portability is part of the GNOME manifesto, which helps ensure its availability
on existing and as-yet-to-be-invented platforms. In other words, GNOME makes it possible for
applications to share a common infrastructure on computer desktops, and to migrate that
infrastructure to whatever platforms might develop in the future.
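
As a rough sketch of what sharing this common infrastructure means in practice, consider a trivial
program built on gtk, one of the libraries mentioned above. (The program and its file name are
hypothetical, not part of GNOME; it assumes a GTK+ 1.x installation whose gtk-config script
supplies the compiler and linker flags.)

    /* hello.c -- a minimal, hypothetical GTK+ 1.x sketch.
     * Build with something like:  gcc hello.c `gtk-config --cflags --libs`
     */
    #include <gtk/gtk.h>

    /* quit the event loop when the window is destroyed */
    static void destroy_cb(GtkWidget *widget, gpointer data)
    {
        gtk_main_quit();
    }

    int main(int argc, char *argv[])
    {
        GtkWidget *window, *label;

        gtk_init(&argc, &argv);                       /* shared toolkit setup */

        window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(window), "Hello, GNOME");

        label = gtk_label_new("Built on shared Open Source infrastructure");
        gtk_container_add(GTK_CONTAINER(window), label);

        gtk_signal_connect(GTK_OBJECT(window), "destroy",
                           GTK_SIGNAL_FUNC(destroy_cb), NULL);

        gtk_widget_show_all(window);
        gtk_main();                                   /* hand control to the toolkit */
        return 0;
    }

Every application written this way inherits the same widgets, the same look and feel, and the
portability work that the library maintainers have already done.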

If we’re going to see a world of truly pervasive computing, it has to be possible to build
applications for devices that cost dollars, not kilo-dollars, which in turn means that we need a way
to build applications that can be scaled down to specific cost points. Doing this will require
control of the application/platform equation. This means that in addition to having the freedom to
choose the most appropriate microprocessor for a job, you also need source code. You need
configurability and portability in your source code, you need development tools for your source
code, and you need an operating system that can run your application without breaking your budget.
All of these are provided for you by the Open Source community.
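
What does configurability in your source code look like? Here is a trivial, hypothetical sketch:
the same C source builds for a desktop machine or for a memory-constrained target simply by
flipping a compile-time switch, with GCC (native or cross) doing the rest.

    /* scale.c -- hypothetical example of compile-time scaling.
     * Desktop build:       gcc -o scale scale.c
     * Small-target build:  gcc -DSMALL_TARGET -Os -o scale scale.c
     *                      (or the same flags passed to a GCC cross compiler)
     */
    #include <stdio.h>

    #ifdef SMALL_TARGET
    #define LOG(msg)  ((void)0)                 /* logging compiled out to save space */
    #else
    #define LOG(msg)  fprintf(stderr, "log: %s\n", (msg))
    #endif

    int main(void)
    {
        LOG("starting up");
        /* the application logic itself is shared by both builds */
        printf("hello from a configurable build\n");
        return 0;
    }

The point is not the macro itself but the fact that, with source in hand, you decide which features
a two-dollar device has to carry and which it can leave behind.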

In the past two years, we have seen Linux take root in embedded Internet servers. Built on a
variety of microprocessor architectures, these machines have a significant price/performance
advantage over equivalent PCs, but they are still too large to be embedded in a cellular phone and
too expensive to be integrated into a set-top box.

This proliferation of Linux becomes even more exciting when one considers the vast range of
additional technology that is being developed by the Open Source community. While most of this
technology will likely first run on a Linux box, these applications are not limited to running on
one platform.

We can see that by implementing common infrastructural components for a sane common
architecture, the Open Source community enjoys the benefits of increased code reuse, feature and
functional compatibility, and enhanced quality. It would also be foolish to ignore another terrific
benefit of this standard infrastructure and widespread code reuse: decreased time to market. The
more Open Source programs we create, the more we can reuse, and the faster we can leverage the
existing code to develop new applications.
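
As a small, admittedly contrived illustration of that reuse, the following sketch leans on zlib,
one of the libraries named earlier as a GNOME building block, rather than reinventing compression.
(The file name and strings are hypothetical; link with -lz.)

    /* reuse.c -- hypothetical sketch of reusing an existing Open Source library */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *text = "Open Source components are built to be reused.";
        unsigned char packed[256];
        uLongf packed_len = sizeof(packed);

        /* compress() comes straight from zlib; nobody needs to rewrite it */
        if (compress(packed, &packed_len,
                     (const Bytef *)text, (uLong)(strlen(text) + 1)) != Z_OK) {
            fprintf(stderr, "compression failed\n");
            return 1;
        }
        printf("%lu bytes in, %lu bytes out\n",
               (unsigned long)(strlen(text) + 1),
               (unsigned long)packed_len);
        return 0;
    }

Every program that links against zlib this way inherits the maintainers' bug fixes and performance
work for free, which is exactly the time-to-market benefit described above.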

Industry analysts are fond of saying “applications drive the platform,” and in some tautological
sense this is true. But we in the Open Source community know that it’s a lot more complicated than
that. We know that Moore’s Law drives the silicon, Metcalfe’s Law drives the value of the network,
silicon drives compiler technology (and vice versa), availability of GCC drives Linux ports (and
vice versa), Linux drives Open Source development, the Open Source community drives new application
infrastructure, and new application infrastructure enables the development of new applications that
run on new platforms. While the application may drive the platform, the pervasive network has made
it much more organic and interesting than that.

If there is one message I’d like everybody to take away from this article, it is this: When
writing your applications, don’t write just for Linux! Write general, portable software that can be
optimized, reconfigured, and used in multiple ways, and that provides meaningful functionality with
minimal assumptions. If you do this, your software will not only find a home in the Linux
applications you originally envisioned, but it may also surface in some cool new post-PC device
that everybody’s buzzing about because it delivers the functionality people want at a price
everyone can afford.



Alex Daly is President & CEO of Cygnus Solutions. Before that, he held senior sales and
marketing positions at C-Cube MicroSystems, Inc. He can be reached at
adaly@cygnus.com.
