In the same way that GCC supports programming in Java, it will need to support Microsoft's .NET initiative. What work needs to be done to make this happen? And does it present a threat to Free Software?


The February 2001 issue of Linux Magazine presented an article entitled “Embrace and Extend: What Can Linux Learn from Microsoft’s .NET?” In that piece, Jon Udell put forth the notion that Microsoft’s .NET initiative is built upon a number of ideas with substantial technical merit and argued that GNU/Linux users ought to consider embracing and extending the platform.

However, while that article laid out many of the reasons why .NET might be interesting to GNU/Linux aficionados, it did not spend much time on the technical aspects of how supporting .NET on GNU/Linux would work. Because this is a topic worthy of further discussion, we will take an in-depth look at what it will take to make the GNU Compiler Collection (GCC) support .NET.

The .NET initiative is Microsoft’s bid to permit the development of components, written in a wide variety of programming languages, that can execute on a wide variety of operating systems and hardware platforms. Put more simply, Microsoft thinks you’ll be able to write Python code on a SPARC Solaris workstation, Visual Basic code on a Windows NT machine — and seamlessly link and execute the two together as one program on an embedded system. You can even add native libraries provided by the embedded system vendor. Furthermore, the resulting program will run fast, because an optimizing compiler designed especially for the embedded system will compile the code before it is executed.

With .NET, Microsoft hopes to make Java’s write-once-run-anywhere mantra look as passé as the horse and buggy, but not quite as charming. After all, the Java Virtual Machine can only really execute Java programs, and the predominant model for the execution of Java code is interpretation, which is much slower than the compiled approach for which Microsoft is arguing. Microsoft’s slogan might be, “write once, link with anything, run anywhere quickly.”

There’s no question that the technology will be powerful — if it works and is widely adopted. At this point there is no way to be sure it will work, but there is evidence suggesting that it will. Microsoft employs some of the best software engineers on the planet, and many of them are hard at work on .NET. The adoption question is also unclear, but Microsoft has a long history of convincing people to use its products.

From the point of view of GNU/Linux, the most important development is the relative openness of the .NET specifications. Microsoft seems to be serious about presenting .NET as an open standard. For example, specifications for the .NET intermediate language, file formats, and run-time library are already available on the Web. Microsoft seems to be encouraging the development of alternative implementations of the .NET framework for operating systems other than Windows and the development of third-party .NET programming tools for Windows.

Modifying GCC to Support .NET

When we talk about GCC supporting .NET, there are several different things we could mean. We could mean that GCC would emit .NET IL (Intermediate Language) for any of the source languages that it can process. Then you could take your C++ program, compile it with G++ and obtain .NET IL. The resulting program could then be run on any system that supports .NET.

Another thing that we might mean is that GCC could process the .NET IL as input and emit machine code. In this case, you could take .NET IL generated by any .NET compiler and compile it to run on any system supported by GCC. For example, you might use Microsoft’s C# compiler to generate .NET IL and then use GCC to transform the .NET IL into x86 code that could run under GNU/Linux.

The last thing we might mean is that GCC could accept the C# source language proposed by Microsoft. This language is specifically designed to target .NET. C# is Microsoft’s language of choice for .NET even though .NET specifically supports multiple source languages. In this alternative, GCC could accept a C# program as input and, say, compile it to run under Solaris.

The first alternative (i.e., generating .NET IL) corresponds to writing a new “back end” for GCC. In many ways, .NET IL isn’t very different from the assembly code used for any other processor, so generating .NET IL is analogous to porting GCC to the latest microprocessor. The other two alternatives correspond to writing a new “front end” for GCC. In both cases, the work required is the addition of a module capable of parsing an input file (or files) containing program source code and then generating appropriate object code. In the case of C#, it is easy to see that this is so. For .NET IL the picture is less immediately clear, but the situation truly is similar; compiled .NET IL is really just another programming language. The compiler accepts this language as input and emits conventional machine code.

Generating .NET IL

The .NET IL is a machine language, just as is the language accepted by an x86, SPARC or MIPS processor. In particular, it is a binary format consisting of a series of low-level instructions. Before execution, the .NET IL is compiled to native code; the low-level instructions provided in the IL are transformed into instructions appropriate for the hardware on which the .NET program will execute. Although the .NET IL could be interpreted, it never is. The Common Language Runtime, or CLR, which manages the execution of .NET programs, always compiles it first.

Like most machine languages, the IL has an assembly language format. Using Microsoft’s ilasm program, that assembly language can be transformed into the object file format used by the CLR. Fortunately, this matches the way GCC already works: GCC always emits assembly language and then relies on an assembler to transform the assembly language into object code.

One interesting aspect of .NET IL is that it must be “verifiable,” in the same way that Java bytecode is verified before execution. In particular, the CLR verifies that programs will not access protected resources, including memory, to which they are not supposed to have access. There are, however, some correct programs that the verifier cannot prove correct. (It is mathematically impossible to build a verifier that allows all correct programs and does not allow any incorrect programs.) Thus, when generating code, the compiler must worry not only about the correctness of the generated code, but also about its verifiability.
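
To make the verifiability constraint concrete, consider the small C# fragment below (our own illustration, not drawn from the .NET documentation). The program is perfectly correct, but because it uses unsafe pointer arithmetic, the IL generated for it cannot be proven safe by the verifier.

// A correct program the verifier cannot prove safe. It must be compiled
// with unsafe code enabled, and the resulting IL is unverifiable even
// though the pointer never strays outside the array.
unsafe class Unverifiable
{
    static void Main()
    {
        int[] data = { 1, 2, 3 };
        fixed (int* p = data)
        {
            System.Console.WriteLine(*p + *(p + 1));   // prints 3
        }
    }
}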

Unlike most machine languages, but like most high-level languages, the IL has an explicit notion of “type.” One of the key operations performed by the verifier is ensuring that programs do not attempt to assign values of one type to variables of an incompatible type, just as a C compiler checks that a value of type int cannot be assigned to a variable of type struct S. Such type-unsafe code would provide programs with ways of accessing protected resources and crashing the system. Thus, when generating IL, GCC must also emit information about the types used in the source program.
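
As a sketch of the kind of check involved (the type and variable names are our own), the following C# fragment compiles cleanly as written; uncommenting the last assignment produces a type error, and hand-written IL attempting the same assignment would fail verification for the same reason.

// Assigning an int to an int field is fine; assigning an int to a value
// of type S is rejected by the type checker.
struct S
{
    public int x;
}

class TypeCheck
{
    static void Main()
    {
        S s;
        int i = 42;
        s.x = i;        // allowed: int assigned to an int field
        // s = i;       // rejected: int assigned to a value of type S
        System.Console.WriteLine(s.x);
    }
}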

Microsoft has defined a set of conventions that allow code written in one programming language to be linked with code written in another programming language. In order to facilitate this inter-language linking, the compiler will have to adhere to these conventions. In general, these conventions are designed to support object-oriented programming languages, such as C++, Java and Python. However, they are also proving suitable for traditional imperative languages like Pascal and C.

The .NET IL, like many bytecode languages, uses a “stack-machine” model. In this model, the operands for an instruction are pushed on the stack. The instruction then pops the arguments it needs from the top of the stack, performs the requisite computation and leaves the result atop the stack. For example, to add 2 and 7, the compiler would emit instructions to push 2, to push 7 and then to add. After the add instruction executes, the number 9 is left on the stack, and 2 and 7 have been removed.

GCC translates high-level languages into a low-level representation called the Register Transfer Language (RTL). Most optimizations are performed using this RTL. RTL is, as its name implies, a register-based representation. A typical RTL instruction might express that Register 13 should be added to Register 29, and the result stored in Register 18. GCC assumes for much of the compilation that there are infinitely many registers, and then, at the last minute, performs “register allocation” to actually make use of the registers available on the native hardware.

It would be possible to use this same model for a stack machine. You would simply allocate one local variable for each register you would otherwise need. Then, to emit code corresponding to, say, the addition of two registers, you would push the two corresponding locals, perform the addition and pop the result from the top of the stack into another local variable.

Unfortunately, when you consider a complex expression like:

q = x * y + z / a

you find that the register-oriented model is not particularly attractive given the stack-machine nature of the abstract machine. GCC would likely create a number of temporary registers to hold the results of intermediate calculations like x * y, but on a stack machine you could simply do:

push x
push y
mul
push z
push a
div
add
pop q

This sequence would be much more efficient than one in which lots of copies were done to and from local variables. Unfortunately, getting GCC to emit this sequence would require a fair amount of effort. Arguably, however, inefficient IL is not too big a problem, as the IL will always be compiled again. Therefore, even if the IL generated is somewhat inefficient, it is likely that the eventual machine code will be reasonably efficient.

The IL does contain some facilities for encoding particular optimizations. For example, the IL directly supports “tail calls.” A tail call can be used when the last thing a function f does is call another function g. In that case, instead of calling g, waiting for it to return, and then returning to its caller, f can simply jump to the beginning of the code for g. Then, when g returns, it returns directly to the caller of f, saving a return instruction and allowing f’s stack frame to be reused. An optimizing compiler should endeavor to take advantage of these facilities.
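
As an illustration (the function names are invented), consider a C# routine whose final action is a call to another routine. A compiler generating IL could mark that call using the IL’s tail-call facility rather than emitting an ordinary call followed by a separate return.

// f's final action is the call to g, so nothing of f's remains to be done
// when g returns; a compiler targeting the IL could flag this call as a
// tail call instead of emitting a call followed by a separate return.
class TailCallExample
{
    static int g(int n)
    {
        return n * 2;
    }

    static int f(int n)
    {
        int m = n + 1;
        return g(m);    // tail position: f simply returns whatever g returns
    }

    static void Main()
    {
        System.Console.WriteLine(f(20));   // prints 42
    }
}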

Compiling .NET IL

Transforming .NET IL into native code presents an additional set of problems. The IL is likely to be presented in its binary format rather than in the more convenient assembly language format. Therefore, the first step will be to transform the .NET IL into a format the compiler can readily process.

Although processing .NET IL is, as explained above, similar to processing an ordinary high-level source language, it is in some ways a bit simpler. In particular, the usual syntactic and semantic analysis, including parsing and the issuing of error messages, is not required. Ultimately, it is not the responsibility of the IL compiler to perform type-checking or other similar functions.

The problems in going from a stack-machine representation to traditional register-based code are not as difficult as those presented in the opposite direction. In particular, GCC can simply allocate a new register for any temporary values placed on the stack.

In general, much of the work inherent in translating .NET IL into native code will involve the processing of the type-information and other data encoded in the metadata associated with an IL object file. Because the IL is capable of supporting a wide variety of languages, many of which GCC is not able to process in source form, the IL can represent constructs that GCC is not accustomed to processing.

Another challenge is that the code generated by GCC should ideally be compatible with code compiled directly for the target system. For example, it is desirable that C++ code compiled into .NET IL and then into GNU/Linux object code be compatible with code compiled directly with G++. This implies that the ABI used by .NET code should match that used by C++, with whatever additional extensions are required to handle the full .NET IL. In order to achieve this compatibility, many of the C++-specific parts of the compiler should be generalized and moved into the language-independent portions of the compiler, thus facilitating the sharing of existing code.

Compiling C#

Compiling the C# programming language is not inherently necessary for the support of .NET on GNU/Linux. After all, C# programs could always be compiled to IL on another system (such as Windows NT) and then transformed to native code on GNU/Linux. However, if C# catches on as a development language, then support for it will at some point become a requirement.

The construction of a C# front end is a very substantial task, but it will not require the level of creativity involved in either the generation of IL or the compilation of IL into native code. C# is, after all, just another object-oriented programming language. It is not, however, a small language. In fact, it is in some ways more complex than C++, which is widely considered to be one of the most difficult languages to compile correctly.

One feature in C# not found in C++ is the notion of a “property,” which is essentially a method that can be treated like a field. In particular, when the value of a property is requested, code is executed to compute its value. Similarly, when the property’s value is modified, code is executed to store the assigned value. However, users interact with properties using the same syntax used for ordinary fields. Another feature of C# (but not C++) is a variation on the notion of a function pointer called a “delegate.” A delegate consists of an object and a method bound together. The delegate can then be called with appropriate arguments; the caller does not need to know which method will be invoked on which object.
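
To make these two features concrete, here is a brief illustrative sketch; the class and delegate names are our own.

using System;

// A delegate type: an instance binds a method (and, for instance methods,
// an object) that can later be invoked without the caller knowing the target.
delegate void Logger(string message);

class Thermostat
{
    private int temperature;

    // A property: it is used with field syntax, but code runs on every
    // read and write of the value.
    public int Temperature
    {
        get { return temperature; }
        set { temperature = value; }
    }
}

class Demo
{
    static void Main()
    {
        Thermostat t = new Thermostat();
        t.Temperature = 21;                    // looks like a field store; runs the setter
        Console.WriteLine(t.Temperature);      // looks like a field load; runs the getter

        Logger log = new Logger(Console.WriteLine);
        log("invoked through a delegate");     // the target was chosen when the delegate was built
    }
}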

Run-Time Environment

The .NET framework provides a substantial run-time library akin to those that accompany C++, Java, Python and other languages. This library includes the usual container classes, support for networking, file input and output, and so forth. The run-time environment is also responsible for dynamically loading new classes, performing on-the-fly verification of code as it is loaded and ensuring security.
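
For a flavor of what the library provides, here is a small illustrative C# program that uses a container class, the file I/O classes and console output; the file name is simply a placeholder.

using System;
using System.Collections;
using System.IO;

class LibraryDemo
{
    static void Main()
    {
        // A container class from the run-time library.
        ArrayList lines = new ArrayList();

        // File input via the run-time library's I/O classes.
        using (StreamReader reader = File.OpenText("input.txt"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                lines.Add(line);
            }
        }

        Console.WriteLine("read {0} lines", lines.Count);
    }
}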

Although the library has nothing to do with the compiler (strictly speaking), the fact is that compiling .NET IL into native GNU/Linux code won’t do you much good if there’s no run-time library support. Therefore, the development of a GNU/Linux version of the run-time library is imperative. In and of itself, this portion of the project is very substantial and will require painstaking attention to detail. It is possible that Microsoft will make its run-time library source code available — in the interest of making .NET as ubiquitous as possible — but as far as we know, there has yet to be an announcement to that effect.

Threat to Free Software?

At various times, people have suggested making it possible for GCC to emit some representation of its internal state so that other tools could leverage the processing done by GCC. If GCC could emit a representation of the program, you could write separate analysis tools that examine this representation.

A simple example is a program that checks if the input program conforms to certain stylistic conventions. If all function names are supposed to start with an upper-case letter, the analysis program could check the representation emitted by GCC and issue error messages if this constraint were violated. Because GCC would emit some representation of its state into a file, the analysis program would not have to be part of the actual compiler itself. This would make it easier for developers who don’t have experience working with GCC itself to write these kinds of analysis tools.
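
As a toy sketch only, suppose, purely for illustration, that such a dump existed and contained one function name per line; the external checker could then be as simple as the following short program (shown here in C#, though any language would do).

using System;
using System.IO;

class StyleCheck
{
    static void Main()
    {
        // Assumes a hypothetical dump file with one function name per line.
        foreach (string name in File.ReadAllLines("functions.txt"))
        {
            if (name.Length > 0 && !Char.IsUpper(name[0]))
                Console.WriteLine("style error: '{0}' should start with an upper-case letter", name);
        }
    }
}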

One reason GCC has never been modified to provide such a facility is that the Free Software Foundation is specifically against such a modification. The FSF fears that the external analysis programs might not be free software; they might be proprietary programs.

In particular, the Free Software Foundation fears that a company might produce proprietary optimization programs capable of reading in the state emitted by GCC, modifying it in some way, and writing it back out. The resulting output could then be read back in by GCC, and machine code could be emitted.

The scenario would play out as follows: Suppose Ty Coon writes a new register allocation pass for GCC. Hoping to make lots of money off his work, Ty writes the code in such a way as to process the state emitted by GCC, perform register allocation, and then emit the modified state. Because Ty’s program is not linked with GCC, the GNU General Public License does not apply; Ty can distribute his program without distributing his source code.

In this way, GCC would actually assist people who build proprietary software; such developers could leverage the already powerful capabilities of GCC to produce proprietary code more easily than if GCC were not available. This result runs contrary to the Free Software Foundation’s goal of giving free software developers an advantage, relative to proprietary developers, in terms of infrastructure on which to draw.

Supporting .NET in GCC might, however, lead to exactly the situation that the Free Software Foundation fears. In particular, if GCC can both emit .NET IL and turn .NET IL into object code, then .NET IL could be precisely the internal state described above! Our friend Ty could then write a program that takes the .NET IL emitted by GCC, optimizes it in some way and then emits modified .NET IL. (The example of register allocation is not a good one in the case of .NET IL, because the IL does not really have registers, but higher-level optimizations would still apply.)

Whether or not the benefits of supporting .NET will outweigh the disadvantages of enabling Ty’s use of GCC is a matter on which the Free Software Foundation has not, to the best of my knowledge, made a decision.

If support for .NET becomes vital to the success of GNU/Linux, then I believe that the FSF will decide that the benefits outweigh the costs. Alternatively, the license that governs the use of GCC could be modified to specifically forbid the kind of activity that is described above.

Too Big to Ignore

It is not yet clear if .NET will succeed to the degree that Microsoft hopes it will. However, it would be surprising to see .NET fail completely. Microsoft is a major player, and this is a major initiative. The question is not if Java is better than C#, or if the .NET runtime library is more full-featured than Python’s, or if K&R C is the only true programming language. The question is whether or not GNU/Linux can be a viable operating system without supporting (at the very least) the execution of .NET programs.

If .NET is successful at all, then GNU/Linux will likely need to support it. There will be Web pages using .NET to display data, and Web browsers unable to process .NET code will be unable to view them. There will be useful programs and libraries written as .NET components, and an operating system that does not support .NET will not be able to use them. In the same way that GNU/Linux must support Java, it must also support .NET.

For this reason, it is important that GCC be able to compile .NET IL into machine code. The best approach to developing a production-quality IL compiler in a short period of time is a partnership between commercial interests and the always-strong volunteer GNU/Linux community. It is our hope that major vendors with a stake in GNU/Linux (such as HP and IBM) will take an interest in developing this technology. Even Microsoft should be interested in making .NET work on GNU/Linux. The work required is substantial, and the time to start is now.

Mark Mitchell is President and CTO of CodeSourcery and the current Release Manager for GCC. He can be reached at mark@codesourcery.com.
