As a code-execution platform, Microsoft .NET is a bit more complicated than what has come before it. The support for multiple source languages and (in theory, at least) multiple platforms required the addition of an intermediate layer between the traditional two levels of code: source and compiled native code. This additional layer lends the .NET platform extra flexibility but also increases the complexity of the system and introduces an array of new application deployment options. In this article, I’ll take a crack at explaining the process of code compilation in .NET and the additional compilation options available to you, the .NET developer.
The .NET difference: MSIL
Under Microsoft .NET, applications are created using one or more high-level programming languages, such as VB.NET, C#, and COBOL .NET. Each .NET-compliant language goes through an initial compilation step that takes it from source-level code to the lowest common denominator language for .NET, Microsoft Intermediate Language (MSIL). MSIL is itself a full-fledged, object-aware language, and it’s possible to build applications using nothing but MSIL. For a whirlwind tour of MSIL, see “Check under the MSIL hood to see how the CLR is running.” An application stays in MSIL form until it’s executed, at which time it is just-in-time (JIT) compiled into native code. Figure A illustrates this process.
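To make the source-to-MSIL step concrete, here’s a trivial C# method with, in comments, roughly the IL a language compiler emits for it. The class and method names are invented for the example, and the IL listing is approximate; the exact output you’d see from a disassembler such as ildasm.exe varies by compiler version and settings.

```csharp
// A trivial C# method, with an approximate MSIL rendering in comments.
// MathHelper is a made-up name for illustration only.
public class MathHelper
{
    public static int Add(int a, int b)
    {
        return a + b;
    }
    // Corresponding MSIL, approximately:
    //   .method public static int32 Add(int32 a, int32 b)
    //   {
    //     ldarg.0      // push the first argument onto the stack
    //     ldarg.1      // push the second argument
    //     add          // pop both, push their sum
    //     ret          // return the value on top of the stack
    //   }
}
```

Whichever .NET language produced it, IL like this is what actually ships in the assembly; the native instructions don’t exist until the JITer runs.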
Figure A: .NET’s compilation process, from source to native instructions
JIT compilation occurs at the assembly level whenever an assembly is first loaded. (For more on assemblies, see “Introducing the assembly—a cure for ‘DLL Hell’?”.) When a reference to an object is first encountered, the JITer loads a stub for each method that matches that method’s declaration. The first time a method is invoked, the IL for it is compiled and the stub is replaced with the address of the method’s compiled code. This happens once per method, and the resulting native code is cached in memory, so subsequent calls to that method during the same session run the compiled version directly. Obviously, this execution system carries more overhead than a traditional compiled language, but not as much as you’d think.
That should clear up one common misconception: that .NET applications are interpreted. Another common misconception is that the JIT compiled code is stored on disk and reused for subsequent executions of the same application. While it’s possible to do so, as you’ll see shortly, this is not the default arrangement. The IL code for an application is recompiled into native code each time that application is run.
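You can observe this first-invocation cost yourself by timing a method’s first call against a later one. The sketch below is a rough measurement, not a rigorous benchmark; the names are invented, and the absolute numbers will vary by machine. The point is only that the first call includes compiling the method’s IL, while the second runs the already-compiled native code.

```csharp
using System;
using System.Diagnostics;

// Rough demonstration of per-method JIT cost: the first call to SumTo
// triggers compilation of its IL; the second call reuses the cached
// native code and should be measurably faster.
public class JitDemo
{
    public static long SumTo(int n)
    {
        long total = 0;
        for (int i = 1; i <= n; i++)
        {
            total += i;
        }
        return total;
    }

    public static void Main()
    {
        var sw = Stopwatch.StartNew();
        SumTo(1000);                 // first call: stub triggers JIT compilation
        sw.Stop();
        Console.WriteLine($"First call:  {sw.Elapsed.TotalMilliseconds} ms");

        sw.Restart();
        SumTo(1000);                 // second call: runs cached native code
        sw.Stop();
        Console.WriteLine($"Second call: {sw.Elapsed.TotalMilliseconds} ms");
    }
}
```

Run it a few times; the gap between the two timings is the JIT overhead the article is describing, paid once per method per session.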
A tale of two compilers
There are, in actuality, two different flavors of JIT compiler (the economy compiler and the normal compiler), and they are not created equal. The economy JITer represents the bare minimum functionality needed to run a .NET application: it directly replaces each MSIL instruction with equivalent native code, doing no optimization and thereby consuming fewer resources itself. It’s meant for use on platforms where memory is at a premium.
On the other hand, the normal JITer, which is the default runtime configuration, can perform quite a few on-the-fly optimizations to the code it produces. This gives .NET an advantage over a traditional precompiled language, which can’t make anything but fairly gross assumptions about the platform its emitted code will be run on. The JITer can adjust to the exact current runtime situation, allowing it to do some things that precompiled languages cannot:
- Utilize and allocate CPU registers more efficiently
- Perform low-level code optimizations when appropriate, such as constant folding, copy propagation, elimination of range checking, elimination of common subexpressions, and method inlining
- Utilize memory more efficiently by monitoring the current demand for physical and virtual memory during execution
- Take advantage of the exact processor model in use by emitting instructions specifically for it
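Two of the optimizations listed above can be sketched in ordinary C#. The names here are invented for the example, and whether these optimizations are actually applied in a given run depends on the runtime version and build settings; the code just shows the kind of pattern the compiler and JITer can exploit.

```csharp
// Candidates for two optimizations mentioned above. Names are
// illustrative only.
public class OptimizationExamples
{
    public static int SecondsPerDay()
    {
        // Constant folding: 60 * 60 * 24 involves only constants, so it
        // can be computed once at compile time and replaced with 86400.
        return 60 * 60 * 24;
    }

    public static int Square(int x)
    {
        // Method inlining: a small body like this can be substituted
        // directly at the call site, eliminating the call overhead.
        return x * x;
    }

    public static void Main()
    {
        System.Console.WriteLine(OptimizationExamples.SecondsPerDay()); // 86400
        System.Console.WriteLine(OptimizationExamples.Square(12));      // 144
    }
}
```

Neither optimization changes the result; they only reduce the work done at run time, which is exactly the sort of win the JITer can deliver without the programmer’s involvement.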
The .NET result, if you’ll pardon the pun, is that the extra overhead required for the JITer doesn’t exact as much of a performance penalty as you’d expect.
An option for the speed demon in you
Okay, so MSIL is JIT-compiled every time an application is started. It’s common sense, then, to suppose that initial startup times, as well as the first use of noncore functionality, could result in slower-than-optimal performance. What can you do to minimize this hit?
Microsoft provides what is known by the somewhat redundant name of a Pre-JIT compiler (otherwise known as the Native Image Generator, hence the name Ngen.exe). On the surface, at least, it offers a remedy for any performance problems. The Pre-JIT compiler is meant to be invoked before runtime, such as at install time, and it compiles all the MSIL in an assembly into native code. This native code is then stored in a special part of the Global Assembly Cache for later use, bypassing the JIT compilation process altogether.
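Using the tool looks something like the following, run from a command prompt with the .NET SDK tools on the path. MyApp.exe is a placeholder assembly name, and the exact command syntax differs between runtime versions, so check the documentation for the version you’re targeting.

```shell
# Generate and install a native image for an assembly (e.g., at install time):
ngen install MyApp.exe

# List the native images currently in the cache:
ngen display

# Remove the native image if profiling shows it isn't a win:
ngen uninstall MyApp.exe
```

An installer typically runs the first command as a post-install step, which is when the assembly’s dependencies and target machine are finally known.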
At first glance, this sure sounds like a winner, particularly for client-side code. But recall that the normal JIT performs lots of on-the-fly optimizations while compiling MSIL. Many of these optimizations, particularly those involving the use of registers and memory, are driven by the current demands made on the system. Compiling assemblies in one large batch prevents these optimizations from being made and therefore may actually result in slower final code. Before you go this route, Microsoft recommends that you do your homework and profile both JIT and Ngen versions of the same assembly on the target platform under conditions approximating those found under normal use.
It’s true that unless you’re a Java disciple, .NET’s runtime behavior and compilers will be different from anything you’ve ever seen before. But they’re not necessarily mysterious. I hope I’ve been able to clear up any questions you have about them.