Software Development

Mono 2.0: .NET goes non-Windows

Justin James wanted to learn what is new in Mono 2.0, so he interviewed Miguel de Icaza, VP of Development Platforms and a founder of Mono, and Joseph Hill, Product Manager at Novell. Find out what is and is not included in the latest release.

 

For some time now, the Novell-sponsored Mono Project has brought .NET to non-Microsoft platforms. Unfortunately, Mono has lagged quite far behind the .NET Framework and CLR. That all changes with the release of Mono 2.0.

Three days before the October 6, 2008 release of Mono 2.0, I had the chance to speak with Miguel de Icaza, VP of Development Platforms and a founder of Mono, and Joseph Hill, Product Manager at Novell.

Mono 2.0 is API complete with regard to .NET 2.0; this means that both Windows Forms applications and ASP.NET applications will work within it. Additionally, many of the new features in .NET 3.5, including C# 3.0 and .NET Language-Integrated Query (LINQ), are included. Miguel told me that implementing C# 3.0 was both fun and challenging, and that getting the lexical closures right was "probably one of the hardest parts to implement" (which is not surprising at all). However, the big frameworks from .NET 3.0, such as Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF), are not included. There is also an implementation of the Parallel Extensions Library; because Microsoft's version is still a Community Technology Preview (CTP), Novell is not shipping its implementation with Mono 2.0, although it is available for download. And because Microsoft open sourced the Dynamic Language Runtime (DLR), IronPython and IronRuby will both work under Mono.
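To give a feel for what that covers, here is a small illustrative C# 3.0 snippet of my own (not code from the Mono team) that combines a lexical closure with a LINQ query; this is exactly the kind of source Mono 2.0's compiler can now handle:

using System;
using System.Linq;

class ClosureDemo
{
    static void Main()
    {
        string[] names = { "Mono", "CLR", "LINQ", "MSIL" };
        int minLength = 4;

        // The lambda generated for this query captures the local
        // variable minLength -- the lexical closure behavior Miguel
        // described as one of the hardest parts to get right.
        var longNames = from n in names
                        where n.Length >= minLength
                        orderby n
                        select n;

        foreach (string name in longNames)
            Console.WriteLine(name);
    }
}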

In terms of using existing code in Mono, no recompile is needed, although a C# compiler is included. Because compiled .NET assemblies are in Microsoft Intermediate Language (MSIL), Mono executes MSIL just as the .NET CLR does. For Web applications, the Mono Project has produced both an Apache plug-in and a FastCGI binding to let Mono hook into your Web/application server. Mono works on a huge number of platforms, including Linux, Solaris (including SPARC), Mac OS X, Windows, BSD, and the Nintendo Wii (yes, you read that right). In the case of BSD, you will need third-party patches to make it work; on all of the other platforms, Mono will run your .NET code right out of the box. A recent version of Linux is needed (at most about three to four years old), since Mono uses some newer kernel features. On the Mac OS X platform, Mono includes Cocoa# for building native OS X applications in .NET. Novell also offers, at no cost, a pre-made VMware virtual machine with SUSE Linux and Mono installed.
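As a minimal example of what "no recompile" means in practice (the file names here are hypothetical), an assembly built by Visual Studio runs as-is, and Mono's bundled compiler can build new code from source:

# Run an existing .NET assembly unchanged:
mono MyApp.exe

# Or compile C# source with Mono's bundled compiler and run the result:
gmcs Hello.cs
mono Hello.exe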

They have also included a migration analysis tool (the Mono Migration Analyzer) that will let you know if you are making calls that will behave differently under Mono. Some .NET calls that access Windows API mechanisms or low-level system calls will still work just fine, but the developer needs to keep in mind that the support is emulated -- it is not actually accessing the underlying mechanism. For example, accessing the registry is possible, but Mono creates a file-backed emulation of the registry, and it can only provide data that your application stored in it to begin with. Something like trying to enumerate the NICs in the PC from registry data will fail, because those keys simply do not exist. You can also send the tool's output to Novell to get feedback on it.
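To make the registry caveat concrete, here is a short, hypothetical C# sketch. The first half works under Mono, because the emulated registry simply hands back what the application stored; the second half comes up empty anywhere but real Windows, because the network adapter class key is populated by the operating system itself:

using System;
using Microsoft.Win32;

class RegistryDemo
{
    static void Main()
    {
        // Works under Mono: the file-backed emulation returns
        // whatever this application previously stored.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyApp"))
        {
            key.SetValue("Greeting", "hello");
        }
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(@"Software\MyApp"))
        {
            Console.WriteLine(key.GetValue("Greeting")); // prints "hello"
        }

        // Fails under Mono (OpenSubKey returns null): Windows itself
        // populates this network adapter class key, and Mono's
        // emulation has no equivalent data to offer.
        RegistryKey nics = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}");
        Console.WriteLine(nics == null ? "NIC keys not found" : "NIC keys found");
    }
}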

From these reports, Miguel guesstimated that 45% of applications work with zero lines of code needing to be changed. About 17% have 1 to 20 statements that need to be changed (typically about a week's worth of effort). Another 20% have 20 to 100 calls to be changed (usually around two to four months' worth of work). The remainder of the applications they see require too many changes to make porting them to Mono practical. Miguel says that performance is, on average, about 80% of that of the .NET CLR: some statements run faster, some run slower. He also mentioned that Web applications often run faster in Mono than they do under the .NET CLR, because much of Mono's slowdown occurs in the GUI portion of the system.

For developers who want an alternative to Microsoft Visual Studio, there is MonoDevelop. A new version of MonoDevelop is slated for release in January 2009; it will deliver improved usability and an enlarged feature set, including support for Visual Studio file formats. This will allow developers on the same team to use both MonoDevelop and Visual Studio, with no conversions between the two needed.

Overall, Mono 2.0 sounds really good. In the near future, I will try it out and let you know what I think.

J.Ja

Disclosure of Justin's industry affiliations: Justin James has a working arrangement with Microsoft to write an article for MSDN Magazine. He also has a contract with Spiceworks to write product buying guides.



About

Justin James is the Lead Architect for Conigent.

38 comments
FXEF

I have mixed feelings about Mono. It may help some developers and end users, but I'm not sure it will benefit open source in the end. It seems to be another way to push .NET as a standard, and we don't need proprietary standards.

ankushnarula

Good article. The Common Language Runtime is the first successful implementation of a cross-language core framework (a la NeXTSTEP), and Mono takes this cross-platform. Miguel de Icaza should get a lot of credit for this, because he's leveling the playing field on platforms alien to Microsoft-based developers. That's a boon to Linux, BSD, Mac OS X, etc. Let me address the two comments. What does this have to do with interpreted languages? If you know much about .NET, Mono, or Java, you should know that these runtimes are designed to optimize code at runtime. Not only that, but when it's necessary, the code can be compiled natively: with HotSpot in Java, or while the application/library is being installed in .NET (see install-time compilation). As for the bloatware comment: you really should use the platforms in complex enterprise environments/projects before you make such judgements. C is great for writing high-performance code that is executed often, or perhaps to initialize a process, but practically speaking there's almost nobody left exclusively using C for enterprise application development. And if I may say so, I believe that if people spent the time learning and optimizing existing code (regardless of the language), they would inevitably get more for their money. Finally, Microsoft has given Novell (and Novell customers) a guarantee that they are protected from any patent claims. So I would take your comment about Mono being illegal with a grain of salt.

Jaqui

The bloatware support has been included by default in most distros for the 2007 and 2008 releases. Until .NET does NOT include tons of bloated code, it will NOT be used by me. Even the Mono version is bloated and unacceptable. Edit to add: and, as far as I'm concerned, all that Mono is, is an illegally reverse-engineered version of MS' intellectual property; as an illegal product, it should not be promoted. [Until MS releases a .NET version for every OS, or the sources for .NET, it is not legally used on any OS but MS OSes.]

Justin James

Are you interested in the new avenues that Mono 2.0 opens up for .Net? Or do you think that the effort to test isn't worth the potentially bigger audience? J.Ja

matthrms

For some reason I am under the impression that the CLR isn't owned by Microsoft. Is this correct?

Tony Hopkinson

Almost by definition; anything else would mean writing fit-for-purpose code for every function. Just look at the number of methods involved in, say, ListBox. Even if you don't JIT the ones you don't explicitly use, you can end up touching routines implicitly. Organising a massive framework for the most streamlined compilation is an exercise in futility. The framework is an all-things-to-all-men, all-the-time solution; there's no sensible way you won't get stuff you don't require. On optimising at the higher level you have a point, but some very inefficient practices, such as reflection, are often encouraged, and as far as I can make out, optimisation is black magic in academia at the moment. People just throw more machine at the problem, which is where we came in.

Jaqui

Patents on software are a violation of patent law anyway; copyright is what covers software. And MS has not released ANYTHING about the .NET framework to promote use on other operating systems, so Mono is an illegal clone under the letter of copyright law.

Justin James

... that Microsoft's partnership with Novell covers this. I might also add that C# is an ECMA spec, so there is no need to reverse engineer it; anyone may legally write a C# compiler. J.Ja

jslarochelle

...and I like to be able to work at home. Recently I was forced to use .NET on a project. I was really irritated by this, because I thought I would be stuck in the office to perform overtime. This was a major annoyance. In the past, I had been used to switching seamlessly between my office Windows setup and my home setup (Linux) for development, because I was using Java (contrary to what some people will tell you, this works perfectly for a lot of code and types of application). Now it turns out that most of my code compiles under MONO, and I am able to work at home (what little does not compile, I am able to mock). So this is a great paradox: an open source project (MONO) might end up lessening my intolerance for M$ development tools, indirectly lessen my aversion to Windows as a platform for commercial deployment, and actually help sell more Windows machines....

normhaga

We are reaching the limits of how fast our current technology can operate. At the present time, we have clock and data transfer rates of such speed that it is difficult to keep the signal on a wire (or a foil trace). Without a change to some technology that has not yet been proven, we may, by a stroke of fortune, reach clock speeds of five to seven gigahertz. This is one of the reasons that multicore processors are attractive. Even with multicore CPUs there are limits; doubling the CPU count does not double the effective speed of the overall CPU (the gain is about 70 percent), and at some point we will reach a maximum CPU count. With multiple CPUs we currently lose even while we gain, because no one has yet figured out how to effectively write a truly symmetric multiprocessing program. We are begging for more execution speed from our machines and programs, but with limits being reached and working knowledge in short supply, how are we going to achieve this? An interpreted language, whether it be BASIC, Java, or .NET, adds more complexity, and thus more machine code, to program execution. To achieve faster program execution, we need to simplify the languages we use at the machine level; optimize the code, if you will. An interpreter does not optimize the code, no matter how good it is. We need to return to a simpler code base, even though this conflicts with the modern business paradigm of "Just get it out the door." So why bother?

Justin James

Microsoft owns the CLR (they wrote it, after all) as well as its spec (ditto). I am not sure if they have made details of the spec public or not, though. J.Ja

jhoward

As with most things, you can pick any two from the list of cheap, fast, and good. Any framework, as Tony says, is all things to all men, so by definition it will include things your particular application does not need. It has always been the developer's decision to use an existing framework to speed up development, or to create their own specialized framework to avoid the "bloat", resulting in a potentially slimmer app. Part of the discussion here is that this decision has been mostly taken away from the developer, due to the increasing need for apps to be pumped out as quickly as possible. If you can afford the time to create your own framework, you potentially get a good specialized app that runs well, but at the cost of development time, and probably more money due to that development time. If you use an existing framework, you can pump out an app that does a good job pretty quickly, but it will require more resources to run efficiently. As Tony has been saying, development is a trade-off. Unfortunately, this discussion has brought to light that the trade-off choice has become more of a lopsided, pre-made decision to skip optimization and feed the fat beast that is our precious frameworks with hardware.

jslarochelle

As far as I can remember, frameworks have always been large, even when based on C++ or Delphi (or Turbo Pascal). Turbo Vision (the text-based Turbo Pascal framework from Borland) was large, and successive generations of frameworks only got bigger (OWL, MFC, VCL, ...). Part of the problem, of course, is that frameworks are a layer of abstraction on top of the OS and, with time, include more and more functionality. This is needed to keep up with emerging technology. I would not be surprised if Mono actually ran better on other OSes. I know I have the distinct feeling that Java runs better on my Linux machine at home than on my XP system at work (a slightly more powerful machine...). It is always a shock when I bring a project I started at home to the office. It always feels like there is a relativistic time distortion. JS

ankushnarula

Well... I'm not sure what you want. Are you saying that every time someone writes a new widget, they should code straight to the GDI? Your distaste for "bloat" implies so many negative ramifications... I can't even begin to list them. As a simple example, look at the unified security framework or the debugging power built into .NET. The fact that there's a fairly comprehensive framework also means that developers will spend more time writing business/domain-specific code and gluing together framework components, rather than reinventing the wheel (a very popular theme in the open source community -- e.g., PHP libraries and X11 GUI toolkits). Look at the evolution of Java, since it's more mature: the engineers have taken so much feedback and further optimized the Java runtime and class libraries in every iterative release. If you believe in the 80/20 rule, then you'd agree that 80% of problems can be solved with a toolset. Not every application needs to perform 1M operations per second. In the case of the 20% that require performance, JNI allows the developer to get closer to the OS and write optimized code. I have little else to say about this... other than that you should try to remember the amount of time we used to spend coding the resizing of forms/windows/controls. I, for one, love that I don't have to do that crap anymore.

ankushnarula

I'm sure you're correct that it could be a copyright infringement claim, but it could also be a patent infringement claim (http://en.wikipedia.org/wiki/Software_patent). Moreover, unless you're an intellectual property lawyer, I wouldn't be so sure about your conclusion. However, if you're absolutely risk averse, then I can understand why you would see it this way. BTW, don't forget that MS released the CLI back in 2001-2002 under a Shared Source license, and this did indeed run on FreeBSD. Also, it would be worthwhile to read some articles on boycottnovell.com and groklaw.net to get a better sense of the legal risks in using Mono: http://boycottnovell.com/2007/11/29/mono-microsoft-license-patent/ and http://www.groklaw.net/articlebasic.php?story=20070614231022599

normhaga

M$ entered into a contract with Novell. While I do not have all the details, two items come to mind: one, Silverlight was to be rewritten for Linux-based machines (it is called Moonlight); two, the .NET framework was to be rewritten for Linux and Unix-based machines. The Linux .NET is called Mono.

LyleTaylor

I'm not an expert, but if they had taken the source code from MS, or portions of it, and then tried to pass it off as their own, that would be a violation of copyright law. Trying to exactly emulate the look and feel of Windows Forms applications (i.e., the Windows graphical theme) might also qualify. However, reimplementing a technology with published standards (I thought MS was originally trying to push it as a standard that could be implemented on any platform, even if they weren't going to do it themselves), or even reverse engineering an API and reimplementing it, doesn't necessarily qualify as copyright infringement.

jhoward

Mono has been around long enough for MS to take any action it wanted to prevent its continued growth. I am not very familiar with legal issues like this, but since it has been public for as long as it has, what recourse would MS have at this point anyway? Besides, allowing the development of something like Mono is a step in the right direction by MS. We should be showing them that we want more in this direction, and less in the "use Windows or we don't care" direction.

Justin James

That is pretty ironic. :) I downloaded the openSUSE 11 image from Mono with everything pre-installed. My test application failed to compile in it, but I'm also making use of the Parallel Extensions Library in it, so I am not surprised. I am going to try a few other apps I have here too, see how they go, and then post my findings either later in December or early in January (I already have my blogs written up until early December; I am trying to get ahead so I can take time off for my wedding and honeymoon). J.Ja

jslarochelle

The Java JIT compiler, for example (especially the server version), performs surprising optimizations. In fact, JIT compilers can perform optimizations that ahead-of-time compilers cannot. JS

Tony Hopkinson

Optimising to MSIL is a one-off at development time, which is a productivity impact. From MSIL to machine code, I can't see why the full weight of compiler theory can't be applied. The real problem is the framework itself; just like any other massive component-based library, it can be very bloated, and organising it to reduce the amount of code 'touched' unnecessarily would require a good deal of work and probably a great deal of extra complexity. Unfortunately, throwing extra machine at the problem is an industry standard now. As of course is cookie cutting; if we ever do really hit the hardware wall, stuff is going to get real interesting, as there are far fewer developers, percentage-wise, who could proceed without relying on a big framework coupled with a code-generating IDE.

Tony Hopkinson

As soon as machine time became less expensive than developer time, consistently applying that equation was bound to get us where we are. The level of sophistication we routinely apply to a simple business CRUD GUI application would once have cost too much in machine time to execute and, if achievable at all, would have taken years to design and implement. As long as the hardware and software boys keep on leapfrogging each other in terms of capacity and requirements, it will stay this way as well. One does wonder what would happen if we hit a hardware wall, though; the big boys would have to free up capacity in order to add more 'value' to the must-have next version.

Tony Hopkinson

It's a boon in that you get access to lots of existing (and often proven) code. But you get what's there, whether it's exactly right or necessary. As I said to the really erudite CIO, design is compromise. My essential argument isn't that the 'bloat' is a bad choice, but that it was a choice, and there are both costs and benefits.

Tony Hopkinson

Any big homogeneous 'library', in fact. You need a list, you write something to do it, and you use it everywhere else you need a list. Then you end up adding extra functionality to it; you may even make use of it in other descendants. Nature of the beast....

Justin James

I agree that large frameworks are common. Let's keep in mind that much of the .Net Framework is simply a wrapper around the Windows API, in a standardized format. And the reason why C is so compact is that it doesn't do anything; you can't do much of anything in C without using libraries, STDIO at the very least. :) What Java and .Net bring to the "bloat table" are their interpreted environments, which are heavier than, say, the Perl interpreter (they also do a lot more than the Perl interpreter, which gets lost in the argument), and the fact that, by default, they tend to load a lot of libraries that you really don't need much of the time. Start a default project in Visual Studio and see how much functionality is provided by the default references; it's half the Windows API. :) This is like the "Apache vs. IIS" debate. The reality is, by the time you add enough plugins to Apache to make it approximately feature-equivalent to IIS (say, by adding a JVM and Tomcat/Jakarta, plus OpenSSL, LDAP authentication methods, etc.), it is approximately as heavy as IIS. J.Ja

Tony Hopkinson

Intelligent, informative, witty, you can definitely tell how you made CIO. ROFLMAO

Tony Hopkinson

You just fell flat on your face again. I never said there wasn't bloat; you did. I never said the bloat wasn't a fair exchange (generally) for the productivity increases; Jaqui did. Jaqui quite justifiably feels that he personally doesn't see enough productivity gains to offset the cost of the bloat. Conversely, we, as pretty much GUI-oriented winders developers, do. Get off your high horse, mate; lack of oxygen is affecting your thinking. All development is compromise; that means no one is going to be deliriously happy with it, and several are going to think it's crap. Such is life... Oh, and remember there's a difference between inventing a wheel and settling for a hexagon because it's there....

jslarochelle

My information about the GUI might have been from an old article. JS

Justin James

The Mono guys claim that the GUI is now 100% cross-platform with version 2.0. They admit, however, that it can be a bit slow, which is understandable. J.Ja

jslarochelle

.. and there might be other areas that are still weak. But when you think of what MONO is, the achievement, even if not perfect, is quite impressive. For my project (an OPC UA client module), I was using mostly core libraries, so I suppose I got lucky. But the paradox exposed in my initial post remains. I will be watching for the results of your experiment. JS

lorux

I've a 4GB dual core with Linux. Java: 1.6.0_0; OpenJDK 64-Bit Server VM 14.0-b08. .Net: Mono 2.0. I tried to create a LinkedList of a "Foo" class with 2 string properties (name and surname). At 1,000,000 objects: Mono is faster (about 10 times faster) than Java. At 3,000,000 objects: Mono takes about 5 seconds... Java crashes. CPU use in Java was approximately 100%/25%; CPU use by the Mono version was approximately 100%/4%. Sorry but... I'm thinking about forgetting my current server Java implementation and migrating to Mono (with MonoDevelop). The sample code (the C# code is the Java with the same variation... not using C# 3.0 style coding for class creation):

LinkedList<Foo> lst = new LinkedList<Foo>();
for (int i = 0; i < 1000000; i++)
    lst.AddLast(new Foo("name" + i, "surname" + i));

normhaga

A few years ago, in yet another CS class, I had to do a number of benchmarks on algorithms. One that is relevant to this discussion is arrays versus linked lists. The arrays and linked lists were not large, a few hundred KB each, so they fit easily within RAM. When benchmarking completion time of various sorts, the arrays beat the algorithm out of the linked lists, every time, with an equivalent sort. The array sits in a contiguous block of RAM, whereas who knows where the linked list sits. While electricity operates at near light speed, it is not light speed, and while gate switching to read RAM operates near electrical speed, it is not electrical speed; it is more like a logarithmic speed. The test units were set up to read the clock both at the beginning and end of program execution. The code was first written in Java, later C++, followed by C. I will grant that the language had a little to do with how fast the sort operated, because not all algorithms operate efficiently, but the general case was also true for highly optimized algorithms. And yes, the professor was angry when I wrote a poor example of an assembly sort (with OllyDbg) that beat his best, highly optimized, C algorithms. But that was a side issue that I was challenged to prove when I stated baldly that machine code is much faster than compiled languages. Also, I might add that when you code for a Windows environment, your code cannot be as efficient as it would be in a purer and less bloated environment.
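A minimal C# sketch of that kind of comparison (hypothetical code, not the original class assignment) would look something like this; summing a contiguous array versus walking a linked list of the same values exposes the locality effect directly:

using System;
using System.Collections.Generic;
using System.Diagnostics;

class LocalityDemo
{
    static void Main()
    {
        const int N = 1000000;
        int[] array = new int[N];
        LinkedList<int> list = new LinkedList<int>();
        Random rng = new Random(42);
        for (int i = 0; i < N; i++)
        {
            array[i] = rng.Next();
            list.AddLast(array[i]);
        }

        Stopwatch sw = Stopwatch.StartNew();
        long sum1 = 0;
        foreach (int v in array) sum1 += v;   // contiguous block: cache-friendly
        sw.Stop();
        Console.WriteLine("array: {0} ms (sum {1})", sw.ElapsedMilliseconds, sum1);

        sw = Stopwatch.StartNew();
        long sum2 = 0;
        foreach (int v in list) sum2 += v;    // pointer chasing: cache-hostile
        sw.Stop();
        Console.WriteLine("list:  {0} ms (sum {1})", sw.ElapsedMilliseconds, sum2);
    }
}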

Justin James

I came from a similar background in some regards, particularly with my father being a programmer. I sympathize completely with the idea of "do it right or don't do it," and I think that once an application gets under enough load that one server or PC is completely tied up with it much of the time, you need to consider a switch of languages. You mentioned the "get it out the door" philosophy. Unfortunately, that is not a "philosophy" any longer; it is Item #1 in the project requirements. I've tried time and time again to push back on deadlines and whatnot, unsuccessfully more times than not. Under those circumstances (and I believe that a large majority of developers are in the same sinking boat), there is little choice but to choose whatever will help you get a project out the door as quickly as possible, and .Net and Java are prime candidates for that. I may also note that, once you get the giant runtime environments into memory, those systems tend to run at 90% - 95% of the speed of native code. That 5% - 10% difference is huge if you are talking about dozens of servers for a single application, but it is almost irrelevant for, say, an IM client (*not* the server), or an "email the author" Web page, or some "data driven application" to show the week's menu at the cafeteria. J.Ja

normhaga

about the throw-hardware approach. Before I go there: I am old enough to remember going with my dad to IBM Las Vegas, where he worked. Key punches were pretty common, but my father's job was a little different; he wrote the programs by hard-wiring gates from discrete transistors. My first grasp of the power of native machine code came to me while I was attending Utah State in the '80s. I had an Osborne 1 and was writing a version of Pong for it. I had a choice between BASIC and Pascal; I chose BASIC because it was difficult to do direct screen writes in Pascal. After I completed the project, I was dissatisfied because the game was very slow. I chose to rewrite it, or rather convert the hardware manipulation to assembly, and compile it to machine code. The game was so fast that I had to write in delay loops just to see the ball (an underscored 'o'). No matter how you look at it, compiled languages are not as fast as machine code, because they have to allow for generalities that machine code does not. We are not talking 5 percent; more like a 70 percent gain in app execution speed. Today's machines are so much more powerful than the one I wrote that program on, yet often they actually operate more slowly than a computer from the '80s, because of the bloat that modern languages accidentally or unintentionally add. As programmers, we have a duty to use our customers' resources as lightly and effectively as we can. I personally believe that we need to realign our programs to simpler languages, even though this conflicts with the "Get it out the door" philosophy. Another item that might/should/could be considered is that, because of the greater rigor in writing code in assembly and the greater effort required to test the code, it often shipped with fewer of the bugs that modern code often exhibits. Also, we have to consider where the bottleneck is; no sense in having a lightning-fast Web browser when we are limited by our bandwidth. Sorry about the late reply; I have been up to my armpits in hardware problems for the last few days, from about 6 AM until about 2 or 3 AM.

Tony Hopkinson

We put in a call in our apps to run up the CLR while the app is idling, waiting for the database connection, etc. The complaint was that the first time you ran the app of a morning, it took ages to start. :p

jhoward

Unfortunately, I have the wonderful task of being forced to use one of our vendors' SDKs, which only has bindings to Java and .NET. I chose to exclusively use .NET over Java, mainly because some of the applications are desktop apps, and IMHO Visual Studio just makes my life easier with GUI-based apps. Also affecting my decision is that non-Windows operating systems just don't exist in the environments we support, with the exception of the art directors, who wouldn't use the desktop apps anyway. The web/server-side apps don't care what OS you are running, so that doesn't affect the judgement either. On the server-side end, I have enjoyed using Mono to keep my .NET server-side apps off of Windows machines, which use a lot of resources just running the OS. Being able to use cheaper hardware with better performance is a huge benefit you simply cannot ignore. I have also seen performance differences between similar applications using Mono and Java. To be fair, I don't know enough about how the Java code was written for this to be a definitive comparison, but it appears that server-side .Net apps in Mono are lighter and outperform similar Java apps. As Justin mentioned, if it were up to me, my server-side apps would be native code, because it makes sense to take what performance you can get if it is available, but in this situation I am pretty happy with what Mono provides.

Justin James

... is that it is used so widely now that it is going to have all of its common parts in memory well before your application uses it, at least on Windows. So if I use it, it's not like I'm making the user suddenly load its bloated self up. People using Mono probably won't be in that situation. I hate to say it, but the "throw more hardware at the problem" approach actually makes sense and saves money in many cases, especially when an application is nowhere near maxing out the box it is running on. Tradeoffs... move the development to C++, when C++ developers cost an arm and a leg and C++ is insanely difficult to write safely? Or use .Net or Java, with much cheaper developers and the full airbags and seatbelts on the code? And all that when the C++ developers *might* get a 5% optimization over .Net or Java in most cases? If I'm writing code for Google's data center, yes, the native code approach makes sense. If I'm writing the in-house employee time clock Web application, .Net or Java makes sense. :) J.Ja
