The Herculean task of upgrading a development language or environment

The latest technology may be exciting, but that doesn't mean an upgrade is warranted. Developer Tony Patton notes that upgrades can be rocky endeavors.

A few years ago, I interviewed with a large company for a senior development position. .NET 3.0 had just been introduced, and I was surprised by the number of interview questions that focused on it. When I asked why there was such an emphasis on .NET 3.0, the development manager beamed while telling me how they were busy updating all of their applications to use the new framework. Furthermore, he said they were in the planning stages for a total rewrite of their main desktop application to utilize the new code base. I asked why they felt the urgent need to upgrade when I had witnessed the application working fine. The response I got was that their CIO was a Microsoft enthusiast and always wanted to be on the latest version.

I could not understand the need to replace an application that had no glaring problems -- maybe there were issues to which I was not privy, but this was never mentioned. I could better understand if the application was horribly outdated and using ancient technology, but that was not the case.

I've never been an early adopter of technology; I normally wait for widespread acceptance and stability. My opinion is there has to be an important new feature or performance improvement to make the case for upgrading, since it is usually a major endeavor.

The complex upgrade process

The complexity of upgrading a code base to a new version of the .NET Framework depends upon the size and number of applications, but the core approach remains the same. First, you have to download and install the latest framework along with any patches. The development team should be familiar with what the new version offers, known issues, and so forth. Also, you need to make sure third-party libraries/products being used are compatible or determine if a compatible version is available.

Once the environment is set up, the application code is opened and recompiled with the new version. Compilation issues may arise at this point - deprecated features often surface here - so time must be devoted to clearing them up to ensure a clean build. The effort is tough to estimate, since it requires complete knowledge of the current code base along with thorough knowledge of the new framework.
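In Visual Studio, recompiling against the new version typically starts with retargeting each project. As a sketch (the file name and version values here are illustrative), the relevant fragment of a C# project file looks something like this:

```xml
<!-- MyApp.csproj (illustrative fragment) -->
<PropertyGroup>
  <!-- was v3.5; point at the new framework and rebuild -->
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
</PropertyGroup>
```

In practice this is done through the project properties dialog rather than by hand, but the effect is the same: the next build compiles against the new framework's assemblies, which is where deprecation warnings and breaking changes first show up.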

When the application recompiles error-free, it must be tested using the updated code. Larger organizations have development and test environments where time is allotted for the quality assurance (QA) team to test properly. A small organization may have a different approach, but the key is making sure everything works as planned before moving to production. The final step is the production rollout, and that environment must have the new framework installed and configured before the code is deployed.

What .NET 4 has to offer

Industry conferences focus on the latest and greatest when, in fact, most developers are stuck working with older versions. While it is great to get a look at the future, many developers have issues and questions related to past versions.

Microsoft TechEd 2011 was no different, as it focused squarely on .NET 4 and how it can improve your life (well, as it pertains to your development activities). .NET versions 2.0, 3.0, and 3.5 were interrelated, with .NET 2.0 as the common base, but .NET 4 introduces an entirely new runtime. I recently used .NET 4.0 as a test case for the upgrade decision.

The many new features in the release are beyond the scope of this article, but here is a brief overview of what .NET 4.0 has to offer:

  • In-Process Side-by-Side Execution: Applications can load and start multiple versions of .NET in the same process, so you can control the framework version used by the code.
  • Garbage collection: Background garbage collection replaces the concurrent model.
  • Client Profile: This pared-down version of .NET contains only the features used by client applications.
  • jQuery: It is fully embraced and included in new Web application projects by default.
  • Chart control: This ASP.NET control is now a standard piece of the framework.
  • Cleaner Web markup: Reduced HTML, jQuery adoption, and session-state compression offer Web application enhancements.
  • Web deployment: This task is now automated with many new features.
  • WPF 4 (Windows Presentation Foundation): You can create better interfaces with new features, including the ability to run in the browser.
  • WF (Windows Workflow Foundation): This is completely rewritten, with backwards compatibility support.
  • ASP.NET MVC 3: A framework for building standards-based Web applications.
  • Dynamic languages: Easier integration with dynamic languages like IronPython and IronRuby.

When looking at the feature list, think about what .NET 4.0 offers that warrants an upgrade. I could easily integrate jQuery in previous versions, and the chart control was available as an add-on in .NET 3.5, so the upgrade was not absolutely necessary in my case. On the other hand, if WPF 4, WF, or MVC 3 is something you must have, then move ahead. (So far, my favorite features are the ability to target different frameworks, jQuery support, and the chart control, but this could change as I continue to get acclimated to .NET 4.0.)
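As a concrete illustration of the framework-targeting and in-process side-by-side features, the runtimes a host process is willing to load are declared in its configuration file rather than in code. A minimal app.config sketch (the version strings are the standard ones, but treat the fragment as illustrative, not a drop-in configuration) that prefers the .NET 4 runtime while still allowing 2.0 might look like this:

```xml
<!-- app.config for the host executable (illustrative) -->
<configuration>
  <startup>
    <!-- runtimes the host will load, in order of preference -->
    <supportedRuntime version="v4.0" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>
```

Note the caveat raised in the comments below: in-process side-by-side is aimed at hosts loading multiple managed add-ins, not at freely mixing assemblies built for different runtimes within one application.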


Moving to a new version of a development language/environment, whether it is .NET, Java, or something else, is not a simple process because there are many factors to consider. This is why many large companies are slow to adopt new versions, as it requires a lot of time for planning, testing, and execution. Many businesses use the latest technology for new development, while maintaining existing applications built with older technology.

As a developer, working with the latest technology is exciting but is not always necessary depending on the company's needs. Upgrades can be rocky endeavors that should be approached with caution.

How do you approach a new version of a development environment or tool? What factors do you consider when deciding on an upgrade? Share your thoughts and experiences with the community.

Note: The .NET Framework 4.5 Developer Preview is available for download.


Tony Patton has worn many hats over his 15+ years in the IT industry while witnessing many technologies come and go. He currently focuses on .NET and Web development while trying to grasp the many facets of supporting such technologies in a production environment.

Tony Hopkinson

Side by side, parallel extensions, and the dynamic type were also in the mix, though. In terms of doing it, the level 2 security model messed us up a bit, as did some security attributes; some old stuff using keyboard hooks had to go. Got all that done and were patting ourselves on the back, then we got the kidney punches at run time, but only on XP (SP3): some change in the effective order Windows messages were being processed in, so we were getting things like having to force an invalidate in to make a component repaint. Think we ended up with four total surprises, which might not be too bad, but was four too many. One did highlight a daft error in the code that we'd got away with in .NET 2, though. It's not something to rush into; give yourself plenty of time - like do it in your first sprint - then you get a good chance at finding any funnies over the rest of the project.


In 1998 we migrated part of our system from Babbage/CORAL to C++. The migration was reasonably straightforward until we considered that the result is no longer linear. The addition of threads makes a completely deterministic sequence somewhat unpredictable and difficult to analyse, despite an improvement in tools. Migration does not always mean getting smarter or any easier to support. And the life cycle of software itself seems to be getting shorter. Is it another example of an inverse Moore's Law?

Tony Hopkinson

there would still have been an increase in complexity and uncertainty. Languages' strengths and weaknesses, inherent optimizations, and simply the fact that you described and are comfortable with the descriptions in the other language guarantee that. The only time a move like this could simplify things is if you had a mixed codebase for no real reason and you wanted to unify it; other than that, it's a bridge you shouldn't have bought.


Microsoft no doubt have every right to make a living by introducing new products and by improving old ones. They do not, imho, have any right to abandon faithful customers by withdrawing support from older but still fine products. It would seem to me well within the Constitutional Intent of the Founders to withdraw patent and trademark protection from any product whose manufacturer refuses to continue after-sale service at the originally established free-market rates. This would enable the world's non-Microsoft programmers -- many of whom are, ahem, the equals in skill of Redmond's -- to compete to supply the services at which Microsoft, or any other OEM, had declared itself delinquent. -dlj.

Mike Page

I upgrade my development environment (C++ Builder) only when there is a very compelling reason. The version I currently use was released in Dec. 2005. It generates 32-bit applications quite well, but cannot generate 64-bit applications, which are gradually becoming important due to the larger amount of addressable memory. Despite many releases since Dec. 2005, the environment still does not support developing 64-bit applications. When it does, then I will upgrade.


Keep in mind that in-process side-by-side (SxS) is designed for a specific target - allowing multiple managed add-ins which present COM interfaces to a host. It doesn't allow you to just mix and match assemblies in an application. And, when you do use it, you end up with both 2.0 and 4.0 runtimes in your address space. This is not a good reason to "migrate" unless your application meets this description.
