
How secure is .NET?

Microsoft's used to having a bull's-eye on its back for hackers to aim for, and it's no different with the coming of the .NET Framework. Columnist Tim Landgrave takes a look at what Microsoft is doing to shore up .NET security.

With all the fanfare surrounding Microsoft’s .NET Framework, it’s not surprising that some of that interest would be coming from those who enjoy breaking and hacking newly unleashed programs and applications. Most of us were expecting attacks on the Framework’s security after its general release, but some mischief-makers just couldn’t wait, like “Benny.”

In early January, a hacker named Benny sent an e-mail to several antivirus companies claiming that he had successfully created the first .NET virus. He was intent on demonstrating that it was possible to infect systems running the .NET Framework. Given Microsoft’s storied reputation for protecting its systems from security attacks, Benny obviously was counting on a lot of publicity for his programming "coup."

He must have been mighty surprised, then, when it wasn’t forthcoming. But his attempt illustrates a serious issue for today’s CIOs, who are busy implementing new technologies and who are concerned with protecting internal and customer data. How can you figure out, if it’s even possible at all, which security attacks constitute a legitimate threat and which ones don’t?

Identifying a "real" virus
When is a virus really a virus? Let’s take a look at Benny’s "virus." The W32.Donut virus (as it’s been tagged by antivirus vendors) is a native executable that modifies files on the hard disk in the current directory and 20 directories above it. It then tells the .NET Common Language Runtime (CLR) to run the infected .NET executable files. It has no capability to replicate and, in fact, has to be physically installed on a user’s machine via an e-mail or a floppy disk. The virus doesn’t damage the host computer but sometimes displays a message box with the text “This cell has been infected by the dotNET virus” when infected .NET assemblies are executed. And even then, the warning message is displayed less than 10 percent of the time.

This "virus" simply modifies files on the disk, just as other, more dangerous viruses do; the files it targets merely happen to be .NET executables. Given this fact, it’s hardly worth calling a virus at all.

A real .NET virus would be written to run within the memory managed by the CLR and would then take advantage of a security flaw within the .NET Framework to infect other applications or systems.

Since the Framework lets authors “sign” code before installing it into the system's main repository (the Global Assembly Cache, or GAC), any alteration a virus made to a signed assembly would cause a checksum failure that would keep the infected application from executing. In the case of the Benny virus, Microsoft obviously has plenty of time to respond before customers begin widely deploying production systems based on .NET.
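
To make that signing step concrete, here is a minimal sketch of how a developer might strong-name a .NET Framework assembly before installing it into the GAC. The key file and assembly names are hypothetical; sn.exe and gacutil.exe are the standard .NET Framework SDK tools.

// AssemblyInfo.cs for a hypothetical assembly
// Generate the key pair once from the command line:
//     sn -k MyCompany.snk
using System.Reflection;

// At build time, the compiler embeds the public key and signs a hash
// of the assembly with the private key.
[assembly: AssemblyKeyFile("MyCompany.snk")]
[assembly: AssemblyVersion("1.0.0.0")]

// After the build:
//     sn -v MyAssembly.dll        -- verify the strong-name signature
//     gacutil /i MyAssembly.dll   -- install the assembly into the GAC
// If a virus alters even one byte of the signed assembly, the hash no
// longer matches the signature and the tampered assembly is rejected.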

Is .NET really secure?
Analysts and the press have made sure that Microsoft is keenly aware that its every move regarding security will be analyzed under a microscope. Part of this scrutiny is well deserved, as security holes exist in many Microsoft products. But much of it is undeserved as well. Microsoft released software patches well before the widespread attacks on Internet Information Server (IIS) and Internet Explorer (IE). In truth, many security administrators have not been diligent in installing patches that would have protected their systems up front and have resorted to blaming Microsoft in order to protect themselves.

In an effort to minimize future virus attacks on core software, Microsoft has also made a couple of strategic decisions. First, it recommends that anyone installing IIS version 4 or higher also install and run the IIS lockdown tool. This tool shuts off all access to the Web server except for browsing on port 80 using HTTP. Administrators then have to re-enable any additional system access they need.

Second, in an upcoming release of Windows .NET Server, IIS no longer installs automatically, an added protective measure. Administrators have to consciously choose to install IIS, and its default state is the same as that of a downlevel IIS server that has had the lockdown tool run against it.

Although these steps go a long way toward protecting the platform upon which key elements of .NET (ASP.NET and Web services) depend, they don’t answer the fundamental security question surrounding the .NET Framework itself. Here, Microsoft has taken a two-pronged approach. First, .NET adds key security features to the platform, including code signing and code access security. Code signing ensures that the code delivered by a software development company is the actual code executed by the Framework. Code access security is a runtime enforcement engine that checks the permissions of the executor, the publisher, and the code itself to decide whether it’s appropriate to allow the code to execute. This is much more secure than the DOS/Win32 world we live in today, where anyone can execute a piece of code that can overwrite a key operating-system file and render a system inoperative.
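
As a rough illustration of how code access security looks to a developer, the sketch below shows a .NET Framework method demanding a file-write permission before touching the disk. The class name and path are hypothetical, and whether the demand succeeds depends entirely on the security policy in force on the machine that runs it.

using System;
using System.Security;
using System.Security.Permissions;

class ReportWriter
{
    // Declarative demand: before this method runs, the runtime walks the
    // call stack and throws a SecurityException unless every caller has
    // been granted write access to C:\Reports by the security policy.
    [FileIOPermission(SecurityAction.Demand, Write = @"C:\Reports")]
    public void SaveReport(string text)
    {
        // (Assumes the C:\Reports directory already exists.)
        using (System.IO.StreamWriter writer =
                   new System.IO.StreamWriter(@"C:\Reports\summary.txt"))
        {
            writer.WriteLine(text);
        }
    }

    static void Main()
    {
        try
        {
            // The same check made imperatively, in code.
            FileIOPermission permission = new FileIOPermission(
                FileIOPermissionAccess.Write, @"C:\Reports");
            permission.Demand();

            new ReportWriter().SaveReport("quarterly numbers");
            Console.WriteLine("Report written.");
        }
        catch (SecurityException e)
        {
            // Code running from a less-trusted zone (the Internet zone, for
            // example) is stopped here instead of silently writing to disk.
            Console.WriteLine("Blocked by code access security: " + e.Message);
        }
    }
}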

The second prong of Microsoft’s .NET security strategy is external validation of the Framework’s security. For this, Microsoft has turned to Foundstone, Inc. and Core Security Technologies. Both companies have built reputations on testing for external, unauthorized penetration at the systems and application-development levels.

Turning to third parties to enhance security of core systems elements may signal a turning point for Microsoft. Open source advocates tout the openness of their platforms as a major reason why they’ve avoided widespread security issues. Microsoft’s willingness to open up source code to experts in the security field and then get their seal of approval is a good sign that Microsoft is taking .NET security seriously.

Given that Microsoft's Steve Ballmer calls .NET a “bet the company” strategy, I would expect nothing less.

How secure do you think .NET is?
Write and tell us how you view the platform’s security or start a discussion below to share insight and feedback on this topic.

