Most .NET developers remember the ASP.NET system account fiasco in the first release of the .NET Framework. To increase ASP.NET security, Microsoft changed the type of the ASP.NET account from a system to a user account between the last beta and the first release candidate of the .NET Framework. This forced developers to make a Faustian choice: They could either delay the release of their applications to take the time to modify their systems to support the new permissions default, or they could change the ASP.NET account to a system account and risk deploying a less secure system.

To avoid a similar scenario with the upcoming .NET Server release, system architects and development managers must choose between two competing schools of thought on how to manage security in the development environment.

Development security options
The prevalent opinion among development shops today is that developers should be able to write their code and access resources in a totally open environment, with administrative accounts and access to any resource on the network. The rationale here is that developers have enough problems just getting their own code to work without having to deal with security issues when creating prototypes or working on early product releases. And companies that pay developers handsomely don’t want to pay for the lost productivity caused by developers debugging security issues while they’re trying to create early versions of their applications.

The other school of thought recognizes the ultimate cost of deploying insecure applications and the increasing cost of finding and removing security bugs as the development cycle progresses. Effective security management during the development process requires significant time and effort, and architects tend to underestimate that effort badly. The sooner a developer hits the security wall, the earlier in the development cycle security issues can be addressed and resolved.

One of the major reasons to manage security as early as possible is the propensity of management to assume that prototypes and early releases are “good enough” to deploy. Because of this, many prototypes are placed into production without adequate security testing, and most development shops don’t make the effort to retrofit security into existing code. I’ve seen several production applications built on SQL Server that use the default sa account with a blank password as their database logon. Microsoft is making a bold statement by locking down .NET Server, and companies that intend to make Microsoft systems a core part of their infrastructure should follow Microsoft’s lead and retrofit their development environments to encourage the development of secure systems from the first line of code written.
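The sa/blank-password logon shows up concretely in the application’s connection string. As a sketch (the server and database names below are placeholders), compare the insecure pattern with one that uses Windows integrated security, which keeps SQL credentials out of the application entirely:

```
Insecure: default sa account with a blank password embedded in the string
    Server=MYSERVER;Database=Northwind;User ID=sa;Password=;

Better: Windows integrated security, so no credentials appear in the string
    Server=MYSERVER;Database=Northwind;Integrated Security=SSPI;
```

With integrated security, the application authenticates to SQL Server as the Windows account it runs under, which also fits the Active Directory recommendation below.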

Development life cycle recommendations
As I work with companies that are redesigning their development environments to take advantage of .NET platform advancements, I have a standard set of security recommendations that I make to the architects and senior developers:

  • Design for security from the ground up, not as an afterthought. If possible, write all new applications assuming the presence of Active Directory, using it for authentication to the system and for authorization to all system resources.
  • Develop in a secure environment, not in an open environment. I recommend that all developers have accounts with user permissions that they develop and test with, instead of being given administrative permissions to their machine and the network. They should have access to an administrative account for their machine that they can use for brief periods, either from the Run As context menu or with a quick log on and log off. But developers shouldn’t have network administrative accounts, because it is too easy for them to make a configuration change that allows their code to work and then forget about it before the system goes into test or production.
  • Invest in security training. Organizations should proactively train architects, developers, operations staff, and help desk consultants on how to identify security issues and solve security-related problems.
  • Test security more rigorously. The quality assurance team should begin testing for security issues earlier in the development process. They should be encouraged to share testing results with developers and architects to help them diagnose and fix security-related problems. And they should be given the time to meet with operations engineers and help desk consultants to prepare them for the questions and issues that end users may raise once the system goes into production.
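The Run As mechanism mentioned in the second recommendation is also available from the command line, which is handy when a developer needs a single administrative tool rather than a full administrative session. As an illustration (the machine and account names are placeholders):

```
runas /user:MYMACHINE\LocalAdmin "mmc.exe compmgmt.msc"
```

Windows prompts for the administrative password and launches only that one tool with elevated permissions, so the developer’s day-to-day session keeps its ordinary user permissions.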

Is the added security worth upsetting your developers?
I have yet to work with a company whose developers didn’t see this increased security as unnecessary and counterproductive. But developers need to consider the real costs of a system breach in terms of lost productivity, decreased confidence among users and management, and the increased cost of removing security bugs as the development cycle progresses. The secret to acceptance is to encourage developers to take as much pride in producing secure systems as they do in producing technically sophisticated or innovative ones.