Security

Software Design: Develop applications that prevent system security breaches

Designing the most secure applications requires developers to look beyond their own code to the code residing on the system. Calling APIs or COM objects, or granting system privileges, can introduce security vulnerabilities that are preventable.


Designing for application security is one thing: keeping someone out of your application is important. Allowing someone to take control of the system through your application, however, can be substantially worse. Designing your applications to prevent a system-level security breach is significantly different from preventing security vulnerabilities at the application level. In application security, you're looking at things that are entirely within your control. In designing applications for system security, you're trying to prevent vulnerabilities both in your own code and in the code that your code uses.

Buffer overflows
The major cause of security breaches in application code these days is buffer overflows. A buffer overflow occurs when code allocates a fixed amount of memory for a parameter and then receives more data than that space can hold. The excess data overwrites adjacent memory, which can include instructions or the return address the application will use next; when the application resumes execution, it can end up running whatever was smuggled in at the end of the oversized parameter, giving an attacker a way to execute malicious code.

Most developers working in modern languages take a relatively carefree attitude toward buffer overflows. Working in Visual Basic, Visual Basic .NET, C#, Java, or one of a dozen other languages means you don't have to worry about buffer overflows within your own application. Either the language itself or the underlying runtime automatically manages the buffer space allocated for strings and other parameters, so, for all practical purposes, it's impossible to overflow a buffer within the confines of the application.

The challenge comes when you use an API, a COM object, or some other component that was not written in a language that automatically manages memory allocation and buffers for you. Most of the APIs available in Windows, including the core Windows APIs, were written in C or C++. Neither of those languages manages buffers automatically, which leaves open the possibility that a parameter you pass in exceeds an internal buffer and allows the execution of someone else's code.

Obviously, some consideration must be given to parameter lengths when calling outside "safe" languages. If you're taking input directly from a user or an external system and passing it along to an API, check the parameter's length before making the call. This prevents your "safe" language from passing through an unsafe request. A few quick "sanity" checks on parameters can prevent real problems.

A sanity check simply verifies that a value is reasonable. It isn't the same as checking whether a string fits in a database field, where you may need to worry about the difference between 25 and 26 characters. Instead, a sanity check asks whether a parameter's length is plausible at all. A person's name, for instance, shouldn't typically be more than 255 characters long. Every value should have some sort of sanity check associated with it.
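To make this concrete, here's a minimal C# sketch of a sanity check in front of a native call. The `legacy.dll` library and its `ProcessName` function are hypothetical stand-ins for whatever C or C++ API your code actually calls:

```csharp
using System;
using System.Runtime.InteropServices;

static class SafeNativeCalls
{
    // Hypothetical native function standing in for any C/C++ API with a
    // fixed internal buffer; a real declaration would come from its docs.
    [DllImport("legacy.dll", CharSet = CharSet.Unicode)]
    private static extern int ProcessName(string name);

    // Sanity limit: generous for a person's name, far below anything that
    // could plausibly be a legitimate value.
    private const int MaxNameLength = 255;

    public static int ProcessNameChecked(string name)
    {
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("A name is required.", nameof(name));

        if (name.Length > MaxNameLength)
            throw new ArgumentException(
                $"Name exceeds the {MaxNameLength}-character sanity limit.",
                nameof(name));

        // Only values that passed the sanity checks cross the
        // managed/unmanaged boundary.
        return ProcessName(name);
    }
}
```

The exact limit matters less than having one; the goal is to stop wildly oversized input before it ever reaches unmanaged code.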

System account privileges
One of the reasons applications are a serious security concern is that they are most often installed with privileges a normal user wouldn't have. System applications typically have rights to impersonate other users, log in to servers, call COM components, and so on. More than a few of these rights can be used to break into a system.

If you're an attacker trying to work your way into an organization's systems, which accounts are you going to go after? The user accounts with few rights that are logged in for a few hours each day, or the system accounts that run constantly, carry high privileges, and can be probed hour after hour, day after day? Obviously, the system accounts used by applications are the most desirable targets.

The classic installation mistake is to give the application administrative privileges. In effect, any code the application runs is run as an administrator and can do anything it wants to the system, without restriction. These are the perfect accounts for a hacker to go after, since they hand over the keys to the entire network with relatively little effort.

This leaves the door open for attackers to create their own user accounts on the system and enable services such as Terminal Services. An attacker who can create an account and access Terminal Services can take complete control of the system.

When developing and deploying applications, it's important to consider the minimum rights the application must have and how those rights can be reduced further. Developing for system security means minimizing the options available to an intruder who does manage to get your application to do something it wasn't designed to do.

One of the primary challenges with minimizing permissions is understanding what the minimum permissions actually are. As mentioned above, many applications are installed to run under an administrator-equivalent account. This happens because, when debugging, running as an administrator is the easiest way to rule out a permissions problem. Once a fix is found, the exact permission needed to complete the operation becomes less of a concern and is all too often never identified.
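One way to keep the problem visible is to have the application check its own privilege level at startup. The sketch below, assuming a Windows-hosted .NET application, simply warns when it finds itself running as an administrator:

```csharp
using System;
using System.Security.Principal;

static class PrivilegeCheck
{
    // Warn at startup if the process holds administrative rights it
    // shouldn't need; useful while narrowing an account's permissions.
    public static void WarnIfOverprivileged()
    {
        using WindowsIdentity identity = WindowsIdentity.GetCurrent();
        var principal = new WindowsPrincipal(identity);

        if (principal.IsInRole(WindowsBuiltInRole.Administrator))
        {
            Console.Error.WriteLine(
                "Warning: running as an administrator. Identify the " +
                "specific rights this application needs and assign a " +
                "lesser account.");
        }
    }
}
```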

Misconfiguration
Another common way hackers attack a system is to exploit a misconfiguration and slip right past the security the infrastructure group has laid out. Applications sometimes invite misconfiguration by demanding configurations so complex that it's nearly impossible to configure the server securely and still keep the application running. When every application requires some amount of system configuration, the potential for misconfiguration, and therefore for security breaches, goes up substantially.

Developing the application's error handling to the point where it can detect configuration problems and report them specifically is a good first step. It allows the system to be configured in the most restrictive way first; then, as the application reports errors, the configuration can be gradually opened up. This reduces the temptation to throw the configuration completely open just to get things working.
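As an illustration, a startup self-check might probe each resource the application needs and name the precise permission that's missing. This sketch checks write access to a data directory; the directory requirement itself is illustrative:

```csharp
using System;
using System.IO;

static class StartupChecks
{
    // Probe one required resource and report the exact fix, so the server
    // can start fully locked down and be opened one permission at a time.
    public static bool CanWriteTo(string dataDirectory)
    {
        try
        {
            // Probe write access by creating and deleting a temp file.
            string probe = Path.Combine(dataDirectory, Path.GetRandomFileName());
            File.WriteAllText(probe, "probe");
            File.Delete(probe);
            return true;
        }
        catch (UnauthorizedAccessException)
        {
            Console.Error.WriteLine(
                $"Configuration error: the service account cannot write to " +
                $"'{dataDirectory}'. Grant Modify rights on that folder only.");
        }
        catch (DirectoryNotFoundException)
        {
            Console.Error.WriteLine(
                $"Configuration error: data directory '{dataDirectory}' " +
                "does not exist.");
        }
        return false;
    }
}
```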

Clean up
Cleaning up code is one of developers' least favorite activities, but it matters: testing stubs built during development often lack the application's normal error-handling controls, because the developer was the stub's only intended user and checking the input seemed unnecessary. The problem comes when a malicious user finds those stubs and uses them to exploit a vulnerability in the underlying system code. Removing these files is an absolute necessity for any application that must maintain high security.
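One simple safeguard, where the platform supports it, is to guard test stubs with conditional compilation so they never ship in release builds. A minimal C# sketch, with a hypothetical stub:

```csharp
public static class TestStubs
{
#if DEBUG
    // Test stub with no input validation: the developer is its only
    // intended user. Guarding it with #if DEBUG compiles it out of
    // release builds so it can never be reached on a production server.
    public static string EchoUnchecked(string rawInput)
    {
        return rawInput; // stands in for a real operation with no sanity checks
    }
#endif
}
```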

Another item that may need cleaning up is unnecessary source code on the server. If you're working with a language that has both source code and compiled code, such as Java or the .NET languages, it's important to remove the source code from your servers. An attacker can use source code as a map of the areas where the application isn't protected; it shows them exactly where to focus their attacks.
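A small deployment check can help enforce this. The sketch below, with an illustrative set of file extensions, sweeps a deployment directory for source files that should have been removed:

```csharp
using System;
using System.IO;

static class DeploymentChecks
{
    // Post-deployment sweep: flag any source files that made it onto the
    // server alongside the compiled binaries.
    public static void ReportStraySource(string deployRoot)
    {
        foreach (string pattern in new[] { "*.cs", "*.vb", "*.java" })
        {
            foreach (string file in Directory.EnumerateFiles(
                         deployRoot, pattern, SearchOption.AllDirectories))
            {
                Console.WriteLine($"Source file found on server: {file}");
            }
        }
    }
}
```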

Dress in layers
The final thing you can do in your application design to help provide system security is to design with the understanding that multiple layers of the architecture exist to protect the systems. Designing applications from the beginning with the knowledge that layers often sit between the application and the user gives you access to infrastructure tools that help protect your application. Application gateways, for example, can inspect every call coming into your Web application and block those that could cause a problem. When starting a design from scratch, you know to test for these interactions and to identify the areas where the gateway may need to allow all traffic through.

Planning from the outset to develop secure software generally results in a relatively secure application. However, even if you don't have the luxury of starting from scratch, you can apply the same techniques and thought processes to existing applications to make them better protect the systems they run on.
