Because of security issues, system architects have been forced to choose between the rich user interface provided by Windows and the reach of systems designed with standard Internet protocols. Neither Sun, with its Java applets, nor Microsoft, with its ActiveX controls, has been able to provide a combination of security and robustness that satisfies both the developers and the security officers of the companies they serve. This is because traditional desktop operating systems allow code executing on the target platform to use one of only two security mechanisms: execute in a sandboxed environment, or execute with the permissions of the user. The .NET Framework allows the developer to use either of these mechanisms but also adds a third one called code access security.

In this scenario, the code executes on the target platform based on a combination of a set of predefined system permissions and the source of the code itself. When the Common Language Runtime (CLR) loads an executable, it examines the underlying code and assigns a level of trust that controls the running code's access to all protected system resources. This level of trust is determined by the security policy installed as part of the standard .NET Framework. For example, the default installation policy defines assemblies loaded by the CLR from the local file system as having local origin, and it permits assemblies of local origin to access certain system resources. By designing systems to take advantage of these features, an architect can create applications in which the CLR will automatically help protect against inadvertent or malicious attempts to gain access to the resources on the user's system.

What can code access security protect?
The .NET Framework has many different classes for which you can define permissions. These include permissions for access to system logs, environment variables, local files, and even user interface elements like the system clipboard. There are also classes that allow designers to control a program's ability to access SQL Server (SqlClientPermission) or OLE DB (OleDbPermission) databases, print to system devices (PrintingPermission), make or accept TCP/IP connections (SocketPermission), and even access the Internet (WebPermission). You'll need a basic understanding of how code access security works in order to begin designing your systems to implement these safeguards.

Code access security fundamentals
If your code attempts to access a protected system resource without having been granted permission to do so, the CLR's exception manager throws a SecurityException. You can use try/catch/finally blocks to catch and handle these exceptions. You can also control access to protected resources at runtime using any of the defined permission classes (e.g., WebPermission or SocketPermission). Each of these classes implements both Deny and PermitOnly methods to support runtime access definition.
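As a minimal sketch of the first point, the following code catches the SecurityException that the CLR throws when an assembly lacks the file-access permission demanded by a local file read. (The file path is purely illustrative, and the example assumes the .NET Framework era in which code access security enforcement was active.)

```csharp
using System;
using System.IO;
using System.Security;

class ReadSettings
{
    static void Main()
    {
        try
        {
            // Reading the local file system is a protected operation; the CLR
            // checks FileIOPermission before the read is allowed to proceed.
            using (StreamReader reader = new StreamReader(@"C:\config\settings.txt"))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
        catch (SecurityException ex)
        {
            // Thrown when the assembly's assigned level of trust does not
            // include the demanded permission.
            Console.WriteLine("Blocked by code access security: " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Cleanup runs whether or not access was granted.");
        }
    }
}
```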

When your code invokes a permission class's Deny method, the CLR restricts access to the associated protected system resources for any subsequent code. To remove the Deny restriction, your code calls the RevertDeny method. Rather than using the Deny method of a permission class to specify which resources cannot be accessed, you can use the PermitOnly method to specify which resources the code may access. A PermitOnly call can't grant more permission than is allowed by the current level of trust. For example, if the CLR determines that some code originated from the Internet and thereby assigns read-only access to the data stored in the browser cache, your program can't use PermitOnly to give the code access to the local hard drive. When using PermitOnly, only those protected resources specified by the method call can be accessed without causing the CLR to throw a security exception. Just as RevertDeny cancels a previous Deny, RevertPermitOnly cancels the effect of a previous PermitOnly call.
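A sketch of the Deny/RevertDeny pattern using FileIOPermission (the directory name is hypothetical; Deny and RevertDeny are the .NET Framework CAS methods described above, which later versions of .NET deprecated):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class DenyDemo
{
    static void Main()
    {
        // Deny write access under C:\data for all code called while this
        // stack frame is active, regardless of what policy would allow.
        FileIOPermission writePerm =
            new FileIOPermission(FileIOPermissionAccess.Write, @"C:\data");
        writePerm.Deny();
        try
        {
            WriteLog(); // the write below now fails the CLR's permission check
        }
        catch (SecurityException)
        {
            Console.WriteLine("Write blocked by Deny.");
        }
        finally
        {
            // Cancel the Deny restriction for the current stack frame.
            CodeAccessPermission.RevertDeny();
        }
    }

    static void WriteLog()
    {
        using (System.IO.StreamWriter w = new System.IO.StreamWriter(@"C:\data\log.txt"))
        {
            w.WriteLine("entry");
        }
    }
}
```

PermitOnly follows the same shape: call PermitOnly on a permission instance describing the resources the code may touch, and later call CodeAccessPermission.RevertPermitOnly to restore the previous state.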

Determining permission boundaries
Given the component-based nature of applications developed using the .NET Framework, it’s likely that, when the CLR finally executes a chunk of code that attempts to access protected resources, the actual call is made deep within a call stack. At this point, the CLR verifies that every caller in the calling chain has the needed permissions to carry out the requested operations—also known as “walking the stack.” Each component in the stack has an associated stack frame that contains its assigned level of trust. If any called component does not have the necessary permission to execute against a protected resource, the operation will fail and a security exception will be thrown. This also means that the CLR will walk the stack every time an operation against a protected resource is attempted. By checking the security for each component, the CLR can avoid a situation in which the called component attempts to use the level of trust of its caller. Unfortunately, this has the potential to add significant processing overhead.
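The stack walk can also be triggered explicitly. As an illustrative sketch (not from the article), a component can call Demand on a permission object before doing any work, forcing the CLR to verify every caller in the chain up front:

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class Logger
{
    public void Append(string path, string entry)
    {
        // Demand write permission before touching the file. The CLR walks
        // the call stack: every caller above this frame must hold
        // FileIOPermission for this path, or a SecurityException is thrown
        // here rather than partway through the operation.
        new FileIOPermission(FileIOPermissionAccess.Write, path).Demand();

        using (System.IO.StreamWriter w = new System.IO.StreamWriter(path, true))
        {
            w.WriteLine(entry);
        }
    }
}
```

Because each Demand repeats the full stack walk, placing such checks at component boundaries rather than inside tight loops is one way to keep the overhead described above under control.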

Architecting solutions that use these code access security features requires the ability to balance issues of security with issues of performance. Developing standards for when and how to implement code access security and how to maintain the local trust levels will allow you to balance these needs more precisely.