A secure system is built around the principles of a trusted computing base (TCB), which incorporates mechanisms for identifying and authenticating users (I&A), controlling access to system objects, and auditing users’ actions. A TCB's goals are to ensure the integrity, confidentiality, availability, accountability, and assurance of the system. It also ensures that all of a system’s security mechanisms complement each other to secure its resources. The TCB is designed to enforce the security requirements stipulated by a system-specific security policy. (Note: This security policy should have been developed during the initiation phase of the life cycle. It is one of the documents driving the security engineering process.)
Did you miss an article?
This article continues a series focusing on system security. Check out these earlier installments:
- "Secure your system with the TCB concept"
- "Engineer security during initiation phase to help prevent system vulnerabilities"
Designing a secure system is a five-step process
There are five initial steps to properly design a secure system:
- Determine security requirements derived from the system-specific security policy.
- Determine what system components will be used and what security mechanisms they provide.
- Build a matrix of each component’s mechanisms providing I&A, access control, and auditing.
- Using the matrix, determine which mechanisms should be applied.
- Match the security goals against the design to ensure that integrity, confidentiality, availability, accountability, and assurance are provided.
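The matrix built in step 3 and consulted in step 4 can be sketched as a simple lookup table. Here is a minimal illustration in Java; the component and mechanism names are assumptions for the sake of the example, not part of any particular design:

```java
import java.util.List;
import java.util.Map;

public class MechanismMatrix {
    // Step 3: each component mapped to the security mechanisms it provides.
    // Component and mechanism names here are illustrative assumptions.
    static final Map<String, List<String>> MATRIX = Map.of(
        "Linux server", List.of("I&A: console login", "Auditing: syslog"),
        "MySQL",        List.of("I&A: user table",
                                "Access control: GRANT privileges",
                                "Auditing: query log"),
        "Connector/J",  List.of("I&A: password encryption"));

    // Step 4: list every component that offers a given mechanism type,
    // so you can decide which mechanism to apply where.
    static List<String> providersOf(String mechanismType) {
        return MATRIX.entrySet().stream()
            .filter(e -> e.getValue().stream()
                          .anyMatch(m -> m.startsWith(mechanismType)))
            .map(Map.Entry::getKey)
            .sorted()
            .toList();
    }
}
```

Querying the matrix for each of I&A, access control, and auditing quickly shows where coverage overlaps and where it is missing.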
In the first step, the security requirements will be either functional requirements dictated by the business model or government-mandated requirements dictated by public law. For example, systems storing personal information (e.g., Social Security numbers, employment histories, and medical histories) are subject to the Privacy Act of 1974. The requirements dictated in this act are good examples of requirements stemming from public law (5 USC 552a). There can be serious repercussions if the system’s security requirements are not met and the system’s data is compromised, especially if public laws have been violated.
Keep in mind that a secure system does more than just protect a system’s data. A common misperception holds otherwise, and it usually manifests itself in a security solution that focuses solely on protecting the data. Protecting the data is important, but it is only one aspect of a multifaceted puzzle. A well-designed security solution ensures:
- The integrity, confidentiality, and availability of the system and its data.
- The accountability of the users and the system’s services.
- The reliability of the system's security mechanisms.
The second step requires knowledge of the system architecture along with the security requirements. The specific solution set doesn't need to be completely identified, but the types of components to be used must be known. This step is critical to the success of the security design—the more information that is available, the better the design. For example, I’m designing a customer loyalty tracking system for a small company. Table A provides a graphical overview.
The Loyalty Tracking System (LTS) features a client/server architecture, with a MySQL 4.0 database and an application written in Java. There are three distinct types of users: LTS clients, LTS managers, and system administrators. All users will connect to the LTS server across the Internet; since the Internet should rarely go down, this will help ensure the system’s availability.
The "shake, rattle, and roll" approach
I try to look at a system’s security from a different perspective. I use a “shake, rattle, and roll” approach. When I was a policeman, I often had to do building checks—ensuring that the doors and windows were locked. I was taught to do the “shake, rattle, and roll,” meaning you shake the doors, rattle the windows, and roll on to the next building. Here you want to use the same philosophy—shake the system’s design to identify potential shortcomings, rattle the mechanisms, and once you’re sure it’s locked down, roll it out for production.
Identification and authentication
The central question when dealing with identification and authentication issues is: Where is the most reliable place for the user to identify and authenticate himself or herself to the system? I could code a screen that prompts a user to enter a user ID and password, but there’s a problem with this idea. To validate the user ID and password against those stored in the database, I would have to first log in to the database using a system-level user ID and password with access to the user database. Worse still, even if the user ID and password are invalid, I’ve already provided an open pipe to the database that can easily be hijacked and exploited. So the application level is not the best choice for providing I&A.
I could provide I&A with the Linux server by creating user accounts for every user, but this isn’t a good idea either. I don’t want to grant every application user access to the server’s operating system. Doing so would significantly increase the potential for an account to be compromised and increase the risk of a system breach. The most logical choice, then, would be to have the user log in to the database. Since Connector/J provides password encryption, this is where I&A is implemented.
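Since the conclusion is that the database performs I&A, a minimal sketch of the login path simply passes the end user’s own credentials straight through Connector/J, with no system-level account embedded in the application. The host and schema names below are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class LtsLogin {
    // Hypothetical server and schema names for illustration.
    static String jdbcUrl(String host, String schema) {
        return "jdbc:mysql://" + host + "/" + schema;
    }

    // The user's own ID and password go to MySQL, so the database,
    // not the application, performs identification and authentication.
    // No system-level account is ever embedded in the client code.
    static Connection login(String userId, String password) throws SQLException {
        return DriverManager.getConnection(jdbcUrl("lts-server", "lts"),
                                           userId, password);
    }
}
```

If MySQL rejects the credentials, `getConnection` throws an `SQLException` and no pipe to the database is ever opened for that user.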
To harden the LTS, and to prevent the Linux OS from weakening the I&A, only system administrators should be allowed to log in from a console. No other users should be granted access to the Linux OS. DBA access will only be granted from the console. Server applications will be loaded with a user ID specifically for loading software; once all applications have been loaded, this user account will be disabled so that if a buffer is overrun, the malicious code has no rights to run.
Access control
Now let’s take a look at access control. Because I will have three types of users, I'll need a facility for grouping users. Each user group will be granted access to sets of information based on group name. For example, user John Doe is a manager and, as such, would be assigned to the manager group or role. This type of access control, in which the data's owner grants permissions to users and groups at his or her discretion, is known as discretionary access control. Since MySQL provides native support for discretionary access control, the LTS will use it.
Incidentally, it’s a good idea to build access controls as close to the data as possible. Generally speaking, the more granular the control, the better; it allows developers to grant the least amount of privileges that a group (or role) requires. This principle is referred to in NIST publications as the least privilege principle.
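In MySQL, this grouping is expressed as GRANT statements issued over a DBA connection. MySQL 4.0 has no named roles, so a "role" here is simply the privilege set applied to every member of a group. A sketch, assuming a hypothetical lts.loyalty table and two groups:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class LtsGrants {
    // Least privilege per group: clients may only read, managers may also
    // write. The table name and host pattern are illustrative assumptions.
    static String grantFor(String group, String mysqlUser) {
        String privileges = group.equals("manager")
            ? "SELECT, INSERT, UPDATE"
            : "SELECT";
        return "GRANT " + privileges + " ON lts.loyalty TO '"
             + mysqlUser + "'@'%'";
    }

    // Executed over a DBA connection established from the console.
    static void apply(Connection dbaConn, String grantSql) throws SQLException {
        try (Statement st = dbaConn.createStatement()) {
            st.executeUpdate(grantSql);
        }
    }
}
```

Because the grants live in the database itself, the control sits as close to the data as possible, in keeping with the least privilege principle.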
Auditing
A common misconception many designers share is that everything must be audited. Nothing could be further from the truth. The system only needs to audit actions involving security mechanisms. For example, creation of a user ID must be recorded in a log that details the user ID used to initiate the action, the date and time stamp of the activity, and whether the action was successful.
LTS will use auditing in two places. The server OS will audit SA actions within Linux. Because Linux has a limited view of what’s happening inside MySQL, the system will also use the auditing capabilities built into the database engine. Together, these two audit trails should prove sufficient to meet the TCB requirements.
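An audit record, whatever mechanism produces it, boils down to who, what, when, and outcome. A sketch of one possible record format (the layout itself is an assumption, not a standard):

```java
import java.time.Instant;

public class AuditRecord {
    // One record per security-relevant action: the initiating user ID,
    // the action taken, a timestamp, and whether the action succeeded.
    static String format(String actorId, String action,
                         Instant when, boolean success) {
        return when + " actor=" + actorId + " action=\"" + action
             + "\" result=" + (success ? "SUCCESS" : "FAILURE");
    }
}
```

A user-ID creation event, for instance, would record the administrator's ID, the CREATE USER action, the timestamp, and SUCCESS or FAILURE.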
Encryption
Whenever sensitive information passes beyond the user’s direct control (for example, beyond a local network and onto the Internet), it’s wise to encrypt that information. Connector/J natively encrypts passwords, but encrypting passwords isn’t quite enough for this implementation. Since the LTS has the potential of passing unique personal identifiers, such as driver’s license and Social Security numbers, it is important to provide a means of ensuring that this data isn’t compromised. Current versions of MySQL provide an optional SSL component, which will also be included in the LTS’s security design.
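On the Connector/J side, enabling SSL is a matter of connection properties. The property names below (useSSL, requireSSL) come from older Connector/J versions and are an assumption here; check the documentation for the driver version actually in use (newer drivers use sslMode instead):

```java
public class LtsSslUrl {
    // Request and require an SSL-encrypted session so identifiers such as
    // Social Security numbers never cross the Internet in cleartext.
    // Property names assume an older Connector/J release.
    static String sslUrl(String host, String schema) {
        return "jdbc:mysql://" + host + "/" + schema
             + "?useSSL=true&requireSSL=true";
    }
}
```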
Having accounted for each of the TCB principles (I&A, access control, and auditing) in the LTS’s design, you must ensure that the TCB goals of integrity, confidentiality, availability, accountability, and assurance are met. Ask yourself these questions:
- Does the design ensure the integrity of the data? Not yet, but this can easily be remedied by adding validity checks and triggers. Always address such design deficiencies while building the application.
- Have you achieved confidentiality? You'll cover this aspect with roles and table permissions.
- Is it available? Here’s another area where the design in my example may fall short. There is a single point of failure: the LTS server. If the LTS server goes offline, the system is no longer available. This problem was communicated to the customer, who chose to accept the risk because the cost of redundant servers outweighs the impact of unavailability.
- Is it accountable? You accomplish accountability via auditing.
- Does the design provide assurance? Yes. The purpose of the design is to ensure the system will function as expected; by properly configuring the components, the user can be reasonably sure the system will function as anticipated.
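The validity checks mentioned under the integrity question can live in the application as well as in the database. A sketch of one such check, using an assumed format rule for Social Security numbers:

```java
public class ValidityChecks {
    // Integrity check before insert: a U.S. SSN is nine digits, optionally
    // grouped 3-2-4 with hyphens. The format rule here is illustrative only.
    static boolean isValidSsn(String ssn) {
        return ssn != null && ssn.matches("\\d{3}-\\d{2}-\\d{4}|\\d{9}");
    }
}
```

Rejecting malformed identifiers before they reach the database keeps the stored data consistent even if a client misbehaves.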
So, the customer for whom the system in this example was being developed will be presented with a well-designed system, based on the NIST TCB principles. Furthermore, the system’s developers can be reasonably sure the system will operate securely once fielded because the security mechanisms were logically implemented. With this example, you can see that the five-step TCB process provides a simple means of engineering the security of a system.