With new legislation on the horizon holding developers liable for damages resulting from a cyberattack, it is important for systems engineers, architects, and analysts to understand how to build a secure system. While there is a lot of good, sound advice on the subject, much of what I’ve read addresses security only as it relates to a specific environment, language, or toolset.
The National Institute of Standards and Technology (NIST) provides generic guidance, and adhering to it provides the most rational defense against a liability lawsuit. NIST approaches security as an engineering discipline applied throughout the entire lifecycle instead of treating it as an afterthought to the development process. The primary focus of security engineering should be building the trusted computing base (TCB), which I define as the sum of the security mechanisms, both hardware and software, that together provide reasonable assurance that the system’s data is safe.
NIST defines six fundamental requirements that must be met before a system can be considered secure. These basic requirements call for:
- A system enforced security policy—There must be an explicit and well-defined security policy enforced by the system.
- Labeling—The system must provide a means of associating access control labels with objects.
- Identification and Authentication—Access to the system’s information must be mediated based on the user’s identification.
- Accountability—Auditing must be implemented in such a way that any actions affecting security can be traced back to the user responsible for the actions.
- Assurance—There must be hardware or software mechanisms that enforce the above requirements.
- Continual protection—The hardware or software mechanisms enforcing the basic security requirements must be continually protected against tampering.
A TCB is made up of the elements that meet these NIST requirements. It contains four primary security mechanisms: a security policy, identification and authentication, labeling (e.g., Oracle’s fine-grained access controls or role-based access controls), and auditing.
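To make the accountability requirement concrete, here is a minimal sketch of what an audit mechanism boils down to: every security-relevant action is recorded with the responsible user's identity and a timestamp so it can be traced later. The `audit_record` structure and `audit_log` function are hypothetical names, not part of any standard API.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical audit record: ties an action to the user responsible
   for it, as the NIST accountability requirement demands. */
typedef struct {
    unsigned user_id;
    time_t   when;
    char     action[64];
} audit_record;

/* Append one record to an audit log; returns 1 on success, 0 on failure. */
int audit_log(FILE *log, unsigned user_id, const char *action) {
    audit_record rec = { user_id, time(NULL), "" };
    snprintf(rec.action, sizeof rec.action, "%s", action);
    return fprintf(log, "%u %ld %s\n",
                   rec.user_id, (long)rec.when, rec.action) > 0;
}
```

A real audit trail would also have to be protected against tampering (the continual-protection requirement); an append-only log a user cannot rewrite is the usual design choice.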
In the real world
To help you understand how a TCB works, I’ll use a real-world metaphor—a bank, one of the most trusted icons in today’s society. Although there are still a few people who hide money under mattresses or in coffee cans buried in the backyard, most of us have little to no hesitation about putting our paychecks directly into the bank. We trust that when we put our money in, the amount will be accurately recorded, the money will be safeguarded, and it will be available when we want or need it back.
While I hardly ever think about all the security mechanisms in place at a bank, it is because of them that I place such a high degree of trust in the banking system. All the mechanisms of a trusted computing base are present. For example, before I can withdraw funds from my account, I must identify and authenticate myself to the teller with a withdrawal slip containing my account number and signature: something I have (the slip), something I know (my account number), and something unique to me (my signature). There are also discretionary access controls preventing unauthorized access to my account; that is, there are labels on my account stating who is authorized to withdraw funds. I am also reasonably sure that there will be very few clerical errors. And because all transactions are audited, if an error does occur, it can be identified and corrected in a reasonable amount of time.
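The labels on the bank account can be sketched as a simple discretionary access control check: the object carries a list of authorized principals, and access is mediated by comparing the requester against that list. The `acl_label` type and `may_withdraw` function are illustrative names invented for this sketch.

```c
#include <string.h>

#define MAX_PRINCIPALS 4

/* Hypothetical access control label attached to an object:
   the names of the principals authorized to use it. */
typedef struct {
    const char *authorized[MAX_PRINCIPALS];
    int count;
} acl_label;

/* Mediate access: grant only if the requester appears on the label.
   Anyone not explicitly listed is denied (default deny). */
int may_withdraw(const acl_label *label, const char *principal) {
    for (int i = 0; i < label->count; i++)
        if (strcmp(label->authorized[i], principal) == 0)
            return 1;
    return 0;
}
```

The default-deny stance is the important design choice here: the burden is on the label to grant access, never on the code to enumerate who is forbidden.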
Just as in a well-defended system, what makes the bank secure is the interweaving of security mechanisms. A properly engineered system should provide identification, authentication, discretionary access controls, and auditing. The bank security model provides assurance that the above mechanisms are enforced, and provides for continual protection. In the bank model, all the security mechanisms work together, and we trust them to function properly. For a system to be considered secure by NIST standards, it must be similarly constructed.
Don’t forget the environment
In development, the environment, not just the software, must enforce the security mechanisms. Many years ago, I administered a development lab filled with Sun SPARCstation workstations. These machines were rife with vulnerabilities, especially regarding the root account. There were so many ways to hack root—that is, to gain unauthorized system administrator access—that administering them was a nightmare.
As a good administrator, though, I patched hole after hole until I was reasonably assured that the development environment was safe. Shortly after I had done all this, a young programmer called me over to his terminal and said, “Watch what happens when I hit [Ctrl]K, [Ctrl]K.” Two keystrokes later, we were staring at a ‘#’—the root prompt. This meant that anyone with a user ID and password, regardless of access level, could log in and gain root access. Very scary!
What was the implication for the software we were developing? Could we be reasonably assured that any data safeguarded on these workstations would be safe? Of course not. Returning to our bank model, this problem—any user being able to gain an unauthorized level of access and pose as the system administrator—is analogous to a bank customer being able to walk in and be granted the rights of a bank manager. The environment is crucial to the integrity of the security model.
There are other aspects to building a TCB, such as how memory is handled and protected. This falls under the NIST requirement for assurance. Many years ago, to help debug an Ada program, I wrote a routine to walk through and dump my portion of the system’s memory. Much to my surprise, I was able to access the system’s entire memory block—pulling out user IDs and passwords as well as information about the processes being run and the parameters used to invoke them.
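The assurance that was missing in that lab can be sketched as a mediated read: every access goes through a bounds check, so a process can only walk its own region of memory, never the system's entire block. The `region` type and `read_byte` function are invented for illustration; on a real system this mediation is done by the hardware memory management unit, not application code.

```c
#include <stddef.h>

/* Hypothetical view of the memory a process is allowed to touch. */
typedef struct {
    const unsigned char *base;
    size_t size;
} region;

/* Mediated read: refuse any offset outside the caller's own region,
   the check my debug routine was able to bypass. */
int read_byte(const region *r, size_t offset, unsigned char *out) {
    if (offset >= r->size)
        return -1;            /* outside this process's region */
    *out = r->base[offset];
    return 0;
}
```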
Many, if not all, of the current buffer overflow vulnerabilities are related to how memory is handled. A buffer overflow can be exploited when the memory just past the buffer holds instructions or operations used to process the buffer’s contents. For example, a login vulnerability may exist if a user ID longer than a certain number of characters is accepted: the remainder of the string is interpreted as a command.
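The defense is to check the length before copying, and to reject input that does not fit rather than let it spill past the buffer. A minimal sketch, with `copy_userid` as a hypothetical name:

```c
#include <string.h>

/* Bounded copy of a user ID into a fixed buffer. The classic flaw is
   an unchecked strcpy(dst, src): anything past the buffer's end
   overwrites adjacent memory. Here over-long input is refused
   outright rather than truncated or allowed to overflow. */
int copy_userid(char dst[], size_t dstsz, const char *src) {
    if (strlen(src) >= dstsz)
        return -1;            /* too long: refuse the input */
    strcpy(dst, src);         /* now known to fit, trailing NUL included */
    return 0;
}
```

Refusing, rather than silently truncating, is deliberate: a truncated ID might collide with another valid one, which is its own security problem.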
How processes are invoked is another concern in building a TCB. Again, going back to the development lab, we discovered that when we shelled out to the operating system, we could run a single command, and it would execute in a root context, as though an administrator had run it. Because the compiler had been installed as root, any process spawned by the compiler ran as root. It isn’t hard to imagine the potential threats to the system. This was definitely not a trusted computing base.
Security is the key
It is important to ensure that the application you are building meets the basic NIST requirements. Being able to show that the system was designed around the TCB concept will not only ensure that the NIST requirements are met, it should also help defend against liability suits stemming from exploited vulnerabilities.