
Ten commandments for the security-conscious programmer

You may not fully realize how the simple programming decisions you make every day can compromise security. Here are the steps you should take to keep hackers and other security threats at bay.


You'd think that common sense would prevent users from giving out their passwords—especially to people they don't know. However, during numerous security assessments, users compromised their passwords, giving them to a complete stranger: me. This blatantly defies common sense, especially since most of these users knew that a security test was underway.

Following this experience, I reviewed security alerts and vendor warnings from the last few years. I used the information to build a basic set of rules. Although the list is not all-inclusive, these rules correlate to the traits most often exploited. Here are 10 rules for the security-conscious programmer:
  1. Always perform data input validation.
  2. Always include exception handlers.
  3. Use shareware and open source sparingly and cautiously.
  4. Maintain configuration control over source code.
  5. Turn auditing on but don’t overaudit.
  6. Document your source code, especially the security controls/mechanisms.
  7. Develop and test the system with security turned on.
  8. Always use good passwords.
  9. Don’t test with privileged accounts (e.g., root, administrator, or dba).
  10. Shell out to the operating system only as a last resort.

Perform input validation
Although input validation is the chief way to mitigate buffer overflow vulnerabilities, programmers seem reluctant to code or use it. A common hacker exploit uses cross-site scripting and similar injection attacks to gain unauthorized access to Web servers and databases: when an attacker embeds special characters in a URL, the Web server unwittingly interprets them as commands and executes them. As another example, attackers often exploit the way variables are allocated memory. If they can force an abnormally long string into a variable that hasn't been allocated enough space, they can overflow the buffer and have the remainder of the string executed as code.
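
As a rough sketch, an allowlist-style check might look like the following Python; the function name, length limit, and character set are illustrative assumptions rather than a standard:

import re

MAX_NAME_LEN = 64                                # illustrative limit
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]+$")  # allowlist of acceptable characters

def validate_username(raw):
    """Reject oversized input and anything outside the allowed character set."""
    if len(raw) > MAX_NAME_LEN:
        raise ValueError("input too long")
    if not NAME_PATTERN.fullmatch(raw):
        raise ValueError("input contains disallowed characters")
    return raw

The point is that the application decides what it will accept and rejects everything else, instead of trying to enumerate every dangerous character an attacker might send.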

Include exception handlers
Unfortunately, many of the exception and error handlers I've reviewed simply write a description of the error to a log and reraise the exception, propagating it out for the operating system to handle. Because these handlers don't actually handle the error, they leave an avenue open for attacking the application. Error handlers should actually handle errors, not just log them.
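
A minimal sketch of the difference in Python; the repository object and the safe default are hypothetical:

import logging

log = logging.getLogger(__name__)

def load_user_profile(user_id, repository, default_profile):
    """Fall back to a known-safe default instead of just logging and re-raising."""
    try:
        return repository.fetch(user_id)
    except ConnectionError as exc:
        log.warning("profile lookup failed for %s: %s", user_id, exc)
        return default_profile  # the application continues in a defined, safe state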

Be wary of open source
Proponents of the open source movement herald its safety, but the truth is that there is little control over who contributes to an open source project, which raises the risk of malicious code being introduced. I realize open source is a delicate subject, so let me state plainly that I am not against it. A knowledgeable coder should, however, review the code before including it in a system.

Maintain configuration control
Tightly controlling the source code baseline minimizes the possibility of inserting malicious code into a system. Several years ago, an organization had to terminate a programmer who continually failed to follow programming standards. “Fire me and I’ll wreak worldwide havoc on your system,” he threatened. He explained that over the previous six months, he had built numerous logic bombs into the live system. The biggest concern was not the threat posed by this lone insider, but the potential thousands of attackers combing the Internet, looking for backdoors to exploit. A team of programmers spent the next couple of months combing through the entire system because there was no record of what modules this malicious programmer had modified. Had a proper change management process been in place, the manager could have quickly identified which modules had been modified and been able to compare them to previous versions of the code.

Audit, but don't go overboard
I am always amazed at how often auditing is either completely disabled or badly overconfigured; it is usually implemented in a feast-or-famine manner. Auditing should focus on the security mechanisms: creating, deleting, or modifying users; changing or adding permissions; failed login attempts; and so forth. Because audit logs fill up quickly and exhaust system resources, audit only what is necessary. Resist the urge to overaudit, because excessive logging makes it hard to pull meaningful information out of the logs.
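
A sketch of "audit the security mechanisms, not everything," in Python; the event names are assumptions, not a prescribed list:

import logging

audit_log = logging.getLogger("audit")

# Only events that touch the security mechanisms get recorded.
AUDITED_EVENTS = {"user_created", "user_deleted", "permission_changed", "login_failed"}

def record_event(event_type, username, detail=""):
    """Write security-relevant events to the audit log and skip everything else."""
    if event_type in AUDITED_EVENTS:
        audit_log.info("%s user=%s %s", event_type, username, detail)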

Don’t forget the documentation
Documenting source code seems to be a speed bump on the road to completion and, consequently, success. As a result, documenting source is all too often done as an afterthought. To prevent this from happening, I try to do the documentation first. At the beginning of every module, I write pseudo code that reflects the basic logic, and I document the variables, associated types, and values, if working with enumerated types. Although this does take a little time, it has a great benefit—the documentation helps me stay on course as I build the module, and it provides a specification if the module is reused. Also, documenting helps security reviewers understand the function of each module. It can be very useful in identifying logic flaws and helping the original coder check logic.
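
In practice, that documentation-first habit can look something like this; the module and variable names are invented for illustration:

def transfer_funds(source_account, target_account, amount):
    """
    Pseudo code written before the implementation:
      1. verify the caller is authorized to debit source_account
      2. verify amount is positive and does not exceed the source balance
      3. debit source_account and credit target_account in a single transaction
      4. write an audit record of the transfer

    Variables:
      source_account, target_account -- account identifiers (str)
      amount -- decimal.Decimal, must be positive
    """
    raise NotImplementedError("implementation follows the documented steps")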

Use security when testing
I frequently see systems developed in an environment with the security mechanisms disabled. Security stays off through the various phases of testing and is enabled only for security testing, and the systems often fail to perform once it is turned on. For example, one system I reviewed handled Privacy Act data and medical data subject to HIPAA. The developer recognized the need for SSL encryption but put off implementing and testing it until the end. As a result, the system was not designed to carry the overhead and failed to meet the response times the customer required. None of this was discovered until the application was delivered and the customer was unable to use it. It is always a good idea to develop in an environment that matches the target environment as closely as possible, and that includes the security environment. Don't take for granted that your system will work once the security mechanisms are enabled; it often won't.
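
One way to keep encryption in the picture from the beginning is to measure it during development. This sketch uses Python's standard ssl and socket modules to time a TLS connection against a development server; the request itself is only an example:

import socket
import ssl
import time

def timed_tls_request(host, port=443):
    """Time a TLS handshake plus one small request."""
    context = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode("ascii") + b"\r\n\r\n")
            tls.recv(4096)
    return time.perf_counter() - start

If response times blow past the requirement with encryption enabled, it is far better to find out during development than at delivery.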

Build strong passwords
Almost every system I’ve tested has had at least one weak password. Weak passwords, like password1, seem to abound. Here are some basic password construction rules:
  • Passwords should consist of at least six characters.
  • Passwords should be case-sensitive.
  • Passwords should contain at least one numeric character.
  • Passwords should contain at least one special character.

One good idea for generating passwords is to use a mnemonic, such as Hdso1w&f, for the phrase, “Humpty Dumpty sat on one wall and fell.” This creates longer passwords, but they are easier to remember and harder to crack. At a minimum, do not use Social Security numbers, birthdays, pet names, team names, or anything else highly guessable.
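
A small checker for the rules above might look like this in Python; reading the case-sensitivity rule as "require mixed case" is my assumption for this sketch:

import string

def is_acceptable_password(password):
    """Check length, mixed case, at least one digit, and at least one special character."""
    return (
        len(password) >= 6
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

The mnemonic Hdso1w&f passes this check; password1 does not.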

Avoid privileged accounts
Another common mistake is building and testing a system using privileged accounts. A privileged account has elevated rights to the system or its resources. For example, Windows programmers often have power user or administrator rights, and Linux and UNIX developers often have root level privileges. Build the system—or at least test the system—with a user account that has the same level of privileges as the target users. This helps identify permission problems early. I have seen live systems that require users to be members of the administrators group or have sysdba privileges to log on or use the system. This is not a good idea, nor is it very secure.
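
A simple guard at the start of a test run can catch this early; the root check below applies to UNIX and Linux, and the wording of the message is only an example:

import os
import sys

def refuse_privileged_run():
    """Abort test runs that are executing with elevated privileges (UNIX/Linux root check)."""
    if hasattr(os, "geteuid") and os.geteuid() == 0:
        sys.exit("Refusing to run as root; use an account with the same rights as the target users.")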

Don't play the shell game
Finally, shell out to the operating system only when absolutely necessary. I am not referring to the system relying on the operating system or database engine for identification, authentication, access control, or auditing. I am referring to the practice of providing the user a means of temporarily suspending the application to drop out to the operating system to execute commands. For example, in the vi editor, a user can drop out to the UNIX command prompt and execute system commands. I am also referring to the practice of calling operating system-level batch files from within an application. It is a good security practice to refrain from calling batch files unless absolutely necessary. I have seen a lot of systems that use a publicly available directory to build, store, and execute dynamically created batch files. If the directories are public, there is the risk that malicious users can FTP a poisoned batch file and finagle the system into executing it.
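
When a call to the operating system truly is unavoidable, invoke a fixed program with an explicit argument list rather than building a batch file or handing a command string to a shell. The gzip example below is only an illustration:

import subprocess

def archive_report(report_path):
    """Run a fixed executable with an argument list; nothing is interpreted by a shell."""
    subprocess.run(
        ["/usr/bin/gzip", "--keep", report_path],  # fixed program path, arguments passed directly
        shell=False,
        check=True,
    )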

Security must not be ignored
While adherence to these 10 rules will not automatically make a system secure, it will contribute significantly toward building one that is securable. Common sense, when coupled with these rules, prevents a major wipeout when it comes to security.

Feeling secure?
How secure is your own programming environment? Do you find these 10 rules helpful? What would you add to this list? Share your experiences and thoughts by posting a message in the discussion board below.

 
