Software Development: Avoid these 10 common Web-based application security mistakes

Web-based application security is a major concern for all developers. Taking the time to secure your applications may involve nothing more than avoiding these ten common mistakes.


If only writing articles were as easy as testing the security of Web applications. I was recently asked to compile a list of the most common Web application security mistakes. That's easy; these are the "gotchas" I usually see:
  1. Blind trust of information retrieved from cookies and parameters passed in the URL
  2. Unchecked input on screens
  3. Pre-validation accounts
  4. Unconstrained user navigation
  5. Setting Web folder permissions incorrectly
  6. Caching of sensitive information
  7. Leaving Web server demos installed (like those loaded by default with IIS)
  8. Forgetting to change default passwords on the database backend
  9. Not loading security patches
  10. Leaving Web administration ports enabled

Here's a brief description of the first five common mistakes:

Trust but verify
"Trust but verify" was the motto of one of my favorite bosses. This is a motto that Web application designers and programmers would do well to adopt. While cookies and URL-borne parameters make life much easier for the developer, the data passed in should always be validated.

Many Web-based businesses learned this the hard way with the infamous "shopping cart vulnerability," which enabled cyber thieves to change prices of items placed in the shopping cart. The shopping cart was nothing more than a text-based cookie. Upon checkout, the server would total the prices for the items stored in the cookie. Imagine—the client had total control over the prices. Worse, the server had no means of validating the data. I'm sure a lot of businesses experienced sticker shock!
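The fix is to treat the cookie as a hint, not a source of truth. Here's a minimal sketch in Python of a checkout that keeps only item IDs and quantities in the cookie and looks up every price server-side; the catalog and cookie format are illustrative, not from any real system:

# Minimal sketch: the cart cookie holds only item IDs and quantities;
# every price comes from the server's own catalog at checkout time.
PRICES = {"sku-1001": 19.99, "sku-1002": 4.50}   # stand-in for a price table

def checkout_total(cart_cookie):
    # Assumed cookie format: "sku-1001:2,sku-1002:1"
    total = 0.0
    for entry in cart_cookie.split(","):
        item_id, qty = entry.split(":")
        if item_id not in PRICES:
            raise ValueError("unknown item: %r" % item_id)   # fail closed
        total += PRICES[item_id] * int(qty)
    return total

print(checkout_total("sku-1001:2,sku-1002:1"))   # 44.48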

The best way to check for this is to clear all cookies, run the application, and look at the cookies written to disk. I always examine cookie content to verify that sensitive information, such as roles or, worse, userids and passwords, is not stored there.

Commands can equate to control
I was once asked to look at a system that passed program controls via parameters sent in the URL. As I looked at the source code, I noticed a common thread. System-level commands were embedded in the URL in the form "action='do something'".

During testing, I crafted a couple of customized URLs to see how the system handled them. As a result, I was able to take control of the system via commands the system didn't anticipate: "action='cat xxx >> /etc/passwd'".

The long and short is this: If you pass parameters via the URL, at least parse them for invalid and malicious content. Set constraints for your parameters so that if an unexpected value is passed in, your application can handle it properly. This is also easy to test: modify the address in the URL bar and see how the application handles the data.
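One simple way to set those constraints is an allowlist that maps each permitted action name to a handler and rejects everything else. A minimal sketch in Python (the action names and handlers are hypothetical):

def show_report():
    return "report contents"

def delete_temp_files():
    return "temp files removed"

# Only the actions named here are ever executed; everything else fails
# closed before it reaches a shell or the file system.
ALLOWED_ACTIONS = {"show_report": show_report, "delete_temp_files": delete_temp_files}

def dispatch(action_param):
    handler = ALLOWED_ACTIONS.get(action_param)
    if handler is None:
        raise ValueError("unsupported action: %r" % action_param)
    return handler()

print(dispatch("show_report"))          # runs the mapped handler
# dispatch("cat xxx >> /etc/passwd")    # raises ValueError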

Check, please
I often find fields that do not perform data validation on input. This is a gold mine for buffer overflow and SQL injection attackers. During testing, I'll open Notepad, create a 500+ character string, and paste it into the password field. Systems that don't limit the length of the input string will often hang or crash.
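If you'd rather automate that test, a quick sketch with Python's standard library will do; the login URL and field names here are placeholders for your own:

import urllib.parse
import urllib.request

# Hypothetical login endpoint and field names; substitute your own.
data = urllib.parse.urlencode({
    "userid": "tester",
    "passwd": "A" * 500,     # the 500+ character string
}).encode()

try:
    resp = urllib.request.urlopen("http://localhost:8080/login", data, 10)
    print("status:", resp.getcode())
except Exception as exc:
    # A hang, reset, or 500 here suggests the input length isn't checked.
    print("request failed:", exc)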

Then I'll test the validation rules by appending a condition that always evaluates to true (e.g., "OR 'x'='x'") to the password field. Many systems can be manipulated into allowing unauthorized access because of the way their SQL statements are constructed: an "OR TRUE" conditional makes the query match no matter what was entered. Here's a sample SQL statement, built by substituting the user's input directly into the query string, that could be manipulated:
Select userid, passwd from USERS where userid = :uid_entered and passwd = :pwd_entered

Assume the user entered admin in the userid field and password' OR 'x'='x in the password field. The SQL statement would expand to "select userid, passwd from USERS where userid = 'admin' and passwd = 'password' OR 'x'='x'". Because the OR condition is always true, the query matches regardless of the password, which is probably not what the designer was expecting.
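The standard defense is to use bind parameters instead of string concatenation, so input is always treated as data rather than as SQL. Here's a minimal sketch using Python's built-in sqlite3 module, following the USERS table from the example above:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table USERS (userid text, passwd text)")
conn.execute("insert into USERS values ('admin', 'secret')")

def credentials_match(uid_entered, pwd_entered):
    # The ? placeholders bind the raw input as values, so
    # "' OR 'x'='x" is just an (incorrect) password string.
    row = conn.execute(
        "select userid from USERS where userid = ? and passwd = ?",
        (uid_entered, pwd_entered),
    ).fetchone()
    return row is not None

print(credentials_match("admin", "secret"))        # True
print(credentials_match("admin", "' OR 'x'='x"))   # False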

The keys are under the welcome mat
I am also amazed at how often I see system accounts used to perform pre-validation logons to application databases. Many Web apps store user credentials (i.e., userids and passwords) in their own application database. Because the application must log on to the database in order to validate the credentials, systems usually handle validation using what I call a "pre-validation logon account"; for example, the system logs in as "admin/admin" and verifies that the database contains a user and password matching what the user entered on the screen.

Every pre-validation logon account I've encountered has been an "admin"-type account with extensive privileges within the application. What makes this an even riskier practice is that, in order for the Web application to have visibility into these passwords, they are usually either stored in a text file housed in the Webroot or embedded directly in the start page. Either way, a malicious user can get to the password very easily. This practice is a lot like hiding the house keys under the welcome mat, or a spare set of car keys on top of the visor: it's a big mistake that makes it easy to break into a Web-based application.
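A safer pattern, sketched below in Python, is to give the validation account the least privilege possible (read-only access to the USERS table), pull its database password from the environment rather than from the Webroot, and store password hashes instead of cleartext. The table layout and environment variable are assumptions for illustration:

import hashlib
import sqlite3

# Sketch under two assumptions: the validation account can only SELECT
# from the USERS table, and its database password comes from the
# environment (e.g., os.environ["VALIDATOR_DB_PASSWORD"]) rather than a
# file in the Webroot. sqlite3 has no accounts, so that part stays a
# comment here; with a real RDBMS it belongs in the driver's connect() call.
conn = sqlite3.connect(":memory:")
conn.execute("create table USERS (userid text, pwd_hash text)")

def hash_pw(pw):
    # Illustrative only; production code should use a salted, slow hash.
    return hashlib.sha256(pw.encode()).hexdigest()

conn.execute("insert into USERS values (?, ?)", ("alice", hash_pw("s3cret")))

def validate(uid, pw):
    # No cleartext passwords in the database, and nothing here needs
    # admin rights: a read-only lookup answers the only question asked.
    row = conn.execute(
        "select pwd_hash from USERS where userid = ?", (uid,)
    ).fetchone()
    return row is not None and row[0] == hash_pw(pw)

print(validate("alice", "s3cret"))   # True
print(validate("alice", "wrong"))    # False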

Going left
Another of my favorite tests is to have one of the application administrators log in legitimately, bookmark an administrative page (e.g., the "add a new user" page), and log out. I then check that the session expires upon logout by opening a browser and clicking the bookmark. A surprisingly high percentage of the time, the application grants me administrator rights automatically.
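The defense is to validate the session on every request, not just at login, and to destroy it server-side at logout. A minimal sketch of the idea in Python (the session store and flags are stand-ins for whatever your framework provides):

# In-memory stand-in for a server-side session store; a real application
# would use its framework's equivalent.
SESSIONS = {}    # session_id -> {"user": ..., "is_admin": ...}

def logout(session_id):
    # Destroy the session on the server, not just the client, so a
    # bookmarked admin page presented with the stale cookie is refused.
    SESSIONS.pop(session_id, None)

def require_admin(session_id):
    # Called at the top of every administrative page, not only at login.
    session = SESSIONS.get(session_id)
    if session is None or not session.get("is_admin"):
        raise PermissionError("login required")
    return session

SESSIONS["abc123"] = {"user": "admin", "is_admin": True}
logout("abc123")
# require_admin("abc123") now raises PermissionError instead of
# serving the "add a new user" page.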

Another technique is looking for dead code that has been commented out but not removed from the baseline. I'll log in as a guest user, or any user with limited rights, and attempt to surf to the dead code. Again, a lot of dead code is left in baselines.

Quite often, developers will build a startup page during development that is not intended for deployment, one that bypasses the login process and sets up a test environment. When it comes time to release the system, Web programmers typically comment out the original call or rename the page, but leave the test page sitting in the Webroot.

I also examine the code to determine whether there are multiple login or startup screens, and I test to see which one gives me administrator access to the system without providing credentials. I'll also attempt to navigate outside the controls, especially if the developer put in navigation guides. Usually, I can look at the browser history or cache to see where other users have navigated. Temporary Internet files provide a wealth of information if they aren't cleared. If the application clearly wants me to go right, I test to ensure there are controls preventing me from going left.

May I, please
Generally, developers aren't responsible for misconfigured permissions, unless the application was built to rely on them. For example, if a Web application requires a specific directory to be world-writable, or worse, readable, writable, and executable by everyone, the application provides a perfect spot for hiding (and perhaps triggering) malicious logic.

Many applications have directories for storing temporary reports. I will usually try traipsing through the folders on the Web server by modifying the URL, to get a feel for the permissions. If the application provides ad hoc query capability (there are typically world-writable folders for storing the results), I'll attempt to post an executable file there and then call it from the browser to see if it executes.

If the application provides any upload capability, I'll check for execute permissions. Very rarely should anyone have execute permissions on Web folders, nor should a user be able to run executables on the server. If I can shell out of the application (which I often can), any process spawned is generally owned by a privileged account, such as "oracle," "root," or "system," and has the rights of that owner. Problems arise if the application facilitates uploading data or fails to constrain access to the data once it is uploaded. Another common mistake is requiring weak permissions on upload directories.
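An upload handler can protect itself along these lines. This is a sketch assuming a POSIX host; the directory path is a placeholder, and the point is that uploads get generated names, no execute bit, and a home outside the Webroot:

import os
import secrets

UPLOAD_DIR = "/var/app/uploads"    # placeholder; lives outside the Webroot

def save_upload(data):
    # Never trust the client's filename; generate one instead.
    name = secrets.token_hex(16) + ".bin"
    path = os.path.join(UPLOAD_DIR, name)
    # 0o640: owner read/write, group read, and no execute bit for anyone
    # (further masked by the process umask). O_EXCL refuses to overwrite.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640)
    with os.fdopen(fd, "wb") as fh:
        fh.write(data)
    return path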

Avoid vulnerabilities
While this list isn't all-inclusive, it contains the common mistakes I've seen developers make when building Web-based applications. There are a few excellent resources for developers and testers who want more information about common vulnerabilities. I strongly suggest that all developers read the OWASP Top Ten report for 2004. You should also read the SANS Top 20 list; while it doesn't specifically address Web applications, it will give developers an idea of what to defend against. Armed with this knowledge, you should be able to avoid most of the common "gotchas."
