You don’t have to hire a hacker or cracker to test your application’s security, nor do you have to buy a lot of expensive hacker tools. However, you do need a set process for identifying potential problems. By following the five-step process I detail below, you can easily identify commonly exploited vulnerabilities. Once you’ve identified them, you can eliminate or mitigate them.

Step 1: The port scan
The first thing you need to do is perform a port scan on your clients and servers, looking for unnecessarily open communication ports. Ports used by services like FTP, NetBIOS, echo, and qotd are typical culprits for causing security problems. The rule of thumb for TCP and UDP ports is: Turn off any services or listeners that you don’t need for your application to function.

A port scan is a test to observe which TCP and UDP ports on a target system are listening—that is, waiting for a connection. Because most computers have a lot of these listeners turned on by default, hackers and crackers often spend a lot of time port scanning their target to locate the listeners prior to establishing a plan of attack. Once these open ports have been identified, it’s not difficult to exploit them.

Port scanning tools, commonly referred to as port scanners, are readily available on the Internet. Many of them are Linux-based; Nmap, Strobe, and Netcat are a few good ones, and Nmap is my favorite. There are a few good Microsoft Windows-based port scanners out there, too, my favorite being Ipswitch’s WS_Ping ProPack, a low-cost, multipurpose, network-troubleshooting tool that packs a lot of functionality into an easy-to-use package.

Once you’ve procured a port scanner, run a test against the entire gamut of TCP and UDP ports to see which ports are open. Compare the list of open ports against the list of ports the system needs to function, and close all unnecessary ports. Turning off open ports in Microsoft-based operating systems often requires reconfiguring the OS’s services or modifying registry settings. UNIX and Linux systems are a little easier: Depending on the flavor, you can usually just comment out a line in a configuration file.
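If you want to see the basic mechanics behind tools like Nmap, the core idea is just attempting a TCP connection to each port and noting which ones accept. Here’s a minimal connect-scan sketch in Python; the host and port list are only examples, and you should of course only scan systems you’re authorized to test:

```python
import socket

def scan_tcp_ports(host, ports, timeout=0.5):
    """Attempt a TCP connect() to each port; return the ports that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Check a handful of well-known ports on the local machine.
    print(scan_tcp_ports("127.0.0.1", [21, 22, 23, 80, 139, 443]))
```

A real scanner adds UDP probes, parallelism, and stealthier techniques, but the open/closed verdict comes from exactly this kind of probe.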

Step 2: Check over the user accounts
Next, you need to take a look at the operating system, any databases, and the application itself, looking specifically for guest user accounts, accounts with default or weak passwords, and unnecessary user IDs. You need to do this because most default configurations leave a lot of open holes, creating more than a few default accounts that can be used to compromise your system. This is especially true if you’re using a database system such as Oracle, or a Web server such as Microsoft Internet Information Services (IIS).

I have logged into many routers, databases, and applications with user IDs and passwords for accounts that either should not have existed or should have been disabled. For example, several years ago, while testing a primitive Web application, I tried logging in to the system using the user ID Guest and a null password. Much to my surprise, the application gladly accepted Guest as a valid user and allowed me to log on. I then tried several other combinations successfully—entering user IDs and password pairs like none/none and admin/admin.

As a result of this experience, I always make a point to look up default accounts and passwords in the setup manual for each piece of software included in the architecture. I build a list of these default accounts and passwords, making sure to test any that I find. I do the same for the application itself, building a list of the test user accounts created by the developers, and try those too.
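Once you have that list of default and test accounts, checking them is easy to automate. The sketch below assumes you supply a `try_login` callable that wraps whatever the real login mechanism is (an HTTP POST, a database connect, and so on); the credential pairs shown are the well-known examples mentioned in this article:

```python
# Well-known default user/password pairs worth testing; extend this list
# from the setup manuals of the software in your architecture.
DEFAULT_CREDENTIALS = [
    ("Guest", ""),           # guest account with a null password
    ("none", "none"),
    ("admin", "admin"),
    ("scott", "tiger"),      # Oracle's well-known demo account
]

def find_default_accounts(try_login, candidates=DEFAULT_CREDENTIALS):
    """Return every default user/password pair the system accepts.

    try_login(user, password) is a caller-supplied function that returns
    True if the target system accepts the credentials.
    """
    return [(user, pw) for (user, pw) in candidates if try_login(user, pw)]
```

Any pair this returns is an account you need to disable, delete, or re-password before release.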

Testing for these things helps identify ways that the system could be compromised, and disabling or deleting unnecessary accounts is a means of eliminating the vulnerabilities you find. A similar rule to that for communication ports applies here: Disable any user ID that is not needed for the system to function. If a user ID can’t be disabled, at a minimum, change the default password to one that is well constructed.

What’s a well-constructed password, you ask? It should be at least six to eight characters long and contain at least one special character. Passwords should be just long enough to make them hard to crack, but short enough that they’re easy to remember, which is a hard balance to strike, I know. I like to use acronyms or a mnemonic device. Another common mistake is using a word or term that’s guessable or obvious; likewise, be sure not to use single words from a dictionary. My favorite example of a bad password is ROLLTIDE, which I found on a machine in a cubicle littered with University of Alabama paraphernalia. (The nickname for that university’s sports teams is the Crimson Tide.)

Step 3: Check your directory permissions
After you’ve closed your open ports and shut down any unnecessary user accounts, take a good look at the permission configuration for any databases your application uses and for the directories on your servers. Many modern exploits take advantage of misconfigured permissions, and such techniques are frequently used against Web servers.

For example, Web sites that use CGI scripts sometimes grant write access to the world on the CGI binary directory. To exploit this, a malicious person can simply place a script in that directory and then call it; the Web server will run it, typically with administrator rights. Being able to write and execute scripts is an extremely dangerous capability, and granting those rights should be done with great caution.
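On UNIX-like systems, you can sweep a server’s directory tree for exactly this kind of misconfiguration by checking each directory’s mode bits for the world-writable flag. A simple sketch:

```python
import os
import stat

def world_writable_dirs(root):
    """Walk a directory tree and report directories that grant write
    access to everyone (the o+w permission bit)."""
    flagged = []
    for dirpath, _dirnames, _filenames in os.walk(root):
        mode = os.stat(dirpath).st_mode
        if mode & stat.S_IWOTH:
            flagged.append(dirpath)
    return flagged

if __name__ == "__main__":
    # Example: audit a hypothetical CGI directory.
    print(world_writable_dirs("/var/www"))
```

Anything this reports deserves a hard look: there are legitimate uses for world-writable directories, but a CGI binary directory is not one of them.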

As yet another example, several years ago, I was challenged to test a very important system in a secure lab. I was able to compromise the entire lab and reconfigure all 17 of the supposedly secure machines in a very short amount of time by exploiting misconfigured permissions. After a port scan, I discovered that each server was running an FTP listener, and each one allowed anonymous access, giving me access to every server system.

The FTP listeners granted me access to the real password file on each machine, a big configuration error. As a result of the way permissions were set, I was not only able to download the password files, but I was also able to “poison” them by commenting out the password portion of the password file and replacing it with a password I knew. And then, I was able to FTP the poisoned files back onto the target system. Of course I also gave myself root access, gaining administrative control of the machine.

Had directory permissions been assigned properly, I would not have been able to get to anything other than the FTP directories assigned to the user anonymous. I should not have been able to reach the real password files, and I definitely should not have been able to replace them. Of course, if the lab’s administrators had done any port scanning of their own, as I recommend in Step 1, I wouldn’t have gotten anywhere this way.

Step 4: Give your database the once-over
File systems aren’t the only entities vulnerable to permission problems. Most database systems have many security vulnerabilities. Their default settings often leave permissions set incorrectly, leave open unnecessary ports, and create many demo users. One well-known example is Oracle’s demo user Scott with the password Tiger. The rule for securing databases is the same as securing the operating system: Close any unnecessary ports, delete or disable any unnecessary users, and grant only the amount of access necessary for a user to complete his or her assigned tasks.

Step 5: Lock the back door
Have you ever gotten tired of having to go through several steps to test a function nested deeply inside an application, and so you built a shortcut to test it directly? I think everyone has. The problem is that these shortcuts, also known as back doors, are often overlooked or forgotten and applications are sometimes inadvertently released with them active. Any serious security testing routine should include examining the application code to look for back doors that may have been inadvertently left in the system.

Another really good example of a back door causing security issues is early versions of the Solaris operating system and the [Ctrl]K bug. In the early 1990s Solaris users were able to gain root access by logging in as a normal user and simply pressing [Ctrl]K twice.

To find back doors, you must thoroughly review the source code, looking for conditional statements that jump around in the code based on unexpected parameters. For example, if an application is normally invoked by double-clicking on an icon, make sure that the code doesn’t jump into an administrative or privileged mode if invoked from a command line with special parameters.
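Nothing replaces a careful manual review, but a crude textual sweep can point the reviewer at the right places. The pattern list below is purely illustrative, a few strings that tend to show up in debugging shortcuts; you would build your own from your team’s coding habits:

```python
import re

# Illustrative markers of debugging shortcuts; extend to suit your codebase.
SUSPECT = re.compile(r"backdoor|debug_mode|magic|secret_arg|bypass_auth",
                     re.IGNORECASE)

def flag_suspect_lines(source_text):
    """Return (line_number, line) pairs that warrant a manual review."""
    return [(num, line)
            for num, line in enumerate(source_text.splitlines(), start=1)
            if SUSPECT.search(line)]
```

A hit here isn’t proof of a back door, and a clean sweep isn’t proof of its absence; treat the output only as a starting list for the human review this step requires.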

Is security testing part of your normal testing procedure?

Does your organization routinely perform security testing on new applications? What processes and methods do you use to make sure an app is reasonably secure? Post to our discussion below and let us know.

Security testing, like functional testing, must be executed systematically, and this five-step process provides a systematic approach for doing so. By following it, you can test the basic security of your system without incurring a great deal of expense. This approach will not make you a security guru overnight. But assuming that you understand the system you are testing, it will help you cover the security basics. Follow the steps to ensure that your system is reasonably secure and therefore less likely to become a target for the hacker and cracker communities.