Security

Security through obscurity won't secure your code

While most applications use security through obscurity in some form or another, you should avoid it when writing your application. Take a look at this article to learn why, and how you should tighten up your code.


In 1984, technologist Stewart Brand quipped, “Information wants to be free.” As a programmer, it’s up to you to protect the information your system creates and works with, and to ensure its continued and appropriate availability. Practicing security through obscurity ignores these responsibilities in the false hope that no one will notice. It’s an open invitation to attack. With a little additional effort, you can prevent the holes that obscurity hides from seeping into your code.

We’ll examine security through obscurity and some of the controversy surrounding this topic. Then we’ll look at tightening up your development practices to help keep your systems and data from falling into the wrong hands.

In the grand scheme of things, no application is secure. How can I say that with any certainty? Because all major platforms and security concepts are built on technology that, given enough time and processing power, can be compromised.

“Security through obscurity” means relying on the fact that your specific architecture is unknown to users, or the world at large. While you may not be able to guarantee that your system will never be compromised, applying good coding practices can help ensure that you aren’t leaving the door open for any crackers or script kiddies that come along.

Now remove your rose-colored glasses
While I may have scared you into reading this article, don’t get me wrong—some forms of security through obscurity are very effective. Nearly every company that wants to keep its data secure is using “security through obscurity” practices. Two effective examples are: 128-bit encryption, which relies on keeping the keys hidden even though the algorithms themselves are public; and Network Address Translation (NAT), which hides the internal infrastructure of a network.

So what’s the big issue? Once upon a time, 56-bit encryption was effective, too—then somebody proved it could be broken. The use of non-standard system ports was once considered an effective means for hiding services, but now even the least technically savvy user can download a port scanner. My point is that while these measures offer security in the here and now, inevitably someone will come along and pull back the curtain, forcing us to change our standards.

In my view, the real issue behind pseudo-security is that the concept isn’t limited to products that are designed to provide an added layer of security over your own code. The biggest issue lies in the application code itself. It doesn’t take a distributed effort of thousands of people to discover that you’ve got a gaping buffer-overflow vulnerability. It takes one person. And that person has friends who live to exploit other people’s resources.

These simple facts created quite a stir among software vendors and development communities, and two distinct schools of thought have evolved to address the issue.

Let the mud slinging begin
In one corner, we have the multinational, gazillion-dollar, proprietary-software industry leaders. In the other, we have the worldwide, distributed-effort, open-source protagonists. Mediating the dispute is the Internet Engineering Task Force (IETF), which in February 2002 released an Internet-Draft document describing an appropriate process for disclosing software vulnerabilities.

Obviously, security through obscurity is desirable for proprietary vendors. If nobody knows about problems in the software, chances are no one will be able to exploit them. Customers won’t get upset; stock prices won’t go down; and image is maintained. By keeping the source code to APIs and other technical product information close to their chest, companies hope that obscurity in architecture will provide some sort of protection against exposure.

The downside, however, is obvious: When a malicious party discovers a vulnerability, that person can take advantage of it. If the software vendor hasn’t informed its clients of the problem, no measures can be taken to protect against it, leaving the system vulnerable to damage.

On the other side of the spectrum, open-source software developers claim that, by keeping source code available to everyone, security is tightened because issues will be found and resolved before they are found and exploited. When no architectural obscurity is possible, developers are more likely to take real security precautions.

The open-source model has a downside too—even though the source is available to the public, another form of obscurity exists. Whenever a product is released and the source code is available for free use, you’ve got a versioning nightmare. If the originating source code has a vulnerability, that flaw has propagated to myriad other unique products built upon that code. Patches released for the original code may no longer apply or be compatible with the derived code. Just because the final product has a small user base and is relatively obscure doesn’t mean that malicious parties won’t find and exploit existing issues.

No matter how you slice it, software security looks pretty bleak. The best—and only—way to ensure that the products you create are not susceptible to hostile parties is to write good, clean, thorough code.

Yes, you have to do everything yourself
Writing secure code doesn’t have to be a burden. If you adopt good practices and include them as you go along, they’ll be old hat before you know it. I looked around at some of the major criticisms of insecure software, and at what areas viruses seem to be attacking, to create this list of programming do’s and don’ts.

Don’ts:
  • Don’t rely on other systems, such as run-time environments and Web servers, to protect your code. You should always build in your own security measures.
  • Don’t generate system calls or database queries directly from POST, GET, or other user-entered variables.
  • Don’t leave "superusers" lying around in your user tables for testing purposes.
  • Don’t create hidden access points (back doors) that enable you to have access the system wouldn’t normally allow. If you can use it, someone can find it.
  • Don’t give your application root (a.k.a., superuser, god, etc.) system access, if you can help it. Create a specific user for your app, and limit its rights.
  • Don’t allow file uploads to your primary server. If you do, make sure you’ve created a partition or sandbox environment that doesn’t have access to the main file system, and that it has virus-screening software running.
  • Don’t use broadcasting protocols in your functions, unless you’ve limited and/or encrypted the information.
  • Don’t leave anything to chance—include error handling and logging in all of your applications and their functions. Example: if (whatever) { do this; } else { die; }

Do’s:
  • Do restrict the scope of your application and its functions whenever possible. You should never set variables you don’t intend to use, and lazy practices can give someone access to key information. Example: “select field1, field2, field4 from table;” is better than “select * from table;” if you won’t be using field3.
  • Do check user-entered information to be sure it is the appropriate data type, and limit acceptable characters. Example: if you’re asking for someone’s name, it should consist only of letters of the alphabet, and not %, ^, $, &, etc. (see the validation sketch after this list).
  • Do make your applications modular, and restrict access to those modules. Example: If you have a module that handles all of your database queries, only your data-parsing module that exists on a server with a certain IP address should invoke it, etc.
  • Do have a separate module (see point above) for creating sockets and forking processes.
  • Do attach keys to all of your data that you can cross-reference for accuracy. The goal here is to prevent user A from accessing user B’s information, etc.
  • Do unset or delete empty variables.
  • Do limit buffer and field sizes. If you limit a field to 50 characters, limit it in the database, in your variable sizes, everywhere.
  • Do make sure you’ve disabled unused functions, and that all users on your system have passwords, etc.
  • Do keep an eye on security advisory boards, such as CERT or eEye Digital Security.
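
As a rough illustration of the validation and field-size items above, the following Python sketch whitelists acceptable characters and enforces a length limit. The 50-character cap echoes the example figure used in this list, and the function name and allowed-character set are assumptions of mine, not a standard:

    import re

    NAME_MAX_LENGTH = 50  # mirror the column size defined in the database
    # Letters only, plus spaces, apostrophes, and hyphens for real-world names.
    NAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z '\-]*$")

    def validate_name(raw):
        """Return a cleaned name, or raise ValueError if the input is unacceptable."""
        name = raw.strip()
        if not name:
            raise ValueError("name is required")
        if len(name) > NAME_MAX_LENGTH:
            raise ValueError("name exceeds %d characters" % NAME_MAX_LENGTH)
        if not NAME_PATTERN.match(name):
            # Rejects %, ^, $, &, and other characters that have no business
            # in a name but plenty of business in an injection attempt.
            raise ValueError("name contains characters outside the allowed set")
        return name

Checking input against a short list of allowed characters is far easier to get right than trying to enumerate every character an attacker might abuse.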

Use these guidelines, and you won’t be leaving things to chance. You’ll be implementing security instead of obscurity.

 

What’s your opinion on security through obscurity?
Have your systems been attacked due to loose code? Do you have more tips for good secure coding practices? If so, join the discussion below, or send us an e-mail.