The sad fact of information technology security is that you have to learn to be paranoid to be any good at it.
Of course, I don't mean clinical paranoia; no competent security professional should merit that diagnosis from a psychologist. You should, however, view the dangers of the IT world with the kind of perspective the average end user might well describe as "paranoia". If the people you support call you paranoid, just tell them that being paranoid doesn't mean They aren't out to get you. Don't let others' lack of understanding of the threats under which we labor every day deter you from maintaining an attitude of practical paranoia.
One of the key traits one must develop to be a really good security expert — someone who can do more than memorize and regurgitate "industry best practices" — is also one of the reasons security experts are sometimes described as "paranoid": the tendency to require verification of pretty much everything, either personally or through statistically trustworthy proxies. You may hear any of several pithy phrases describing this state of mind, such as "Trust, but verify."
That need to verify goes hand in hand with understanding the difference between trusting the person and trusting the medium. Examples of media that should be verified, in some cases even when you trust the person, include:
- Code: While you may be inclined to believe that code from "reputable" providers is safe, there are reasons to be skeptical. Even if you trust the person, people make mistakes, and for security assurance purposes it is always better to be safe than sorry. As I hinted in Will Google's Native Client project change the game? and Security through visibility, open source code provides not only the opportunity for direct verification but also reasonable verification by proxy. Just be aware that downloading a binary from someone who also offers source code does not, in and of itself, guarantee that the binary was compiled from that source code.
- Communication: When communicating electronically with someone, you may trust the person on the other end of the discussion with your secrets, but that doesn't necessarily mean you should share those secrets over that communication medium. If a man-in-the-middle attack is in progress, even encryption may not protect you from leakage of your private data. Encryption protocols such as OpenPGP and OTR are designed with verifiability of the medium's security in mind, but others such as SSL require additional, orthogonal mechanisms involving "trusted" third parties — third parties whose trustworthiness should also be verified somehow — to achieve any verification of the medium at all.
- Downloads: As I pointed out in Use cryptographic hashes for validation, downloads can be verified so that you can trust them as much as you can trust the provider. Without some kind of verification, you can't be as certain you're getting the real deal. Note that this in no way guarantees the trustworthiness of the provider; it only ensures that, as far as you can trust that provider, you should be able to trust the authenticity of the download.
- Information: For purposes of satisfying casual curiosity, it may be acceptable to assume correctness in information you receive from a source you have no specific reason to distrust. Magazine and newspaper articles, encyclopedias, and recognized experts in their fields are often assumed to speak the truth. For more formal purposes, such as when you must stake your reputation or personal security on the accuracy of some information, it is best to verify it by checking other, unrelated sources.
- Vulnerability Management: At any given time, it is very likely that some piece of software on which you rely is subject to a security vulnerability. We tend to rely on our software providers to notify us of new vulnerabilities, either directly or by the simple fact of security patches arriving via the software's patch management system. Some software distributors are more trustworthy in this respect than others, however, and it may be a good idea to track vulnerability news through other sources such as the SANS Institute, CMU CERT, and the bugtraq mailing list. The stability and security of patches themselves should be verified on test systems intended for precisely that purpose before they are deployed to production, if only to avoid the fiasco of security patches that uninstall, or otherwise render ineffective, other security patches: the sort of problem that contributed to disasters such as the 2003 SQL Slammer worm.
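The download-verification point above can be made concrete with a short Python sketch. This is a minimal illustration, not a complete tool; the function names are my own, and in practice the published digest should come from a separate, authenticated channel (such as a signed release announcement), not from the same server that hosts the download.

```python
import hashlib
import hmac


def sha256_of(path, chunk_size=65536):
    """Return the hex SHA-256 digest of a file, read in chunks
    so a large download never has to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()


def verify_download(path, published_digest):
    """Compare a download's actual digest against the digest the
    provider published. hmac.compare_digest does a constant-time
    comparison, which is good hygiene even when timing attacks
    are unlikely."""
    return hmac.compare_digest(sha256_of(path), published_digest.lower())
```

A mismatch tells you only that the file is not the one the provider published; a match tells you only that it is. Neither result says anything about whether the provider itself deserves your trust, which is exactly the point made above.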
A truly competent security professional always keeps an eye on the matter of verification. Every assumption of trustworthiness should be questioned when it comes to security, and the question to ask is: "How can I verify the truth behind this assumption?"
Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.