Security researcher Kevin Mitnick warns that it is easier to trick someone into letting a malicious security cracker in than to make the effort to crack the system itself. To properly protect ourselves against social engineering attacks, we must not artificially limit our ideas about what constitutes an attack.
An oft-ignored danger to security is the social engineering threat. Even when it is considered in the development of a secure infrastructure and the policies for use and management of the system, the concept of social engineering is often understood far too narrowly to account for many possible avenues of attack.
According to the Wikipedia page on social engineering, as of this writing:
Social engineering is the act of manipulating people into performing actions or divulging confidential information, rather than by breaking in or using technical hacking techniques; essentially a fancier, more technical way of lying. While similar to a confidence trick or simple fraud, the term typically applies to trickery or deception for the purpose of information gathering, fraud, or computer system access; in most cases the attacker never comes face-to-face with the victim.
Because the Wikipedia article does such a thorough job of covering the conventional details of social engineering attacks, there is no need to rehash them all here. Instead, what follows will discuss some of the less obvious considerations that should be part of any preparation for securing a system against social engineering attacks and their closest cousins.
The conditions, form, and consequences of social engineering attacks apply more broadly than Wikipedia's definition might at first lead us to believe. A social engineering attack can even be executed by accident, by an innocent "attacker" whose motivations are entirely benevolent. Understanding the breadth of ways a social engineering attack can compromise a system, and the fact that the attacker may not even consider those actions an attack, can help you arm yourself against social engineering.
Social engineering ploys are hard to predict
One type of social engineering attack is the attempt to convince a legitimate user to share authentication credentials. Malicious security crackers may be able to convince people with authorization for particular capabilities within the system to share their credentials, perhaps by calling around to different departments and telling whoever answers that "tech support" is returning their calls — until someone expecting a call says, "Oh, yes, I hope you can help!" — then pumping them for information.
By the same token, someone external to the system's intended users, and thus not invested in its security the way those users are, may initially seem worthy of trusting with authentication credentials (just this one time, of course, to help with a technical issue), but may later develop a motivation for misusing those credentials. In some ways worse, someone whose motivations truly are trustworthy at all times may simply not be cognizant of threats that look perfectly obvious to someone who works with the system daily. That external party with whom you have entrusted your credentials may become an unwitting accomplice to a malicious security cracker by saying the wrong thing to the wrong person, or by saving those credentials in an unencrypted file on a poorly secured computer.
Social engineering via bad security policies
Another type of social engineering attack, often carried out with the best of intentions, comes from an upstream provider whose poor security policies require you to compromise your own for the sake of receiving support. An upstream service or software provider that asks for the keys to the kingdom to verify a user's identity is an all too common weakness in the chain of authority, allowing unknown entities to slip into the chain of authorization far too easily. Unfortunately, there are times when there is simply no way around this, such as when a service provider requires customers to give their login information to technical support personnel for authentication. Knowing the danger exists, however, can help you develop a strategy for mitigating the threat and a policy for dealing with it when it arises. It may also give you the motivation you need to find a service provider that does not use such poor security practices.
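A provider that stores only salted password hashes never needs to ask a customer for a plaintext password, which removes this social engineering opening entirely. The following is a minimal illustrative sketch (function names are hypothetical, not from any particular provider's system), using Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Derive a salted hash; the plaintext never needs to be stored,
    transmitted to support staff, or spoken over the phone."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash and compare in constant time; a support tool
    built on this can only ever report True or False, never the secret."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

A support workflow built on primitives like these can verify identity without any employee handling credentials, so there is nothing for a caller to be talked into divulging.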
Similarly, an upstream provider may convince you to employ a service or allow a feature that is not properly locked down to prevent it from accessing private data, or from changing the configuration and behavior of the system you wish to secure. Software that "phones home" with crash reports can compromise your privacy, for instance, by including the contents of whatever data your software was handling at the moment of the crash. Even if the provider organization as a whole is unlikely to misuse such information, it could be harvested from the provider's servers by an intruder or an unscrupulous employee, or intercepted from the "phone home" connection itself.
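One mitigation is to scrub a crash report locally before anything leaves the machine. The sketch below is purely illustrative (the field names and patterns are assumptions, not any real reporting tool's schema); it redacts known sensitive keys and masks email addresses found in free-text values:

```python
import re

# Illustrative deny-list of fields that should never leave the machine.
SENSITIVE_KEYS = {"password", "token", "session", "ssn"}
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub_crash_report(report: dict) -> dict:
    """Return a copy of the crash report with sensitive values redacted
    and email addresses masked in string fields."""
    clean = {}
    for key, value in report.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            clean[key] = EMAIL_PATTERN.sub("[EMAIL]", value)
        else:
            clean[key] = value
    return clean
```

A deny-list like this is a floor, not a ceiling; an allow-list of known-safe fields is stricter and usually preferable when the report schema is under your control.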
Claims of authority can be used to wedge one's way into authorization that should never be granted. For instance, copyright industry advocacy groups such as the RIAA can potentially convince people to grant them access to private data that should not be divulged to them, as part of an attempt to detect copyright violations. Such entities may even believe they are doing the right thing and that they have such authority, but because privacy is security, compromising privacy for the agenda of some external party is an exercise in folly. Claims of needing access for troubleshooting and testing can likewise result in access being handed over without proper authorization, a tactic that accounts for a shocking percentage of compromises achieved by penetration testing teams.
Hypervigilance is the only defense
Regardless of the particular nature of a specific social engineering attack, two keys to defending against this category of security danger are education and motivation. If the system's administrators and users are properly educated in the dangers of social engineering, they will be more likely to recognize, or avoid entirely, such attacks. If the system is designed with an awareness of those dangers in mind, the cases where one might be tempted to succumb to a social engineering attack, such as trying to work around an inconvenience in how the system works or giving in to arguments from authority, can be obviated before they even arise.
Additional measures to protect against such attacks, especially unintentional ones, include technical enforcement of secure policies so that circumvention through social engineering is effectively impossible. Some may consider this the Holy Grail of social engineering defense, but the circumstances in which it is available are exceedingly rare, and it is prone to misapplication in any case. The most likely instance of such a technical solution is simplifying the sensitive components of the system so that the resource that might otherwise be compromised by a social engineering attack does not exist in the first place.