Bugs in the human hardware

Guest blogger Alan Wlasuk notes that the most serious security flaws in organizations continue to be the humans who work in them. Can we ever eradicate the phishing threat?

Who amongst us has not done something so incredibly dumb that we wished we could go back in time and change just that one event? The Exxon Valdez oil spill comes to mind, with millions of gallons of oil dumped into Alaska's Prince William Sound. The ship's drunk, sleeping captain, Joe Hazelwood, could have used a time machine. One poor personal choice, and he gets to be the primary player in what is considered one of the most devastating human-caused environmental disasters of all time.

Cybersecurity, while not as emotionally charged as the Alaskan oil spill, is now littered with the devastating effects of individuals who have unthinkingly cost their companies, the country, and countless unknowing consumers billions of dollars in stolen money and unproductive effort. Company reputations have been ruined, and stolen military secrets may cost American lives. If only we had a time machine.

The ultimate blame for security attacks and the damage they cause lies, of course, at the feet of the malicious hackers and hacktivists who initiate and orchestrate them. But, by all accounts, the majority of successful security attacks are a direct result of well-meaning, well-educated company staffers who did something careless or forgot to do something easy.

The term social engineering is used to describe the act (maybe art is a better word) of enticing people to bypass computer security by performing actions or divulging confidential information. Clever cyber criminals somehow convince naïve users to provide an inroad into a company's protected systems. Just as frustrating (to companies) are individuals who unknowingly (or sometimes sloppily) open or leave holes in a company's security defenses. In both cases, and by every measure, the largest security danger to a company or organization is the staff that works within the organization.

A fun fact to start any conversation about social engineering: Oak Ridge National Laboratory (one of the U.S. national energy labs) was the target of a spear-phishing attack in which 57 of 530 targeted employees opened an email that installed malware on their computers. Oak Ridge was shut down for days to repair the damage. The amazing part is that Oak Ridge is a government lab whose charter includes the study of malware and computer viruses. Worse, the majority of the staff who opened the affected emails were senior scientists and executives, people who should have known better.

Why should a hacker bother to spend endless amounts of time and energy to hack into a computer system when he or she can easily convince someone to hand over the keys to the castle?

Social engineering ploys

Social engineering takes many forms. A few of the more entertaining (and dumber) are:

  • Baiting - A few fake CDs or flash drives with provocative titles such as 2011 WDDinc Salary Information are randomly left around a targeted company. When the CD or flash drive is inserted into a curious employee's PC, a self-starting application on the device takes over the PC and invades the company's network.
  • Quid pro quo, meaning something for something - A malicious hacker makes random calls into a large company, claiming to be responding to a tech support request, until she finds someone who really does have a technical support issue. That employee will often gladly give out access information and passwords in an attempt to get the issue resolved.
  • Phishing - Malicious hackers send legitimate-looking emails in an attempt to make recipients believe a request for information or action is real. As an example, Condé Nast recently deposited $8 million into a fake account because a hacker convinced a Condé Nast employee that the request was warranted; the fake account name was very close to that of an actual vendor.
  • Human greed - In a 2003 information security survey, 90 percent of office workers gave researchers what they claimed was their password in answer to a survey question, in exchange for a cheap pen. Similar surveys in later years obtained similar results using chocolates and other cheap lures.
  • Poor password policies - Recent studies of password usage found two remarkable trends. First, for ease of remembering, people use the same login name and password across many accounts; for example, the same credentials for their bank account as for Facebook, so it takes only one breach to expose every account. Second, people use easy-to-remember passwords such as 12345, abc123 or iloveyou, which fall to even the simplest brute-force attacks.
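Both habits are easy to check for mechanically. The sketch below is illustrative only (the account names and the small common-password list are my own examples, not from any study); it flags the two trends just described, reuse across accounts and passwords on a common-password list:

```python
# Illustrative audit of the two weak-password habits described above.
# COMMON_PASSWORDS is a tiny stand-in for real leaked-password lists,
# which run to millions of entries.
COMMON_PASSWORDS = {"12345", "123456", "abc123", "iloveyou", "password", "qwerty"}

def audit_accounts(accounts):
    """accounts: dict mapping account name -> password. Returns warnings."""
    warnings = []
    seen = {}  # password -> first account that used it
    for account, password in accounts.items():
        if password.lower() in COMMON_PASSWORDS:
            warnings.append(f"{account}: password is on the common-password list")
        if password in seen:
            warnings.append(f"{account}: reuses the password from {seen[password]}")
        else:
            seen[password] = account
    return warnings

print(audit_accounts({"bank": "iloveyou", "facebook": "iloveyou"}))
```

A real audit would compare salted hashes rather than plaintext passwords, but the logic is the same: one shared secret across accounts means one breach opens them all.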

One inadvertent click or conversation can erase millions of dollars of hardware and software security protections. Computer security is almost never at the top of anyone's mind: while the downside of a breach is sometimes devastating to a company, the average person does not perceive a personal risk, and malicious hackers are usually very smooth.

Two significant recent breaches

On the "lack of action" side, one need only look at two recent breaches to ask the obvious question: what were these people thinking? PBS (the Public Broadcasting Service, an American non-profit public television service based in Arlington, Virginia) was successfully invaded by LulzSec, a hacker activist group styling itself "The Lulz Boat." LulzSec claimed the attack was retaliation for PBS' "WikiSecrets" story on Julian Assange of WikiLeaks. There was plenty of controversy and some obvious immaturity on the hackers' part, but not a lot of direct or collateral damage to PBS.

The more interesting story for security professionals is the means by which LulzSec accessed the PBS site, and how easy the attack was. The group's Twitter post claiming credit read: "PBS.org was owned via a 0day we discovered in mt4 aka MoveableType 4." This is geek speak for the simple fact that PBS was running version 4 of the Movable Type CMS (content management system), which contained a vulnerability discovered by malicious hackers shortly after the product's release. What makes this a "Duh ..." moment is that the flaw had been fixed many months earlier; PBS could have avoided the breach if only it had paid attention and applied the patch.

The second company, Lockheed, one of the largest suppliers of military hardware to the U.S. government, was recently hacked in a very sophisticated attack, but one that should never have been possible had Lockheed followed the same preventive procedure as other military suppliers. The short version of the breach starts with the RSA breach in March of this year, in which security information about the 40 million RSA SecurID devices (small hardware tokens that display a new six-digit code every 30 seconds) was stolen. The SecurID device is a major component in securing remote access to internal Lockheed systems. Without the 'seed' information behind the SecurID devices, a malicious hacker would not have been able to gain access to confidential military information. With it, the attackers were able to pose as Lockheed employees or contractors.

The access to RSA information months prior set the stage for the Lockheed attack by supplying the critical seed information. This was obviously a very sophisticated, multi-stage attack that required professional malicious hackers and much planning, and there are suspicions it may have been orchestrated by a foreign government. The potential loss of Lockheed data could literally be deadly to U.S. interests. Upon hearing of the RSA breach, Raytheon, another major supplier of U.S. military hardware, chose to replace or reset its RSA SecurID devices. Raytheon was not successfully breached, leading us to assume Lockheed made the critical mistake of inaction.

Two companies. Two breaches. One via an inadequately updated CMS supporting a fluffy brochure site, the other via a cleverly planned and executed attack on critical U.S. military internal systems that should have been attack-proof. The common thread is that both companies should have known better and did not follow best practices (or common sense).

PBS and Lockheed are high-profile recent breaches enabled by staff who failed to think or act. But an advanced Google search on common firewalls, switches, and other security devices turns up the depressing fact that many of them still retain the factory login name and password of admin and admin. Just drive around your neighborhood and see how many wireless routers are likewise open to easy access.
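Catching factory-default logins is one of the cheapest audits an organization can run. A hypothetical sketch (the inventory format, hostnames, and the small default-credential list are my own illustrations, not any real vendor database):

```python
# Hypothetical audit: flag devices in an inventory that still use a
# factory-default credential pair. Real default-credential lists are
# published per vendor and model; this set is illustrative only.
DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def find_default_logins(devices):
    """devices: list of dicts with 'host', 'user', 'password' keys.
    Returns the hosts still using a factory-default credential pair."""
    return [d["host"] for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDS]

inventory = [
    {"host": "fw-01", "user": "admin", "password": "admin"},      # never changed
    {"host": "sw-02", "user": "admin", "password": "x7!Rq9#mT"},  # changed
]
print(find_default_logins(inventory))  # -> ['fw-01']
```

Running a check like this against a configuration inventory takes minutes; the breaches it forecloses are the "advanced Google search" kind described above.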

Humans are, by all accounts, imperfect. Yet we drive potentially deadly cars and trucks every day with very few fatal accidents. We seldom make mistakes when handling money, and infrequently forget to wear socks to work. What makes an otherwise intelligent species click on a link we know could be a trap? What makes us ignore obvious security holes when we see them right before our eyes? My father probably would have said it's from too much TV; that probably isn't the answer, but it's the best I have right now.

Alan Wlasuk is a managing partner of 403 Web Security, a full-service, secure web application development company. From web security evaluation to secure web development and remediation, 403's seasoned developers have secured web-based applications against hackers and security breaches. Drawing upon the company's involvement with Software Quality Assurance (SQA), security is always at the forefront of any development effort. To learn more about 403 Web Security or for a complimentary vulnerability scan of your website, please visit: www.403.wddinc.com.