As long as people build our software, security may be an elusive dream. Matt Asay explains.
Adultery has always been a precarious act, but it became even more so this past week as pro-infidelity site Ashley Madison was hacked. Ironically, the hackers, who have threatened to release all personal information on users of the site, weren't so much incensed by the infidelity as by Ashley Madison's privacy policies.
Welcome to the wonderful world of (in)security. Or, to paraphrase former Sun Microsystems CEO Scott McNealy, "You have zero security. Get over it."
Unfortunately, enterprises are overestimating their ability to secure their data, even as they paper over years of buggy code. No amount of security software can overcome poorly architected code.
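To make that point concrete, here's a minimal sketch (hypothetical code, not drawn from any of the breached systems) of the kind of architectural flaw that no perimeter security product can see: a SQL query built by string concatenation. The same lookup written with a parameterized query closes the hole at the source.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is concatenated into the SQL.
    # A firewall or security appliance can't detect this flaw from outside.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"                # classic injection payload
print(find_user_unsafe(conn, payload))  # leaks every row in the table
print(find_user_safe(conn, payload))    # returns nothing, as it should
```

The bug lives in the application's architecture, not its deployment, which is why layering security software on top doesn't fix it.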
At least, that's what we think about other organizations. Security professionals, as highlighted in a recent survey by The Aspen Institute and Intel Security, are bullish about their ability to secure their own enterprises, despite apparently contradictory evidence.
For example, when security professionals look back on the bad old days of security breaches, 50% acknowledge their organizations were "very or extremely" vulnerable three years ago, but only 27% believe their organizations are currently "very or extremely" vulnerable.
And yet, over 70% believe the security threat level against their enterprise is rising, and a third have had an incident that disrupted availability. Meanwhile, 89% of respondents experienced at least one attack on a (secure) system within the past three years, with a median of close to 20 attacks per year. Of these, 59% said at least one attack resulted in physical damage.
This isn't to suggest that security professionals are clueless—rather, that security is hard.
Because it's hard, often we fail to do the things necessary to deliver security. As ITS Partners data security architect Jonathan Jesse told me, "There are a lot of things that can be done to deliver strong enterprise security. It is just a lot of work and most people don't [do it]."
Of these things enterprises can do to improve security, Black Hat review board member Chris Rohlf cites two:
Everything else, from imposing password complexity to filtering mail attachments, largely fails, as WiKID Systems CEO Nick Owen confirms.
Unfortunately, things only look worse as we move to the devices used to access enterprise data. As John Leyden highlights, "The fragmentation of Android is creating additional security risks, as the rush to release new devices without sufficient testing is inadvertently introducing security flaws."
But let's be clear: this isn't really about Android vs. iOS fragmentation or underlying security credentials. It's about people and how we use our devices.
Or how we code.
Bad code all the way down
This might ultimately be the biggest problem with security: our software sucks, to quote Professor Zeynep Tufekci.
It's not that anyone sets out to write bad software riddled with security holes. We simply slide into this mess through "Software engineers do[ing] what they can, as fast as they can," because that's what the market (and managers) expect of them.
As developers code, they build on others' code, often poorly documented, which results in "a lot of equivalent of 'duct-tape' in the code, holding things together." Or, as Tufekci colorfully describes it:
"Think of it as needing more space in your house, so you decide you want to build a second story. But the house was never built right to begin with, with no proper architectural planning, and you don't really know which are the weight-bearing walls. You make your best guess, go up a floor and... cross your fingers. And then you do it again. That is how a lot of our older software systems that control crucial parts of infrastructure are run. This works for a while, but every new layer adds more vulnerability. We are building skyscraper favelas in code—in earthquake zones."
The right thing to do is likely to rewrite the software, but who has time? As she concludes, there's "not much interest in spending real money in fixing the boring but important problems with the software infrastructure." We want new features, not removal of technical debt.
And this may be one big reason our enterprises remain insecure. There are ways to improve security, as noted above, but ultimately we may be building on a porous foundation. Nor does it help that plenty of security problems are inside jobs, as the Ashley Madison breach appears to be.
In sum, security problems are ultimately people problems: those we hire, those who code, those who refuse to protect their passwords, etc. If our systems aren't secure, it's because they're built and enforced by people.