Let's be blunt: your code is full of security holes. Just as bad, your employees are careless with passwords and the other credentials that guard your data.
Hence, while we may wring our hands over security breaches at Target, Morgan Stanley, or dozens of other companies, the reality is that the only reason your company has yet to be cracked is that hackers haven't bothered to try. Yet.
No amount of biometric engineering will fix the flaws. Neither will an authoritarian approach to engineering. Nothing can fix the security breach waiting to happen that is your code base.
Except, perhaps, open source. Or, rather, an open-source approach.
Flaws all the way down
Perfectly secure software has never been written. Not the kind that anyone actually uses, anyway.
So, when Ben Cherry, an engineer at Pushd, says all software has flaws, I nod my head in agreement:
"Under the hood, most critical software you use every day (like Mac OS X, or Facebook) contains a terrifying number of hacks and shortcuts that happen to barely fit together into a working whole. It would be like taking apart a brand-new 747 and discovering that the fuel line is held in place by a coat-hanger and the landing gear is attached with duct tape."
Terrifying? Yes. But standard operating procedure? Also yes.
For years, the open-source world has touted itself as the answer to bug-ridden software. As the thinking goes, developers who must reveal their source code — kind of like wearing their underwear on top of the rest of their clothes — will tend to write better code.
It's a nice, intuitive thought. And it's right, so far as it goes.
After all, studies for years have shown that open-source software projects have fewer defects, on average, than their proprietary cousins. But fewer doesn't equal "none."
And "no bugs" is also not the point.
Open process, faster fixes
Not only is "no bugs" an improbable goal, it's also not necessary. As I've argued before, the value open source brings to security has far less to do with initial code quality and much more to do with how quickly flaws get resolved.
Given that all software has gaping security flaws, the best software will be that which enables a community of interested, capable developers to fix it.
Sometimes such fixes will come before the holes are exploited, but not usually. Few developers have the time or means to scout out flaws in others' code before attackers find them.
No, the only ones who are going to find those flaws are the parasitic hackers who want to turn your bad code into bad cash. Yes, you may do all sorts of testing to find the flaws first, but you're going to fail. There are simply too many holes. Always.
Rather than assume pristine code, assume flaws. With such an assumption in mind, it's nearly always going to be easier to fix the problem when you can call on the cavalry.
So, while you may have business or other reasons for concealing your code, security shouldn't be one of them. The "security through obscurity" approach has never worked, and it never will.
Matt is currently head of the developer ecosystem at Adobe. The views expressed are his own, not those of his employer.
Matt Asay is a veteran technology columnist who has written for CNET, ReadWrite, and other tech media. Asay has also held a variety of executive roles with leading mobile and big data software companies.