Is uncovering digital vulnerabilities doing more harm than good?

A noted virtual-reality technologist and author argues that "security through obscurity" is the only true form of security. Michael P. Kassner looks at what this uniquely divergent viewpoint means.

I need your help.

Jaron Lanier is why I need your help. I've been reading his eBook version of You Are Not a Gadget. In the book, Jaron presents a bunch of radical ideas. I'm concerned about one in particular: the one he calls "ideology of violation." Why? This particular idea takes direct aim at the reason digital vulnerability research exists.

For those unfamiliar with the name, Jaron is a pioneer in the field of virtual reality, instrumental in both Linden Lab's Second Life and Microsoft's Kinect. In 2010, Jaron was named to the Time 100 list of the most influential people.

To set the stage, let's look at a few research projects where ideology of violation appears to come into play.

First example

My first example, Internet Census 2012, highlights what Jaron objects to, as well as the age-old debate: "Do the ends justify the means?"

Researchers, who prefer to remain anonymous (understandably), took it upon themselves to build a 400,000-plus-member botnet out of vulnerable network devices (mainly consumer-grade routers), just so they could map the Internet:

While playing around with the Nmap Scripting Engine, we discovered an amazing number of open, embedded devices on the Internet. Many of them based on Linux and allow login to BusyBox with empty or default credentials.

The researchers then explain what they did with their secret:

We used these devices to build a distributed port scanner to scan all IPv4 addresses. These scans include service probes for the most common ports, ICMP ping, reverse DNS, and SYN scans. We analyzed some of the data to get an estimation of IP address usage.
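The building block of such a scan is a simple per-port probe. As a minimal sketch (this is not the researchers' code, and the host/port choices are illustrative only), a TCP connect probe in Python looks like this:

```python
# Minimal sketch of a per-port TCP connect probe, the simplest
# building block of the kind of scan the researchers describe.
# Illustrative only -- point it only at hosts you own.
import socket


def probe_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True if the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def scan(host: str, ports: list[int]) -> dict[int, bool]:
    """Probe each port in turn and record which ones are open."""
    return {port: probe_port(host, port) for port in ports}
```

A real distributed scanner fans probes like this out across many nodes and adds SYN scans (raw sockets), ICMP ping, and reverse DNS lookups; the connect() probe above is merely the simplest self-contained version of the idea.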

Second example

I'm betting you know someone (maybe you) who has a pacemaker. Consider this: researchers have confirmed it is possible to wirelessly communicate with a pacemaker and alter its code (similar threats have been uncovered for other medical devices, including insulin pumps). Just imagine what that means.

This research had enough significance that Jaron specifically mentioned it in his book:

In 2008, researchers from the University of Massachusetts at Amherst and the University of Washington presented papers at two of these conferences (called Defcon and Black Hat), disclosing a bizarre form of attack that had apparently not been expressed in public before, even in works of fiction.

Jaron continues:

They had spent two years of team effort figuring out how to use mobile phone technology to hack into a pacemaker and turn it off by remote control, in order to kill a person. (While they withheld some of the details in their public presentation, they certainly described enough to assure protégés that success was possible.)

What is ideology of violation?

Ideology of violation is the belief that discovering and making public ways to attack society will make society safer. Jaron adds:

Those who disagree with the ideology of violation are said to subscribe to a fallacious idea known as "security through obscurity." Smart people aren't supposed to accept this strategy for security, because the internet is supposed to have made obscurity obsolete.

Jaron further explains:

Surely obscurity is the only fundamental form of security that exists, and the internet by itself doesn't make it obsolete. One way to deprogram academics who buy into the pervasive ideology of violation is to point out that security through obscurity has another name in the world of biology: biodiversity.

Interestingly, Jaron uses that logic to explain why computer malware infects more PCs than Macs. Simply put, PCs are more common, thus providing the bad guys with more opportunities and, ultimately, a better return on their investment.

Jaron admits in the book that there are some cases where the ideology of violation does help:

[A]ny bright young technical person has the potential to discover a new way to infect a personal computer with a virus. When that happens, there are several possible next steps. The least ethical would be for the "hacker" to infect computers. The most ethical would be for the hacker to quietly let the companies that support the computers know, so users can download fixes.

Make sense or not?

Now that we all know what Jaron contends, let's get back to the examples. I'm hoping you see the common thread found in both: researchers spending significant time and expense finding weaknesses in certain areas of digital technology, then publicly demonstrating how those weaknesses could be exploited.

It doesn't take much of a leap to see how these examples could be used to intentionally harm people in one or more ways. I'll offer one last bit of Jaron's logic. He argues that if similar research occurred outside the digital world, there would be repercussions:

If the same researchers had done something similar without digital technology, they would at the very least have lost their jobs. Suppose they had spent a couple of years and significant funds figuring out how to rig a washing machine to poison clothing in order to (hypothetically) kill a child once dressed.

Or what if they had devoted a lab in an elite university to finding a new way to imperceptibly tamper with skis to cause fatal accidents on the slopes? These are certainly doable projects, but because they are not digital, they don't support an illusion of ethics.

That help I needed

In my deliberations, I surely missed pluses and minuses for each side of this debate. That is why I'd like you to chime in, tell me what I missed, and more importantly what you think. Please take the poll below and choose the answer you think best fits your attitude toward this security dilemma. Feel free to explain your reasoning in the comments.


Information is my field...Writing is my passion...Coupling the two is my mission.
