Security

Is uncovering digital vulnerabilities doing more harm than good?

A noted virtual-reality technologist and author argues that "security through obscurity" is the only fundamental form of security there is. Michael P. Kassner looks at what this divergent viewpoint means for vulnerability research.

I need your help.

Jaron Lanier is why I need your help. I've been reading the eBook edition of his You Are Not a Gadget, in which Jaron presents a number of radical ideas. I'm concerned about one in particular: what he calls the "ideology of violation." Why? Because it takes direct aim at the reason digital vulnerability research exists.

For those unfamiliar with the name, Jaron is a pioneer in the field of virtual reality, instrumental in both Linden Lab's Second Life and Microsoft's Kinect. In 2010, Jaron was nominated to the Time 100 list of most influential people.

To set the stage, let's look at a few research projects where ideology of violation appears to come into play.

First example

My first example, Internet Census 2012, highlights what Jaron objects to, as well as the age-old debate: do the ends justify the means?

The researchers, who (understandably) prefer to remain anonymous, took it upon themselves to press vulnerable network devices (mainly consumer-grade routers) into a botnet of more than 400,000 members, just so they could map the Internet:

While playing around with the Nmap Scripting Engine, we discovered an amazing number of open, embedded devices on the Internet. Many of them based on Linux and allow login to BusyBox with empty or default credentials.

The researchers then explain what they did with their secret:

We used these devices to build a distributed port scanner to scan all IPv4 addresses. These scans include service probes for the most common ports, ICMP ping, reverse DNS, and SYN scans. We analyzed some of the data to get an estimation of IP address usage.
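To give a sense of what such a probe involves, here is a minimal, single-machine sketch in Python. To be clear, this is my illustration, not the researchers' code; the example addresses, port list, and timeout are assumptions, and the real census distributed raw SYN scans across hundreds of thousands of hijacked devices rather than polite connect attempts from one box.

# Minimal sketch of a census-style service probe -- illustrative only.
# Probe only hosts you own or have explicit permission to scan.
import socket

COMMON_PORTS = [21, 22, 23, 80, 443, 8080]  # a few frequently exposed services

def probe_host(ip, timeout=1.0):
    """Reverse-DNS the address, then try a TCP connect on common ports."""
    try:
        rdns = socket.gethostbyaddr(ip)[0]
    except OSError:
        rdns = None

    open_ports = []
    for port in COMMON_PORTS:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return {"ip": ip, "rdns": rdns, "open_ports": open_ports}

if __name__ == "__main__":
    for target in ["192.0.2.1", "198.51.100.7"]:  # RFC 5737 documentation addresses
        print(probe_host(target))

Run against addresses you control, it returns one small record per host; that record is essentially the unit of data the census aggregated billions of times over.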

Second example

I'm betting you know someone (maybe yourself) who has a pacemaker. Consider this: researchers have confirmed it is possible to wirelessly communicate with a pacemaker and alter its code (similar threats have been uncovered for other medical devices, including insulin pumps). Just imagine what that means.

This research had enough significance that Jaron specifically mentioned it in his book:

In 2008, researchers from the University of Massachusetts at Amherst and the University of Washington presented papers at two of these conferences (called Defcon and Black Hat), disclosing a bizarre form of attack that had apparently not been expressed in public before, even in works of fiction.

Jaron continues:

They had spent two years of team effort figuring out how to use mobile phone technology to hack into a pacemaker and turn it off by remote control, in order to kill a person. (While they withheld some of the details in their public presentation, they certainly described enough to assure protégés that success was possible.)

What is ideology of violation?

Ideology of violation is the belief that discovering and making public ways to attack society will make society safer. Jaron adds:

Those who disagree with the ideology of violation are said to subscribe to a fallacious idea known as "security through obscurity." Smart people aren't supposed to accept this strategy for security, because the internet is supposed to have made obscurity obsolete.

Jaron further explains:

Surely obscurity is the only fundamental form of security that exists, and the internet by itself doesn’t make it obsolete. One way to deprogram academics who buy into the pervasive ideology of violation is to point out that security through obscurity has another name in the world of biology: biodiversity.

Interestingly, Jaron uses that logic to explain why computer malware infects more PCs than Macs. Simply put, PCs are more common, thus providing the bad guys with more opportunity and, ultimately, a better return on their investment.

Jaron admits in the book there are some cases where the ideology of violation does help:

[A]ny bright young technical person has the potential to discover a new way to infect a personal computer with a virus. When that happens, there are several possible next steps. The least ethical would be for the "hacker" to infect computers. The most ethical would be for the hacker to quietly let the companies that support the computers know, so users can download fixes.

Make sense or not?

Now that we all know what Jaron contends, let's get back to the examples. I'm hoping you see the common thread found in both: researchers spending significant time and expense finding weaknesses in certain areas of digital technology, then publicly demonstrating how those weaknesses could be exploited.

It doesn't take much of a leap to see how research like this could be used to intentionally harm people. I'll offer one last bit of Jaron's logic. He argues that if similar research occurred outside the digital world, there would be repercussions:

If the same researchers had done something similar without digital technology, they would at the very least have lost their jobs. Suppose they had spent a couple of years and significant funds figuring out how to rig a washing machine to poison clothing in order to (hypothetically) kill a child once dressed.

Or what if they had devoted a lab in an elite university to finding a new way to imperceptibly tamper with skis to cause fatal accidents on the slopes? These are certainly doable projects, but because they are not digital, they don't support an illusion of ethics.

That help I needed

In my deliberations, I surely missed pluses and minuses for each side of this debate. That is why I'd like you to chime in, tell me what I missed, and more importantly what you think. Please take the poll below and choose the answer you think best fits your attitude toward this security dilemma. Feel free to explain your reasoning in the comments.

About

Information is my field...Writing is my passion...Coupling the two is my mission.

297 comments
mla_ca520

"That alone refutes almost your entire argument ..." Nope. Individual computers pose different threats and they face different threats. You are trying to argue against apples by using oranges and it isn't a valid argument. Most home computer users face threats from known malware, perhaps repackaged. There, it is important to keep your OS updated, your AV software/s updated and be mindful of your browsing. There are additional tools that home users can easily implement, like the free hostfile solution from winhelp2002. You can't reasonably say that my points about having our critical infrastructure compromised is invalid, because home users don't know how to pen-test their own networks...that just doesn't make sense and it isn't related. My examples were about banks and jails/prisons hiring people to test their security, which makes sense from a cyber perspective too. If my neighbor's computer gets infected, there is a decent chance that it happened via drive by infection on a legitimate site, perhaps his/her bank. While my neighbor doesn't pose an attractive enough target for a full scale cyber attack, their bank might, so the bank really ought to regularly probe their defenses and fix security holes. Context is decisive here and your counter argument is little more than a straw-man!

jrcochrane256

Security through obscurity only "works" if you also limit availability. An obscure system with nothing much on it or attached to it, that is powered down when not in its peak usage period, can have some STO because there's no payoff for going to the trouble of cracking it. So if your internet access is via a box that maybe is dedicated for browsing, shopping, email, and online management of your various real-world services, but you don't store your data there--you keep its body of installed applications lean---and you use something like linux, you're insulated against the common run of malice. The common run of malice is directed at the common run of person. More directed malice looking for high-value data from experts can't get at your data because your data isn't stored there. Keeping what's installed on the out-facing machine lean and knowing the innards of your more obscure, and hopefully structurally simpler, OS means that files that are added or altered are more likely to stand out to you as, "Something is off. This doesn't belong." Security through obscurity only works for sophisticated users (or administrators) who understand their OS, keep up with its known vulnerabilities and patches, and will notice the kinds of changes that indicate an attack. It works if you're a security expert and know what you're doing. Not so much if you're the average joe--because the average joe can't keep up with what's obscure and what isn't obscure anymore.

jrcochrane256

Physically securing and isolating the hardware, and keeping sensitive data physically secured, backed up to a secure off-site location, and isolated, is the only true information security. We should look at methods that encourage users to have two machines, one of which is physically secured and isolated, and keep all sensitive data (including their password lockbox) on the isolated machine. We shouldn't abandon other security approaches, but rather than encouraging individuals to store their data on the cloud, we should encourage regular DVD backups of critical data, strongly encrypted, stored with a friend or family member. This would be in addition to more frequent strictly local backup of data. By "critical" I mean work, important personal correspondence, and financial information. We should teach users to practice good data hygiene by having personal data retention policies and not just hoarding data. If we develop sound and easy to use policies and procedures along this line, users will eventually migrate to using them. The government won't like it, because strong encryption, physical isolation of machines, and no data hoarding will make it more difficult for the now-habitual police-state aspects of their day to day operations. Some data won't exist, and other data they'd actually have to get a real search warrant to acquire. Physical data security in a location where you have a reasonable expectation of privacy is the only effective protection/remedy against warrantless search and seizure. Consult a lawyer (I'm not one) about how to ensure your off-site backups maintain a "reasonable expectation of privacy." I'm not against the government; I'm against government misconduct. The other side of that is if physical data security were habitual, the thousand grains of sand industrial espionage techniques practiced by the Chinese would be more difficult and bear less fruit. It's about the only remedy that will (at least partially) work. Humint to break into people's houses and crack isolated computers onsite is expensive and high risk for the intel agency concerned.

AnsuGisalas

[i]"Surely obscurity is the only fundamental form of security that exists"[/i] How does he figure? Is the best way to safeguard treasure to dig it down on a desert island, make a coded map of the location and then splitting up the map among stakeholders? Or is a bank better, even though everybody can see where it is? The hole in the ground has obscurity on its side. The bank relies instead on hardened routines and secured vaults. Is hiding in the woods the best way to stay alive during a war? It's uncomfortable and scary, but very obscure. A fortified position with ample air defense is awfully hard to hide. Once again "surely" has done its work admirably. It pinpoints an unsupported assumption. And security through obscurity (STO) isn't obsolete because of the internet! That's a straw man fallacy. STO is inherently flawed because it emphasizes the ability of people to keep secrets, and human carelessness is an almost ubiquitous security flaw.

allem

I understand Lanier's point regarding the Ideology of Violation but must take exception to Security Through Obscurity. There may have been a point where obscurity was valid security, but, I feel, that is no longer the situation. Take, for instance, the myriad of 'obscure' devices that have already been compromised (as referenced in your post). With faster machines and faster networks, the entire IPv4 address space can be scanned in short order - where is the obscurity now? Without obscurity, you have no security! Moving to IPv6 greatly enlarges the domain of possible targets but will some day fall into the same predicament as IPv4. Security rests with the developers/operators/users of public-facing services. As an analogy, I lock my house when I leave; I hide valuables (i.e., close specific curtains); I set the alarm - these are precautions used against intrusion. The same model should hold for electronic resources: harden the front end (locks); services should *not* respond with version numbers (curtains); install and monitor Intrusion Protection Systems (alarm). Also, increasing penalties for intrusion, i.e. the CFAA, will only catch the little fish.
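The commenter's point about version numbers is easy to demonstrate. Below is a minimal, hypothetical Python sketch (mine, not allem's) that simply grabs whatever banner a service volunteers on connect; many FTP, SSH, and SMTP daemons announce their exact version this way unless configured not to. The host and port are placeholders.

# Minimal banner-grab sketch -- illustrative only; probe only services you operate.
import socket

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as s:
        try:
            return s.recv(256).decode(errors="replace")
        except socket.timeout:
            return ""  # the service stayed quiet: nothing to fingerprint

print(grab_banner("192.0.2.10", 21))  # an FTP banner often includes a version string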

bestwebsitesdesigner

I agree with the examples you have given; it's doing more harm than good and pushing innocent researchers into illegal activities. My friend started it as a hobby and is now a hacker doing it as a full-time job, but he never gets involved in illegal hacking and only helps the authorities. Good points made here. BestWebsitesDesigner.com

ahanse

The real world is full of research into inconsiderate practices. To name a few: cigarettes, pharmaceuticals and motor vehicles. Hiding behind a digital world is a conspiracy in the making, and having people in the know advocating it beggars belief. The digital world is relatively young but has infiltrated our lives to no end, so exposing its inadequacies in a formal manner is a must. Normally it is done by a recall, but in the digital world that is not so easy. If vulnerabilities found by researchers and others were acknowledged by the offender and fixed, then maybe they would not have to go public. As it is at the moment, they deny everything until the last breath..... BTW: reading all the comments is a bit too much, so if I have doubled up on an idea, sorry.

HAL 9000

Allow some to take advantage? What happens when no one knows and the vulnerability is already being used by the nasties? Having an insecure system, no matter what it is, and not knowing that it has issues is worse than knowing that there are issues and taking steps, if you so choose, to limit them. Some here are assuming that the people who take advantage rely on vulnerabilities disclosed by the Good Guys before taking steps to exploit them, when in fact the real baddies reverse engineer systems and then spread any weaknesses among themselves and take advantage while those who run the systems know no better. So the question here is: should the Good Guys disclose what they find, or stop looking and allow the baddies open slather to continue doing what they like, with no one knowing where they are breaking in? Those who rely on published vulnerabilities for their attack vectors are not the real problem here. The ones we have to constantly be on the lookout for are those who already know the successful attack vectors and tell no one but their like-minded friends, who take advantage. ;) Col

Ozolins

Buying an expensive and heavy safe for controlled substances in a pharmacy leads one to question the wisdom of centralizing known hazardous materials in a known place. Likewise, a dilemma occurs if there is lax security for seemingly benign substances found in our everyday environment. Knowledge is needed to apply what can be done with common bits, whether chemical or digital. I will never forget how a read-only network configuration of a Windows 95 machine (due to not opting to feed in an additional 3-5 installation diskettes) nevertheless became compromised in Jan/Feb 2003 in a small Canadian sales office. On another new, replacement machine, nobody (HP, Microsoft or Corel) was about to take responsibility for the reasons why a legitimate copy of WordPerfect 7 would not work on the Millennium Edition system, which was configured for manual dial-up to the internet, and on one occasion even connected automatically unattended. Needless to say, viruses were found. All of the top three antivirus companies' scans said the same about the found malware names. None revealed the presence of a dialer in the quarantined code. Further hunting about WordPerfect Office 7 revealed an unpublished fact: there were actually four versions of WordPerfect 7 destined for different markets, and while the one for the USA would work OK, the Canadian version was designed only for Windows 95. Only when talking to the Corel vendor about this did a reasonably priced upgrade copy suddenly become available to all, on 9 February 2003. I can see that the data communications carriers are forced to distrust original application hobbyists, designers and coders. Many commercial off-the-shelf devices are becoming increasingly difficult to operate with poorly documented quick-start manuals, because it is assumed nobody will ever bother to read them before they are obsolete. To keep something simple and stupid is an asset, but layers of abstraction are needed to compartmentalize tinkering at each separate level of privilege. But what happens, for example, when Ubuntu 12 is modified to put a non-superuser on an empty sudoers list? The GNOME desktop proceeds to mount foreign drives without even prompting for any password at all! To do the same from bash still requires the use of sudo before mount. This manner of post-production obscurity helps keep the bar raised a bit higher for newbies who are suspicious about freebies in general. That doesn't, however, diminish my support for open-code software. Off-line access to the guilty source code or registered commercial binaries and updates is becoming ever more difficult, while at the same time all users are required to be aware of their own vulnerabilities at almost every step along the way. I believe that keeping serious work and browsing separate makes sense, quite contrary to the billboard messages of the late '90s that were marketing the blurring of the boundary between offline and online activity on one PC. We have evolved past one digital device per global inhabitant. Having the latest seems to matter more than the expense! P.S. A few years later, the same Norton antivirus company that named my quarantined code changed its line to say that it included a dialer! Bottom line: defending commercial interests has fewer budget restrictions, like the military that has no restrictions on the aviation fuel needed to do undisclosed operations. It's a close call, but too late for some of my family members, who realised too late how high the stakes can go.

Deadly Ernest

One sub-thread has gone off on a tangent about the numbers of vulnerabilities in different operating systems. A few years back I saw an interesting article on this subject; I just wish I could remember where. They did note that they ignored situations where someone did NOT alter a supplier or factory default password, and looked only at breaches of the system itself. They took all the data available about vulnerabilities in different operating systems and listed them several ways. What interested me was how the evaluations changed when you changed the counting parameters: list all the vulnerabilities found in Mac OS, Unix, or Linux and you get a fair number; reduce it to the core files needed to get the OS running and there were hardly any - it seemed most were in the auxiliary apps that were also being used. List the vulnerabilities on the basis of allowing the attacker to take over control of the system, and every variant of Windows was listed before Mac OS, Unix, or Linux, with the numbers for each variant of Windows counted as a separate issue. List the vulnerabilities on the basis of them still being open to attack in later versions of the OS, and Windows always headed the list. It seems Mac OS, Unix, and Linux only ever needed to fix a vulnerability in the OS once, while Windows often needed to fix the same problem in later versions. The article did also point out that some vulnerabilities occurred through the OS having ways to sideline security measures for some application programs. The main point they made was that most of these only showed up where the same people were involved in coding the OS as well as the program. Experience has shown that security through obscurity does not work, and never has - in the IT field, the banking field, or the military.

Deadly Ernest

Shouting out to the world at large is not the best option either. Most researchers who identify issues let those involved know and get them active on fixing the issues identified. Should the people responsible decline to take action, or fail to act in a timely manner (and that's another subject), then the only option left is a very public announcement so the general public can take action to protect themselves. I think of it this way: rain in the mountains is causing an influx of water into the river system. The authorities take note and take action to control it. But when they fail to control the increased water, the public needs to be advised in enough time to leave the flood-prone area ahead of the raging torrent headed their way due to a failure of the controls. The same logic can be, and often is, applied to IT security flaws. When the issues with Java were made public, people simply disabled Java until they were fixed and their systems updated. Without that public knowledge all those systems would have remained vulnerable, and many would be vulnerable to this day - some probably haven't updated and are still vulnerable.

Kiers

I think violations must be BROADCAST. 1) It forces lay people to understand how tech can violate them. 2) It forces corporations (e.g., MS, Facebook) to be clean and fair - no hidden corporate malware either. 3) It forces governments, ours and others, to be fair. Otherwise, humanity is screwed if the elite "researchers" keep quiet about the stuff they find. The whole WikiLeaks argument again.

kc6ymp

OK, I worked for a company that makes cars and electronics (the name of which I cannot disclose because of a court order), and we had a division that made devices to monitor parolees. In the end it led to two police officers being shot because of a virus in Microsoft DOS ver. 5.2. Once the virus was detected, a corporate decision was made to quietly modem into each machine and do the repairs, for some 2,000-plus clients. I will wonder to this day: if we had let the public know, would those police officers still be alive today?

Jaqui

That only benefits the companies that MADE the insecure product. Informing people about potential risks at least allows them to make better-informed choices [usually they ignore it anyway]. I think the whole process needs to be modified so that the manufacturers of the vulnerable product have one week to have the exploit patched or are liable for damages. That would then promote making sure the products are more secure to begin with.

Ozolins

Motherboard manufacturers are leaning toward UEFI specifications (Unified Extensible Firmware Interface), but quantum computing will eventually crack anything that is digitally signed. According to UEFI, 16-bit legacy code is untrusted. Losses incurred from legacy software investments also need to be evaluated against investing in new, restrictive commercial OS licensing in order to run newer machines. User experience will determine what blend of independence and common security will be best on new, mobile hybrid devices. There simply are too few players calling the shots at this time.

Odysseus2012

That nearly famous quote of Ben Franklin's regarding security and liberty applies here. Having the responsibility for security untethered from a single corporation strengthens the security. For example, the security of Windows, Internet Explorer, and Microsoft Office is primarily Microsoft's responsibility, but by sharing that responsibility more diversely, Microsoft could produce a more secure, knowledgeable, and timely environment. It is analogous to Linux providing a more secure environment than Windows.

Ozolins

No amount of software hacking will change the performance of hardwired technology. There has been a serious lack of interest in creating devices with independent processors and related addressing systems for parameter related features that need not ever be remotely manipulated.

LarsDennert

Finding security holes is no different from auto crash testing, UL product testing, or structural concrete testing. Searching for weaknesses in products is already standard and common practice. People don't lose their jobs over it. Only through negligence or abuse of a hazard do people get fired, or worse. A car can have a defect that allows it to run over people. Someone accidentally getting run over is very different from someone getting mowed down on purpose. The technology matters not.

norfindel

Obscurity is not security. If you rely on that, when someone finds it he has all the time in the world to infect every vulnerable device (that is, every device, because nobody bothered to fix the vulnerability), and trigger a massive attack. It's like leaving your front door without a lock, while hoping that nobody notices.

dabshere

Having read only your synopsis and not all of Jaron Lanier's book, I can only comment on what you've said about it. That said, I can't see how NOT testing a system for weakness is helpful. The term 'Ideology of Violation' implies a malicious intent, but the act of researching weaknesses in a code-based infrastructure is no more inherently a violation than testing a swing set to make sure it's safe for your kids. It matters what you do with the information, of course, but NOT testing is far more likely to cause harm, from those who thrive in the dark, than actively kicking over the rocks. For the sake of the record, Jaron's claim that pacemaker hacking had not appeared in fiction prior to 2008 is incorrect; the character John Rain used exactly this technique, described in loving detail, to kill someone in the book 'Rain Fall' by Barry Eisler, published in 2003.

Kenton.R

Lanier says "if similar research occurred outside the digital world, there would be repercussions." Well, IT has a long history of automotive comparisons, so why not look to the auto industry? The Corvair had quite a few flaws safety flaws. Would the public have been better served by not knowing about the slew of issues in it's design? (note 1) Vulnerabilities exist and will be exploited by the "bad guys," whether the public is aware of them or not. The failure in Lanier's logic is he implies if the flaws aren't announced to the world that they are less likely to be exploited. On the contrary, they will still be exploited, and the "good guys" won't know there is a flaw or have a fix for it until after it is used by the bad guys. I'll even use an example attacking his "secure by obscure" Macs: last year's rampant infections from the Flashback Trojan. We can't stop the bad guys from looking for holes in our defenses - we've got to do what we can to lock the doors before they try the handle. And, of course, stop building digital equivalents of Corvairs. note 1: This is an interesting comparison for another reason: the auto industry went after Ralph Nader for revealing issues with the Corvair, much like AT&T's legal attack on Aurenheimer for showing their iPad registration site was insecure.

Vulpinemac

An otherwise unprotected PC is usually infected within 15 minutes of connecting to the Internet--perhaps faster now due to high broadband speeds. It's not like someone has to hand-crank out brute-force password hacking when software can do it thousands of times faster. And the payoff is still there if that software succeeds, because all it needs is another few seconds of connection to shoot off its payload and seek new instructions. Most of the time you won't even know it's happening. The point is that the user has little control over what happens in their machine and has to rely on the quality of the software and any anti-malware programs they may have installed. The things you suggest can HELP, but they cannot be the Be All, End All of security by obscurity. The software makers themselves have to be the primary barrier to malware. THAT is what this article is about. I don't fully agree with the author that it is enough; hiding a vulnerability alone doesn't protect the user if some malfeasants have discovered it for themselves and are taking advantage of it. On the other hand, finding the vulnerability, reporting it to the developer *and enforcing a solution* without publicly announcing that work reduces the risk of others using a target of opportunity. Quite literally, reporting a vulnerability tells the "bad guys" where to attack at least until the hole is patched and maybe even afterward. If you look at some of my own comments here, you will see that I've suggested a more secure means of vulnerability reporting that puts the onus of repair where it belongs without risking the user.

Vulpinemac

Yes, isolating your computer entirely from the internet IS the most secure. But your examples of personal correspondence and financial information NEED that connection merely to stay current. Even if you use a second machine as the communicator, any malware it receives is highly likely to migrate over to the remote machine the instant you transfer that data--even if only by physical means like a disk drive or thumb drive. While it may not be able to communicate out again, well too bad; your connected machine is already doing that and the malware will take advantage of even a momentary connection to another machine. There's nothing wrong with the majority of your suggestion, but isolation alone is effectively impossible.

jrcochrane256

If you don't have a friend or family member you're comfortable swapping off-site backup storage with, or you're really paranoid, putting your data lockbox in a self-storage unit, with the lease in your name, that you keep locked--and having the data strongly encrypted and in a locked box--will sustain your reasonable expectation of privacy. I'm not sure what the state of privacy law is on this, and I'd ask a lawyer before trying this, but I think the following would preserve your "reasonable expectation of privacy" for your off-site backups stored with a friend, neighbor, or family member. Do reciprocity on data storage with your friend. Get a small lockbox for your backup media. Write up an agreement that you hereby lease or sublet (whichever) an area of one cubic foot (or however big the lockbox is) in the back of your closet to your friend (by name) for offsite data storage of his personal data and you each agree not to store anything in the boxes but storage media and data and release and absolve the other of any liability for the content of your data. You each agree that the agreement means that each party retains possession of their own data, within the leased cubic foot (whatever) and that you will not tamper with or move the lessee's data lockbox and will not have a key to it. You agree on terms for giving notice ending the agreement so you have adequate time to come pick up your stuff. As I said, you'd need to see a lawyer for how to do this right. Additionally, you strongly encrypt all your data with a strong pass phrase. As long as you and your friend never, ever have a key to each other's lockbox (if you store a spare key, store that with a different friend), it would be very difficult for the government to effectively claim that you had no reasonable expectation of privacy for your data stored within the box. They could still get it--but only by getting a warrant. If they used it for a fishing expedition or whatever, your lawyer would later be able to challenge that warrant and potentially get anything incriminating excluded. I'm not saying, "Oh, this is how to break the law and not get caught." I'm saying that we have a government that, when it wants to put someone in jail, it doesn't matter if whatever they've done to annoy the government doesn't break any laws. I'm not a fan of Martha Stewart, but she's a great object lesson. Government wanted to put her in jail. But what she did, while it annoyed them, wasn't actually illegal. So they metaphorically picked her up and shook her by the ankles until a lie fell out of her mouth, then jailed her for the lie. They didn't care about whether she had broken laws or not, they had decided to put her in jail and didn't care how. I think they jailed her to send a message to the rest of us that if you annoy the government, regardless of whether you follow the law or not, they can and will stick you in jail. Or, you know, maybe they'll stick you in jail to get to someone else who cares about you. Using good data practices protects you from risk by ensuring that, as a law abiding citizen, if you are ever targeted for prosecution, the government has a lot less to work with in manufacturing an offense. And your lawyer has a lot more maneuvering room to defend you. Because sometimes if you didn't do anything their arsenal of dirty tricks *doesn't* manage to get you. (The strong encryption ensures your lawyer may be able to prevent the government from getting a look at your data. 
They might get an overly-compliant judge to overlook breaking into your house or storage unit. However, the only way they're going to get your pass phrase is by getting a court order for you to give it to them. This gives your lawyer the opportunity to represent your interests if the government is pulling a fast one and doesn't really have sufficient grounds for a warrant. Theoretically, any "leads" they pull from your data in an illegal search is "fruit of a poisoned tree." In reality, looking at your data might give them creative ideas about how they can manufacture an offense and make it stick. Strong encryption won't keep them from getting your data in the end, but as a delaying tactic to give your lawyer time to help you out, it could keep you out of jail. For one thing, even if your lawyer doesn't succeed at contesting the order for you to turn in your pass phrase, by getting his objections on the record, he makes it easier to get anything they "find" or claim to have found thrown out on appeal.)
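For readers who want to try the strong-encryption step described above, here is a minimal Python sketch. It is not the commenter's procedure; the file names and iteration count are illustrative, and it assumes the third-party cryptography package, deriving a Fernet key from a passphrase via PBKDF2 and storing the salt alongside the ciphertext.

# Minimal sketch of passphrase-encrypting a backup archive before it goes off-site.
# Illustrative only: file names and iteration count are assumptions, not advice.
import base64
import os
from pathlib import Path

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a 32-byte Fernet key from the passphrase with PBKDF2-HMAC-SHA256.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def encrypt_backup(archive: str, passphrase: bytes) -> str:
    salt = os.urandom(16)  # random salt, stored with the ciphertext
    token = Fernet(derive_key(passphrase, salt)).encrypt(Path(archive).read_bytes())
    out = archive + ".enc"
    Path(out).write_bytes(salt + token)  # first 16 bytes are the salt
    return out

def decrypt_backup(enc_file: str, passphrase: bytes) -> bytes:
    blob = Path(enc_file).read_bytes()
    salt, token = blob[:16], blob[16:]
    return Fernet(derive_key(passphrase, salt)).decrypt(token)

# Example (hypothetical file name):
# encrypt_backup("critical-2013-04.tar", b"correct horse battery staple")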

jrcochrane256

Security is a mindset, not any one technique. Obscurity is one reasonable component in a broader security plan that should include code-based security and compartmentalization of function and storage. Wherever possible, anything mission critical should be on physically secured and isolated hardware and media. Old data should be deleted on a regular basis unless there's a reason for keeping it. Security through data retention policy is important for limiting your liability and risks. This holds even as an individual. Imagine you or a family member is dating someone (maybe live-in) or friends with someone who's at your home a lot and you trust to use your computer. Now imagine you or the family member decide the person is a sleaze or isn't working out and send them out of your life. Unbeknownst to you (because we never really know other people), said ex-sleaze used your computer to do something involving kiddie porn and hid it obscurely on your system. With no individual data retention policy, you're only as safe as the sleaziest person you (or one of your users) ever let use your computer. With a firm data retention policy, all that bad stuff you may not recognize does not persist in any kind of historic backups of your system, gets cleaned off your system (with a good set of standard operating procedures), and no longer exists five years later when the RIAA or MPAA gets access to your drives and old backups because they accuse your kid of piracy. With a good data retention policy, you can destroy anything potentially incriminating that you don't know about, or anything that might later become subject to civil discovery, without breaking any laws about destroying evidence. If you regularly destroy data as policy because it's past its expiration date with no special reason to keep it, it's completely legal---even if it later turns out there was something potentially incriminating or "material" to some lawsuit in there. You didn't destroy it because you were eliminating evidence; you destroyed it because it was old and you destroy all your old data that doesn't meet certain specified criteria. You can't incur costs of "discovery" from data that doesn't exist anymore
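A personal data-retention policy like the one described above can be automated in a few lines. The following is a sketch of the idea only, not the commenter's method; the directory, retention window, and exemption list are made-up parameters, and the dry-run default lets you review what would be deleted before committing.

# Minimal sketch of an automated personal data-retention sweep -- illustrative only.
import time
from pathlib import Path

RETENTION_DAYS = 5 * 365           # e.g., keep nothing older than five years
KEEP_SUFFIXES = {".tax", ".will"}  # examples of records exempt from the sweep

def sweep(root: str, dry_run: bool = True) -> None:
    cutoff = time.time() - RETENTION_DAYS * 86_400
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix in KEEP_SUFFIXES:
            continue
        if path.stat().st_mtime < cutoff:  # last modified before the cutoff
            print(f"{'would delete' if dry_run else 'deleting'}: {path}")
            if not dry_run:
                path.unlink()

if __name__ == "__main__":
    sweep("/home/me/documents", dry_run=True)  # review the list before running for real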

Vulpinemac

because of that offender's negligence? It's bad enough that we already see tons of zero-day exploits, why should we give malware creators more ammunition? As you haven't been reading previous comments, I'll mention one more time that a controlling agency with the power to "recall" defective software if a fix isn't implemented within a reasonable amount of time would be more effective and less harmful to the user.

Vulpinemac

Vulnerabilities that are already being actively exploited need to be announced soonest, and in most cases much more broadly than they are, WITH information on how the users can defend themselves from that exploit. All others need to go through a carefully controlled reporting, implementation, and punishment system designed to prevent a vulnerability from becoming public knowledge until AFTER the fix is in place.

Vulpinemac

Or rather, everything before that statement refutes your last statement. As you yourself pointed out, the simple fact that almost nobody knew about those Mac OS, OS X, Unix, and Linux vulnerabilities meant that when they were discovered and reported to their respective coders, the fix was made and implemented, whereas Windows--which publicly reported nearly every vulnerability it was advised of with the addendum that "a fix is in the works"--was continuously and repeatedly attacked and exploited. Quite clearly, experience AS DESCRIBED BY YOURSELF has proven that Security by Obscurity definitely works--when the coder acts on the vulnerability report before it is made public.

Vulpinemac

As in taking your statement to the point of, "Should the people responsible decline to take action or do so in a timely manner then..." it becomes a legal issue where they are given a Federal time limit to either fix the vulnerability or withdraw the application/software from use--to the point of disabling existing copies AND refunding all income from that product. All that "very public announcement" does is let malware creators know about the vulnerability (often with enough data to exploit it) while the general public (and I don't mean techies like us) float along gleefully ignorant of both the announcement itself and what it contains. Too many commenters here suffer under the illusion that the average consumer is as knowledgeable and understanding of the issues as they are. It simply isn't true. These consumers have been readily duped by malware posing as legitimate software for decades now and they still haven't learned their lesson. The ONLY people a public announcement benefits are the people who know how to take advantage of it.

Michael Kassner

I appreciate you telling us about your situation. It adds a great deal to the discussion. Also, 73s from K0PBX

Michael Kassner

It's been a while since you visited my articles; glad you commented.

Tony Hopkinson

Software was invented because it was easier and cheaper to change than hardware, but that was a long time ago. Software has become much more expensive to change and our hardware capabilities are much greater now, so I wonder how much we could use that to address some of these issues. Interesting thought, that.

Michael Kassner

Wouldn't that get expensive, having to replace the entire device to fix a problem?

Michael Kassner

It is suggested as part of the overall scheme to get things fixed. I would be curious as to how you would fix the problem.

Michael Kassner

Thanks for the mention of the book. I am curious to understand how you know what the intent of the researchers was.

Vulpinemac

I won't argue that the Corvair was a 'sporty' car, but its main problem wasn't the fact that it was a rear-engined car--so was the Tucker, and it was far, far safer. Even the Volkswagen Beetle was safer than the Corvair--simply because the Chevrolet engineers overlooked one very basic fact: the more weight you put behind the rear wheels, the less weight you have on the front wheels. Both the Tucker and the Beetle had the engine effectively on top of the rear axle--avoiding the gross handling issues inherent in the long-tailed Corvair.

Michael Kassner

I had forgotten about all the grief Mr. Nader took way back then. My friend had a twin-turbo Corvair. That was definitely interesting.

jrcochrane256

Yeah, it's sad that one of the security threats we all realistically have to consider is misconduct from our own government. It's sad, but it's reality. Better not to argue with the weather.

Vulpinemac

The discussion is about PC security from ALL attack vectors, including those you have no physical control over. What good is a data retention policy when that data has already been corrupted even before you off-load it? Not all attack vectors require physical access to the machine. Isolation alone works ONLY when all data stored is generated in house. Computer systems used to work that way, but very rarely do they now.

Vulpinemac

As I have clearly stated before and referenced in my previous comment, the developer--whether individual or corporate--is given a certain amount of time to correct the reported vulnerability. Up to that point there is no roll back involved because the developer has been advised of an issue and given time to fix it. Meanwhile, NO MALWARE CREATORS is made aware of this issue because it is held private between the developer and the security researcher through a regulating agency. No awareness of the vulnerability means less chance of an exploit before the fix comes out. But I'm not done yet. Ok, let us for the moment assume that the developer--and yes, I will use your own example of Windows--chooses not to act on the information. Now admittedly some companies are like that so it is a valid point. This is where that regulating agency comes in. Rather than going through an extended and expensive courtroom process to merely "punish" the developer for its procrastination, this agency has the legal power to RECALL the software and prevent any new sales. This wouldn't just be a slap on the wrist but a very real hit to the pocketbook of the developer AND to whatever distribution agencies that developer may have used--who themselves would now have legal right to sue that developer for lost sales and probable returns. That developer must now work even harder to get a fix out before they go bankrupt or roll back to a previous, more secure version at their own expense. Ok, maybe as a developer yourself you don't like that idea; I don't blame you. On the other hand, simply ignoring the problem puts all of your customers at risk at little risk to yourself. The risk needs to go to the source--the developer. Now, as a separate example maybe you have heard about a security researcher who has proven he can take control of a commercial aircraft with an Android smartphone. What is the very first thing wrong with the public release of this information? Can't think of one? I can. Terrorists and extremists of almost all walks now know they can cause a plane crash whenever they want without even having to overtly attack the cockpit and only sacrifice a single zealot rather than a whole cell. Yes, this will force some drastic measures on the part of the airlines and aircraft manufacturers, but keep in mind just how many airplanes are already in service and try to imagine the cost that's going to be involved in blocking this attack. Had the vulnerability not been made public, these airlines and aviation companies wouldn't be in a panic to block this vulnerability and the fix could have been released without putting the general public at risk.

Tony Hopkinson

I've just rolled out Windows 8 to 2,000 PCs, upgraded all the office suites, tools, etc., reworked my internal software to cope with the changes, done the required training, and updated all my IS procedures. Then the product gets recalled. Who's going to pay for the rollback?

Deadly Ernest

When Win 95 came out, a whole bunch of security holes were found and reported. A couple of years later Win 98 was released, and a whole bunch of security holes were found and reported - many were the same ones found in Win 95. This repeated again with Win 2000 and Win XP. It looked to me as though the 'patches' applied to the earlier versions of Windows were not incorporated in the code for the later versions prior to release, which ended up with their own versions of the patches. In short, the original code was never fixed; a patch to further hide the vulnerability was put in but not made part of the base code. Many of the previously fixed vulnerabilities became known again very quickly due to the speed with which some people released malware to take advantage of them. Microsoft has yet to respond immediately to a reported vulnerability. There have even been cases of them delaying months to provide patches for 'zero day' faults. In some cases they had months of notice and took no action until after the problem was made public. During all that time malware was making use of the vulnerabilities. Wow, I finally found a way to make the 'reply' button work - I have to open it as a new page instead of in the existing page.

Deadly Ernest

There are times when the company releasing the program will NOT expend the resources to fix the problem until AFTER the issue has gone public and looks like costing them sales, and thus revenue. In most cases where a vulnerability is mentioned to the public, it's already being exploited by the bad guys. Though many of the general public won't do anything about it, some will, thus some will be helped by the announcement. Often the public announcement doesn't include the fine detail of how the vulnerability works, but does state what it can do. This is how it should be: as a user I don't need the fine detail, but I need to know the effect. If the companies won't fix a problem when told about it, then they need to be shamed into doing so. I forget the name of the car, but back in the 1960s-1970s there was a US car that had an issue with ruptured fuel tanks and fires when rear-ended. The later records showed the company knew about the design problem before it went into full production, but it was only after numerous fires and public shaming that they took action to fix the issue. That is the kind of behaviour we face with most of the companies that use security through obscurity.

Vulpinemac

Secrecy is the FIRST step towards operational security; fences and walls are subsequent steps to maintain that first one. Secrecy attempts to ensure obscurity--by simply not letting the opposition know what you have up your sleeve. As such, by publicly announcing each and every vulnerability, you're literally giving away the keys to the data malware creators want.

AnsuGisalas

"Surely obscurity is the only fundamental form of security that exists" It's not, it absolutely isn't. The fundamental forms of security have nothing to do with keeping themselves secret. In fact, in order to be fundamental, security can't be about secrecy, since secrecy is an add-on to an fundamental security strategy. You do something, either openly or secretly. Or, more properly: You do something, and either assume it to be secret, or assume it not to be secret. Security through obscurity is about assuming that normal comms stay secret. As such is has no advantages over non-obscurity, since the exact same comms are used in both... non-obscurity just realizes that secrets tend to get out, and don't trust the secrecy.

Vulpinemac

because it clearly emphasizes that we need an oversight agency that can command *and enforce* timely repairs and effective punishments. A mere 'fine' doesn't affect any but the poorest developers; the punishment needs to offer a real effect on the culprit--even to the point of shutting them down, if necessary.

Deadly Ernest

That very issue caused so many problems that I stopped visiting TR for a few months. At the time I became aware that the issue wasn't solely Firefox, nor was it all Firefox, but a conflict between the way Firefox was now working, with less Java and JavaScript in it, and the amount of garbage TR had being pushed out by CNET on each page. Even now there are 16 different web sites / servers called on to display content on this page. Six appear to have nothing to do with CNET or TR. Also, two third-party sites appear to be about tracking where you are, and three are clearly related to pushing third-party ads at you. I now have most TR functionality back in Firefox, but also wonder if part of the issue for me may be one of the scripts I block to kill all the ads. I go to great lengths to block ads because, unlike in the USA, I have to PAY for EVERY MB of data that comes down the pipe to my system. Thus I stop the sites from activating the scripts that are connected with costing me money by shoving third-party ads down my throat. It also has the benefit of speeding up the page download a lot.

Deadly Ernest

Microsoft refused to take action until AFTER the researchers publicly reported finding the vulnerability, a full ten weeks AFTER they told Microsoft about it. Only then did Microsoft take action. Two weeks after the media report by the researchers, Microsoft made an announcement about the vulnerability at the same time they announced the patch they were releasing. In some of the follow-up media reports, some Microsoft senior staff said it only took them just over a week to develop the patch once they started on it. They said that in reply to a comment about them needing three months to develop a patch. Thus it's clear that Microsoft management had no intention of doing anything about the vulnerability or patching it until after it became a media circus. It's this attitude that is the biggest issue with this subject. Or take the vulnerability they'd previously fixed in earlier versions of Windows without ever changing the baseline source code: the fault was still in the source code for the new version of Windows and needed a new patch written for that version, even though they'd known about the problem for more than a decade.

Vulpinemac

Fortunately, Microsoft reported the patch was IN PLACE when they reported its presence.

HAL 9000

There is a thread about how the newer versions are not playing well with TR here: http://www.techrepublic.com/forum/discussions/103-401131?tag=content;discussion-table Col

Vulpinemac

Putting in a Legal time limit with the threat of a complete stoppage of sales AND refunding income from defective versions sold would encourage more rapid and effective patching without ever letting the malware authors know the vulnerability was there. This wouldn't stop those malware authors from seeking and exploiting vulnerabilities they discover themselves, but once the exploit is discovered the effective life of that exploit would be very visibly limited.
