You come up with a dynamite app. Companies are thrilled with it. Next thing you know, your app is used to deny visitors access to a website. Michael Kassner considers repurposed software.
Retaliation by Anonymous for the FBI raid against Megaupload was swift. Within minutes of the takedown, several websites were under a Denial of Service (DoS) attack. Via Twitter, Anonymous proclaimed:
"The government takes down Megaupload. 15 minutes later, Anonymous takes down government and record-label sites."
If you'd like, we can discuss the details — even the politics — of the attack, but let's do it later in the comment section. Right now, I'd like to examine a subject that doesn't get much visibility.
To pull off these obviously successful DoS attacks, Anonymous used a software package called Low Orbit Ion Cannon (LOIC). LOIC was originally created to stress-test networks. But once LOIC's source code became publicly available, Anonymous and others adapted it for DoS attacks.
Another famous/infamous — depending on your point of view — repurposed software application is DVD Decrypter. It was originally developed to create back-up images of DVDs. It did not take long to figure out that DVD Decrypter also did a great job decrypting copy-protected movies. There are other examples, but you get the point.
I guess I shouldn't be surprised. Throughout history, many devices have been repurposed to do harmful things. Still, it was nagging me. Can't some intelligence be added to software that disables it when being used inappropriately?
I'm betting you programming types are shaking your heads, thinking I'm completely clueless. Still, I felt compelled to find out what my trusted sources thought. I sent an email to several with the following question:
"In the years that I have been covering IT Security, I've reported countless times about well-intentioned applications and test software being repurposed and used against the very people it's supposed to help.
My question: Is it possible to change this? Or, will the debate concerning software usage be similar to that of gun control?"
Here's what they had to say.

Adrienne Porter Felt: I think your gun analogy is an apt one. This problem arises with all vulnerability-detection tools. Typically, their intended use is for developers to find vulnerabilities in their own applications so that they can patch them.
However, attackers can use the same tools to find security holes. The solution is not to stop producing tools for developers. Attackers are motivated to find bugs regardless of the availability of tools, whereas developers who are not security experts may not find them without the help of tools.

Andre' M. DiMino: Excellent topic. Many security applications can be considered double-edged swords. These tools are essential to security auditing and testing.
Metasploit comes to mind as one of the most popular tools in this space. I'm not of the opinion that any limitations should be placed on their availability, as I believe that they do more good than harm. However, in the case of Metasploit, I do believe that some discretion should be used when they incorporate an exploit for an unpatched vulnerability.
One aspect that is often overlooked is the punishment and incarceration associated with criminal abuse of networks. This was and should continue to be a key deterrent to the illegal use of legitimate programs.

Johannes B. Ullrich: I use web browsers as an example of a common tool that can be "turned around" to do harm. Most web-application exploits require nothing more than a simple web browser to execute.
I don't think there is a fix to prevent this. Safeguards like rate limiting may help (think about it as only allowing semi-automatic but not full-automatic weapons).
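The rate limiting Ullrich mentions is often implemented server-side as a token bucket: each client gets a small allowance of tokens that refills at a steady rate, so short bursts pass but sustained flooding is throttled. The sketch below is a minimal illustration of that idea; the class name and parameters are my own, not taken from any particular product.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: permits short bursts but caps the
    sustained request rate from a single client."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = burst          # maximum burst size
        self.tokens = burst            # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                # serve the request
        return False                   # throttle or drop the request

# Example: allow bursts of 5 requests, then 2 requests/second sustained.
bucket = TokenBucket(rate_per_sec=2, burst=5)
results = [bucket.allow() for _ in range(10)]
print(results)  # the first 5 rapid-fire requests pass; the rest are denied
```

In a real deployment the bucket would be keyed per source IP (or per session), and denied requests would receive an HTTP 429 or simply be dropped, which is what blunts a flood from any single LOIC instance.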
Another often-proposed solution is "fingerprinting," where the tool would create a unique identifier allowing you to determine the attacker. But, just like the often-proposed bullet-stamping technology, it is easily defeated and could have unintended privacy consequences.

Lenny Zeltser: Some programs possess capabilities that are more likely to be used for malicious purposes. An example of this is an exploit kit, designed to take advantage of vulnerabilities to gain control over a remote system. However, many attributes of software can be used for good as well as bad, such as remote access to a system, data collection, and even user activity monitoring.
That's why I consider malware to be code that's used to perform malicious actions. This definition of malware focuses on how the program is being used, rather than what it might be capable of. Furthermore, by incorporating the user's intent into the definition of malware, we highlight the fact that behind most malware there is a person, using it to his or her own advantage, usually at the victim's expense.

Rick Moy: I don't think there's a whole lot that can be done to prevent what people do with applications. Anyone can write a different version of something like LOIC. What we can and should do is aggressively improve our defenses.

Stephen Groat: As a security community, we have to figure out methods that prevent our tools from being used maliciously. They might not work 100% of the time, but they'll stop most basic attackers. By filtering those out, we can focus on real threats.
Also, the mystique of hacking has to change. Right now, Anonymous uses LOIC as a participatory botnet, where people allow their computers to be used for illegal purposes. People don't understand the legal and financial ramifications of participating. They believe that "hacking is cool" and they're "fighting the system."
If we can figure out a way to change the current social perception of hacking, we'll have less of a problem with participatory botnets, DoS, and tools like LOIC.

William Francis: People are always going to find ways to take software created for legitimate purposes and misuse it. To stick with your firearm analogy, just as fully automatic weapons are more likely to be used by criminals than duck hunters, certain software is more likely to be adopted by hackers.
So, just as manufacturers of automatic weapons have a responsibility to keep their products out of the wrong hands, software engineers who work with low-level algorithms that have the potential to be misused need to take reasonable measures to keep their software under control.
When I worked in the finance/ATM sector, a lot of care was taken to protect the inner workings of the firmware. We found that running interference caused criminals to look for easier doors to slip through.
It's time for you to voice your opinion. Is there a solution? And do you have any thoughts as to the effectiveness of the DoS attacks by Anonymous?
I want to thank each of the experts who answered a difficult question. I am humbled by their willingness to help.
Note: Slide above, courtesy of Wikipedia.