
Creating malevolent AI: A manual

A new paper by AI experts explores the construction of dangerous artificial intelligence.


The boom in AI promises to enrich our lives. AI assistants keep our schedules in order; robot "crew" members help us on cruises; and "swarm AI" even offers us a shot at winning big in the gambling world. But there's a dark side as well: AI that can cause great harm.

Much thought has been devoted to the dangers of AI, and centers like the Future of Life Institute in Cambridge, Mass., and the Future of Humanity Institute at Oxford University are devoting resources to the creation of 'safe' AI. Yet few have attempted to intentionally create malevolent AI.

Until now.

A new paper by computer scientist Federico Pistono and Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville and author of Artificial Superintelligence: A Futuristic Approach, explores "Unethical Research: How to Create a Malevolent Artificial Intelligence."

In this case, malevolent AI is defined as any system that acts in discord with the intentions of its users. Dangerous AI can be created accidentally or intentionally, and examples abound. In a previous paper about pathways to dangerous AI, Yampolskiy put it like this:

"Wall Street trading, nuclear power plants, Social Security compensation, credit histories, and traffic lights are all software controlled, and are only one serious design flaw away from creating disastrous consequences for millions of people. The situation is even more dangerous with software specifically designed for malicious purposes, such as viruses, spyware, Trojan horses, worms, and other hazardous software."

SEE: Why Microsoft's 'Tay' AI bot went wrong (TechRepublic)

So why would anyone want to create malevolent AI? Here are a handful of groups, according to the paper, that might be interested:

  • Militaries aiming to develop cyber-weapons and robot soldiers.
  • Governments seeking to establish hegemony, control people, or take down other governments.
  • Corporations seeking monopoly and destroying competition through illegal means.
  • Black hats attempting to steal information or resources, or to destroy cyber-infrastructure targets.
  • Doomsday cults attempting to bring about the end of the world by any means.

The list goes on, and includes depressed people seeking to commit suicide, psychopaths hoping to achieve fame, and even AI researchers who need to secure funding.

SEE: AI gone wrong: Cybersecurity director warns of 'malevolent AI' (TechRepublic)

As for how malevolent AI could be created by "ill-informed but not purposefully malevolent software designers," the paper outlines several paths:

  • The system could be deployed immediately, without testing.
  • It could be given unrestricted access to large data sets from sources like Facebook.
  • It could be given unvetted goals, with no thought to the consequences.
  • It could be put in charge of critical infrastructure such as energy grids, nuclear weapons, financial markets, or communications.

According to the authors, two conditions in particular open the door to malevolent AI. The first is the absence of an oversight board that can review the ethics of the research. "Since we currently don't have any control mechanism for artificial general intelligence (AGI), creating one right now would be very dangerous and so unethical," said Yampolskiy. "Oversight boards need to evaluate likelihood of any project to become an uncontrolled AGI."

The second is closed-source code. Proprietary, closed-source software is much harder to scrutinize, making it more vulnerable to attack, and intelligence agencies have manipulated it in previous cases.

"In this environment, any group with the intent of creating a malevolent artificial intelligence would find the ideal conditions for operating in quasi-total obscurity, without any oversight board and without being screened or monitored," the authors wrote, "all the while being protected by copyright law, patent law, industrial secret, or in the name of 'national security.'"

"I hope no one tries to create malevolent AI," said Yampolskiy. If, indeed, it does happen, Nick Bostrom's vision of superintelligence may not be too far-fetched.


