
The malicious uses of AI: Why it's urgent to prepare now

In an extensive report, 26 experts analyze the security threats posed by artificial intelligence and offer guidance on forecasting, prevention, and mitigation. They note the AI-security nexus also has positive applications.

Artificial intelligence (AI) and machine learning (ML), terms some consider oxymorons, are changing many facets of our lives. "I believe 2018 is the year that this [artificial intelligence] will start to become mainstream, to begin to impact many aspects of our lives in a truly ubiquitous and meaningful way," Ralph Haupter, president of Microsoft Asia, tells Catherine Clifford in this CNBC post.

However, all the good brought about by enlisting AI and ML does not change the fact that technology, in and of itself, cannot discriminate between good and evil, which means individuals and organizations intent on harm or criminal gain will benefit from these tools as well.

That fact has not gone unnoticed. Academics, as well as business leaders, are voicing their concerns. For example, last year 100 robotics and AI entrepreneurs sent an open letter to the United Nations asking for a ban on autonomous killer robots.

SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)

Recently, 26 experts from a wide range of disciplines and organizations, including Oxford University's Future of Humanity Institute, OpenAI, the Center for a New American Security, and the Electronic Frontier Foundation, published an extensive report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation that discusses the likelihood that rogue nation-states, terrorists, and criminals are or soon will be using AI and ML to further their agendas.

The report begins with the coauthors defining AI and ML:

"AI refers to the use of digital technology to create systems that are capable of performing tasks commonly thought to require intelligence. Machine learning is variously characterized as either a sub-field of AI or a separate field, and refers to the development of digital systems that improve their performance on a given task over time through experience."

And to make sure everyone is on the same page, the researchers define "malicious use":

"We define 'malicious use' loosely, to include all practices that are intended to compromise the security of individuals, groups, or a society. Note that one could read much of our document under various possible perspectives on what constitutes malicious use, as the interventions and structural issues we discuss are fairly general."

SEE: Defending against cyberwar: How the cybersecurity elite are working to prevent a digital apocalypse (free PDF) (TechRepublic cover story)

Three security domains need to be considered

The researchers sorted the threats they identified into the following three domains.

  • Digital security: The big concern here is that AI removes an either/or tradeoff that has constrained digital bad guys for many years: whether to focus on the size of an attack or its efficacy. For instance, spear phishing gets a whole lot easier when AI plays a role in designing the attack as well as in its command and control. The report's authors add, "We also expect novel attacks that exploit human vulnerabilities (e.g., through the use of speech synthesis for impersonation), existing software vulnerabilities (e.g., through automated hacking), or the vulnerabilities of AI systems (e.g., through adversarial examples and data poisoning)." (See the adversarial-example sketch after this list.)
  • Physical security: AI allows attacks by physical systems to be automated, which increases the threat area and success potential. Additionally, the report's authors note, "We also expect novel attacks that subvert cyber-physical systems (e.g., causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g., a swarm of thousands of micro-drones)."
  • Political security: The use of AI to automate tasks involved in surveillance, persuasion, and deception is already underway and likely to increase. The report's authors write, "We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data."
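The "adversarial examples" mentioned in the digital-security bullet deserve a concrete illustration. Below is a minimal sketch of my own (not from the report) of the idea behind the fast gradient sign method, applied to a toy logistic-regression malware detector: because the model's score moves fastest along the sign of its weight vector, nudging every feature slightly in the opposite direction drops a malicious sample's score below the detection threshold. The weights, features, and threshold are all made-up assumptions.

```python
import numpy as np

# Toy logistic-regression "malware detector" with fixed, made-up weights.
# (Illustrative sketch of an adversarial example; not taken from the report.)
weights = np.array([1.2, -0.8, 0.5, -1.1, 0.9, 0.7, -0.6, 1.0, -0.9, 0.4])
bias = 0.3

def score(x):
    """The model's estimated probability that the input is malicious."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# Feature vector of a malicious sample; the detector correctly flags it.
x = np.array([-0.05, 0.10, 0.00, -0.20, 0.05, -0.10, 0.00, 0.15, -0.10, -0.05])
print(f"original score:    {score(x):.3f}")     # ~0.64 -> flagged as malicious

# Fast-gradient-sign-style evasion: nudge every feature a small amount in
# the direction that most lowers the score. For logistic regression that
# direction is -sign(weights), because the score's gradient with respect
# to the input is proportional to the weight vector.
epsilon = 0.1
x_adv = x - epsilon * np.sign(weights)
print(f"adversarial score: {score(x_adv):.3f}")  # ~0.44 -> slips past the filter
```

The same arithmetic scales to classifiers with millions of features, where the per-feature nudge can be imperceptibly small, which is why the report counts adversarial examples and data poisoning among the vulnerabilities of AI systems themselves.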

Put simply, AI and ML make things a lot easier for the dark side.

SEE: Cambridge Analytica's Facebook game in politics was just the beginning, the enterprise was next (TechRepublic)

AI is getting cheaper

New York Times reporter Cade Metz makes the case in his February 2018 article "Good News: A.I. Is Getting Cheaper. That's Also Bad News" that AI- and ML-based technologies are becoming more affordable. Metz explains that a drone manufactured by Skydio uses components available to anyone and, considering what it can do, is relatively inexpensive at less than $3,000. Using a smartphone app, the drone's owner can tell the airborne drone to follow someone. Metz adds, "Once the drone starts tracking, its subject will find it remarkably hard to shake."

SEE: Skydio's R1 is a $2,500 selfie drone that flies itself (CNET)

AI will also be the best defense

The report's authors suggest the same benefits that AI and ML afford those wanting to do harm will also improve defense and mitigation—criminal investigation, for example:

"One general category of AI-enabled defenses worth considering in an overall assessment is the use of AI in criminal investigations and counterterrorism. AI is already beginning to see wider adoption for a wide range of law-enforcement purposes, such as facial recognition by surveillance cameras and social-network analysis."

SEE: What is AI? Everything you need to know about Artificial Intelligence (ZDNet)

Final thoughts

In the report's conclusion, the authors do not mince words:

"While many uncertainties remain, it is clear that AI will figure prominently in the security landscape of the future, that opportunities for malicious use abound, and that more can and should be done."

Still, the authors offer hope:

"Though the specific risks of malicious use across the digital, physical, and political domains are myriad, we believe that understanding the commonalities across this landscape, including the role of AI in enabling larger-scale and more numerous attacks, is helpful in illuminating the world ahead and informing better prevention and mitigation efforts.

"We urge readers to consider ways in which they might be able to advance the collective understanding of the AI-security nexus, and to join the dialogue about ensuring that the rapid development of AI proceeds not just safely and fairly but also securely."

An example of the difficult balancing act ahead of us is Stephen Hawking. He gave us insights about the universe we would not have had if AI had not given him the ability to communicate (YouTube). To put a finer point on it, Ana Santos Rutschman, a Jaharis Faculty Fellow at DePaul University, wrote an article on The Conversation titled "Stephen Hawking warned about the perils of artificial intelligence—yet AI gave him a voice."
