
23 principles for beneficial AI: Tech leaders establish new guidelines

Tesla CEO Elon Musk, DeepMind founder Demis Hassabis, and nearly 1,000 other leaders in AI just signed a set of guidelines for developing safe AI. Here are the highlights, and what they mean.

"What does a good future look like?" asked Tesla CEO Elon Musk, speaking on a panel at the Beneficial AI conference in Asilomar, CA. "We're headed towards either superintelligence or civilization ending."

The question addressed a concern that many share when it comes to AI: When computers begin making decisions, how do we ensure that they align with human values? We've seen cases of bias in machine learning, racial slurs coming from a Microsoft chatbot, and a fatal accident involving an AI-powered car. And many experts continue to investigate the other ways AI can go wrong, hoping to limit the harm it could cause, such as through the use of autonomous weapons.

Attention to ethical issues in AI is nothing new, and several organizations have already begun to address it. OpenAI, a nonprofit AI research organization backed by Musk, is dedicated to ethical research in AI. In September 2016, Google, Amazon, IBM, Microsoft, and Facebook teamed up to establish a "partnership on AI" focused on exploring the ethical implications of the technology. And "moral issues" is one of the top AI trends for 2017, according to experts in the field.

Adding to the conversation this week, the Future of Life Institute unveiled the Asilomar Principles—a set of 23 points, established by AI experts, roboticists, and tech leaders, to guide the development of safe AI. Developed at the five-day Beneficial AI conference, the principles have been signed by more than 2,000 people, including 844 AI and robotics researchers. The list includes Musk, Google DeepMind founder Demis Hassabis, cosmologist Stephen Hawking, and many other top tech leaders.

The list is broken up into research issues, ethics and values, and longer-term issues in AI. Here is a summary of the principles.

Research issues

The goal of AI research should be "beneficial intelligence," according to the document. Research should address how to prevent AI systems from being hacked, and how to maintain "people's resources and purpose," it stated. Law should "keep pace with AI," and the question of AI "values" should be considered. Additionally, researchers and policymakers should collaborate, and an overall culture of trust and respect should be "fostered among researchers and developers of AI."

Ethics and values

AI should be developed in a way that is secure and transparent, according to the principles. Autonomous systems should provide ways of explaining their actions. Those who create AI systems must take responsibility for the implications of how those systems are used. Autonomous systems should be designed to reflect human values. And people should have the opportunity to control how the data these systems collect is shared and used.

SEE: Police use robot to kill for first time; AI experts say it's no big deal but worry about future (TechRepublic)

AI should benefit as many people as possible, and should contribute to humanity. Also, "humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives," the document stated.

It's "essential for systems that work tightly with humans in the loop," and for humans to be the final decision makers, wrote Francesca Rossi, research scientist at the IBM T.J. Watson Research Centre. "When you have human and machine tightly working together, you want this to be a real team. So you want the human to be really sure that the AI system works with values aligned to that person. It takes a lot of discussion to understand those values."

And an "AI Arms Race"—one in which countries compete to build up intelligent, autonomous weapons—has a great potential for harm, and should be actively avoided.

Longer-term issues

We do not know what AI will be capable of, and should plan for "catastrophic or existential risks."

Especially when it comes to self-improving AI, systems "must be subject to strict safety and control measures," the document stated. Finally, "superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization."

The Asilomar AI Principles are important guidelines for the development of safe AI, and they are broadly supported by the AI community, industry, and public intellectuals, according to Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville. "Additionally, tremendous support for the principles as can be seen from the number of signatories provides credibility to the young field of AI Safety, which it still needs in the face of numerous AI Risk deniers."

Deniers, he said, are those who "refuse to accept that poorly designed or malevolent AI/Superintelligence can present a huge risk to humanity."

Image: screenshot, Beneficial AI conference

About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
