Tech giants have formed a 'Partnership on AI' to formulate best practices and establish an 'open platform for discussion.' AI experts see it as a positive step, but worry about the power of private interests.
Recognizing the huge impact of artificial intelligence on the world today, with big data and machine learning powering an explosion of AI in every sphere of our lives, from self-driving cars to customer service chatbots, the world's biggest tech companies have formed an alliance to prepare for AI's challenges.
On Wednesday, Facebook, Microsoft, IBM, Amazon, and Google announced the creation of a "Partnership on AI." According to its website, the group was formed to "study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society."
AI experts see this mostly as a good thing.
"The Partnership on AI reflects the increasing presence of AI within all our lives, and the immense ethical, economical and societal impacts it will have on us," said Toby Walsh, AI professor at the University of New South Wales.
Susan Schneider, associate professor at the University of Connecticut, agreed. "There is an increasing sentiment that AI will shape our future," she said. "One day, robots may fight our wars, take care of our homes, and even be our personal advisors. Technological unemployment is projected to increase dramatically from AI and robotics, and many worry that the development of smarter than human AI could threaten humanity."
Roman Yampolskiy, head of the Cybersecurity Lab at the University of Louisville, believes it to be an important development, as well. "It is meant to provide a central point of guidance and oversight for the quickly progressing AI industry," he said. Yampolskiy is "particularly happy to see all the big players, at the edge of machine learning research, embrace this initiative. While AI research benefits from competition, AI safety is more likely to benefit from collaboration."
Schneider views the partnership as a good start to addressing these challenges, "especially if it truly includes advisors from outside of the inner circle of AI leaders, such as ethicists and user activists."
However, some AI experts wonder about the details of the plan.
"I view this with cautious optimism," said Vincent Conitzer, professor of computer science at Duke University. "It is great that these companies, which have some of the leading minds in AI working at them, recognize the importance of AI research and its societal implications, and are able to join in such a partnership."
"Of course, the proof is in the pudding," he said, and details must be worked out.
Conitzer wonders what the partnership will do that current academic conferences can't provide.
"Will they share information on their latest AI technology that they would not share publicly?" Conitzer asked. "Will there be conferences like the 'faculty summits' that some of these companies host? How will the initiative relate to several philanthropic efforts underway, such as Elon Musk's $11 million AI safety program for funding researchers, or Eric Horvitz's One Hundred Year Study on Artificial Intelligence at Stanford?
And there is concern that the partnership may be controlled by private interests.
"The development of AI should not be controlled by business interests alone," said Schneider, "It requires a public dialogue about how AI can best benefit humanity.
Walsh shared the concern. "Whilst I welcome the initiative's broad goals of building trust and tackling the ethical challenges of AI, I have several serious reservations," he said. "I would have been much happier if a nonpartisan body like ACM, AAAI or IEEE had been leading this partnership."
"There is a very real concern that there is too large a concentration of strength in these tech giants," Walsh said. "And, as their actions in arranging their tax affairs, scanning books, lobbying Congress and elsewhere demonstrate, their success does not align completely with public good."