AI experts weigh in on Microsoft CEO's 10 new rules for artificial intelligence

Satya Nadella's vision for AI includes mandates for transparency, ethical considerations, and education. AI experts think it's a good first step, but say the rules raise new questions.

"Now is the time for greater coordination and collaboration on AI," Microsoft CEO Satya Nadella wrote in a blog post for Slate on Tuesday.

Like IBM, Google, Facebook, and other tech giants, Microsoft has jumped into AI full-force, releasing Azure Machine Learning, a cloud-based analytics tool that is part of its Cortana Intelligence Suite, in 2015. It has also made mistakes, recently drawing media attention with the release of Tay, a teenage chatbot that began spouting racist and sexist slurs on Twitter.

In his Slate piece, Nadella urged a rethinking of AI—moving away from focusing on the benefits and harms of AI and towards thinking about collaboration between machines and humans, and "amplified" intelligence. "The beauty of machines and humans working in tandem gets lost in the discussion about whether AI is a good thing or a bad thing," he wrote.

Nadella also unveiled 10 "rules" for AI. Here they are, lightly edited and condensed:

  1. AI must be designed to assist humanity and to respect human autonomy. Collaborative robots, or co-bots, should do dangerous work to create a safety net and safeguards for human workers.
  2. AI must be transparent: We should be aware of how the technology works and what its rules are. We must have an understanding of how the technology sees and analyzes the world. Ethics and design go hand in hand.
  3. AI must maximize efficiencies without destroying the dignity of people: It should preserve cultural commitments, empowering diversity. We need broader, deeper, and more diverse engagement of populations in the design of these systems.
  4. AI must be designed for intelligent privacy—sophisticated protections that secure personal and group information in ways that earn trust.
  5. AI must have algorithmic accountability so that humans can undo unintended harm. We must design these technologies for the expected and the unexpected.
  6. AI must guard against bias, ensuring proper and representative research so that the wrong heuristics cannot be used to discriminate.

And here's what humans will need:

  1. Empathy: perceiving others' thoughts and feelings, collaborating, and building relationships will be critical in the human-AI world.
  2. Education—to create and manage innovations we cannot fathom today, we will need increased investment in education. Developing the knowledge and skills needed to implement new technologies on a large scale is a difficult social problem that takes a long time to resolve. There is a direct connection between innovation, skills, wages, and wealth.
  3. Creativity. Machines will continue to enrich and augment our creativity.
  4. Judgment and accountability—We may be willing to accept a computer-generated diagnosis or legal decision, but we will still expect a human to be ultimately accountable for the outcomes.

But what do the AI experts think about these rules and goals? We reached out to several to see what Nadella got right—and what might be missing. Here's what they think:

Vincent Conitzer, professor of computer science at Duke University:

Much of this rings true, particularly the importance of investing in education, not only to train the few scientists and engineers who will improve these technologies, but to make sure that we have well-informed citizens who can make the right decisions in managing them.

But some "musts" are easier said than done. For example, if one AI algorithm is transparent but another appears to perform much better, as commonly happens in machine learning, can we expect a military entity or a company in a competitive market to use the former? Identifying the goals is the first step, but it is not enough. A next step is to figure out how to set up the right incentives to attain these goals, which is a daunting task.

Oren Etzioni, CEO at the Allen Institute for AI:

Like the 10 commandments, they are hard to argue with, though the 10 commandments were more succinct. I would add that his vision of AI as technology that collaborates with people and benefits humanity is just right.

Angelica Lim, roboticist at Aldebaran Robotics:

I completely agree with the need for AI transparency. As computer scientists, we have a responsibility to choose our AI models and techniques wisely. Today, we are seeing two kinds of AI models:

  1. Models that work very well but are relatively black-box, such as deep learning. These opaque AIs are becoming more and more popular.
  2. Models that may work less well but are transparent.

Why would we want the second kind? Because if they ever do NOT work (and no AI is perfect), we can *understand* why and fix the problem. Especially for mission-critical systems, we must choose transparency. We must be able to introspect our AI, so that if something unexpected happens, we can debug and fix it. Transparency means control. The UK principles of robotics also include a transparency rule.
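
To make the contrast concrete, here is a minimal editorial sketch (not from Lim's comments or Nadella's post) of what introspecting a transparent model can look like, assuming Python with scikit-learn and its bundled iris dataset: a small decision tree's learned rules can be printed and traced end to end, which is exactly the kind of debugging an opaque deep network does not readily allow.

```python
# Editorial sketch (assumes scikit-learn): a "transparent" model whose learned
# decision rules can be printed and inspected, unlike an opaque deep network.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# Every prediction can be traced to explicit thresholds on named features,
# which is what makes it possible to understand and fix a wrong answer.
print(export_text(tree, feature_names=iris.feature_names))
```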

Nadella also says that empathy, which is so difficult to replicate in machines, will be valuable in the human-AI world. I agree. A study from the University of Michigan shows that college kids are 40% less empathetic than their counterparts 20 to 30 years ago. We have a responsibility to think about whether our technology will erode or improve our empathy. For example, when harassment happens online, it's easy to disengage empathetically.

What if our technology, AI, and robots encouraged empathetic behaviour? What if AI had empathy? Nadella says empathy is difficult to replicate in machines, but it's a challenge we should undertake.

Andrew Moore, professor of computer science at Carnegie Mellon University:

At the top level, I think his comments are great. But I do think we should face the fact that we have hard and nuanced questions ahead of us, and we need to understand that these ten principles alone would not give us enough guidance to make those hard decisions. For example, the question of whether it is ethical for a car's emergency safety system to put the welfare of its driver ahead of the welfare of a pedestrian is one that we (likely, governments) need to rule on right away in order to keep making progress in very advanced safety systems—and yet these principles are not specific enough to suggest how to do that.

What Satya's principles do say is very important, though. In the design of a system where a car is trying to save lives, there does need to be transparency so that there can be a public view of the tradeoffs that a specific car company is making. I also believe strongly in Satya's accountability principle: it is never okay to say, "Well, sorry, but that was what the computer said to do." One aspect of this accountability that we are pushing on really hard now is how to build a process by which an AI system cannot be released into a product without being formally verified for correctness.

Toby Walsh, Professor of AI at The University of New South Wales:

It's interesting that all the major players—Google, Facebook, Microsoft—are embracing AI as a key component of their future. And it agrees with what many AI researchers, like myself, believe. The future should be that AI means Augmenting (our) Intelligence. This will be the best way for the rising tide of prosperity to lift all boats rather than to put many people out of work.

As for the 10 laws themselves, any such exercise is sure to leave gaps, just like Asimov's original three laws of robotics left room for many problematic issues. Ironically, this is the very problem we have in trying to program AI systems. For instance, it's impossible to define the laws for driving a car safely, to anticipate all the unexpected situations that might arise, and to have a definitive set of rules that cover every possible situation.

I would, however, agree with Nadella that the trajectory of AI and its influence on society is only just beginning. It is likely to have a profound impact on all our lives. But I fear that, whilst it is a critical step, it is too early to design the ethical and empathetic framework needed to ensure a good outcome. We have only started to explore how we might get computers to behave ethically. Until we can, we need to be very careful about giving autonomy to any AI systems.

Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville:

After the Tay fiasco, it is great, but not surprising, to see the head of Microsoft start addressing fundamental issues of AI safety. Unfortunately, his 10 laws only address problems we are already encountering with our "narrow" AIs and completely ignore the much bigger problems we will encounter from systems with human or superhuman-level performance. Nonetheless, it is a step in the right direction and I commend him for it.

Image: Microsoft

About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
