Artificial intelligence technology is bringing with it an avalanche of ethical concerns—and thus far, there is little consensus about how to preempt the potential risks. This ebook looks at what some experts consider the biggest challenges and how various organizations are trying to address them.
From the ebook:
One of the issues that arises when people are discussing the use of artificial intelligence (AI) is how to ensure that decisions based on AI are ethical. It’s a valid concern.
“While AI is by no means human, by no means can we treat it like just a program,” said Michael Biltz, managing director of Accenture Technology Vision at consulting firm Accenture. “In fact, creating AIs should be viewed more like raising a child than programming an application. That’s because AI has grown to the point where it can have just as much influence as the people using it.”
Employees are not only trained to do a specific job; they're also expected to understand company policies around diversity and privacy, for example. "AIs need to be trained and 'raised' in much the same way, to not only perform a task but to act as a responsible co-worker and representative of the company," he said.
AI systems in a variety of industries are making decisions today—or will be in the near future—that could affect virtually everything they touch. "But the reality is that we don't yet have the standards in place to govern what's acceptable and what's not, or to outline what a company is responsible or liable for as a result of [AI-based] decisions," Biltz said.