How to determine the right amount of AI for your users

Too much automation leads to more human errors, according to Rob Keefer, chief scientist at software consulting company POMIET.

As artificial intelligence (AI) becomes more accessible, software engineers are increasingly looking to incorporate these algorithms into tech products and services. However, developers must be wary of the technology's downsides, according to Rob Keefer, chief scientist at software consulting company POMIET.

Too much automation can lead to more human errors, and as such, technologists must work to determine just the right amount of AI to mix into their systems to truly help their users: the Goldilocks amount of AI.

"Often as technologists we make technology as the hero. As user advocate I'd like to make the user the hero," Keefer said in a session at the 2018 Code PaLOUsa conference in Louisville, KY.

SEE: Software automation policy guidelines (Tech Pro Research)

Making tech the hero has some specific human consequences, Keefer said:

  • Learning diminishment: When users trust a tech product too much, their own skills begin to atrophy. Then, when a surprise situation demands that they react, they no longer remember how, Keefer said. This is why pilots spend hours every week honing their skills in simulated flights, even though planes now largely fly themselves, he said.
  • Unfounded trust: When users put too much trust in technology, accidents happen, Keefer said. For example, in the recent Uber self-driving car crash that killed a pedestrian, the safety driver was at the wheel, but was trusting the car's tech to navigate around obstacles. In another case, a bus driver followed GPS directions without paying attention to posted signs warning of a nine-foot bridge clearance, and crashed into the bridge.
  • Complacency and bias: Cognitive psychologists coined the terms automation complacency (paying less attention because the technology is in charge) and automation bias (trusting the technology's output regardless of context) to describe these phenomena. "These two concepts are things that we as technologists need to take into account when building and designing systems," Keefer said.

The question, Keefer said, is how to let technology act as an advisor, rather than simply doing tasks for us.

He gave the example of Audi's A8 self-driving car, which gives users escalating warnings when they need to take the wheel back from autonomous mode. If the user does not respond, the car will call 911. "It's about transitioning back to the user appropriately," Keefer said.
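In code, that kind of escalating handoff can be modeled as a simple ladder of timed alerts. The Python sketch below is a minimal illustration of the pattern Keefer describes, not Audi's implementation; the stage names, timings, and callback signatures are all assumptions made for the example.

    import time

    # Hypothetical escalation ladder for handing control back to a driver.
    # Stage names and timeouts are illustrative, not Audi's actual values.
    ESCALATION_STAGES = [
        ("visual alert", 5.0),    # dashboard prompt
        ("audible alert", 5.0),   # chime plus voice prompt
        ("haptic alert", 5.0),    # seat or steering-wheel vibration
    ]

    def request_takeover(driver_has_responded, place_emergency_call):
        """Escalate warnings until the driver re-engages; otherwise fail safe."""
        for alert, timeout in ESCALATION_STAGES:
            print(f"Issuing {alert}; waiting {timeout}s for the driver")
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                if driver_has_responded():
                    return True   # control transitioned back to the human
                time.sleep(0.1)
        place_emergency_call()    # every warning ignored: summon help
        return False

The design point is that each stage gives the human a bounded window to re-engage before the system escalates further, rather than failing silently or yanking control away without warning.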

Robots learn well when given precise instructions, Keefer said. The hard part is getting these systems to give users the right amount of information to make a decision.

"Automation should provide situational awareness," Keefer said. He gave the example of the Iron Man movies, in which Tony Stark and the digital assistant in his suit, Jarvis, work together to solve problems. Jarvis compiles data for Tony, but then they walk through the information at a human pace. "We want to develop systems that advise us in those ways and operate at a user's pace, rather than just presenting the conclusion to them," Keefer said.

SEE: Research: Companies lack skills to implement and support AI and machine learning (Tech Pro Research)

The goal is creating human-machine systems that are greater than the sum of their parts, Keefer said. Separately, Tony Stark and his suit are a wealthy man and an item of clothing, respectively, he added. But when they are together, they create Iron Man.

A strong AI partnership involves the following, Keefer said:

  • Shared vision: Both the AI and the human understand and are working toward the same goal
  • Compatible/complementary skills: The AI and the human each contribute in the areas where they are strongest. For example, humans are better at recognition than recall, so technologists should present information in ways users can recognize, rather than forcing them to remember it (see the sketch after this list).
  • Good communication systems
  • Cooperation: Research on teamwork can be applied to teams of humans and machines, Keefer said.
  • Useful feedback
  • Identification of leadership
  • Holistic perspective
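As a small illustration of the recognition-over-recall point above, a tool can list its available actions for the user to pick from, instead of requiring a memorized command name. This Python sketch is hypothetical; the names are invented for the example.

    def choose_action(actions):
        """Present a numbered menu (recognition) instead of a bare prompt (recall).

        actions: mapping from a human-readable label to a zero-argument callable.
        """
        labels = list(actions)
        for i, label in enumerate(labels, start=1):
            print(f"{i}. {label}")                 # the user recognizes the option...
        choice = int(input("Choose a number: "))   # ...instead of recalling its name
        return actions[labels[choice - 1]]()

The same principle scales up from menus to autocomplete and to AI systems that surface candidate answers for the user to confirm.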

Technologists need to think in terms of both/and when it comes to humans and robots, rather than either/or, Keefer said. "Success in complex domains will depend on the ability for humans and tech to work together as coordinated teammates," he added.

