At this point in the artificial intelligence transformation, it’s easier to spot the mistakes than the successes.

When Apple and Goldman Sachs rolled out the Apple Card, one high-profile applicant described how the team clearly failed the “explainability” requirement for AI efforts.

David Heinemeier Hansson, co-founder and CTO of Basecamp, complained about the card’s application process after he and his wife both applied. Her credit limit was much lower than his, even though her credit score was better. When Heinemeier Hansson tried to find out why, the first customer service agent had no answer:

“The first person was like ‘I don’t know why, but I swear we’re not discriminating, it’s just the algorithm.’”

The second customer service agent highlighted the explainability failure:

“Second rep went on about how she couldn’t actually access the real reasoning (again IT’S JUST THE ALGORITHM is implied).”

How can Apple and Goldman Sachs prove the credit review process is fair if no one has any clue how it works?

Every company should have an AI ethicist to define, document, and explain its algorithms, according to “Ethical AI: Five Guiding Pillars,” a KPMG report by Traci Gusher, principal, Innovation and Enterprise Solutions, artificial intelligence, and Todd Lohr, principal, Advisory, and a KPMG Digital Lighthouse Network leader.

For AI to succeed, someone has to own these algorithms and be able to explain exactly how the analysis works to team members and clients, the KPMG principals write in the report. They stress that AI must be governed and controlled in a meaningful way to win acceptance among clients and employees.

“AI-driven enterprises know where and when to use AI,” Gusher said. “They have an AI compass that helps point them in the right direction for governance, explainability, and value.”

Naming a specific owner of AI efforts at the corporate level can also make transparency easier. This owner should take the lead in explaining to customers how their data is used and how it influences the customer experience.

The report authors recommend that companies let customers choose whether to opt in or out of data sharing, while at the same time illustrating the benefits of opting in.

KPMG recommends following these guiding pillars to ensure AI efforts are ethical:

  1. Transforming the workplace
  2. Establishing oversight and governance
  3. Aligning cybersecurity and ethical AI
  4. Mitigating bias
  5. Increasing transparency

To monitor and remove bias in AI, companies should make sure algorithms align with corporate values and ethics, as well as compliance, security, and quality standards. When bias could have an adverse social impact, companies should arrange independent reviews of those models.
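One common way to monitor a model for bias of the kind Heinemeier Hansson described is a demographic-parity check: compare approval (or credit-limit) rates across groups and flag large gaps for independent review. The sketch below is illustrative only; the data, group labels, and tolerance threshold are all hypothetical, and real reviews would use far richer fairness metrics.

```python
# Illustrative sketch: a simple demographic-parity check on a model's
# yes/no decisions. All data and the tolerance value are hypothetical.

def approval_rates(decisions, groups):
    """Return the approval rate per group, given parallel lists of
    boolean decisions and group labels."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit run: flag the model for independent review if the
# gap between groups exceeds a tolerance set by the ethics policy.
decisions = [True, True, False, True, False, False, True, False]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = approval_rates(decisions, groups)
needs_independent_review = parity_gap(rates) > 0.2  # tolerance is a policy choice
```

In this toy run, group “a” is approved 75% of the time and group “b” only 25%, so the 0.5 gap would trip the review flag. The threshold itself is exactly the kind of judgment an AI ethicist would own.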

Security concerns sharpen

Executives are starting to understand the security risks around AI, namely “adversarial attacks that poison algorithms by tampering with training data” that could compromise privacy and create bias.

Seventy-two percent of CEOs stated that strong cybersecurity is vital to building trust in AI systems, compared to only 15% last year. Healthcare and finance leaders are the most concerned about ethics in AI, according to KPMG research conducted with 750 industry insiders in October 2019.


The KPMG authors recommend taking these steps to build security in AI:

  • Identify who trained the algorithms
  • Track the origin of the data and any changes made to it
  • Maintain continuous review and confirmation of an algorithm’s effectiveness and accuracy
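The three steps above amount to keeping an audit trail alongside each model. A minimal sketch of such a record might look like the following; the field names and sample values are assumptions for illustration, not part of the KPMG report.

```python
# Illustrative sketch of a per-model audit record covering the three steps:
# who trained the algorithm, where the data came from (and how it changed),
# and an ongoing accuracy log. All names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AlgorithmAuditRecord:
    model_name: str
    trained_by: str                  # who trained the algorithm
    data_origin: str                 # origin of the training data
    data_changes: list = field(default_factory=list)  # changes made to the data
    accuracy_log: list = field(default_factory=list)  # periodic review results

    def log_data_change(self, description):
        self.data_changes.append(description)

    def log_accuracy(self, date, accuracy):
        self.accuracy_log.append((date, accuracy))

    def latest_accuracy(self):
        return self.accuracy_log[-1][1] if self.accuracy_log else None

# Hypothetical usage
record = AlgorithmAuditRecord(
    model_name="credit-limit-model",
    trained_by="risk-modeling-team",
    data_origin="2019 bureau extract",
)
record.log_data_change("removed records with missing income")
record.log_accuracy("2019-11-01", 0.91)
record.log_accuracy("2019-12-01", 0.89)
```

A record like this gives customer service agents, and regulators, something better than “it’s just the algorithm” to point to.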
Image: KPMG — A new report from KPMG recommends following these guidelines to increase transparency and reduce bias in artificial intelligence work.