Salesforce tackling bias in AI with new Trailhead module

Salesforce's Trailhead education platform continues to receive new learning modules, with AI ethics at the forefront in the latest update.

Video: How AI can be used to remove bias in business. Salesforce's architect of ethical AI practice sat down with Dan Patterson to discuss how artificial intelligence can be used to enhance business processes and reduce bias.

On Tuesday, Salesforce announced the addition of modules to its Trailhead developer education platform, in a push to advance the responsible use of artificial intelligence (AI) models. The newly introduced "Responsible Creation of Artificial Intelligence" module is intended to "empower developers, designers, researchers, writers, product managers… to learn how to use and build AI in a responsible and trusted way and understand the impact it can have on end users, business, and society," Kathy Baxter, architect of ethical AI practice at Salesforce, said in a blog post.

SEE: Special report: Managing AI and ML in the enterprise (free PDF) (TechRepublic)

Trailhead, first launched in 2014, is Salesforce's free, individualized learning platform for upskilling employees and closing skills gaps. Salesforce's myTrailhead platform, which provides a branded experience for internal corporate training projects, reached general availability in March.

How large of an issue is bias in AI?

AI is far too often a "black box": the inferences provided by AI or machine learning algorithms appear valid, yet the consumers of those inferences do not necessarily understand how they were reached. This effectively reduces AI and machine learning algorithms to something known to work in practice, but not known to work in theory.

To understand the effects of AI use on society—and in so doing, combat negative effects—researchers at MIT last month proposed the field of "machine behavior," to study how AI evolves, as a sort of analogue to ethology. The researchers note that pundits and academics alike "are raising the alarm about the broad, unintended consequences of AI agents that can exhibit behaviours and produce downstream societal effects—both positive and negative—that are unanticipated by their creators."

The need for Salesforce's initiative was underscored by April's CIO Jury, which found that 92% of tech leaders have no policy for the ethical use of AI. Fortunately, awareness of the issue does exist in the boardroom, as the executives polled indicated a need for an AI ethics policy.

How can programmers remove bias from AI systems?

The quality of a machine learning algorithm reflects the quality of the data used to train it, and inherent biases in that data can unduly influence a model's behavior. When it comes to mitigating bias, "a lot of it is just being aware of what kind of data you're drawing from," Rebecca Parsons, CTO of ThoughtWorks, told TechRepublic at the 2018 Grace Hopper Celebration.
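In practice, a first pass at "being aware of what kind of data you're drawing from" can be as simple as auditing how groups and outcomes are distributed in the training set before any model is built. Below is a minimal sketch in Python, assuming the data lives in a pandas DataFrame; the column names (gender, approved) are purely illustrative, not part of any Salesforce or Trailhead API:

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Summarize how each group is represented in the training data and
    how the positive-label rate varies across groups. Large gaps in
    either column are a signal to investigate the data source."""
    return df.groupby(group_col).agg(
        share_of_rows=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),  # assumes a 0/1 label
    )

# Hypothetical toy data for illustration only.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M"],
    "approved": [0, 1, 1, 1, 0, 1],
})
print(audit_representation(df, "gender", "approved"))
```

A skewed share_of_rows points to under-representation in the sample, while a skewed positive_rate may reflect historical bias baked into the labels themselves.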

"There are also techniques where it's a bit easier to understand the basis on which a recommendation is being made. And so, maybe you can train using different methods from the same data, and look at the one telling you what kinds of patterns it's picking up in the data, and that might give you insight into the bias that might exist in the data," she said.

For more, check out "Salesforce rolls out new low-code services for building AI-powered features," "Gen Z and millennials want AI-based personalized support," and "Google pulls plug on AI ethics group only a few weeks after inception" on ZDNet.
