How to make AI ethics a priority at your company: 5 tips

One-third of professionals see ethical risks as a top concern about artificial intelligence and technology, according to a Deloitte report.

Video: The ethics that AI will need to succeed. We caught up with the IBM Watson CTO at Mobile World Congress 2018 to talk about the ethical dilemmas that are coming for artificial intelligence.

The majority of executives (76%) expect artificial intelligence (AI) to "substantially transform" their organizations within the next three years, according to a Deloitte report released on Wednesday. While AI helps professionals focus on higher-value work, ethical risks remain a top concern for businesses, the report found.

SEE: Artificial intelligence: A business leader's guide (free PDF) (TechRepublic)

Ethical concerns around AI are not new, with science fiction writers discussing the topic as early as the mid-20th century, the report said. Advancements in AI technology and increased AI adoption in recent years have made the discussion around ethics much more urgent, prompting some countries to implement AI ethics guidelines.

The top ethical concerns around the technology include bias and discrimination, a lack of transparency, erosion of privacy, poor accountability, and workforce displacement and transitions, according to the report.

However, the report outlined the following five ways companies can make AI ethics a priority:

1. Call upon the board and stakeholders: Given the reputational and financial risks that AI ethics issues can create, the company's board must be involved. The report recommends creating an advisory committee of cross-functional leaders to partner with stakeholders and oversee the design and use of AI solutions.

2. Leverage tech to avoid risks: Companies should equip AI developers with the necessary training to test and remediate systems that may produce unintentional bias. The organization should also take advantage of analytical tools designed to detect variables of bias.
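As a rough illustration of what such an analytical check might look like, the sketch below computes a disparate-impact ratio (the "four-fifths rule" commonly used as a bias flag) on hypothetical model decisions. The data, thresholds, and function names here are invented for this example; production tooling would be far more thorough.

```python
# Illustrative sketch only: a simple disparate-impact check of the kind
# that bias-detection tooling automates. All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g., approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a_outcomes, group_b_outcomes):
    """Ratio of the lower group selection rate to the higher one.
    Values below ~0.8 (the 'four-fifths rule') are a common flag
    for possible bias that warrants review."""
    rate_a = selection_rate(group_a_outcomes)
    rate_b = selection_rate(group_b_outcomes)
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical model decisions (1 = approved) for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 approved -> rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 approved -> rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("Potential bias flagged for human review")
```

A check like this only surfaces a statistical disparity; deciding whether it reflects unfair bias, and how to remediate it, still requires the trained developers the report describes.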

3. Trust through transparency: Companies must be transparent with stakeholders about their AI use if they want to build trust. They should clearly explain which AI systems are in use, what their purpose is, and how they affect customers. This communication is especially important for stakeholders who don't have a tech background.

4. Quell employee anxiety: Tech will undoubtedly affect jobs in some way, the report said. Companies should be honest with their employees and tell them how their jobs might be affected. This transparency will help alleviate anxiety around the topic.

5. Balance the benefits and risks of AI: Companies must weigh the risks and benefits of AI and balance them accordingly. The organization should always align AI with its overall business initiatives, rather than simply implementing the tech to stay relevant.

For more lessons on AI adoption, check out this TechRepublic article.

Image: iStockphoto/nespix

By Macy Bayern

Macy Bayern is an Associate Staff Writer for TechRepublic. A recent graduate from the University of Texas at Austin's Liberal Arts Honors Program, Macy covers tech news and trends.