The EU's guidelines offer a framework for ethical, trustworthy artificial intelligence for businesses and governments.
This week, the European Union published a set of ethical guidelines detailing how businesses and governments can achieve trustworthy artificial intelligence (AI)—that is, AI that is lawful, ethical, and socially and technologically robust.
Trustworthy AI should respect all laws and regulations, as well as meet the following requirements, according to the guidelines:
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to drive positive social change and enhance sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
While these guidelines are not laws, they set out a framework for lawmakers and companies to achieve trustworthy AI.
"The EU's new Ethics guidelines for trustworthy AI are a considered and constructive step toward addressing the impact of trustworthy AI on humankind, and toward laying the groundwork for necessary further discussion between key stakeholders in the private, public and governmental sectors," Juan Miguel de Joya, a consultant at the International Telecommunication Union and a member of the Association for Computing Machinery's US Technology Policy Committee, told TechRepublic.
The business impact of the EU's AI ethics guidelines
The EU's new guidelines should start conversations among businesses worldwide that may not have the resources to independently assess the impact of the technology, de Joya said.
"Perhaps most fundamentally and significantly, release of the new guidelines is an opportunity for government, business, computing professionals and other stakeholders—particularly in the United States—to capture and channel the momentum of these discussions into real understanding of AI's potential and pitfalls," de Joya said.
These guidelines are "a welcome, solid and significant step forward," Lorraine Kisselburgh, a visiting fellow in the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University, and a member of the Association for Computing Machinery's US Technology Policy Committee, told TechRepublic.
"Industries such as Amazon, Google, Uber, and Boeing have been rocked this year with issues regarding the fairness, accuracy, and safety of AI-based algorithms and autonomous systems," Kisselburgh said. "At the same time, faced with the tremendous opportunities for AI systems to improve the health, education, and economic welfare of our society—and global competition to generate innovative solutions—industry, academia, and government are struggling with the need to optimize the societal benefits of emerging AI technologies while maintaining clearly articulated principles of ethical practice."
Governments and organizations worldwide, including the European Commission and the US Congress, continue to wrestle with developing foundational principles to ensure that AI is fair, accountable, and transparent, as well as safe, reliable, and trustworthy, Kisselburgh said. These guidelines help lay out a path forward to realizing these goals.
The EU's guidelines include a pilot Trustworthy AI Assessment List for companies to use when building AI systems, covering the seven requirements mentioned above. The list includes questions such as "Is there a self-learning or autonomous AI system or use case? If so, did you put in place more specific mechanisms of control and oversight?"; "Did you assess potential forms of attacks to which the AI system could be vulnerable?"; and "Did you put in place ways to measure whether your system is making an unacceptable amount of inaccurate predictions?"
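To make that last question concrete, one way a team might operationalize it is to track a model's error rate against a pre-agreed acceptability threshold. The sketch below is purely illustrative; the function names and the 5% threshold are assumptions, not anything specified in the EU guidelines.

```python
# Illustrative sketch: monitoring whether a model's error rate stays
# within an agreed limit. The 5% threshold is a placeholder that a real
# team would set based on the risk profile of its use case.

def error_rate(predictions, ground_truth):
    """Fraction of predictions that disagree with the ground truth."""
    if len(predictions) != len(ground_truth):
        raise ValueError("predictions and ground_truth must be the same length")
    wrong = sum(p != t for p, t in zip(predictions, ground_truth))
    return wrong / len(predictions)

def within_acceptable_error(predictions, ground_truth, threshold=0.05):
    """Return True if the observed error rate is at or below the threshold."""
    return error_rate(predictions, ground_truth) <= threshold
```

In practice, a check like this would run continuously against labeled samples from production traffic, with the threshold documented as part of the system's accountability record.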
The EU plans to pilot the framework with a number of companies and organizations, and will review the list and build in feedback in early 2020, before proposing next steps, according to the guidelines announcement.