IEEE announces 3 AI standards to protect human well-being in the robot revolution

Designers of artificial intelligence systems must take human ethical considerations into account to protect our society, according to IEEE.

As artificial intelligence (AI) and autonomous systems begin to pervade daily life, developers of these technologies must keep certain ethical considerations in mind to ensure the safety of human society.

On Friday, technical professional organization IEEE announced three new standards for ethics in AI that prioritize human well-being as these technologies advance, according to a press release. They will become a part of the IEEE publication Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, a living document that encourages technologists to prioritize ethical considerations in working with AI.

"Robotics and autonomy are expected to introduce big innovations for society. Recently, there has been growing public attention focused on possible social problems that might occur, as well as on the huge potential benefits that can be realized," Satoshi Tadokoro, president of the IEEE Robotics and Automation Society, said in the release. "Some incorrect information from fiction and imagination may unfortunately be observed in those discussions. As the world's largest technical professional organization, IEEE will introduce knowledge and wisdom based on the accepted facts of science and technology to help reach public decisions that maximize the overall benefits for humanity."


The three IEEE standards projects are chaired by subject matter experts in their respective fields of study. They include the following:

1. Standard for ethically driven nudging for robotic, intelligent, and autonomous systems

This standard examines "nudges," which in the context of AI are overt or hidden suggestions designed to influence human behavior or emotions. It establishes the concepts, functions, and benefits of nudging needed to ensure that robotic and autonomous systems adhere to worldwide ethical and moral theories, and it emphasizes the need to align the ethics and engineering communities on how these systems are designed and implemented.

2. Standard for fail-safe design of autonomous and semi-autonomous systems

Autonomous and semi-autonomous systems that malfunction can harm human users, society, and the environment, IEEE noted. Effective fail-safe measures are needed to lower the risks of such breakdowns and to give developers, installers, and operators clear technical instructions for terminating a compromised system safely. This standard establishes procedures for measuring, testing, and certifying an autonomous system's ability to fail safely on a scale from weak to strong, along with instructions for improving performance. It also gives developers, users, and regulators a basis for designing fail-safe systems that improve accountability; a rough sketch of what failing safely can look like in code appears after the list of standards below.

3. Well-being metrics standard for ethical artificial intelligence and autonomous systems

As AI systems improve, programmers, engineers, and technologists must consider how the products and services they build can improve human well-being in terms of economic growth and productivity. This standard identifies human well-being indicators and metrics that may be directly impacted by autonomous and intelligent systems, and provides a baseline to align the data that these systems should include so they can be used to increase human well-being.
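
For readers who want a concrete sense of what "failing safely" can look like in software, the following is a minimal, hypothetical sketch of a watchdog-style control loop written in Python. The Actuator class, the health_check function, and the timeout value are illustrative assumptions, not part of the IEEE standard or any particular robotics API; the point is only that motion stops whenever the system's self-checks stop passing.

import time

# Minimal illustrative sketch of a watchdog-style fail-safe loop.
# Actuator and health_check() are hypothetical stand-ins, not part of
# the IEEE standard or any specific robotics framework.

class Actuator:
    """Hypothetical actuator with a guaranteed-safe stop command."""
    def command(self, velocity: float) -> None:
        print(f"driving at {velocity:.2f} m/s")

    def stop(self) -> None:
        print("actuators halted; system is in its safe state")

def health_check() -> bool:
    """Placeholder for sensor, communication, and self-diagnostic checks."""
    return True  # a real system would aggregate live diagnostics here

def control_loop(actuator: Actuator, timeout_s: float = 0.5) -> None:
    last_ok = time.monotonic()
    try:
        while True:
            if health_check():
                last_ok = time.monotonic()
                actuator.command(velocity=1.0)
            # Fail safe: if no check has passed within the timeout,
            # terminate motion instead of continuing in a degraded state.
            if time.monotonic() - last_ok > timeout_s:
                actuator.stop()
                return
            time.sleep(0.1)
    except Exception:
        # Any unhandled fault also drives the system to its safe state.
        actuator.stop()
        raise

The design choice worth noting is that the safe action (stopping) is the default outcome of silence or failure rather than something that must be explicitly commanded, which is the general spirit of fail-safe design the standard addresses.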

"As the technology advances, it's clear that autonomous and intelligent systems will play an increasing role our daily lives," Konstantinos Karachalios, managing director for IEEE-SA, said in the release. "The efforts we undertake today are of utmost urgency to ensure all stakeholders are afforded the peace of mind to know these systems have been well thought out and incorporate the globally accepted ethical considerations at the heart of these technologies."


