The widespread adoption of artificial intelligence technology by organizations and governments over the next decade or so promises to be just as disruptive to society as personal computers and mobile smart devices were in the previous two decades, perhaps even more so. AI technology has the potential to permeate almost every nook and cranny of our collective psyche and to change how we interact with the world around us, and with each other, in ways we cannot yet imagine.
The uncertainty surrounding whatever societal disruptions AI may cause raises many concerns, particularly for companies like Microsoft that research, develop, and promote AI technology. While the economic benefits for AI industry leaders seem obvious, there are still too many unanswerable questions and too many unpredictable consequences to make AI a surefire profitable endeavor.
This uncomfortable level of uncertainty is one of the reasons Microsoft recently published the book The Future Computed: Artificial Intelligence and its role in society (PDF). The book is available as a free download and lays out Microsoft's vision of how AI could impact everyday life for both businesses and consumers. It also suggests several areas where academia, industry, and government need to step in with applicable and effective ethics, best practices, and legislation that will rein in any potential harm AI may cause as it becomes an integral part of modern life.
The future is here
When we discuss disruption, we are not talking about something as drastic and dystopian as the malevolent AI depicted in movies like The Terminator. Microsoft, like most everyone else, is more concerned about the abuse of AI by malevolent humans. We have already seen what sensitive personal data in the hands of unethical political agents can do to unsuspecting individuals. All the unscrupulous among us lack is knowing which buttons to push—something AI can provide with unflinching precision if not counterbalanced with enforceable ethics and legislation.
Microsoft's book on the future of AI suggests that technologists must work closely with government, academia, business, and other stakeholders to offer guidance on how AI is used and deployed under these six ethical principles:
- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
Microsoft's main practical solution to many of the potential problems associated with AI development is democratization. If everyone has access to AI developer tools and can build their own AI-based solutions, no one can monopolize the benefits, or the potential pitfalls, of artificial intelligence. The book suggests that democratization of AI may not be enough on its own, however; governments and other stakeholders must also be actively involved.
Historically, technological advancement has disrupted the status quo regardless of which new technology is being introduced, and AI will be no exception. Some industries are bound to suffer from widespread AI use, while others will flourish. There is also no doubt that some jobs will be eliminated and that new jobs will be created to take their place. How this all shakes out over the next few decades is something we will have to deal with collectively.
Microsoft, in an effort to stave off reactionary action by governments, ethicists, and other stakeholders, proposes that technologists contemplate and discuss these disruptions now, while AI technology is in its nascent stages and before the unethical and unscrupulous among us do irrevocable damage to this potentially profitable endeavor. That is probably very good advice, because the implementation of AI carries with it the potential for unprecedented harm if used for malevolent purposes.
Also see
- How AI tools make privacy policies easier to understand (TechRepublic)
- The malicious uses of AI: Why it's urgent to prepare now (TechRepublic)
- IBM Watson CTO: The 3 ethical principles AI needs to embrace (TechRepublic)
- Artificial intelligence: McKinsey talks workforce, training, and AI ethics (ZDNet)
- AI and the Future of Business (ZDNet special feature)
Do you think we are ethically prepared for AI? Share your thoughts and opinions with your peers at TechRepublic in the discussion thread below.
Mark W. Kaelin has been writing and editing stories about the IT industry, gadgets, finance, accounting, and tech-life for more than 25 years. Most recently, he has been a regular contributor to BreakingModern.com, aNewDomain.net, and TechRepublic.