Artificial intelligence: Know its purported benefits and risks

Some high-profile technologists claim that artificial intelligence (AI) may destroy the human race. Should we be worried?

Image: iStock/Linda Bucklin

Artificial intelligence (AI) has been on the radar of technology leaders for decades, and while we don't yet have autonomous machines helping us make decisions or carry out complex tasks, most enterprises have reaped some benefit from AI. From "intelligent" networks that reroute packets around a hardware or network failure to advanced "rules engines" that do everything from estimating the risk of a loan to routing aircraft to the appropriate gate, machines are increasingly able to perform tasks that were once the domain of humans.

As AI advances, however, some of technology's big thinkers, including Stephen Hawking and Elon Musk, are seeing more peril than progress, and IT leaders will be expected to comment on the risks of AI, in addition to the purported benefits.

What's the danger of what we're creating?

Traditional computing has focused on performing mathematical calculations at ever-increasing speeds. Tasks like rendering the latest animated film or simulating weather patterns are essentially massive math problems, and the current generation of computers excels at performing the necessary calculations quickly. However, computers remain inept at tasks humans take for granted, chief among them the ability to learn and to modify behavior accordingly. Computers can beat us at chess, but they can't tell us why they prefer a Picasso to a Monet.

Artificial intelligence research aims to invent an entirely new class of computers. Rather than building faster calculating machines, AI researchers focus on machine learning, with the ultimate goal of a computer that can interpret its environment and modify itself to adapt to that environment. Essentially, this computer would be able to change its own behavior based on the knowledge it acquires, while retaining the ability to perform massively complex calculations at speed.
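
To make that distinction concrete, here is a minimal sketch in Python. It is illustrative only (the hidden multiplier, learning rate, and function names are invented for this example, not drawn from any real AI system): a traditional program applies a fixed rule forever, while a learning program adjusts its own parameter based on the errors it observes.

```python
import random

# Traditional computing: the behavior is fixed by the programmer, forever.
def fixed_rule(celsius):
    return celsius * 1.8 + 32  # always the same calculation

# Machine learning (vastly simplified): behavior changes with observation.
# A one-parameter model discovers an unknown multiplier from feedback alone.
def learn_multiplier(samples, learning_rate=0.1):
    w = 0.0  # the program's modifiable "knowledge"
    for x, target in samples:
        error = target - w * x          # how wrong was the current behavior?
        w += learning_rate * error * x  # adjust behavior to reduce the error
    return w

random.seed(0)
hidden_truth = 2.5  # the environment's rule, never shown to the learner directly
data = [(x, hidden_truth * x) for x in (random.uniform(-1, 1) for _ in range(1000))]
print(learn_multiplier(data))  # prints a value very close to 2.5
```

The second function is an enormous simplification, but it captures the difference: the first program can only ever do what it was told, while the second rewrites its own parameter in response to its environment.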

A machine with these capabilities could quickly outpace the understanding of its creators. With access to everything from the entirety of human wisdom via the internet to connected financial markets and power grids, such a machine could acquire knowledge, modify itself based on that knowledge, and continue the cycle. This machine could help humanity solve some of its most difficult problems, or it could conclude that humans are a threat or competing entity, and seek to mitigate that threat.

A new species?

Should AI efforts prove successful, scientists suggest the result could amount to a new species of life on this planet. We might relate to this new species as a dog relates to humans: dogs can understand some human commands and respond to human interaction at some level, yet even the most intelligent dog cannot comprehend something like a smartphone or the processes that produced it. An AI species might quickly acquire the ability to produce knowledge, weapons, vehicles, or other technologies that we cannot comprehend at even the most basic level.

Can the process be controlled?

What's notable about the many voices expressing concern about AI is that they are well-versed in technology, and all indicate that part of their concern stems from the possibility that we may be entirely unable to stop the creation of this new species. Various "fail safe" technologies have been proposed, whereby an intelligent machine could be disabled. But if the very premise of these machines is that they learn and modify themselves, how could we "outsmart" a superior intelligence and prevent it from removing a human-installed safeguard?

Furthermore, asking humanity to abandon research into a technology that could vastly improve the human condition seems equally untenable. No government or extra-governmental entity could ban invention and innovation, and even if one could, it would be impossible to know in advance which research is "safe" and which is a few steps away from producing an intelligence that has little need for humanity.

What, me worry?

Much of this debate is reminiscent of the litany of other disasters said to threaten humanity's end. From research on diseases and bioweapons that could trigger an unintentional pandemic to worries about weapons of mass destruction, there is no shortage of potential catastrophes for the human race.

While worrying about any one of these risks might be enough to keep a person from ever leaving the house, ignoring areas like AI is equally counterproductive. A basic understanding of the research, along with the pros and cons of what that technology might create, can only leave us better equipped to weigh in when the question arises.

About Patrick Gray

Patrick Gray works for a global Fortune 500 consulting and IT services company and is the author of Breakthrough IT: Supercharging Organizational Value through Technology as well as the companion e-book The Breakthrough CIO's Companion. He has spent ...
