Building a slide deck, pitch, or presentation? Here are the big takeaways:
- At SXSW 2018, Tesla CEO Elon Musk called for regulatory oversight of AI, describing the technology as "far more dangerous than nukes."
- Elon Musk called for more symbiosis between humans and AI, with a public body to oversee AI implementations.
Just how dangerous is artificial intelligence (AI)? Well, according to Tesla CEO Elon Musk, "AI is far more dangerous than nukes."
That statement was made at the 2018 SXSW conference in Austin. Speaking to panel moderator Jonathan Nolan, co-creator of HBO's Westworld, Musk said AI "scares the hell out of me," and, although he typically isn't a fan of regulation, he believes AI is an area where stronger oversight is called for.
Musk has sounded the alarm on AI before, but the recent growth of technologies like Google's AlphaGo has deepened those fears, he told Nolan. As noted by Asha McLean of our sister site ZDNet, the AI competitor went from losing games of Go to relatively average players to absolutely crushing the game's world champion in a few short months. That rate of growth is what scares Musk, he said.
SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)
The big issue in the space is hubris, Musk said. In the talk, Musk said that some experts in the field of AI "think they know more than they do and they think they're smarter than they are." That's a problem, because these experts don't believe a machine could ever be smarter than they are.
In addition to the rate of improvement seen in AlphaGo, Musk pointed to the rapid improvement in self-driving cars as another area that needs to be addressed.
"The rate of improvement is really dramatic, but we have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity," Musk said. "I think that's the single biggest existential crisis that we face, and the most pressing one."
According to Musk, AI represents a serious danger to the public, and needs regulatory oversight from a public body. In touching on the nuclear weapon analogy, he said: "The danger of AI is much greater than the danger of nuclear warheads, by a lot and nobody would suggest that we allow anyone to just build nuclear warheads if they want—that would be insane."
The potential danger of AI is one of the most stressful things in Musk's life, he said, second only to the production of the Tesla Model 3. One option for keeping humans safe from the machines would be to create a physical interface with the human body, so that AI would be an extension of one's self, Musk said.
In keeping with the dystopian theme, Musk also said that he believed we might be heading toward a Dark Age brought about by World War III. To protect humanity, he said we need to build self-sustaining bases on the moon and on Mars to survive the war.
- Special report: How to implement AI and machine learning (free PDF) (TechRepublic)
- 'More dangerous than nukes': Elon Musk still firm on regulatory oversight of AI (ZDNet)
- Elon Musk wants to preserve humanity in space (CNET)
- Machine learning: The smart person's guide (TechRepublic)
- What is AI? Everything you need to know about Artificial Intelligence (ZDNet)
- How artificial intelligence is unleashing a new type of cybercrime (TechRepublic)
Conner Forrest has nothing to disclose. He doesn't hold investments in the technology companies he covers.
Conner Forrest is a Senior Editor for TechRepublic. He covers enterprise technology and is interested in the convergence of tech and culture.