The simulation hypothesis is the idea that reality is a digital simulation. Technological advances will inevitably produce automated artificial superintelligence that will, in turn, create simulations to better understand the universe. This opens the door for the idea that superintelligence already exists and created simulations now occupied by humans. At first blush the notion that reality is pure simulacra seems preposterous, but the hypothesis springs from decades of scientific research and is taken seriously by academics, scientists, and entrepreneurs like Stephen Hawking and Elon Musk.
From Plato's allegory of the cave to The Matrix ideas about simulated reality can be found scattered through history and literature. The modern manifestation of the simulation argument is postulates that, like Moore's Law, over time computing power becomes exponentially more robust. Barring a disaster that resets technological progression, experts speculate that it is inevitable computing capacity will one day be powerful enough to generate realistic simulations.
TechRepublic's smart person's guide is a routinely updated "living" precis loaded with up-to-date information about about how the simulation hypothesis works, who it affects, and why it's important.
- What it is: Often mislabeled as the "simulation theory" (a hypothesis is a suggested explanation, whereas a theory is a scientifically vetted model), the simulation hypothesis advances the idea that realistic simulations and models of the universe will be the inevitable product of perpetual technological evolution.
- Why it matters: The march towards artificial superintelligence and simulations will create automated technologies that fundamentally change and disrupt the global economy. Additionally, a runaway "intelligence explosion" could result in uncontrollable technologies that produce an existential threat on par with nuclear annihilation.
- Who it affects: In the short term, anticipate disruptions and rapid change propelled by machine learning and big data in every industry that relies heavily on automated algorithms, like the financial services sector.
- When it's happening: Now. While ideas about simulated reality have been tied to human culture for at least 4,000 years, Alan Turing proposed machines with human-equivalent intelligence in 1950. Ideas Turing developed during the Second World War paved the way for modern computing.
- How to access simulated realities: Though whole brain emulation and realistic simulations are potentially decades away, from advertising systems to video games to the stock market, artificial intelligence research has and will continue to produced dozens of automated tools used by thousands of companies and millions of consumers every day.
SEE: Quick glossary: Artificial intelligence (Tech Pro Research)
What is the simulation hypothesis?
The simulation hypothesis advances the idea that simulations might be the inevitable outcome of technological evolution. Though ideas about simulated reality are far from new and novel, the contemporary hypothesis springs from research conducted by Oxford University professor of philosophy Nick Bostrom.
In 2003 Bostrom presented a paper that proposed a trilemma, a decision between three challenging options, related to the potential of future superintelligence to develop simulations. Bostrom argues this likelihood is nonzero, meaning the odds of a simulated reality are astronomically small, but because percentage likelihood is not zero we must consider rational possibilities that include a simulated reality. Bostrom does not propose that humans occupy a simulation. Rather, he argues that massive computational ability developed by posthuman superintelligence will likely develop simulations to better understand that nature of reality.
In his book Superintelligence using anthropic rhetoric Bostrom argues that the odds of a population with human-like population advancing to superintelligence is "very close to zero," or (with an emphasis on the word or) the odds that a superintelligence would desire to create simulations is also "very close to zero," or the odds that people with human-like experiences actually live in a simulation is "very close to one." He concludes by arguing that if the claim "very close to one" is the correct answer and most people do live in simulations, then the odds are good that we too exist in a simulation.
Simulation hypothesis has many critics, namely those in academic communities who question an overreliance on anthropic reasoning and scientific detractors who point out simulations need not be conscious to be studied by future superintelligence. But as artificial intelligence and machine learning emerge as powerful business and cultural trends, many of Bostrom's ideas are going mainstream.
- Educate yourself on AI: Seven books to get you started (TechRepublic)
- 10 things you need to know about artificial intelligence (TechRepublic)
- Prepare for the Singularity (ZDNet)
- Are we in the Matrix? Science looks for signs we're not real (CNET)
- Evolution to AI will be more radical than ape-to-human, says Nick Bostrom (TechRepublic)
SEE: Research: 63% say business will benefit from AI (Tech Pro Research)
Why the simulation hypothesis matters
It's natural to wonder if the simulation hypothesis has real-world applications, or if it's a fun but purely abstract consideration. For business and culture, the answer is unambiguous: It doesn't matter if we live in a simulation or not. The accelerating pace of automated technology will have a significant impact on business, politics, and culture in the near future.
The simulation hypothesis is coupled inherently with technological evolution and the development of superintelligence. While superintelligence remains speculative, investments in narrow and artificial general intelligence are significant. Using the space race as an analogue, advances in artificial intelligence create technological innovations that build, destroy, and augment industry. IBM is betting big with Watson and anticipates a rapidly emerging $2 trillion market for cognitive products. Cybersecurity experts are investing heavily in AI and automation to fend off malware and hackers. In a 2016 interview with TechRepublic, United Nations chief technology diplomat, Atefeh Riazi, anticipated the economic impact of AI to be profound and referred to the technology as "humanity's final innovation."
- Why AI could destroy more jobs than it creates, and how to save them (TechRepublic)
- United Nations CITO: Artificial intelligence will be humanity's final innovation (TechRepublic)
- IBM Watson: What are companies using it for? (ZDNet)
- Artificial intelligence positioned to be a game-changer (CBS News)
- Free ebook: Executive's guide to AI in business (ZDNet)
SEE: Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)
Who the simulation hypothesis affects
Though long-term prognostication about the impact of automated technology is ill-advised, in the short term advances in machine learning, automation, and artificial intelligence represent a paradigm shift akin to the development of the internet or the modern mobile phone. In other words, the economy post-automation will be dramatically different. AI will hammer manufacturing industries, and logistics distribution will lean heavily on self-driving cars, ships, drones, and aircraft, and financial services jobs that require pattern recognition will evaporate.
Conversely, automation could create demand for inherently interpersonal skills like HR, sales, manual labor, retail, and creative work. "Digital technologies are in many ways complements, not substitutes for, creativity," Erik Brynjolfsson said, in an interview with TechRepublic. "If somebody comes up with a new song, a video, or piece of software there's no better time in history to be a creative person who wants to reach not just hundreds or thousands, but millions and billions of potential customers."
- How to prepare your business to benefit from AI (TechRepublic)
- Smart machines are about to run the world: Here's how to prepare (TechRepublic)
- Artificial intelligence: The 3 big trends to watch in 2017 (TechRepublic)
- The first 10 jobs that will be automated by AI and robots (ZDNet)
- AI, Automation, and Tech Jobs (ZDNet/TechRepublic special feature)
SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)
When the simulation hypothesis is happening
The golden age of artificial intelligence began in 1956 at the Ivy League research institution Dartmouth College with the now-infamous proclamation, "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." The conference established AI and computational protocols that defined a generation of research. The conference was preceded and inspired by developments at Manchester College in 1951 that produced a program that could play checkers, and another program that could play chess.
Though excited researchers anticipated the speedy emergence of human-level machine intelligence, programming intelligence unironically proved to be a steep challenge. By the mid-1970s the field entered the so-called "first AI winter." The era was marked by the development of strong theories limited by insufficient computing power.
Spring follows winter, and by the 1980s AI and automation technology grew from the sunshine of faster hardware and the boom of consumer technology markets. By the end of the century parallel processing—the ability to perform multiple computations at one time—emerged. In 1997 IBM's Deep Blue defeated human chess player Gary Kasparov. Last year Google's DeepMind defeated a human at Go, and this year the same technology easily beat four of the best human poker players.
Driven and funded by research and academic institutions, governments, and the private sector these benchmarks indicate a rapidly accelerating automation and machine learning market. Major industries like financial services, healthcare, sports, travel, and transportation are all deeply invested in artificial intelligence. Facebook, Google, and Amazon are using AI innovation for consumer applications, and a number of companies are in a race to build and deploy artificial general intelligence.
Some AI forecasters like Ray Kurzweil predict a future with the human brain cheerly connected to the cloud. Other AI researchers aren't so optimistic. Bostrom and his colleagues in particular warn that creating artificial general intelligence could produce an existential threat.
Among the many terrifying dangers of superintelligence—ranging from out-of-control killer robots to economic collapse—the primary threat of AI is the coupling of of anthropomorphism with the misalignment of AI goals. Meaning, humans are likely to imbue intelligent machines with human characteristics like empathy. An intelligent machine, however, might be programed to prioritize goal accomplishment over human needs. In a terrifying scenario known as instrumental convergence, or the "paper clip maximizer," a superintelligent narrowly focused AI designed to produce paper clips would turn humans into gray goo in pursuit of resources.
- Facebook's machine learning director shares tips for building a successful AI platform (TechRepublic)
- AI helpers aren't just for Facebook's Zuckerberg: Here's how to build your own (TechRepublic)
- How developers can take advantage of machine learning on Google Cloud Platform (TechRepublic)
- Google engineer's swarm of mini robots could be the future of exploring Mars, and much more (TechRepublic)
- SAP aims to step up its artificial intelligence, machine learning game as S/4HANA hits public cloud (ZDNet)
SEE: Research: Companies lack skills to implement and support AI and machine learning (Tech Pro Research)
How to access simulated realities
It may be impossible to test or experience the simulation hypothesis, but it's easy to learn more about the hypothesis. TechRepublic's Hope Reese enumerated the best books on artificial intelligence, including Bostrom's essential tome Superintelligence, Kurzweil's The Singularity Is Near: When Humans Transcend Biology, and Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.
Make sure to read TechRepublic's smart person's guides on machine learning, Google's DeepMind, and IBM's Watson. Tech Pro Research provides a quick glossary on AI and research on how companies are using machine learning and big data.
Finally, to have some fun with hands-on simulations, grab a copy of Cities: Skylines, Sim City, Elite:Dangerous, or Planet Coaster on game platform Steam. These small-scale environments will let you experiment with game AI while you build your own simulated reality.
- Kurzweil: Your brain will connect directly to the cloud within 30 years (TechRepublic)
- Why AI is the 'agent of the economy': EmTechDIGITAL leaders show global impact of AI (TechRepublic)
- How Google's DeepMind beat the game of Go, which is even more complex than chess (TechRepublic)
- Turning pings into packets: Why the future of computers looks a lot like your brain (ZDNet)
- Researchers uncover algorithm which may solve human intelligence (ZDNet)
- Why robots still need us: David A. Mindell debunks theory of complete autonomy (TechRepublic)
- Artificial Intelligence and life beyond the algorithm: Alan Turing and the future of computing (TechRepublic)
- Britain's World War II codebreakers tell their story (TechRepublic)
- Photos: The life of Alan Turing (TechRepublic)
- Why you should watch The Imitation Game and why you might want to skip it (TechRepublic)
- The 10 most interesting portrayals of AI in movies (TechRepublic)
- Rebuilding the brain: Using AI, electrodes, and machine learning to bridge gaps in the human nervous system (ZDNet)
- Researchers awarded $16m to develop brain tech to reanimate paralyzed limbs (ZDNet)
- Hiring kit: Data architect (Tech Pro Research)
Dan Patterson has nothing to disclose. He does not hold investments in the technology companies he covers.
Dan is a Senior Writer for TechRepublic. He covers cybersecurity and the intersection of technology, politics and government.