The simulation hypothesis is the idea that reality is a digital simulation. The argument holds that technological advances will inevitably produce artificial superintelligence, which will in turn create simulations to better understand the universe. This opens the door to the idea that superintelligence already exists and created the simulations now occupied by humans. At first blush the notion that reality is pure simulacra seems preposterous, but the hypothesis springs from decades of scientific research and is taken seriously by academics, scientists, and entrepreneurs such as Stephen Hawking and Elon Musk.

From Plato’s allegory of the cave to The Matrix, ideas about simulated reality are scattered throughout history and literature. The modern manifestation of the simulation argument postulates that, in keeping with Moore’s Law, computing power grows exponentially over time. Barring a disaster that resets technological progression, experts speculate that computing capacity will inevitably become powerful enough to generate realistic simulations.
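As a rough illustration of that compounding, consider the sketch below; the two-year doubling period is the rule of thumb commonly quoted alongside Moore’s Law, and the time horizons are arbitrary examples rather than predictions.

```python
# Back-of-the-envelope sketch of exponential growth in computing capacity.
# Assumes capacity doubles roughly every two years (the rule of thumb commonly
# quoted for Moore's Law); the horizons printed below are illustrative only.

DOUBLING_PERIOD_YEARS = 2.0

def capacity_multiplier(years: float) -> float:
    """How many times more capacity exists after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    for horizon in (10, 20, 50, 100):
        print(f"After {horizon:>3} years: ~{capacity_multiplier(horizon):,.0f}x today's capacity")
```

Under that assumption, a century of uninterrupted doubling yields roughly a quadrillion-fold increase, which is the intuition behind the claim that sufficiently realistic simulations become a question of when, not if.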

TechRepublic’s smart person’s guide is a routinely updated “living” précis loaded with up-to-date information about how the simulation hypothesis works, who it affects, and why it’s important.

SEE: Check out all of TechRepublic’s smart person’s guides

Executive summary

  • What it is: Often mislabeled as the “simulation theory” (a hypothesis is a suggested explanation, whereas a theory is a scientifically vetted model), the simulation hypothesis advances the idea that realistic simulations and models of the universe will be the inevitable product of perpetual technological evolution.
  • Why it matters: The march towards artificial superintelligence and simulations will create automated technologies that fundamentally change and disrupt the global economy. Additionally, a runaway “intelligence explosion” could result in uncontrollable technologies that produce an existential threat on par with nuclear annihilation.
  • Who it affects: In the short term, anticipate disruptions and rapid change propelled by machine learning and big data in every industry that relies heavily on automated algorithms, like the financial services sector.
  • When it’s happening: Now. While ideas about simulated reality have been tied to human culture for at least 4,000 years, Alan Turing proposed machines with human-equivalent intelligence in 1950. Ideas Turing developed during the Second World War paved the way for modern computing.
  • How to access simulated realities: Though whole brain emulation and realistic simulations are potentially decades away, artificial intelligence research has produced, and will continue to produce, dozens of automated tools, from advertising systems to video games to the stock market, used by thousands of companies and millions of consumers every day.

SEE: Quick glossary: Artificial intelligence (Tech Pro Research)

What is the simulation hypothesis?

The simulation hypothesis advances the idea that simulations might be the inevitable outcome of technological evolution. Though ideas about simulated reality are far from new, the contemporary hypothesis springs from research conducted by Oxford University professor of philosophy Nick Bostrom.

In 2003 Bostrom presented a paper proposing a trilemma, a choice between three challenging options, related to the potential of future superintelligence to develop simulations. Bostrom argues that the likelihood of a simulated reality is nonzero: the odds may be astronomically small, but because they are not zero, a simulated reality must be counted among the rational possibilities. Bostrom does not propose that humans occupy a simulation. Rather, he argues that a posthuman superintelligence with massive computational ability would likely develop simulations to better understand the nature of reality.

In his book Superintelligence, Bostrom uses anthropic reasoning to argue that either the odds of a population with human-like experiences advancing to superintelligence are “very close to zero,” or (with an emphasis on the word or) the odds that a superintelligence would desire to create simulations are “very close to zero,” or the odds that people with human-like experiences actually live in a simulation are “very close to one.” He concludes that if “very close to one” is the correct answer and most people with experiences like ours do live in simulations, then the odds are good that we, too, exist in a simulation.
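The arithmetic behind that last branch is simple to sketch. The example below is illustrative rather than Bostrom’s own notation, and every input number is hypothetical; it only shows that once even a small fraction of civilizations run many ancestor simulations, simulated observers swamp unsimulated ones.

```python
# Illustrative sketch of the simulation argument's arithmetic (not Bostrom's
# notation). If some civilizations run ancestor simulations, and each
# simulation hosts as many observers as a real civilization does, simulated
# observers quickly outnumber unsimulated ones.

def simulated_fraction(civilizations: int,
                       fraction_running_sims: float,
                       sims_per_civilization: int,
                       observers_per_sim: int,
                       observers_per_civilization: int) -> float:
    """Fraction of all human-type observers that live inside a simulation."""
    real = civilizations * observers_per_civilization
    simulated = (civilizations * fraction_running_sims
                 * sims_per_civilization * observers_per_sim)
    return simulated / (simulated + real)

# Hypothetical numbers chosen only to show the shape of the argument.
print(simulated_fraction(civilizations=1_000,
                         fraction_running_sims=0.01,   # 1% ever run simulations
                         sims_per_civilization=1_000,
                         observers_per_sim=10**10,
                         observers_per_civilization=10**10))
# -> ~0.91: even with only 1% of civilizations simulating, most observers
#    are simulated, which is why the third branch reads "very close to one."
```

The point is not the specific output but the structure: the only ways to avoid a fraction near one are for almost no civilization to reach the simulating stage, or for almost none to want to simulate, which are the other two branches of the trilemma.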

The simulation hypothesis has many critics, notably academics who question an overreliance on anthropic reasoning and scientific detractors who point out that simulations need not be conscious to be studied by a future superintelligence. But as artificial intelligence and machine learning emerge as powerful business and cultural trends, many of Bostrom’s ideas are going mainstream.

Additional resources

SEE: Research: 63% say business will benefit from AI (Tech Pro Research)

Why the simulation hypothesis matters

It’s natural to wonder whether the simulation hypothesis has real-world applications, or whether it’s a fun but purely abstract consideration. For business and culture, the answer is unambiguous: whether or not we live in a simulation, the accelerating pace of automated technology will have a significant impact on business, politics, and culture in the near future.

The simulation hypothesis is inherently coupled with technological evolution and the development of superintelligence. While superintelligence remains speculative, investments in narrow and artificial general intelligence are significant. Using the space race as an analogue, advances in artificial intelligence create technological innovations that build, destroy, and augment industry. IBM is betting big with Watson and anticipates a rapidly emerging $2 trillion market for cognitive products. Cybersecurity experts are investing heavily in AI and automation to fend off malware and hackers. In a 2016 interview with TechRepublic, United Nations chief technology diplomat Atefeh Riazi anticipated that the economic impact of AI would be profound and referred to the technology as “humanity’s final innovation.”

Additional resources

SEE: Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)

Who the simulation hypothesis affects

Though long-term prognostication about the impact of automated technology is ill-advised, in the short term advances in machine learning, automation, and artificial intelligence represent a paradigm shift akin to the development of the internet or the modern mobile phone. In other words, the post-automation economy will be dramatically different: AI will hammer manufacturing industries, logistics and distribution will lean heavily on self-driving cars, ships, drones, and aircraft, and financial services jobs that rely on pattern recognition will evaporate.

Conversely, automation could create demand for inherently human work in areas like HR, sales, manual labor, retail, and creative fields. “Digital technologies are in many ways complements, not substitutes for, creativity,” Erik Brynjolfsson said in an interview with TechRepublic. “If somebody comes up with a new song, a video, or piece of software there’s no better time in history to be a creative person who wants to reach not just hundreds or thousands, but millions and billions of potential customers.”

Additional resources

SEE: IT leader’s guide to the future of artificial intelligence (Tech Pro Research)

When the simulation hypothesis is happening

The golden age of artificial intelligence began in 1956 at the Ivy League research institution Dartmouth College with the now-infamous proclamation, “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” The conference established artificial intelligence as a field and set a research agenda that defined a generation of work. It was preceded and inspired by developments at the University of Manchester in 1951, where researchers produced a program that could play checkers and another that could play chess.

Though excited researchers anticipated the speedy emergence of human-level machine intelligence, programming intelligence proved to be a steep challenge. By the mid-1970s the field had entered the so-called “first AI winter,” an era marked by strong theories limited by insufficient computing power.

Spring follows winter, and by the 1980s AI and automation technology grew in the sunshine of faster hardware and booming consumer technology markets. By the end of the century parallel processing, the ability to perform multiple computations at once, had emerged. In 1997 IBM’s Deep Blue defeated world chess champion Garry Kasparov. Last year Google’s DeepMind defeated a top human player at Go, and this year an AI system handily beat four of the best human poker players.

Driven and funded by research and academic institutions, governments, and the private sector, these benchmarks indicate a rapidly accelerating automation and machine learning market. Major industries like financial services, healthcare, sports, travel, and transportation are all deeply invested in artificial intelligence. Facebook, Google, and Amazon are using AI innovation for consumer applications, and a number of companies are in a race to build and deploy artificial general intelligence.

Some AI forecasters, like Ray Kurzweil, predict a future with the human brain cheerfully connected to the cloud. Other AI researchers aren’t so optimistic. Bostrom and his colleagues in particular warn that creating artificial general intelligence could produce an existential threat.

Among the many terrifying dangers of superintelligence, ranging from out-of-control killer robots to economic collapse, the primary threat of AI is the coupling of anthropomorphism with the misalignment of AI goals. That is, humans are likely to imbue intelligent machines with human characteristics like empathy, but an intelligent machine might be programmed to prioritize goal accomplishment over human needs. In a scenario known as instrumental convergence, or the “paper clip maximizer,” a superintelligent but narrowly focused AI designed to produce paper clips would consume the world’s resources, humans included, in single-minded pursuit of its goal.
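A toy sketch can make the misalignment point concrete. Nothing below is a real AI system; it is a hypothetical optimizer whose objective function counts only paper clips, so nothing in the code ever tells it to stop consuming resources that humans might also need.

```python
# Toy illustration of goal misalignment (purely hypothetical, not a real AI
# system): an optimizer whose objective mentions only paper clips has no
# reason to preserve anything the objective leaves out.

from dataclasses import dataclass

@dataclass
class World:
    raw_materials: int      # resources humans also depend on
    paper_clips: int = 0

def objective(world: World) -> int:
    # The goal counts paper clips and nothing else; human needs never appear.
    return world.paper_clips

def greedy_maximizer(world: World) -> World:
    # Keep converting resources as long as doing so raises the objective;
    # since every conversion adds a clip, the loop never stops early.
    while world.raw_materials > 0:
        before = objective(world)
        world.raw_materials -= 1
        world.paper_clips += 1
        assert objective(world) > before  # each step strictly improves the goal
    return world

print(greedy_maximizer(World(raw_materials=10)))
# -> World(raw_materials=0, paper_clips=10): every resource is consumed,
#    because nothing in the objective says to stop.
```

The failure here is not malice but omission: the system does exactly what it was told, and everything it was not told about is treated as raw material.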

Additional resources

SEE: Research: Companies lack skills to implement and support AI and machine learning (Tech Pro Research)

How to access simulated realities

It may be impossible to test or experience the simulation hypothesis, but it’s easy to learn more about the hypothesis. TechRepublic’s Hope Reese enumerated the best books on artificial intelligence, including Bostrom’s essential tome Superintelligence, Kurzweil’s The Singularity Is Near: When Humans Transcend Biology, and Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

Make sure to read TechRepublic’s smart person’s guides on machine learning, Google’s DeepMind, and IBM’s Watson. Tech Pro Research provides a quick glossary on AI and research on how companies are using machine learning and big data.

Finally, to have some fun with hands-on simulations, grab a copy of Cities: Skylines, SimCity, Elite: Dangerous, or Planet Coaster on the game platform Steam. These small-scale environments will let you experiment with game AI while you build your own simulated reality.
