“The robots are here,” Dr. Roman V. Yampolskiy told a packed room at IdeaFestival 2015 in Louisville, KY. “You may not see them every day, but we know how to make them. We fund research to design them. We have robots who can assist us and robot soldiers who can kill us.”

Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville and author of Artificial Superintelligence: A Futuristic Approach, studies the implications of AI, the interface between machines and people, and the influence they have on our workplace. AI has formally been around since the 1950s, and much of it–spell-check, for example–is no longer called artificial intelligence. “In your head,” Yampolskiy said, “those technologies aren’t AI. But they really are.”

AI, he said, is everywhere. “It’s in your phones, your cars. It’s Google. It’s every bit of technology we’re using.”

But AI has seen a recent explosion–a new Barbie doll, for example, will use AI to have conversations with children–and some worry advanced technology will begin to replace humans in the workplace. In Chengdu, China, Foxconn, a company that manufactures Apple products and other electronics, has just built a factory run entirely by robots.

What is next? Yampolskiy asked the question rhetorically. His answer: superintelligence, intelligence beyond the human level. There are projects funded at unprecedented levels, conferences devoted to the topic, and private companies employing the brightest people in the world to solve these problems, Yampolskiy said. “Given the funding and intelligence, it would be surprising if they don’t succeed.”

Here is Yampolskiy’s list of machine attributes to be aware of:

  • Superfast–These machines are not only super smart, but they’re superfast. They can predict “ultrafast extreme events,” such as stock market crashes, at a pace no human can keep up with.
  • Supercomplex–The intelligence that runs an airplane, for example, is made up of so many interconnected elements that the people operating it can’t fully comprehend the system.
  • Supercontrolling–Once we cede power to the machines, Yampolskiy said, “we’ve lost it. We can’t take it back.”

What kind of devices will we see that have these abilities?

  • Supersoldiers–The military, Yampolskiy said, will be the first to use the advanced technology, in the form of drones, robot soldiers, and more.
  • Superviruses–We are only at the beginning of understanding how much damage can be done through computer viruses created with artificial intelligence.
  • Superworkers–We have been losing physical labor jobs for years due to automation. Now we’re losing intellectual jobs. “Employers love robots,” Yampolskiy said. “You don’t have to deal with sick days, vacation, sexual harassment, 401k. There’s a good chance a lot of us will be out of jobs.”

There are potential positive impacts of AI as well. Yampolskiy pointed to the possibilities of curing AIDS, ending hunger, and stimulating new kinds of economic growth. But “we don’t need to spend much time talking about it,” Yampolskiy said. “If it’s a good thing, you don’t need to get ready for it.” He spent more time on the potential downsides.

His list of negative impacts includes job losses, the erosion of human and civil rights, and potentially deadly military applications. And the biggest worry? The unknown. “AI can have completely different mental capacities, desires, and common sense. Things humans immediately understand and agree on will be very different from machines,” he said. Machines, Yampolskiy said, can behave like children in some situations. “When you take humans out of decision-making, you have a system making very important decisions with no common sense.” This, he said, could be dangerous.

“We are no longer playing science fiction, fighting terminators,” Yampolskiy said. “We are doing important research.” Here are some solutions for responding to AI:

  • Do nothing–This is a common but ill-advised approach, he said. “Maybe the machines will be nice? Or maybe they will kill us.” We don’t know.
  • Relinquish technology–Some advocate for stopping research in advanced AI altogether.
  • Integrate with society–There’s a line of thought that humans may be able to work together with the robots. Yampolskiy is not so sure.
  • Apply human laws to machines–“We could punish them,” Yampolskiy said. “But they may not want to obey.”
  • Enhance human capabilities–Can we become competitive? Upload information to our brains like we do to computers, via advances in genetic engineering? Unlikely.
  • Simulation argument–We could fool machines into thinking they’re living in a simulation–a kind of Matrix in reverse. But some will break the rules.

In terms of researching AI, Yampolskiy cautions that we need to treat the ethical element seriously, in the same camp as other unethical research such as testing biological and chemical weapons or harming animals and children. “A certain type of software research,” Yampolskiy said, “may also be one of those things. AI may be more dangerous than nuclear weapons.” We need to design research review boards, decide on funding, and control what’s happening. “We need human ethics applied to robots. We should do the same thing we do with human cloning with advanced AI.”

Developed in the 1950s, the Turing Test was designed to measure a machine’s ability to converse in a way that cannot be differentiated from a human. “The Turing Test,” Yampolskiy said, “has been passed.”
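The setup Turing proposed is a blind conversation: a judge questions two hidden respondents and must guess which is the machine. As a minimal sketch of that protocol (all respondent and judge functions here are hypothetical toy placeholders, not any real AI system):

```python
import random

def turing_test(judge, human_respondent, machine_respondent, questions):
    """Simplified Turing test: the judge reads transcripts from two hidden
    respondents and names the one it believes is the machine. Returns True
    if the judge identifies the machine correctly."""
    # Hide which respondent is which behind randomly assigned labels A/B.
    respondents = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        respondents = {"A": machine_respondent, "B": human_respondent}
    # Each respondent answers every question; the judge sees only labels.
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in respondents.items()}
    guess = judge(transcripts)  # label the judge believes is the machine
    return respondents[guess] is machine_respondent

# Toy participants: a crude machine with an obviously robotic reply,
# and a naive judge that simply looks for that tell.
human = lambda q: "I'd have to think about that."
machine = lambda q: "QUERY NOT UNDERSTOOD."
naive_judge = lambda transcripts: next(
    label for label, qa in transcripts.items() if "QUERY" in qa[0][1])

caught = turing_test(naive_judge, human, machine, ["How was your day?"])
print(caught)  # True: this crude machine is easily identified
```

A machine “passes” when judges can do no better than chance; the crude respondent above fails immediately, which is the point of the contrast with the modern systems Yampolskiy describes.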