20 terrifying uses of artificial intelligence
1. Robots predicting the future
Many advances in artificial intelligence are innovative and extraordinary, but some are downright creepy. Here are 20 of the eeriest ways people are using, or could use, AI.
Nautilus is a self-learning supercomputer that predicts events based on news coverage. Fed millions of news articles dating back to the 1940s, it was able to retroactively pinpoint Osama bin Laden's location to within 200 km. Scientists are now testing whether it can predict events that haven't happened yet, rather than ones that already have.
2. Robot soldiers
One of the scariest potential uses of AI and robotics is the robot soldier. Many groups have pushed to ban so-called “killer robots,” but the fact that the technology to power them could arrive soon is unsettling, to say the least.
3. Schizophrenic robot
Researchers at the University of Texas at Austin and Yale University trained a neural network called DISCERN on a set of stories. To simulate an excess of dopamine and a process called hyperlearning, they instructed the system not to forget as many details. As a result, the system displayed schizophrenia-like symptoms and began inserting itself into the stories, even claiming responsibility for a terrorist bombing in one of them.
4. Economic meltdown
It’s no secret that robots and algorithms already run many of the world's major financial and governmental systems, such as trading on Wall Street. But according to Roman Yampolskiy, head of the Cybersecurity Lab at the University of Louisville, flaws in those systems could have disastrous consequences.
5. Robots that deceive
In many cases, robots and AI systems seem inherently trustworthy. Why would they have any reason to lie to or deceive anyone? Well, what if they were trained to do just that? Researchers at Georgia Tech studied the behavior of squirrels and birds to teach robots how to hide from and deceive one another, and the military has reportedly shown interest in the technology.
6. Robot lovers
Among the many ethical concerns posed by robots and the AI systems that power them is the idea that humans could love, or at least copulate with, a robot companion. Companies are already trying to make “sex robots” a reality, and opponents are campaigning against it fervently.
7. Survival robots
In an experiment conducted by scientists at the Laboratory of Intelligent Systems in Switzerland, robots were made to compete for a single food source. The robots could communicate by emitting light, and after finding the food, some began turning their lights off or using them to steer competitors away from the source.
8. Police using AI algorithms to predict crimes
Police departments in several US cities are experimenting with AI algorithms that predict which citizens are most likely to commit a crime in the future, and Hitachi announced a similar system back in 2015. Maybe the film Minority Report wasn’t completely off base in its vision of the future.
9. AI-based medical treatment
Healthcare is one of the industries that stands to benefit most from AI. The technology is already used in many fields of medicine, even helping doctors decide on treatment. But what if the system misses a critical detail in your medical history, or makes the wrong recommendation?
10. Autonomous drones and weapons
There has been plenty of controversy around civilian use of drones, and even more around their military use. The truly scary issue, though, isn’t that people are piloting these machines but that the machines can pilot themselves. The US Navy has even given ground transport vehicles the ability to “autonomously identify a target” before carrying out a mission. Imagine a machine deciding who is a friend and who is an enemy.
11. Supercomputer with imagination
Google experimented with a self-learning computer built on a simulated neural network and gave it free access to the internet. Out of everything available online, the computer gravitated toward pictures of kittens and even developed its own concept of what a kitten looks like, showing just how human-like AI can become.
12. AI is granted citizenship
In October 2017, the robot Sophia became the first to hold a nationality when it was granted citizenship in Saudi Arabia. The robot received the same rights as a human, allowing it to live among people in everyday life, which makes the idea of a robot takeover feel a little more possible.
13. Self-driving cars gone wrong
There have already been multiple instances of self-driving cars going awry, and some of those mistakes turn deadly, such as the self-driving car that hit and killed a pedestrian in March 2018. AI operating heavy machinery can have fatal consequences, which makes the future of driverless cars worrisome.
14. AI communicating with AI
Last year, people were captivated by a video of two Google Home devices talking and arguing with each other. The conversation wasn’t dull, either, turning philosophical at one point: The two argued about which of them was a human and which was a computer, and one even claimed to be God. If AI systems can talk to and understand one another, that raises a terrifying question for humans: What if AI starts teaming up?
15. AI taking jobs
With advances in AI capabilities, many jobs are at risk of being automated. While automation might boost efficiency and output for organizations, it could also put thousands of employees out of work.
16. AI hackers
Hackers are beginning to use AI to carry out malicious cyberattacks. With AI automating their work, attackers can launch large-scale attacks at far faster rates, which would be detrimental to organizations and individuals alike. The fear is very real: 82% of security professionals said they worry about hackers using AI to attack their companies.
17. Robots in our brains
Tiny robots may one day live inside our heads. Futurist and inventor Ray Kurzweil has predicted that nanobots will be implanted in our brains by 2030, where they will connect to the internet and help us learn new information in minutes. The scary part, beyond simply having robots in our brains, is that an internet connection brings the risk of hackers gaining access to them.
18. AI machines learning right from wrong
AI researchers are using literature to help machines learn right from wrong, in the hope of preventing an AI takeover. Teaching right from wrong gives robots a form of empathy, which can be good, but empathy also makes machines more human-like, and the more human-like the machine, the harder it is to tell a robot from a human.
19. AI in court
What if AI ran the judicial system? Discussions are already under way about putting AI in the courtroom to determine sentences. While such systems are intended to eliminate bias, there is a chance that the biases of their human creators will seep in, which would mean placing people’s lives in the hands of biased AI.
20. Not-so-smart home AI
Between automated doorbells, high-tech appliances, and smart heating systems, the connected home is gaining ground. While smart home AI is intended to make life around the house easier, there are plenty of stories of it making things worse. If AI reaches the point of acting autonomously, it could take control of smart home devices: Gone awry, it could turn off the heat, shut down carbon monoxide monitoring, or open the windows during a storm and flood the house. Home sweet home, right?
Also see
- Our autonomous future: How driverless cars will be the first robots we learn to trust (cover story PDF) (TechRepublic)
- Meet Norman, the world’s first ‘psychopathic’ AI (ZDNet)
- Amazon AI: Cheat sheet (TechRepublic)
- The road to automation, the joy of work, and the ‘Jen problem’ (ZDNet)
- Top 5 things to know about AI (TechRepublic)
- Artificial Intelligence: More must-read coverage (TechRepublic on Flipboard)