Artificial intelligence (AI) frequently made tech headlines in 2017, often for innovative products and for its growing role in the workplace.
Multiple reports examined the technology’s future and implications. An October Gartner report predicted that AI would create 2.3 million jobs by 2020. And China is racing toward global AI dominance, which could mean a shift in the balance of power.
Experts predicted AI’s growth, along with potential ethical issues for business users and consumers. While only 17% of developers worked with the technology in 2017, three-fourths of the rest said they planned to use AI or machine learning in 2018, suggesting continued growth.
But, like most emerging fields, AI dealt with a few noteworthy growing pains.
Here are 10 blunders that left some wondering when AI will become intelligent.
SEE: Quick glossary: Artificial intelligence (Tech Pro Research)
1. Google Translate shows gender bias in Turkish-English translations
Turkish has a single gender-neutral third-person pronoun, “o,” that has no direct English translation. As Quartz noticed in November, Google Translate didn’t know exactly how to handle it, and relied on gender-biased patterns learned from its training data to guess whether the corresponding English pronoun should be “he,” “she,” or “it.”
The result: The neutral pronoun became a “he” in the same sentence as “doctor” or “hard working,” and a “she” alongside “lazy” and “nurse.” A rough sketch of how this failure mode can arise appears below.
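To illustrate, here is a minimal, hypothetical Python sketch of how this kind of bias can emerge. It is not Google Translate’s actual system; the co-occurrence counts are invented, and the point is only that picking the majority pronoun from historical text reproduces the stereotypes in that text.

# Hypothetical sketch: choosing an English pronoun for Turkish "o" by
# raw corpus frequency. Invented counts; not Google's real pipeline.
cooccurrence = {
    "doctor": {"he": 900, "she": 350},
    "nurse": {"he": 120, "she": 880},
    "hard working": {"he": 600, "she": 400},
    "lazy": {"he": 300, "she": 450},
}

def translate_pronoun(context_word: str) -> str:
    """Pick a pronoun by majority vote over the (biased) training text."""
    counts = cooccurrence.get(context_word, {"he": 1, "she": 1})
    return max(counts, key=counts.get)

for word in ["doctor", "nurse", "hard working", "lazy"]:
    print(word, "->", translate_pronoun(word))  # doctor -> he, nurse -> she, ...

A fairer system would need to either preserve the ambiguity (as Google later did by offering both translations) or score pronouns on something other than raw frequency.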
2. Facebook chatbots shut down after developing their own language
Facebook researchers found that Alice and Bob, two of their AI-driven chatbots, had drifted into their own invented language and were carrying on conversations with each other. While the conversation transcript looks harmless, the social media giant said the language was a form of shorthand the bots had developed. Alice and Bob were shut down after their conversations were discovered.
3. Autonomous shuttle in accident on its first day
Less than two hours after self-driving shuttles launched in Las Vegas in November, a semi-truck backed into one of the vehicles. The shuttle wasn’t at fault: its sensors detected the truck backing up and stopped the vehicle, and police cited the truck’s driver for illegal backing.
While a statement from the city said the accident would have been avoided if the truck had similar sensors, the minor crash may also have been prevented if the shuttle had done more than stop in place, for example by backing out of the truck’s path. A hypothetical sketch of that distinction follows.
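As a purely hypothetical illustration (this is not the shuttle vendor’s actual control logic), the Python sketch below contrasts a stop-only policy with one that can also retreat when there is room behind it:

# Hypothetical comparison of two collision-avoidance policies.
# Not real autonomous-shuttle code; it only illustrates why halting
# in place may not avoid an obstacle that keeps approaching.

def stop_only_policy(obstacle_closing: bool) -> str:
    """Roughly what the shuttle reportedly did: detect the truck and halt."""
    return "stop" if obstacle_closing else "proceed"

def evasive_policy(obstacle_closing: bool, clear_behind: bool) -> str:
    """A policy that also considers reversing when space allows."""
    if not obstacle_closing:
        return "proceed"
    # If the obstacle keeps approaching and the lane behind is clear,
    # reversing can avoid the impact that stopping alone cannot.
    return "reverse" if clear_behind else "stop"

print(stop_only_policy(True))       # 'stop' -> the truck still makes contact
print(evasive_policy(True, True))   # 'reverse' -> contact avoided

Real vehicles have to weigh far more than this (passengers, traffic behind, legal right of way), which is why “just back up” is easier said than engineered.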
4. Google Allo suggested man in turban emoji as response to a gun emoji
For consumers and business professionals alike, Google’s smart reply features can help users fire off quick, simple responses. But in one situation discovered by CNN Money, the suggested replies showed signs of bias.
On Google Allo, one of the three suggested emoji responses to a gun emoji was an emoji of a man wearing a turban. After CNN reported the instance, Google apologized and changed Allo’s algorithm.
5. Face ID beat by a mask
A week after the release of the iPhone X, hackers had gotten around the phone’s signature Face ID facial recognition system. Using a mask built on a 3D-printed base, Vietnamese security firm Bkav convinced an iPhone X that it was looking at its owner’s face and unlocked the phone. The firm said the mask cost about $150 to create.
6. AI misses the mark with Kentucky Derby predictions
After correctly picking the top four finishers in order in 2016, AI failed to pick the winning horse of the 2017 Kentucky Derby in May. Despite besting human bettors, AI correctly placed only two of the top four finishers, and had those two in the wrong order. While horse racing is unpredictable, some had high hopes for an AI repeat.
7. Alexa brings the party with her in Germany
German police broke into an apartment in the early hours of a November morning after neighbors reported loud music. The cause? Not a party, but an Amazon Echo randomly blasting music while the resident was out.
To make matters worse, the police replaced the lock after shutting Alexa down, leaving the returning resident locked out and stuck with a sizable locksmith bill.
8. Google Home outage causes near-100% failure rate
Nearly all Google Home devices faced an outage in June, with many users reporting error messages every time they tried to interact with their smart assistant. Complaints began on May 31 and piled up over the following week as Google searched for a solution.
9. Google Home Minis spied on their owners
In October, security researchers discovered that some Google Home Minis had been secretly turning themselves on, recording thousands of minutes of audio of their owners, and sending the recordings to Google. After noticing that his digital assistant kept turning on and trying to listen to the TV, one user checked Google’s My Activity portal, where he found that the device had been recording him.
Google quickly announced a patch to prevent the issue.
10. Facebook allowed ads to be targeted to “Jew Haters”
Using Facebook’s AI-driven, self-service ad-buying platform, companies and brands can target their message to different demographics. In September, ProPublica reported that some of those targeting categories included people with racist or anti-Semitic views.
The news organization found that ads could be targeted specifically to people interested in topics like “How to burn jews” or “History of ‘why jews ruin the world.’” Facebook said those categories were created by an algorithm, not a human, and removed them as targeting options. A hypothetical sketch of how such auto-generated categories can slip through appears below.
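To make the failure mode concrete, here is a minimal, hypothetical Python sketch. It is not Facebook’s actual pipeline, and the profile entries are invented; the point is that when user-typed text is promoted to targeting categories purely by frequency, nothing ever reviews what that text says.

from collections import Counter

# Made-up free-text "field of study" entries pulled from user profiles.
profile_fields = [
    "computer science",
    "nursing",
    "history of 'why jews ruin the world'",  # offensive user-entered text
    "computer science",
]

def build_categories(fields: list[str], min_users: int = 1) -> list[str]:
    """Promote any phrase at least min_users people typed into a category."""
    counts = Counter(f.strip().lower() for f in fields)
    # The flaw: frequency is the only gate; no step checks the content
    # before advertisers can target it.
    return sorted(phrase for phrase, n in counts.items() if n >= min_users)

print(build_categories(profile_fields))

Adding a human-review or blocklist step before a phrase becomes targetable is the obvious fix, and is essentially what Facebook announced after the report.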