Researchers at several universities in the United States and China have developed a way of hiding commands in white noise that tricks digital assistants into executing them.

Called “adversarial audio,” the trick involves creating “a special ‘loss function’ based on CTC [connectionist temporal classification] Loss that takes a desired transcription and an audio file as input, and returns a real number as output,” said U.C. Berkeley PhD candidate Nicholas Carlini.
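To give a concrete sense of what such a loss might look like, here is a minimal sketch in PyTorch. The model, character mapping, and tensor shapes are placeholders assumed for illustration; this is not the researchers' actual code.

```python
# Illustrative sketch only: a CTC-based loss over (audio, desired transcription)
# that returns a single real number. The model and character set are placeholders.
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0)  # index 0 reserved for the CTC "blank" symbol

def adversarial_loss(model, audio, target_text, char_to_index):
    """Return a real number: how far the model's transcription of `audio`
    is from the desired `target_text`, measured with CTC loss."""
    # model(audio) is assumed to return per-frame log-probabilities
    # with shape (time_steps, batch=1, num_characters).
    log_probs = model(audio)

    targets = torch.tensor([[char_to_index[c] for c in target_text]])
    input_lengths = torch.tensor([log_probs.shape[0]])
    target_lengths = torch.tensor([len(target_text)])

    return ctc(log_probs, targets, input_lengths, target_lengths)
```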

The manipulated recording is run through a process called gradient descent, which minimizes the distortion while keeping it effective at triggering a response from digital assistants like Siri, Alexa, and Google Assistant.
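In code, that optimization might look roughly like the loop below, reusing the hypothetical adversarial_loss() helper above. The step count, learning rate, and distortion penalty are illustrative assumptions, not values from the paper.

```python
# Rough sketch: a small perturbation `delta` is added to the original audio
# and adjusted by gradient descent so the model hears the target phrase
# while the added distortion stays as small as possible.
import torch

def make_adversarial_audio(model, original_audio, target_text, char_to_index,
                           steps=1000, lr=0.01, distortion_weight=0.05):
    delta = torch.zeros_like(original_audio, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        perturbed = original_audio + delta
        # Trade off "transcribes as the target" against "sounds unchanged".
        loss = (adversarial_loss(model, perturbed, target_text, char_to_index)
                + distortion_weight * delta.abs().max())
        loss.backward()
        optimizer.step()

    return (original_audio + delta).detach()
```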

Experiments like these highlight a problem that will only grow as digital assistants become more ubiquitous: because they rely on voice commands, they can be hacked in new and different ways.

As Sheng Shen of the University of Illinois at Urbana-Champaign points out, the commands don't even need to be audible: they can be ultrasonic. Shen has explored the possibility of commands outside the range of human hearing opening doors, placing online orders, and performing other malicious actions without the device's owner hearing a single thing.
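Inaudible-command attacks of this kind are commonly described as shifting an ordinary voice command above the range of human hearing, for example by amplitude-modulating it onto an ultrasonic carrier that a microphone can still pick up. The sketch below illustrates only that general idea; the sample rate and carrier frequency are arbitrary assumptions, and this is not Shen's actual method.

```python
# Illustrative sketch only: amplitude-modulate a recorded voice command onto an
# ultrasonic carrier so the result sits above the range of human hearing.
import numpy as np

def modulate_ultrasonic(command, sample_rate=192_000, carrier_hz=40_000):
    """Shift `command` (a float array in [-1, 1]) onto an ultrasonic carrier."""
    t = np.arange(len(command)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Classic amplitude modulation: carrier scaled by (1 + command).
    return 0.5 * (1.0 + command) * carrier
```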

Should businesses with digital assistants or smart speakers be worried?

Digital assistants may be growing rapidly in popularity, but they're still a relatively new technology. I can remember when speech recognition was so poor it was comical, and now, only a decade or so later, machines can recognize speech as well as, if not better than, humans.

AI speech recognition is still in its relative infancy, and that means people will find interesting ways to hack it. Like the Cap'n Crunch cereal-box whistle once used to trick payphones into giving free calls, this latest attack is simply another case of turning a system against itself, and in the long run it will make digital assistants more secure, just as phone phreaking did for telephones.

SEE: Security awareness and training policy (Tech Pro Research)

Should those using digital assistants be concerned right now, though? Not necessarily. The current exploits have a relatively narrow focus, and their widespread use is unlikely at this point.

Carlini's team created recordings designed to fool Google Assistant, but in practice they are only successful against Mozilla's DeepSpeech. They are 100% successful there, but that is still just one speech-to-text engine, and one that is little used compared to Google Assistant, Siri, and Alexa.

It’s unlikely that a hacker hiding in the bushes is going to hack your Amazon Echo using their smartphone anytime soon. Here’s hoping that Google, Amazon, and Apple will do their own research and fix speech recognition exploits before they become mainstream.

In the meantime, I’ll be turning “Hey, Siri” off on my iPhone just to be sure.

The big takeaways for tech leaders:

  • Researchers from several universities have managed to trick speech recognition software into picking up commands that humans can't hear, delivered in the form of both white noise and ultrasonic sound.
  • Current applications of this sort of hack are limited in scope and exist only as proofs of concept; it's unlikely they are being used in the wild.
