A lot of households are now jumping on the smart speaker bandwagon, whether with Amazon Echo or Google Home devices running Alexa or Google Assistant. But this is still somewhat uncharted territory when it comes to certain issues, including privacy. For the past couple of years, researchers in both China and the US have been trying to prove that they can send “subliminal messages” to these speakers and get them to do things the user never asked for.
Researchers from UC Berkeley have published a paper, based on studies conducted since 2016, showing they can embed commands into music recordings or even spoken text. These commands are inaudible to human ears, but the smart assistants can detect and execute them: adding items to your shopping list, opening a website, switching a phone to airplane mode, and so on. And while the researchers demonstrated this within a lab setting, it may only be a matter of time before this kind of trick falls into nefarious hands.
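The article doesn't detail how the commands are hidden, and the real research optimizes a perturbation against a speech-recognition model's loss so that the machine hears a target phrase while a human hears ordinary audio. As a very loose illustration of just the amplitude-budget idea (not the researchers' actual method), here is a toy sketch in Python/NumPy that mixes a perturbation into a carrier signal while keeping it a fixed number of decibels quieter; all function and variable names are invented for this example:

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz, a common rate for speech-recognition models

def embed_perturbation(carrier: np.ndarray, perturbation: np.ndarray,
                       db_below: float = 30.0) -> np.ndarray:
    """Mix `perturbation` into `carrier`, rescaled so its RMS power sits
    `db_below` decibels under the carrier's.  This only shows the loudness
    constraint; a real attack would instead optimize the perturbation
    against a speech model so the model transcribes an attacker's phrase."""
    carrier_rms = np.sqrt(np.mean(carrier ** 2))
    pert_rms = np.sqrt(np.mean(perturbation ** 2))
    target_rms = carrier_rms / (10 ** (db_below / 20))
    return carrier + perturbation * (target_rms / pert_rms)

# Toy signals: a 440 Hz tone stands in for music, random noise for the
# hidden perturbation.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
music = 0.5 * np.sin(2 * np.pi * 440 * t)
hidden = np.random.default_rng(0).standard_normal(SAMPLE_RATE)

mixed = embed_perturbation(music, hidden, db_below=30.0)

# The added component is ~30 dB quieter than the music itself.
residual = mixed - music
snr_db = 20 * np.log10(np.sqrt(np.mean(music ** 2)) /
                       np.sqrt(np.mean(residual ** 2)))
print(round(snr_db, 1))  # 30.0
```

The point of the sketch is simply that a signal can carry an extra component far below the level a casual listener notices, which is the opening such attacks exploit; making that component actually decode as a command requires optimizing it against the target model.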
The reason researchers do this in the first place is to show that a gap remains between human and machine speech recognition, and that AI is still at a stage where it can be tricked and manipulated with techniques like these. While voice-activated gadgets are still in a relatively early phase of adoption, manufacturers should close gaps like this before hackers and other malicious actors take advantage of the technology's weaknesses.
Amazon says it has “taken steps” to ensure that all its Echo devices are secure. Google said it is working on features so that Home devices and other speakers supporting Google Assistant will not fall victim to these undetectable audio commands. Hopefully, no cases will emerge where human lives or security are compromised because of smart speakers, or this industry will be in trouble.
SOURCE: New York Times
May 11, 2018 at 11:00AM