Before virtual assistants such as Amazon's Alexa and Google Assistant became ubiquitous household technology, thieves needed physical access to a home to inflict harm. Now all they need is their voice.
Artificial intelligence-powered voice assistants have a not-so-secret vulnerability: They can be hacked with audible cues, ambient noise or even ultrasound, leaving sensitive personal information such as credit card numbers and passwords open to theft. Yingying Chen, a Rutgers professor of electrical and computer engineering, created an application called WearID to address these exploits.
“We’re a long way away from The Shining, when it took brute force to hurt someone,” Chen said. “In the digital age you can dissect people’s lives and access their most important information simply by speaking from behind a closed door.”
Since 2020, Chen and her colleagues Yan Wang at Temple University and Nitesh Saxena at Texas A&M University have been developing a user-authentication framework that captures human voice patterns in the vibration domain and uses them as an identity token to verify spoken commands given to a virtual assistant.
The solution, WearID, works like this: when someone issues a command to a voice assistant, the WearID app, installed on the user's smartphone or wearable device, uses the device's accelerometer to capture the vibration characteristics of the person speaking. It then compares those characteristics with the audio captured by the voice assistant's microphone.
If a legitimate user has given the command, the spectral pattern between the vibration and audio domains will be similar. If the pattern doesn’t match, the voice assistant will ignore the prompt.
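The matching step described above can be sketched in a few lines of Python. This is a simplified illustration, not the actual WearID algorithm: the function names, the cosine-similarity measure, and the decision threshold are all assumptions made for the example, and a real system would handle the accelerometer's much lower sampling rate, per-user calibration, and noise far more carefully.

```python
import numpy as np

def spectral_similarity(accel_signal, mic_signal):
    """Compare the magnitude spectra of an accelerometer capture and a
    microphone capture of the same utterance. Returns cosine similarity
    in [0, 1]; higher means the two domains agree."""
    # Truncate to a common length so the spectra are directly comparable.
    n = min(len(accel_signal), len(mic_signal))
    accel_spec = np.abs(np.fft.rfft(accel_signal[:n]))
    mic_spec = np.abs(np.fft.rfft(mic_signal[:n]))
    # Normalize each spectrum to unit energy so only its shape matters.
    accel_spec /= np.linalg.norm(accel_spec) + 1e-12
    mic_spec /= np.linalg.norm(mic_spec) + 1e-12
    return float(np.dot(accel_spec, mic_spec))

def is_legitimate(accel_signal, mic_signal, threshold=0.8):
    """Accept the command only if the two spectral patterns match.
    The threshold is illustrative; a deployed system would calibrate it."""
    return spectral_similarity(accel_signal, mic_signal) >= threshold
```

In this sketch, a command spoken by the wearer produces vibration and audio signals with nearly identical spectral shapes, so the similarity is high; an injected command (ultrasound, a hacked speaker) reaches the microphone but not the wearer's body, so the spectra diverge and the prompt is rejected.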
Chen is working with Rutgers to patent the technology and with Silicon Valley industry leaders to help bring WearID to market. She hopes to have the app available for download sometime next year.
“Because this is a software solution that requires no backend hardware, it should be straightforward to deploy,” she said.
“As internet-connected devices rise in popularity and voice prompts become an increasingly common form of interaction, user-verification technology will become even more important,” Chen said. Current vulnerabilities mean that adversaries could theoretically hack low-cost home appliances (such as an internet-enabled television) and use them to give commands to manipulate security-critical systems – such as a smart lock on a front door. WearID is being designed to close these loopholes.
“Manufacturers creating ‘intelligent’ appliances and other devices are focused more on how to make these devices user-friendly than they are on security and privacy,” she said. “That’s why we think it is important to use a combination of smart devices to defend against possible adversarial attacks.”