Technology

A voice assistant becomes a spy with little

A voice assistant becomes a spy with little

A group of cybersecurity experts has shown how relatively simple it is to manipulate home assistants – such as Amazon's Echoes and Google's Nests – to spy on private conversations and obtain other people's passwords. Voice assistants are now widespread and used in millions of homes around the world, but some aspects of their safety are still neglected, with risks for the protection of their data and privacy.

The research was carried out by experts from SRLabs, a hacker organization that deals with cybersecurity and offers advice to companies. In a series of videos, researchers demonstrated how some seemingly harmless assistant applications can be used for different purposes, such as spying on user activities. The demonstrations were made public after warning Google and Amazon, so they could have time to take some countermeasures.

A demonstration shows an action for the Google Assistant which, once installed, allows you to ask your device to say a number at random. The action does exactly what it promises, but after saying the number it does not mute and keeps the microphone of your home device open, continuing to listen to what is happening in the room.

An Alexa skill presented as a system for receiving the daily horoscope does something similar. After making his prediction and receiving the “Stop” command, he does not stop, but continues to listen to the conversations.

Other videos show how hackers were able to change actions and skills, causing them to give fake error messages, followed by prompting users to speak their passwords to particular online services. An assistant would never ask for a password, but not all users know it and the less experienced could easily fall for it, providing sensitive information to some attacker.

SRLabs experts obtained their results by exploiting a flaw, common to Amazon and Google systems, which allowed them to listen to conversations much longer than expected. To do this, it was enough to add a series of special characters that voice assistants cannot pronounce: the user does not hear anything, but in reality the assistant is continuing to read something unpronounceable, while keeping the microphone on to receive any commands. In this way, hackers could hear what users were saying, and automatically transcribe their sentences.

The actions for the Google Assistant and the Amazon Alexa skills can be added by anyone, but when they are requested to be published, the two companies perform a check, to make sure that the new code complies with the rules. The problem is that approval occurs only for the first request, while no checks are performed when actions and skills are updated by developers. Those at SRLabs were therefore able to get their systems approved before adding features to spy on user activities in subsequent updates.

Following the publication of the demonstrations, Amazon announced that it has taken some solutions to reduce the problem, so as to more easily detect malicious skills. Google has confirmed that it is hard at work to perform an action review for its Assistant, and has announced that it has suspended some features while waiting to better understand how to rule out the issues revealed by SRLabs.

The ability to install skills and actions can significantly extend the capabilities of your home assistant, but security experts urge you to be very cautious and always check who is behind the additional options you want to install. The automatisms offered by voice assistants to install new features make it more complicated to check what you are adding, not to mention that in general there is a certain underestimation of the risks associated with installing unknown extensions, compared to the precautions that usually apply for computer programs and smartphone apps.

Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Most Popular

To Top