Naturally, people are panicking. Alexa is spying on us!
But did Alexa really f*ck up? My first instinct is yes, but on second thought maybe not.
To recap: A couple was talking about hardwood floors when the conversation somehow triggered one of their Amazon Echos, which then sent the conversation as an audio message to a friend in their contact list.
And Amazon confirmed the incident after engineers checked the device’s log. The company provided Mashable with the following statement on what happened:
“Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right”. As unlikely as this string of events is, we are evaluating options to make this case even less likely.”
After reviewing the logs and confirming the events to the couple, an Amazon engineer reportedly apologized “15 times in a matter of 30 minutes” and said “this is something we need to fix.”
Okay, this isn’t good, but I really don’t think it’s something to freak out about.
Don’t get me wrong, what happened to these people shouldn’t have happened in the first place and shouldn’t ever happen again. I, personally, wouldn’t want a friend calling me to tell me my Echo sent them a voice message that I didn’t intentionally send.
But there are still some things we don’t know that would help clarify the incident.
Which room was the Echo device that sent the audio message in? If it was in the same room, how do you miss or ignore Alexa when it asked “To whom?” and “[contact name], right?” — not to mention the telltale light ring that activates when the device is listening. If it was an Echo in another, more distant room, though, my question is for Amazon: Does Alexa just pick a random contact if it doesn’t hear a clear name?
The volume Alexa was set to is also a factor. If Alexa was set to a low volume, then it’s more likely the couple didn’t hear Alexa’s multiple prompts. But if it was set to a higher volume, it’s a different story. I own multiple Echos, and it’s impossible not to hear one if the volume is at even 50 percent (assuming I’m within earshot).
Looking over Amazon’s summary, it sounds like a series of highly unlikely Alexa misinterpretations, the kind that would never happen to most people. It wasn’t spying on them; it just misunderstood what it heard. It’s hard to know exactly which words in the conversation led Alexa down this path, but it’s safe to say the odds of this happening to other Echo owners are pretty low.
How low? This is only one case out of the “tens of millions” of Echo devices sold. Amazon doesn’t ever share exact sales figures, but analysts peg the number at around 20 million. If we go by that, one screw-up out of 20 million works out to just 0.000005 percent of all devices. It’s a tiny blip.
I agree, Alexa sending out an unwanted audio message is bad. But at the same time, wasn’t it also doing its job? We expect digital assistants to understand our voices even in the most challenging conditions, and it’s Alexa’s duty to make sense of what it hears.
In this case, yes, it heard the audio and interpreted it incorrectly. But it still tried its best to understand. And isn’t that what we all demand from our digital assistants? Isn’t that the biggest frustration with these things — when they hear, but fail to understand?
If anything, this incident is a wake-up call for everyone, on both the consumer and technology sides. Not to banish the devices from our homes, but to be more cognizant of the tradeoff between convenience and privacy at work here:
There’s still a long way to go before we get AI that won’t f*ck up. No AI is perfect, and I’m willing to bet this will happen again in the future, whether it’s Alexa, Google Assistant, or another assistant. Just as there have been and will likely be more accidents involving self-driving cars, there will probably be more screw-ups as AI becomes more woven into our lives.
Accidents are natural and inevitable when we’re talking AI (remember how Google Photos screwed up and categorized two black people as “gorillas”?). The answer isn’t to stop using AI; it’s to improve it.
Still, there are a few things you can do to prevent your Alexa-enabled device from misunderstanding you.
Change the wake word
Unless Amazon releases the actual audio logs, there’s no way to know for sure what words and phrases Alexa misinterpreted.
If you’re worried about accidentally triggering Alexa — maybe you often say words that sound similar to “Alexa” — you can change your Echo’s wake word to “Echo,” “Amazon,” or “computer.” Just make sure the one you pick isn’t more likely to trigger your device than “Alexa” is.
Turn off ‘Follow-Up Mode’
Amazon recently bestowed Alexa-enabled devices with the ability to take follow-up requests without needing to repeat the wake word.
To prevent an Alexa-enabled device from continuing to listen after the initial wake word trigger, go to your Alexa app > Menu > Alexa Devices > Select your device > Turn off Follow-Up Mode.
Don’t allow access to contacts
These days, Echo devices can send audio messages to other people. Don’t ever want this to happen? Don’t grant Alexa access to your contact list. Simple as that.
At the end of the day, we willingly install Alexa-enabled devices in our homes — devices with microphones and digital assistants capable of understanding our voices and sending what we say to another device.
In a perfect world, the AI would be airtight and these kinds of screw-ups wouldn’t happen. But until things improve, the best we can do is be more aware of how they work and, as listed above, lock them down so they only do the things we want them to do.
But let’s definitely not panic, jump the gun, and assume Alexa is spying on us. It’s not. Let’s call this incident what it was: a complete fluke.