Researchers from ANSSI, France's National Information System Security Agency, have demonstrated a "hack" where, using transmitters from a short distance away, they can trigger Apple's Siri and Google Now under certain specific circumstances. Wired:
Wait, why do Wired's headline and lede paragraph focus only on Apple's Siri when the demo and the rest of the article talk about Google Now?
But my iPhone asked about Siri at setup and does have Voice ID, what gives?
Mine too and I'm not entirely sure. There seem to be several glaring errors in the article as published.
- As of iOS 9, the iPhone absolutely does have a Voice ID feature, which is part of the setup process.
- If you buy a new iPhone that comes pre-installed with iOS 9, it will ask during setup if you want to enable "Hey Siri", and then require you to go through a setup process to enable it. (And, if you do, it defaults to Lock screen access, because that's the whole point of hands-free.)
- If you upgrade an existing iPhone to iOS 9 and it supports "Hey Siri", the first time you enable it or toggle it off and back on, it will require you to go through the setup.
- Only iPhone 6s and iPhone 6s Plus can do persistent "Hey Siri". Older iPhones can only do "Hey Siri" when plugged into power, and if expressly enabled in Settings. While battery packs are a possibility, most iPhones connected to headphones on the go probably won't be in that state.
The article does state that, absent "Hey Siri", the "hackers" can spoof the audio signal used by the headset button to trigger Siri.
Doesn't Siri give audio replies and confirmations as well?
Indeed. You don't have to be looking at your phone to notice mysterious voice commands, because Siri responds with audio replies you can hear.
While connected headphones stuffed into pockets are possible, they're probably not the most common situation.
So why does the headline say "silently" hack?
The radio signal beamed to the headset "antenna" is presumably "silent". Siri's responses and confirmations wouldn't be.
At what distances does this "hack" work?
Wired says 16 feet in its headline, but later elaborates:
What can be done if someone activates voice from a distance?
From earlier in the article:
Communications are something that can be triggered directly using Siri. Other features, like using Siri to go to a website, require a passcode or Touch ID unlock first. And that's assuming you could get Siri to recognize the likely obscure, hard-to-render name of a malicious website and request it to begin with.
It's also unclear how an audio transmission could form spam or phishing messages precisely enough to be functional. (Again, try getting Siri to render a complex URL for you and see how far you get.)
But everything from podcasts to prankster friends have been triggering voice activation for years, right?
Right. More Wired:
While true, of course, it's absolutely nothing new. When Google Now debuted, especially on Google Glass, CES pranksters loved to jump into rooms filled with early adopters and yell out search requests for... various parts of the human anatomy.
It's why Apple and other vendors have added Voice ID technology.
The difference here is a clever use of transmitters by security researchers unfortunately wrapped in what seems like really poor reporting.
Can Apple and Google prevent this type of "hack"?
The researchers make some recommendations for mitigating the "hack", including the ability to set custom trigger words. Some Android devices let you do that already, and I've been wishing for it on iOS for a while as well.
To prevent spoofing of the button press, they also recommend enhanced shielding in the headphone cords. Though no doubt an expense, even if only the most popular brands implemented that, it would reduce the attack surface of the "hack".
So, should I be worried about any of this?
As usual, it's something to be aware of but not overly concerned about. Once again, we should all be more concerned about the state of security reporting at mainstream publications.
Siri and Google Now are enabling and empowering technologies that help people live better lives. We should all be informed and educated about any potential security issues, but not sensationalized or made to feel scared.
What, if anything, should I do?
The iPhone 6s implements Voice ID, which profoundly reduces the chances of third-party activations, whether accidental, prank, or malicious. Keeping your headphones on when they're plugged in mitigates the potential consequences of any third-party activations as well, because you can hear them and intervene.
Security and convenience are almost always at odds. Siri, Google Now, "Hey Siri", and "OK Google" provide increased convenience at the expense of some security. If you don't use or need voice activation or Lock screen access, by all means turn them off.
Updated 3:30pm: Further explained how "Hey Siri" Voice ID setup works.
Rene Ritchie is one of the most respected Apple analysts in the business, reaching a combined audience of over 40 million readers a month. His YouTube channel, Vector, has over 90 thousand subscribers and 14 million views and his podcasts, including Debug, have been downloaded over 20 million times. He also regularly co-hosts MacBreak Weekly for the TWiT network and co-hosted CES Live! and Talk Mobile. Based in Montreal, Rene is a former director of product marketing, web developer, and graphic designer. He's authored several books and appeared on numerous television and radio segments to discuss Apple and the technology industry. When not working, he likes to cook, grapple, and spend time with his friends and family.