On Siri and sequential inference

I use Siri, Apple's virtual assistant, all day, every day. Most of the time it's an incredibly enabling technology that lets me both do my job and manage my home more easily and naturally than I'd have previously thought possible. But, when it crashes, it crashes hard.

Walt Mossberg, writing for The Verge:

In recent weeks, on multiple Apple devices, Siri has been unable to tell me the names of the major-party candidates for president and vice president of the United States. Or when they were debating. Or when the Emmy awards show was due to be on. Or the date of the World Series. […]

John Gruber, writing for Daring Fireball:

Siri now does know the date and time of the next U.S. presidential debate, but where Siri fundamentally falls short is its inability to maintain context and chain together multiple commands.

Sequential inference — that contextual awareness and chaining of commands — is something Siri's done since launch. It's what made Siri so special, even back in 2011.

Me: What's the capital of Germany?

Siri: Berlin.

Me: Population?

Siri: The population of Berlin is about 3,610,000.

Me: Turn my hallway light on.

Siri: Okay, your light is on.

Me: Make it purple.

Siri: Purple, it is!

The problem isn't that Siri doesn't do sequential inference — again, it's done it since launch — it's that there's no way for you to know which domains or queries will use it when you try.
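To make the idea concrete: "maintaining context" mechanically just means the assistant keeps a little conversational state between turns, so a bare follow-up like "Population?" or "Make it purple" can be resolved against the previous answer or device. Here's a minimal, purely illustrative sketch in Swift of that kind of state carrying — the type names, hard-coded answers, and matching logic are mine for illustration, not anything Apple actually does.

```swift
import Foundation

// A toy model of the conversational state a "sequential inference" assistant
// carries between turns: the last entity it answered about and the last
// device it acted on. Everything here is illustrative, not Apple's code.
struct ConversationContext {
    var lastEntity: String?   // e.g. "Berlin" after "What's the capital of Germany?"
    var lastDevice: String?   // e.g. "hallway light" after "Turn my hallway light on."
}

struct Assistant {
    var context = ConversationContext()

    mutating func handle(_ utterance: String) -> String {
        let query = utterance.lowercased()

        if query.contains("capital of germany") {
            context.lastEntity = "Berlin"          // remember the answer for follow-ups
            return "Berlin."
        }
        if query.contains("population") {
            // "Population?" only makes sense relative to the prior turn.
            guard let entity = context.lastEntity else { return "Population of what?" }
            return "The population of \(entity) is about 3,610,000."
        }
        if query.contains("turn my hallway light on") {
            context.lastDevice = "hallway light"   // remember what "it" refers to
            return "Okay, your light is on."
        }
        if query.contains("make it purple") {
            guard let device = context.lastDevice else { return "Make what purple?" }
            return "Purple it is! (\(device) set to purple)"
        }
        return "Sorry, I didn't get that."
    }
}

var assistant = Assistant()
print(assistant.handle("What's the capital of Germany?"))
print(assistant.handle("Population?"))
print(assistant.handle("Turn my hallway light on."))
print(assistant.handle("Make it purple."))
```

The inconsistency Mossberg ran into is the equivalent of some domains reading and writing that shared context while others ignore it entirely.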

Siri is a server-side service, and query handling is something that can be tweaked at any time, so I'm guessing any reports of poor behavior are fixable. What's frustrating is that it seems many of those sequential inferences haven't already been mapped out and accounted for.

That leads to inconsistency, which for customers is as bad as or worse than the feature not existing at all.

Mossberg:

You can now use Siri to "turn the lights blue" or "turn on the bathroom heater" — integrations that Amazon's Echo and Alexa assistant have led the way on. And the always-listening Echo is faster than pressing the iPhone's home button to call up Siri, and more reliable than the "Hey Siri" command, which can be hit-or-miss.

Echo is a room device that's always connected to power and has seven beam-forming microphones. That's great, but it's a totally different product from iPhone, which is always with you but mostly on battery, and has only a couple of phone mics.

Alexa on Echo is more reliable than Siri on iPhone when you're in the same room. When you're across town or across the globe, Siri on iPhone is infinitely more reliable than Alexa on Echo back home. Because it's back home.

I can't comment on whether or not Echo/Alexa "led the way" on home automation controls because Echo/Alexa still isn't available where I live, or in the vast majority of the world. Siri, while not everywhere, is far more global and multilingual. Again, different products with different priorities.

Gruber:

To be fair, I tried the same two-step sequence (when's the next debate?; add it to my calendar) with Google Assistant running in the Allo app on Android, and it failed in the same way. I remain unconvinced that Siri is behind the competition, and even if it is, I don't think it's by much.

This stuff is incredibly hard. Apple's been accused of tying their own virtual assistant's hands by instituting a privacy policy that effectively prevents them from churning through all the personal data Google uses to provide more and arguably better features.

But they're also solving for different problems. Apple is focusing on a personal assistant. Siri has a name and a Pixar-like personality, and for all sorts of personal tasks, it does sequential inference just fine. Google is focusing on a Star Trek computer. That's why Google Now and Google Assistant don't have any more of a name or personality than Star Trek's "Computer".

Apple doesn't have to match Google Now or Google Assistant feature for feature — and privacy means they won't be digging through your email or web history to do so any time soon — but Apple does have to make sure Siri can handle the kind of tasks most of Apple's customers will ask most of the time. And do it in a way that's not just delightful but reliable and consistent.

(That, and handle the much bigger part of AI that gets much less attention: the behind-the-scenes stuff that'll eventually make everything from code more reliable to interfaces preemptively faster...)

When it comes to the front-facing stuff, I'm just a dumb writer, which is about as far from a product manager as you can get. That said, it seems like a lot of what Mossberg complains about, and Gruber notes, could be headed off by having someone with a Steve Jobs- or Craig Federighi-like drill-down-to-the-smallest-detail approach empowered inside Apple, hammering on Siri, all day, every day, and making sure it never gets caught off its virtual guard.
