Siri: How far has it come in two years, and how far does it still have to go?

Siri provided a glimpse into an entirely new realm of human interface, and while it's now delivering more than ever before, it still faces significant, lingering challenges for the future

A year ago I wrote about the challenges facing Siri and Apple's services. Over the last 12 months, not much has changed in terms of the big picture. Siri processing is still completely server-bound, leaving the network as a single point of failure even for local operations like setting an alarm. Siri still isn't prescient either, providing information only when you ask for it, not when it might be needed anyway. And it's still not available, beyond dictation, on the Mac. I hold out hope that some of the advances we've seen in OS X will filter across to iOS 8, but there are a couple of things Apple has already done that are worth mentioning.

Since launch, Siri's Pixar-like personality has been fantastic, as has its contextual awareness. Together they make Siri less of a query/response engine and more of a conversational assistant. The result is that you can talk with Siri, not just at it, which encourages playful experimentation and, overall, makes the technology more accessible to more people. Add to that the new data sources made available with iOS 7 - Wikipedia, Twitter, Bing, Facebook - greater access to other iOS apps, and a new, persistent interface, and Siri's in-context capabilities have certainly increased. You can now perform an impressive number of tasks with it.

So on the one hand we have this amazing service that a four-year-old who can't even read or write can interact with as a new friend, and use to communicate with family in a way that was previously impossible. And on the other, it's spinny thing, spinny thing, spinny thing, nothing.

What'll be most interesting to see is how that progresses over the course of the next year. What Apple lacks in prediction, Google lacks in humanity. Who will get better at both first? Could what's currently a separate natural language voice interface layered on top of iOS become a holistic part of the entire experience? Could anything I say or type into iOS get fed through the natural-language, context-aware, personality-driven interface, and make the entire system friendlier and even more accessible? Could iSight somehow gain a visual awareness to match Siri's audio abilities? Could other sensors make for other intelligent senses?

As much as iOS 7 presages the coming of a more functional, dynamic interface, Siri and systems like it presage the coming of an even more human interface. Apple and others are in pursuit of it, but right now it remains just that - a pursuit. Part of that is pushing against the limits of technology and privacy. Part is the sheer act of willing it into being.

There's a reason everything from Star Trek to Knight Rider to Iron Man rendered this in fantasy long before it's possible in reality. It's the future.

And we need more of it in 2014.

Rene Ritchie