TechCrunch has followed up on its original report of Apple/Nuance partnership talks, saying the deal may already be done and could be part of next month's Worldwide Developers Conference (WWDC) keynote.
> More specifically, we're hearing that Apple is running Nuance software — and possibly some of their hardware — in this new data center. Why? A few reasons. First, Apple will be able to process this voice information for iOS users faster. Second, it will prevent this data from going through third-party servers. And third, by running it on their own stack, Apple can build on top of the technology, and improve upon it as they see fit.
>
> Obviously, Nuance, which owns the technology, would have to sign off on all of this. And we now believe that they have. Hence, the big time partnership that should be formally announced soon.
In other words: Apple can't build this stuff themselves, and buying Nuance outright is too expensive, so they're licensing the technology and deploying it on their own massive cloud. That lets them integrate it into [iOS 5](http://www.imore.com/tag/) and make Siri-style voice recognition and "artificial intelligence" core to the iPhone, iPad, and iPod touch going forward.
But here's the thing. Google introduced system-wide voice control in Android with the debut of the Nexus One, and so far it hasn't really gone mainstream. Last year, with iOS 4, Apple tried to mainstream video calling with FaceTime, and it's hard to say how successful that's been. It's by far the easiest video calling system released to date, usable by kids and grandparents alike... but who knows how much they actually use it?
Apple could make an incredibly simple, "it just works" voice control system as well -- MouthTime, so to speak -- but would you use it? Would your parents?