Why Boston is a hotbed for speech recognition technology like Siri's

In July it was reported that Apple was beefing up its in-house voice recognition team in Boston, possibly to improve Siri. Nuance, the company that provides the speech recognition used in Siri, is in the area, and Microsoft and Amazon are reportedly making a grab for speech recognition talent too. What makes Boston such a hotbed for speech recognition technology? To find the answer, you have to look back 40 years, reports Scott Kirsner for The Boston Globe.

The development of speech recognition technology goes back to the era of mainframe computers, at research labs funded by DARPA, the Department of Defense's advanced research arm. The Massachusetts Institute of Technology (MIT) in neighboring Cambridge supplied the steady stream of highly trained, specialized engineers needed to staff these projects, and the mainframes that did the heavy computational lifting of early speech recognition work were built in the area as well.

The locus of computer development shifted to Silicon Valley in the era of personal computers, but much of the brain trust responsible for figuring out how computers can recognize human speech has remained in the Boston area.

Nuance is opening a new R&D office for 120 employees in Cambridge, while Amazon opened a facility in 2011 and immediately hired luminaries of the speech recognition industry. Microsoft has also staffed up its Cambridge-based New England Research and Development (NERD) Center with speech recognition experts.

While newer players like Nuance have emerged and scooped up many of the smaller companies spawned by various speech recognition projects, some of the old guard, like BBN Technologies, are still around. Forty years later, they're still working on the tough problems of getting machines to recognize and interpret human language.

Their work isn't done yet.