What you need to know
- Apple is using Apple Podcasts to train Siri to understand people with a stutter.
- The company has collected 28,000 audio samples from podcasts to aid in the research.
Apple is working to improve Siri's ability to understand users with atypical speech patterns, such as those with a stutter. To support the effort, the company is extracting audio samples from Apple Podcasts that will help train Siri to understand more kinds of speech.
According to a report from the Wall Street Journal (via 9to5Mac), Apple has built a bank of 28,000 audio clips from podcasts that feature someone with a stutter.
As noted by the report, Siri can misinterpret Apple users with a stutter as ending a voice command due to pauses in their speech.
Apple's research paper says the company is specifically studying "five event types including blocks, prolongations, sound repetitions, word repetitions, and interjections."
Apple says that, while the first application of the research is helping Siri understand people who stutter, it could later extend to other speech conditions, such as dysarthria.
Joe Wituschek is a Contributor at iMore. With over ten years in the technology industry, including one at Apple, Joe now covers the company for the website. In addition to covering breaking news, Joe also writes editorials and reviews for a range of products. He fell in love with Apple products when he got an iPod nano for Christmas almost twenty years ago. Despite being considered a "heavy" user, he has always preferred consumer-focused products like the MacBook Air, iPad mini, and iPhone 13 mini. He will fight to the death to keep a mini iPhone in the lineup. In his free time, Joe enjoys video games, movies, photography, running, and basically everything outdoors.