For many people, Siri has been more of a nuisance than an empowering personal assistant since debuting on the iPhone 4s in 2011. Sure, she’s received some upgrades and is getting even more in iOS 8, but fancy new features mean nothing if she can’t understand what you’re saying.
Siri’s favorite line, “Sorry, I didn’t get that,” might soon be a thing of the past, though, as a report from Wired says the time is ripe for Apple to unleash a neural-net-boosted Siri.
Despite using Nuance technology for years, Apple might be looking to move away from licensing the voice recognition technology in favor of a neural network engine built by its own team of speech recognition experts.
Over the last three years of development, Apple has turned Siri into a search engine of sorts, drawing on third-party sources like Wolfram Alpha, Yelp, Wikipedia, and Shazam. Siri can help with your math homework, find new songs and buy them, and tell you sports scores, but understanding what you’re saying could be the biggest upgrade of all.
According to the report, Apple hired Alex Acero to be the senior director in Apple’s Siri group after researching speech technology for 20 years at Microsoft. Apple has also poached top speech recognition talent from Nuance.
“Apple is not hiring only in the managerial level, but hiring also people on the team-leading level and the researcher level,” says Abdel-rahman Mohamed, a postdoctoral researcher at the University of Toronto, who was courted by Apple. “They’re building a very strong team for speech recognition research.”
Microsoft and Google have been using neural network algorithms to power Skype and Android Voice Search with noticeably better results, leaving Apple as the only major tech company that hasn’t adopted the technology. Nothing was mentioned at WWDC, but if Microsoft’s head of research is right, Siri could get its neural network superpowers within six months.