July 29, 1993: Apple releases the Macintosh Centris 660AV, a computer packed with innovative audio-visual features. These include an AppleVision monitor with a microphone and speakers, and a port that can work as a modem with a telecom adapter. It also comes with the first Apple software to recognize and synthesize speech.
At the relatively low price of $2,489, this was one of the first great affordable multimedia Macs.
Having your iPhone respond to “Hey Siri” seems like such a simple thing, but it’s actually quite complicated. Recognizing this code phrase, and the person who said it, is critical to Apple’s speech-recognition system.
A post published today in Apple’s Machine Learning Journal describes many of the challenges developers overcame to make this work.
Your smart life is about to get even smarter with a new set of software development tools that will let coders add world-class speech recognition and natural language processing — the same stuff that powers Siri, Apple’s personal digital assistant — to thermostats, refrigerators, apps and, yes, even robots.
The folks at Nuance have created a new system, currently in beta, to allow any company to include code with language commands that are specific to their hardware or apps. It’s called Nuance Mix, and anyone can sign in and create their own speech-recognition code to work with their apps or connected devices.
“Any developer, big or small, can come in and define a custom set of use cases,” Nuance’s Kenn Harper told Cult of Mac during a demo of the SDK. “You’re going to start talking to everything at home and work — speech is about to get more ubiquitous.”
This software solution offers far more than speech-to-text. With it, you can create and edit documents, manage email, surf the Web, update social networks, and more – quickly, easily and accurately – all by voice. Just read your text aloud and watch the magic appear before your eyes, right on your computer screen.
Back in 1987, during the era of John Sculley, Apple released a “what if” video describing a device called the Knowledge Navigator. This prescient work anticipated a personal digital assistant à la Siri, a touchscreen tablet computer like the iPad, videoconferencing (FaceTime) and more.
As WWDC and the unveiling of iOS 5 approach, we’re all wondering what Apple may or may not bring to its devices with the next major iOS release. One thing that could be introduced is speech recognition, courtesy of Nuance Communications – the company behind the Dragon Dictation applications for the iPhone and iPad.
According to a TechCrunch report that cites “multiple sources,” Apple has been negotiating a deal with Nuance that could see the company’s speech-recognition technology integrated into the iOS platform. While the negotiations could potentially have been about an Apple takeover of Nuance, TechCrunch believes that at this point that’s unlikely.