There’s been a lot of discussion about a purported deal between Apple and Nuance, under which the latter firm might license its really rather amazing voice recognition technology to Cupertino for use in some form of future service. I spoke with Nuance yesterday and am pleased to report… nothing. They wouldn’t budge. Somewhat annoyingly, not one of them broke their silence on the matter; in fact, in a wonderful slice of evasion, they made out like they hadn’t heard anything about this, except possibly on Twitter. Well done them. That kind of determined refusal to either confirm or deny a rumour impresses me. I was also impressed by the beautiful dark-haired PR I spoke with originally, but that’s a TOTALLY IRRELEVANT thing — I just had to say so.
So — I can’t tell you if there’s any truth in the Apple licensing rumours. (I did watch a demo of the latest version, which answers a lot of my prayers as a MacSpeech user: it’s fast, remarkably accurate, and offers natural-seeming spoken-word commands.)
Anyway, in the absence of both confirmation and denial, here are three things Apple could do with Nuance’s genuinely impressive tech:
1: Improve its assistive technologies inside Mac OS X by adding native voice support
2: Introduce voice-control for Macs and other Apple devices, including potentially for iPhone nano
3: You have to train Nuance’s software to recognise your voice. If that training file were held on a central (iCloud?) server, you could potentially voice-control any device with access to it — or, indeed, use voice commands to confirm your identity when unlocking your iPhone, or (in future) when paying with your smart device.