When it comes to artificial intelligence (AI), Apple wants to have its cake and eat it, too. That is, it wants cutting-edge, data-driven machine learning on its devices without violating its own user-privacy pledge.
And you know what? It’s managing to pull it off — as the company’s latest AI startup acquisition underlines.
This week, Apple purchased Seattle-based AI company Xnor.ai for a reported $200 million. While the current AI boom has produced no shortage of machine learning (ML) startups, Xnor.ai's ambition is a bit different. Whereas everyone else is combing through massive amounts of data to create smart tools, Xnor.ai focuses on building AI algorithms that run locally on devices, rather than in remote data centers.
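Xnor.ai spun out of the Allen Institute for AI's XNOR-Net research, whose central trick was constraining a network's weights and activations to ±1 so that a dot product collapses into a bitwise XNOR plus a popcount, which is cheap enough for phone-class chips. The toy code below illustrates that arithmetic only; it is not Xnor.ai's implementation.

```python
def binarize(vec):
    """Map real values to +/-1, packed as bits (bit 1 for +, bit 0 for -)."""
    bits = 0
    for i, v in enumerate(vec):
        if v >= 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two +/-1 vectors via XNOR + popcount."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # bits set where signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n  # agreements minus disagreements

a = [0.3, -1.2, 0.8, -0.1]   # signs: +, -, +, -
b = [1.0, -0.5, -0.7, 0.2]   # signs: +, -, -, +
print(binary_dot(binarize(a), binarize(b), len(a)))  # prints 0
```

Two sign agreements and two disagreements cancel out, matching the full-precision sign dot product. Trading multiply-accumulates for bit operations like this is what makes such models viable on battery-powered hardware.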
That means getting the benefits of AI (the things made possible thanks to modern ML technology) without the negatives.
Apple gave its usual non-statement when asked to comment on the Xnor.ai deal. It said it “buys smaller technology companies from time to time,” but would not say why. We know why, of course: privacy.
AI with extra privacy
This isn’t the first time Apple has shown this balancing act commitment when it comes to AI. In late 2018, it acquired Silk Labs, an AI startup that does image and audio recognition for people detection, facial recognition, and more. All of this is carried out locally, without sending data to the cloud. “Privacy and security is built into our company’s DNA,” read Silk’s now-defunct website. “With every line of code we write and in every design decision we make, Silk takes great measures to ensure that user data on the Silk Intelligence Platform is fully protected at all times.”
Apple has also started to share some details about its privacy-focused AI prowess. Late last year, it published a paper describing a technique called federated learning, which trains machine learning models across many local datasets without ever exchanging the underlying data samples. This allows Apple to do things like teach Siri to recognize your voice (and only your voice) saying the wake word, without shipping that audio off to a data center. All that gets shared are the updated neural network weights, which are used to improve the overall master model.
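The core loop of federated learning can be sketched in a few lines. The sketch below is illustrative, not Apple's implementation: three simulated devices each take a gradient step on private data, and a server averages only the resulting weights (federated averaging); the raw data never leaves a device.

```python
import numpy as np

def local_update(weights, data, targets, lr=0.5):
    """One local gradient step on a device's private data.
    Only the updated weights -- never the raw data -- leave the device."""
    preds = data @ weights
    grad = data.T @ (preds - targets) / len(data)
    return weights - lr * grad

def federated_average(weight_list):
    """Server-side step: average the weight updates from all devices."""
    return np.mean(weight_list, axis=0)

# Simulate three devices whose data stays local.
rng = np.random.default_rng(0)
true_signal = np.array([1.0, -2.0, 0.5, 3.0])  # shared pattern to learn
global_weights = np.zeros(4)
for _ in range(50):                 # communication rounds
    updates = []
    for _ in range(3):              # each device trains locally
        X = rng.normal(size=(32, 4))
        y = X @ true_signal
        updates.append(local_update(global_weights, X, y))
    global_weights = federated_average(updates)

print(np.round(global_weights, 2))
```

The global model converges toward the shared signal even though the server only ever sees averaged weight vectors, which is the property the paragraph above describes.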
Apple additionally uses differential privacy, which injects a small amount of statistical noise into data so that it is much harder to reverse-engineer any individual's raw inputs, such as audio samples, from a trained model.
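The classic building block here is the Laplace mechanism: noise scaled to a statistic's sensitivity and a privacy budget (epsilon) is added before the value is released. The toy scenario below is hypothetical and not Apple's actual pipeline; it just shows the mechanism.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release a statistic with Laplace noise calibrated to its sensitivity.
    Smaller epsilon means more noise and stronger privacy."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
# Hypothetical: each user contributes a 0/1 flag ("used this feature today").
true_count = 1234
# One user can change the total by at most 1, so sensitivity = 1.
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy_count)
```

The released count is close enough to be useful in aggregate, but the noise makes it mathematically difficult to infer whether any single user contributed.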
It’s a way for Apple to differentiate itself from data-hungry tech giants, which CEO Tim Cook has repeatedly spoken out against. More importantly, it does this without dismissing the importance of AI, something Apple was previously guilty of. This approach has allowed Apple to weave tools like deep learning into every aspect of its product line, while minimizing the downsides along the way.
An admirable (but tricky) stance
Apple’s stance is admirable. Privacy-protecting AI is a worthy endeavor. It’s also a great selling point for Apple, which gets to assert a moral superiority over rivals that, quite literally, make the customer the product by mining their information. Things aren’t perfect. Apple’s AI tools lag behind rivals’ offerings, such as Google’s. (Try using Siri versus Google Assistant, or even the predictive keyboard or transcription abilities on an iPhone.)
But Apple’s made impressive progress. What’s more, it’s done it in a way that cocoons the company from potential criticism about user data down the line. (For the most part, that is. Don’t forget that Apple had its own privacy scandal when it turned out contractors were listening in on Siri conversations.) Is Apple leading the pack when it comes to artificial intelligence? No, it’s not. However, it’s also refusing to compromise by sacrificing privacy for performance.
A few years ago, it seemed Apple would need to choose one or the other. Instead it opted for both. Now that, to quote Steve Jobs, is a dent worth putting in the universe.