Apple’s chief privacy officer says that Apple scans photos uploaded to iCloud to check whether they contain child sexual abuse material. Jane Horvath discussed the use of the technology during a Tuesday panel on user privacy at CES.
Horvath didn’t reveal exactly how Apple does this. Many companies — including Facebook, Twitter and Google — already use a Microsoft-developed tool called PhotoDNA, which checks images against a database of previously identified pictures.
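The basic idea behind this kind of image matching can be sketched in a few lines. Note the caveats: PhotoDNA uses a proprietary perceptual hash designed to survive resizing and recompression, and its real database is maintained by organizations like NCMEC. The sketch below substitutes a cryptographic hash and a made-up hash set purely to illustrate the match-against-known-signatures flow.

```python
import hashlib

# Hypothetical database of signatures of previously identified images.
# (PhotoDNA uses a proprietary perceptual hash, not SHA-256; a
# cryptographic hash is used here only as a simplified stand-in.)
KNOWN_SIGNATURES = {
    # sha256 of the bytes b"example-flagged-image"
    hashlib.sha256(b"example-flagged-image").hexdigest(),
}

def image_signature(data: bytes) -> str:
    """Compute a signature (hex digest) for an image's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_known(data: bytes) -> bool:
    """Check an uploaded image's signature against the database."""
    return image_signature(data) in KNOWN_SIGNATURES
```

Under this scheme, nothing about the image's content is interpreted at scan time; the system only asks whether the upload's signature already appears in the list of known material.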
During the panel, Horvath said only: “We are utilizing some technologies to help screen for child sexual abuse material.”
“Apple uses image matching technology to help find and report child exploitation. Much like spam filters in email, our systems use electronic signatures to find suspected child exploitation. Accounts with child exploitation content violate our terms and conditions of service, and any accounts we find with this material will be disabled.”
Apple scours iCloud images for possible child abuse
Apple’s challenge is balancing law enforcement with privacy. Any decent person would support efforts to crack down on child abuse. But whether it’s okay to scan massive amounts of user data in search of wrongdoers is an immensely complex question.
Apple has previously had a standoff with the FBI on the subject of privacy — most notably in 2016, when it refused to help unlock the San Bernardino shooter’s iPhone. In that instance, Apple came down on the side of keeping users’ data private.
Apple already uses machine learning-based image recognition to identify people and objects in photos — an area the company has invested in heavily in recent years. From the sound of things, though, its approach to spotting child abuse material relies not on interpreting new images but on matching them against ones that have already been identified and reported.
In 2018, Apple removed Tumblr from the App Store, reportedly because child pornography had somehow slipped past Tumblr’s filters.