Apple walks a tightrope on privacy to spot child abuse in iCloud

For years, technology companies have been torn between two impulses: the need to encrypt their users’ data to protect their privacy, and the need to detect the worst abuses on their platforms. Apple is now launching a new cryptographic system that seeks to thread that needle, detecting child abuse images stored on iCloud without, in theory, introducing new invasions of privacy. In doing so, it has also driven a wedge between privacy and cryptography experts who see its work as an innovative new solution, and those who see it as a dangerous capitulation to government surveillance.

Today, Apple introduced a new set of technological measures across iMessage, iCloud, Siri, and search that the company says are designed to prevent child abuse. An opt-in setting in family iCloud accounts will use machine learning to detect nudity in images sent via iMessage. The system can also block those images from being sent or received, display warnings, and in some cases alert parents that a child has viewed or sent them. Siri and search will now display a warning if they detect that someone is searching for or viewing child sexual abuse material, also known as CSAM, and will offer options to seek help for that behavior or to report what was found.

But in Apple’s most innovative and technically controversial new feature, iPhones, iPads, and Macs will now also incorporate a new system that checks images uploaded to iCloud in the United States against known images of child sexual abuse. That feature will use a cryptographic process that takes place partly on the device and partly on Apple’s servers to detect those images and report them to the National Center for Missing and Exploited Children, or NCMEC, and ultimately to US law enforcement.

Apple claims that none of these new CSAM-related features endangers user privacy: even the iCloud detection mechanism will use clever cryptography to prevent Apple’s scanning system from accessing any visible images that are not CSAM. The system was designed and analyzed in collaboration with Stanford University cryptographer Dan Boneh, and Apple’s announcement of the feature includes endorsements from several other well-known cryptography experts.

“I think the Apple PSI system offers an excellent balance between privacy and utility, and will be extremely useful in identifying CSAM content while maintaining a high level of user privacy and minimizing false positives,” Benny Pinkas, a cryptographer at Israel’s Bar-Ilan University who reviewed Apple’s system, wrote in a statement to WIRED.

Child safety groups, for their part, also immediately applauded Apple’s measures, arguing that they strike a necessary balance that “brings us closer to justice for the survivors whose most traumatic moments are broadcast online,” as Julie Cordua, CEO of the child safety nonprofit Thorn, wrote in a statement to WIRED.

Other cloud storage providers, from Microsoft to Dropbox, already perform detection on images uploaded to their servers. But by adding that sort of image analysis to users’ devices, some privacy critics say, Apple has also taken a step toward a troubling new form of surveillance and weakened its historically strong stance on privacy in the face of pressure from law enforcement.

“I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” says Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptography software firm Symbolic Software. “I definitely will be switching to an Android phone if this continues.”

Apple’s new system is not a simple scan of user images, either on their devices or on Apple’s iCloud servers. Instead, it’s a clever and complex new form of image analysis designed to prevent Apple from ever seeing those photos unless they’re already determined to be part of a collection of multiple CSAM images uploaded by a user. The system takes a “hash” of every image a user sends to iCloud, converting the files into strings of characters derived uniquely from those images. Then, like older CSAM detection systems such as PhotoDNA, it compares those hashes against a large collection of known CSAM image hashes provided by NCMEC to find matches.
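The hash-and-compare step described above can be sketched in a few lines. This is a simplified stand-in, not Apple’s actual implementation: a plain cryptographic hash (SHA-256) replaces the perceptual hashes used by systems like PhotoDNA or NeuralHash, so this version only matches byte-identical files rather than visually similar ones.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Derive a fixed-length string uniquely determined by the image bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_known_set(upload: bytes, known_hashes: set[str]) -> bool:
    """Compare an uploaded image's hash against a database of known hashes."""
    return image_hash(upload) in known_hashes

# Hypothetical database of hashes of known images (in the real system,
# these would be CSAM hashes provided by NCMEC, never raw images).
known = {image_hash(b"known-flagged-image-bytes")}

print(matches_known_set(b"known-flagged-image-bytes", known))  # True
print(matches_known_set(b"harmless-photo-bytes", known))       # False
```

The key property is that only hashes, never image contents, are compared, which is why a perceptual hash that survives edits like cropping matters so much in practice.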

Apple is also using a new form of hash called NeuralHash, which the company says can match images despite alterations like cropping or colorizing. Just as crucially for preventing evasion, the system never actually downloads those NCMEC hashes to a user’s device. Instead, it uses cryptographic tricks to convert them into a so-called blind database that is downloaded to the user’s phone or PC, containing only seemingly meaningless strings derived from those hashes. That blinding prevents any user from obtaining the hashes and using them to bypass the system’s detection.
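The idea behind the blind database can be illustrated with a toy sketch. Apple’s actual scheme relies on elliptic-curve blinding inside a private set intersection protocol; here, a hypothetical server-held secret key and an HMAC stand in for that blinding, showing why the entries shipped to a device reveal nothing about the underlying hashes without the key.

```python
import hashlib
import hmac

# Hypothetical secret held only by the server; it never ships to devices.
SERVER_KEY = b"server-side-secret"

def blind(known_hash: bytes) -> str:
    """Server-side: derive an opaque database entry from a known hash."""
    return hmac.new(SERVER_KEY, known_hash, hashlib.sha256).hexdigest()

# The device receives only these blinded strings, not the raw NCMEC hashes.
blinded_db = {blind(b"ncmec-hash-1"), blind(b"ncmec-hash-2")}

# The server can verify membership, because it holds SERVER_KEY...
print(blind(b"ncmec-hash-1") in blinded_db)  # True

# ...but a device holding only blinded_db cannot invert an entry back to
# an NCMEC hash, nor compute blind() itself to probe the set's contents.
```

This is the sense in which the database is “blind”: without the server’s secret, its entries are just opaque strings, so possessing the database does not let anyone enumerate the hashes or test images against it offline.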
