Are Apple’s tools against child abuse bad for your privacy?

An undated photo provided by Apple shows iPhones with a series of information alerts regarding sensitive photos. Apple said it would soon allow parents to turn on a feature that can flag when their children send or receive nude photos in text messages. (Photo: NYTimes)
Apple unveiled a plan two weeks ago founded on good intentions: root out images of child sexual abuse from iPhones.

But as is often the case when changes are made to digital privacy and security, technology experts quickly identified the downside: Apple’s approach to scanning people’s private photos could give law enforcement authorities and governments a new way to surveil citizens and persecute dissidents. Once one chink in the privacy armor is identified, anyone can exploit it, they argued.

The technology that protects the ordinary person’s privacy can also hamstring criminal investigations. But the alternative, according to privacy groups and many security experts, would be worse.

“Once you create that back door, it will be used by people whom you don’t want to use it,” said Eva Galperin, cybersecurity director at the Electronic Frontier Foundation, a digital-rights group. “That is not a theoretical harm. That is a harm we’ve seen happen time and time again.”

Apple was not expecting such a backlash. When the company announced the changes, it sent reporters complex technical explainers and laudatory statements from child safety groups, computer scientists and Eric Holder Jr., the former US attorney general. After the news went public, an Apple spokesperson emailed a reporter a tweet from Ashton Kutcher, the actor who helped found a group that fights child sexual abuse, cheering the moves.

But his voice was largely drowned out. Cybersecurity experts, the head of the messaging app WhatsApp and Edward Snowden, the former intelligence contractor who leaked classified documents about government surveillance, all denounced the move as setting a dangerous precedent that could enable governments to look into people’s private phones. Apple scheduled four more press briefings to combat what it said were misunderstandings, admitted it had bungled its messaging and announced new safeguards meant to address some concerns. More than 8,000 people responded with an open letter calling on Apple to halt its moves.

As of now, Apple has said it is going forward with the plans. But the company is in a precarious position. It has for years worked to make iPhones more secure, and in turn, it has made privacy central to its marketing pitch. But what has been good for business also turned out to be bad for abused children.

A few years ago, the National Center for Missing and Exploited Children began disclosing how often tech companies reported cases of child sexual abuse material, commonly known as child pornography, on their products.

Apple was near the bottom of the pack. The company reported 265 cases to authorities last year, compared with Facebook’s 20.3 million. That enormous gap was largely due to Apple’s electing not to look for such images in order to protect the privacy of its users.

In late 2019, after reports in The New York Times about the proliferation of child sexual abuse images online, members of Congress told Apple that it had better do more to help law enforcement officials or they would force the company to do so. Eighteen months later, Apple announced that it had figured out a way to tackle the problem on iPhones while, in its view, protecting the privacy of its users.

The plan included modifying its virtual assistant, Siri, to direct people who ask about child sexual abuse to appropriate resources. Apple said it would also soon enable parents to turn on technology that scans images in their children’s text messages for nudity. Children 13 and older would be warned before sending or viewing a nude photo, while parents could ask to be notified if children younger than 13 did so.

Those changes were met with little controversy compared with Apple’s third new tool: software that scans users’ iPhone photos and compares them against a database of known child sexual abuse images.

To prevent false positives and hide the images of abuse, Apple took a complex approach. Its software reduces each photo to a unique set of numbers — a sort of image fingerprint called a hash — and then runs them against hashes of known images of child abuse provided by groups like the National Center for Missing and Exploited Children.

If 30 or more of a user’s photos appear to match the abuse images, an Apple employee reviews the matches. If any of the photos show child sexual abuse, Apple sends them to authorities and locks the user’s account. Apple said it would turn on the feature in the United States over the next several months.
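The matching logic described above — fingerprint each photo, compare against a database of known hashes, and escalate only past a threshold — can be sketched in simplified form. This is a hypothetical illustration, not Apple's implementation: the real system uses a perceptual "NeuralHash" (which tolerates crops and re-encodings) and cryptographic private set intersection, whereas this sketch substitutes an ordinary SHA-256 exact match, and the function names are invented for clarity.

```python
import hashlib

# Apple said roughly 30 matches are required before human review.
MATCH_THRESHOLD = 30

def image_fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fingerprint.

    Simplification: a cryptographic hash only matches byte-identical
    files; a real perceptual hash survives resizing and re-compression.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(user_photos: list[bytes], known_hashes: set[str]) -> int:
    """Count how many of a user's photos match the known-hash database."""
    return sum(
        1 for photo in user_photos
        if image_fingerprint(photo) in known_hashes
    )

def should_flag_for_review(user_photos: list[bytes],
                           known_hashes: set[str]) -> bool:
    """Escalate to a human reviewer only once the threshold is reached,
    which is how the design aims to keep false positives from ever
    surfacing a single coincidental match."""
    return count_matches(user_photos, known_hashes) >= MATCH_THRESHOLD
```

The threshold is the key privacy mechanism in this design: below 30 matches, no individual photo is ever examined by a person.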

Law enforcement officials, child safety groups, abuse survivors and some computer scientists praised the moves. In statements provided by Apple, the president of the National Center for Missing and Exploited Children called it a “game changer,” while David Forsyth, chair of computer science at the University of Illinois at Urbana-Champaign, said that the technology would catch child abusers and that “harmless users should experience minimal to no loss of privacy.”

To many technologists, Apple has opened a Pandora’s box. The tool would be the first technology built into a phone’s operating system that can look at a person’s private data and report it to law enforcement authorities. Privacy groups and security experts are worried that governments looking for criminals, opponents or other targets could find plenty of ways to use such a system.

“As we now understand it, I’m not so worried about Apple’s specific implementation being abused,” said Alex Stamos, a Stanford University researcher who previously led Facebook’s cybersecurity efforts. “The problem is, they’ve now opened the door to a class of surveillance that was never open before.”

If governments had previously asked Apple to analyze people’s photos, the company could have responded that it could not. Now that it has built a system that can, Apple must argue that it will not.

“I think Apple has clearly tried to do this as responsibly as possible, but the fact they’re doing it at all is the problem,” Galperin said. “Once you build a system that can be aimed at any database, you will be asked to aim the system at a database.”

In response, Apple has assured the public that it will not accede to such requests. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future,” the company said in a statement.
