Computer scientists who built a CSAM system warn Apple not to use the technology

iPhone 12 Pro review (Image credit: Daniel Bader / iMore)

What you need to know

  • Two computer scientists who built a CSAM detection system have warned Apple not to implement its technology.
  • They said their system "could be easily repurposed for surveillance and censorship."
  • They warned that Apple was gambling with user security, privacy, and free speech with its new measures.

Two computer scientists who built a CSAM detection system have warned Apple that such a system can easily be repurposed for surveillance and censorship, and that the company shouldn't go ahead with its new Child Safety plans.

In a feature for The Washington Post, Jonathan Mayer, assistant professor of computer science and public affairs at Princeton University, and Anunay Kulshrestha, a graduate researcher at the Princeton University Center for Information Technology Policy, spoke about how they'd created their own CSAM system:

We wrote the only peer-reviewed publication on how to build a system like Apple's — and we concluded the technology was dangerous. We're not concerned because we misunderstand how Apple's system works. The problem is, we understand exactly how it works.

The pair state:

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn't read the database or learn whether content matched since that information could reveal law enforcement methods and help criminals evade detection.

However, they say they encountered a "glaring problem": the system "could be easily repurposed for surveillance and censorship" because the design isn't restricted to a specific category of content, and a service "could simply swap in any content-matching database." The piece echoes other concerns raised about Apple's technology, but the pair go further:

We were so disturbed that we took a step we hadn't seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides. We'd planned to discuss paths forward at an academic conference this month.
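
To make the repurposing concern concrete, here is a minimal sketch in Swift of the kind of matching step such systems rely on. It is not Apple's NeuralHash and private-set-intersection protocol, nor the researchers' published construction; the ContentMatcher type, the loadFingerprintDatabase helper, and the use of SHA-256 as a stand-in for a perceptual hash are all illustrative assumptions. What it shows is the structural point the researchers make: the matching code only asks whether a fingerprint appears in an opaque database, so whoever supplies the database decides what gets flagged.

```swift
import CryptoKit
import Foundation

// Illustrative sketch only: not Apple's NeuralHash/private-set-intersection protocol
// and not Mayer and Kulshrestha's published construction.

// Hypothetical loader for a vendor-supplied fingerprint database. Clients treat the
// blob as opaque; they cannot tell what content the entries represent.
func loadFingerprintDatabase(named name: String) -> Set<Data> {
    // In a real deployment this would be an encrypted blob shipped with the OS.
    return []
}

struct ContentMatcher {
    let blockedFingerprints: Set<Data>

    // Stand-in fingerprint function. Real systems use a perceptual hash so that
    // visually similar images collide; SHA-256 is used here only to keep the sketch runnable.
    func fingerprint(of content: Data) -> Data {
        Data(SHA256.hash(data: content))
    }

    // The check is identical no matter what the database contains.
    func matches(_ content: Data) -> Bool {
        blockedFingerprints.contains(fingerprint(of: content))
    }
}

// The repurposing concern in two lines: swapping the CSAM database for any other
// content-matching database changes the system's purpose without changing the code.
let csamMatcher = ContentMatcher(blockedFingerprints: loadFingerprintDatabase(named: "csam-hashes"))
let repurposedMatcher = ContentMatcher(blockedFingerprints: loadFingerprintDatabase(named: "any-other-content"))

// Checking a photo is the same call in both cases; only the database differs.
let photo = Data("example photo bytes".utf8)
print(csamMatcher.matches(photo), repurposedMatcher.matches(photo))
```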

Apple has fervently protested the idea that its system can be repurposed. In its FAQ, Apple says its system is built solely to detect CSAM images:

Apple would refuse such demands and our system has been designed to prevent that from happening. Apple's CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. The set of image hashes used for matching are from known, existing images of CSAM and only contains entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions. Apple does not add to the set of known CSAM image hashes, and the system is designed to be auditable. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system identifies photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government's request to expand it.
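
As a rough illustration of the safeguard the FAQ describes, here is a short Swift sketch assuming the on-device hash set keeps only entries vouched for by at least two independent child safety organizations. The buildOnDeviceHashSet function and the example organization inputs are hypothetical; this is not Apple's actual build pipeline, and it omits the FAQ's further requirement that the organizations operate in separate sovereign jurisdictions.

```swift
import Foundation

// Sketch of the intersection safeguard described in Apple's FAQ (not Apple's real
// pipeline): only hashes independently submitted by two or more organizations make it
// into the on-device set, so no single organization can unilaterally insert an entry.
func buildOnDeviceHashSet(submissions: [String: Set<Data>]) -> Set<Data> {
    var counts: [Data: Int] = [:]
    for (_, hashes) in submissions {
        for hash in hashes {
            counts[hash, default: 0] += 1
        }
    }
    // Keep only hashes vouched for by at least two independent sources.
    return Set(counts.filter { $0.value >= 2 }.keys)
}

// Hypothetical example inputs keyed by submitting organization.
let onDeviceSet = buildOnDeviceHashSet(submissions: [
    "NCMEC": [],
    "OtherChildSafetyOrg": [],
])
print(onDeviceSet.count)
```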

Apple's claims that it would refuse requests to expand the technology have led some commenters to note that this is a policy decision, rather than a technological limit.

Stephen Warwick
News Editor
