
Warrant reveals Apple's process behind intercepting child abuse imagery

iCloud (Image credit: iMore)

What you need to know

  • A Forbes report has revealed how Apple scans emails for child abuse imagery.
  • It claims to have uncovered a warrant filed in Seattle, Washington.
  • Apple's servers scan emails for signs of child abuse imagery based on previously identified photos.

A Forbes report regarding a warrant filed in Seattle, Washington, has revealed in part how Apple uses technology to "intercept" emails that may contain child abuse imagery.

According to the report:

"...thanks to a search warrant uncovered by Forbes, for the first time we now know how the iPhone maker intercepts and checks messages when illegal material - namely, child abuse - is found within. The warrant, filed in Seattle, Washington, this week, shows that despite reports of Apple being unhelpful in serious law enforcement cases, it's being helpful in investigations."

As Forbes notes, Apple uses hashes, much like Facebook and Google, to detect child abuse imagery:

Think of these hashes as signatures attached to previously-identified child abuse photos and videos. When Apple systems - not staff - see one of those hashes passing through the company's servers, a flag will go up. The email or file containing the potentially illegal images will be quarantined for further inspection.
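The hash-matching idea described above can be sketched in a few lines. This is a simplified illustration, not Apple's actual implementation: real systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas a plain cryptographic hash like SHA-256 only matches byte-identical files. The hash set and function names here are hypothetical.

```python
import hashlib

# Hypothetical database of signatures for previously identified material.
# (This example entry is simply the SHA-256 digest of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(data: bytes) -> str:
    """Return the SHA-256 hex digest of an attachment's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def should_flag(attachment: bytes) -> bool:
    """True if the attachment matches a known hash and should be
    quarantined for human review rather than delivered."""
    return file_hash(attachment) in KNOWN_HASHES
```

The key property is that the system compares signatures, not image content, so no person looks at anything until a match raises a flag.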

If companies identify a problem, they contact an authority, usually the National Center for Missing and Exploited Children. With regard to Apple specifically, the warrant contained notes on the process and even comments from an Apple employee:

But in Apple's case, its staff are clearly being more helpful, first stopping emails containing abuse material from being sent. A staff member then looks at the content of the files and analyzes the emails. That's according to a search warrant in which the investigating officer published an Apple employee's comments on how they first detected "several images of suspected child pornography" being uploaded by an iCloud user and then looked at their emails.

The employee notes stated:

When we intercept the email with suspected images they do not go to the intended recipient. This individual ... sent 8 emails that we intercepted. [Seven] of those emails contained 12 images. All 7 emails and images were the same, as was the recipient's email address. The other email contained 4 images which were different than the 12 previously mentioned. The intended recipient was the same. "I suspect what happened was he was sending these images to himself and when they didn't deliver he sent them again repeatedly. Either that or he got word from the recipient that they did not get delivered."

After examining the images, Apple was able to provide the user's data, including his name, address and mobile numbers. The government also reportedly asked Apple to turn over the contents of the user's emails, texts, instant messages and "all files and other records stored on iCloud."

This method is not applicable to encrypted content and seems to pertain only to emails sent through Apple's servers. As the report notes, it's the server, not employees, that screens every email passing through it; employees only see emails that have been flagged as containing signatures matching known child abuse imagery.
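The flow the report describes (automated server-side scan, quarantine on a match, human review only afterward) can be sketched as follows. This is a hedged illustration under the article's description, not Apple's code; all class and field names are hypothetical, and SHA-256 stands in for whatever signature scheme is actually used.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    recipient: str
    attachments: list = field(default_factory=list)  # raw bytes per attachment

class MailScreener:
    """Server-side screening: no human sees anything unless a hash matches."""

    def __init__(self, known_hashes: set):
        self.known_hashes = known_hashes
        self.quarantine = []  # held for human review; never delivered

    def process(self, email: Email) -> bool:
        """Return True if delivered; False if intercepted and quarantined."""
        for data in email.attachments:
            if hashlib.sha256(data).hexdigest() in self.known_hashes:
                self.quarantine.append(email)
                return False  # stopped before reaching the recipient
        return True
```

This mirrors the employee's account: intercepted emails "do not go to the intended recipient", and only quarantined items are examined by staff.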

The news is an interesting insight into just how much Apple can assist law enforcement, at least in the realm of child abuse imagery. The report can be contrasted with reports from the beginning of this year regarding Apple's battle with the FBI over two phones used by the Pensacola naval base shooter, during which the FBI suggested Apple was being unhelpful in the investigation.

Stephen Warwick

Stephen Warwick has written about Apple for five years at iMore and previously elsewhere. He covers all of iMore's latest breaking news regarding all of Apple's products and services, both hardware and software. Stephen has interviewed industry experts in a range of fields including finance, litigation, security, and more. He also specializes in curating and reviewing audio hardware and has experience beyond journalism in sound engineering, production, and design.

Before becoming a writer Stephen studied Ancient History at University and also worked at Apple for more than two years. Stephen is also a host on the iMore show, a weekly podcast recorded live that discusses the latest in breaking Apple news, as well as featuring fun trivia about all things Apple.

3 Comments
  • but if Ring Doorbell helps police, we all get upset and file protests!? Where's the protest here?
  • What is there to protest about? The Ring situation was probably due to a misunderstanding, but if Ring only shares footage with a warrant or with the user's consent, I have zero problems. Same here. Also:
    1. Email is not a secure protocol; anything you email can be intercepted by anyone.
    2. Since it is not secure, file attachments aren't either.
    3. The files in question matched the known hashes in the national database, which caused the service to flag them; a person then investigates to confirm it is not a false positive.
    4. This is a legal requirement for all US companies. If known files are going through their services, they must report it or risk being fined. If you upload an image in a post here and it matches a known hash, iMore is required to share everything on you with the feds.
  • Email isn't secure by default, but you can apply end-to-end encryption to it with something like OpenPGP. If you aren't sending a secure email, you can still encrypt the file with a password or key.
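The commenter's point about password-protecting a file before attaching it can be illustrated with a minimal, standard-library-only sketch. This is a toy for illustration, not a vetted cipher; for real use you'd reach for OpenPGP (gpg) or an audited cryptography library. It derives a key from the password with scrypt and XORs the data with a hash-based keystream; every name here is hypothetical.

```python
import hashlib
import secrets

def _keystream(key: bytes, length: int) -> bytes:
    """Expand a key into `length` pseudo-random bytes via chained SHA-256."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def _derive_key(password: str, salt: bytes) -> bytes:
    # scrypt makes password guessing expensive; maxmem raised to fit n=2**14.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**26)

def encrypt(data: bytes, password: str) -> bytes:
    """Return salt + ciphertext; a fresh random salt is prepended."""
    salt = secrets.token_bytes(16)
    key = _derive_key(password, salt)
    return salt + bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def decrypt(blob: bytes, password: str) -> bytes:
    """Reverse encrypt(): split off the salt, re-derive the key, XOR back."""
    salt, body = blob[:16], blob[16:]
    key = _derive_key(password, salt)
    return bytes(a ^ b for a, b in zip(body, _keystream(key, len(body))))
```

Even a toy like this shows why the hash-scanning described in the article can't see inside encrypted attachments: without the password, the server only sees ciphertext, whose hash matches nothing.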