Source: iMore

What you need to know

  • A Forbes report has revealed how Apple scans emails for child abuse imagery.
  • It claims to have uncovered a warrant filed in Seattle, Washington.
  • Apple's servers scan emails for signs of child abuse imagery based on previously identified photos.

A Forbes report regarding a warrant filed in Seattle, Washington, has revealed in part how Apple uses technology to "intercept" emails that may contain child abuse imagery.

According to the report:

"...thanks to a search warrant uncovered by Forbes, for the first time we now know how the iPhone maker intercepts and checks messages when illegal material - namely, child abuse - is found within. The warrant, filed in Seattle, Washington, this week, shows that despite reports of Apple being unhelpful in serious law enforcement cases, it's being helpful in investigations."

As Forbes notes, Apple uses hashes, much like Facebook and Google, to detect child abuse imagery:

Think of these hashes as signatures attached to previously-identified child abuse photos and videos. When Apple systems - not staff - see one of those hashes passing through the company's servers, a flag will go up. The email or file containing the potentially illegal images will be quarantined for further inspection.
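The hash-matching approach described above can be sketched in a few lines. This is an illustrative toy, not Apple's actual system: real scanning services rely on perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas this sketch uses a plain SHA-256 digest, and the function names and sample data are invented for the example.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Compute a hex digest that serves as the file's 'signature'.
    Illustrative only: production systems use perceptual hashes,
    not exact cryptographic digests like SHA-256."""
    return hashlib.sha256(data).hexdigest()

def screen_attachment(data: bytes, known_hashes: set) -> bool:
    """Return True if the attachment's signature matches a previously
    identified image and should be quarantined for human review.
    Unflagged content is never seen by a person."""
    return file_hash(data) in known_hashes

# Example: build a tiny 'known signatures' database and screen two files.
known = {file_hash(b"previously identified image bytes")}
print(screen_attachment(b"previously identified image bytes", known))  # True
print(screen_attachment(b"unrelated holiday photo", known))            # False
```

The key property is that the server compares only signatures, so matching can happen automatically at scale; a human is involved only after a match raises a flag.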

If a company identifies a problem, it contacts an authority, usually the National Center for Missing and Exploited Children. With regard to Apple specifically, the warrant contained notes on the process and even comments from an Apple employee:

But in Apple's case, its staff are clearly being more helpful, first stopping emails containing abuse material from being sent. A staff member then looks at the content of the files and analyzes the emails. That's according to a search warrant in which the investigating officer published an Apple employee's comments on how they first detected "several images of suspected child pornography" being uploaded by an iCloud user and then looked at their emails.

The employee notes stated:

"When we intercept the email with suspected images they do not go to the intended recipient. This individual ... sent 8 emails that we intercepted. [Seven] of those emails contained 12 images. All 7 emails and images were the same, as was the recipient's email address. The other email contained 4 images which were different than the 12 previously mentioned. The intended recipient was the same."

"I suspect what happened was he was sending these images to himself and when they didn't deliver he sent them again repeatedly. Either that or he got word from the recipient that they did not get delivered."

After examining the images, Apple was able to provide the user's data, including his name, address and mobile numbers. The government also reportedly asked Apple to turn over the contents of the user's emails, texts, instant messages and "all files and other records stored on iCloud."


This method does not apply to encrypted content and appears to cover only emails sent through Apple's servers. As the report notes, it is the server, not employees, that screens every email passing through it; employees see only emails that have been flagged as containing signatures that could point to child abuse imagery.

The news is an interesting insight into just how much Apple can assist law enforcement, at least in the realm of child abuse imagery. The report can be contrasted with reports from the beginning of this year regarding Apple's battle with the FBI over two phones used by the Pensacola naval base shooter, during which the FBI suggested Apple was being unhelpful in the investigation.