Apple introduces new child safety protections to help detect CSAM and more

Phone Child Safety Features Apple (Image credit: Apple)

What you need to know

  • Apple commits to adding extra protections for children across its platforms.
  • New tools will be added in Messages to help protect children from predators.
  • New tools in iOS and iPadOS will help detect CSAM in iCloud Photos.

It's an ugly part of the world we live in, but children are often the target of abuse and exploitation online and through technology. Today, Apple announced several new protections coming to its platforms — iOS 15, iPadOS 15, and macOS Monterey — to help protect children and limit the spread of child sexual abuse material (CSAM), collectively known as Apple Child Safety.

The Messages app will be getting new tools to warn kids and their parents when they receive or send sexually explicit photos. If an explicit photo is sent, the image will be blurred, and the child will be warned. On top of being warned about the content, they will also be presented with resources and reassurances that it is okay not to view the photo. Apple also states that parents can be notified if their child chooses to view the photo.

"As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it."

Apple also reassures users that these new tools in Messages use on-device machine learning to identify the images in a way that does not give the company access to the messages.

CSAM detection

Another big concern Apple is addressing is the spread of CSAM (Child Sexual Abuse Material). New technology in iOS 15 and iPadOS 15 will allow Apple to detect known CSAM images stored in iCloud Photos and report those images to the National Center for Missing and Exploited Children (NCMEC).

"Apple's method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices."

The exact process is complicated, but Apple assures the method has an extremely low chance of flagging content incorrectly.
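Conceptually, the on-device matching step amounts to comparing an image's hash against a set of known hashes, without ever sending the image anywhere. The sketch below is a heavy simplification: it uses an ordinary cryptographic hash and a made-up database, whereas Apple's actual system uses its NeuralHash perceptual hash (which also matches visually similar images) and a blinded database the device cannot read.

```python
import hashlib

# Hypothetical stand-in for the known-image hash database. A real
# perceptual-hash database would match near-duplicates too; SHA-256
# here only matches byte-for-byte identical inputs.
KNOWN_IMAGE_HASHES = {
    hashlib.sha256(b"known-flagged-image-bytes").hexdigest(),
}

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for an on-device image hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_database(image_bytes: bytes) -> bool:
    """On-device check: is this image's hash in the known set?

    Only the hash is compared; the photo itself never leaves the device.
    """
    return image_hash(image_bytes) in KNOWN_IMAGE_HASHES

print(matches_known_database(b"known-flagged-image-bytes"))  # True
print(matches_known_database(b"family-vacation-photo"))      # False
```

In Apple's described design, the result of this match is not revealed directly to the device or to Apple; it is wrapped in an encrypted "safety voucher" uploaded alongside the photo.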

"Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account."
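The "threshold secret sharing" idea can be illustrated with Shamir's classic scheme — a simplified sketch only, not Apple's actual construction: a secret is split into shares (one per voucher), and it can only be reconstructed once at least a threshold number of shares exist; fewer shares reveal nothing about the secret.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is in this field

def make_shares(secret: int, threshold: int, n_shares: int):
    """Split `secret` into points on a random degree-(threshold-1)
    polynomial. Any `threshold` shares recover it; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=1234567, threshold=3, n_shares=5)
print(recover_secret(shares[:3]))  # 1234567 -- any 3 of the 5 shares work
```

In this analogy, each safety voucher carries one share of the key needed to decrypt the flagged content, so Apple can read nothing until an account crosses the match threshold.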

Siri and search updates

Csam Siri And Search (Image credit: Apple)

Lastly, Apple also announced that Siri and Search would provide additional resources to help children and parents stay safe online and get help if they find themselves in unsafe situations. And on top of that, Apple says there will be protections for when someone searches for queries related to CSAM.

These updates are expected in iOS 15, iPadOS 15, watchOS 8, and macOS Monterey later this year.

Luke Filipowicz
Staff Writer

Luke Filipowicz has been a writer at iMore, covering Apple for nearly a decade now. He writes a lot about Apple Watch and iPad but covers the iPhone and Mac as well. He often describes himself as an "Apple user on a budget" and firmly believes that great technology can be affordable if you know where to look. Luke also heads up the iMore Show — a weekly podcast focusing on Apple news, rumors, and products — but he likes to have some fun along the way.

Luke knows he spends more time on Twitter than he probably should, so feel free to follow him or give him a shout on social media @LukeFilipowicz.

  • Buh Bye Apple, this is devastating
  • Apple is going to have a hard time convincing people that Apple is private and secure moving forward now that they're doing this.
  • "Apple also reassures that these new tools in Messages use on-device machine learning to identify the images in a way that does not allow the company to access the messages." So in other words, Apple will put image hashes of naked body parts on everyone's iPhones, iPads, ..., and then have the AI run through the customer's images, on their devices, to see if any of those naked body parts show up in a user's images; then the AI blocks out those naked body parts and reports you to parents, and/or authorities, or Apple. I know Apple likes to say things that they don't know, but come on, that's BS. Ask yourself this: what's to stop Apple from adding in image hashes of a face from a person of interest, or other objects, like drugs, chemicals, guns, bomb materials, or any other images of interest, or anything else they want to look for? Apple is really living up to the Orwell lifestyle. Apple uses scare tactics to implement things, especially on gullible or subjugated users. Apple: "Privacy is a fundamental human right." LOL, what a joke. This from a company that automatically commandeered every iPhone starting with iOS 14.5 and will use every iPhone automatically to help out with AirTags. Users have to go into the settings to opt out, but Apple automatically defaults every user to helping Apple with every AirTag that is found around that user's iPhone. Even using a customer's data plan to send those found AirTags to Apple's iCloud servers. Privacy is only a myth these days with Apple.
  • This is the biggest misunderstood thing Apple has done in a very long time. Not one of the comments below reflects the reality of these two separate items. 1: Apple does not scan photos or determine the contents of any photos. They don't scan at all if you don't have iCloud Photo Library turned on. But even if you do, they have been behind the competition on what gets scanned and how. Google has already been doing this and has made millions of reports in the past. So where you gonna go? 2: They only make a mathematical representation of your photo, all done by the phone itself; Apple never sees it. Then they match the math that photo represents against the math of known CSAM images in a database (also on your phone). No two photos will have the same mathematical number unless they are exactly the same. Even two photos of the same subject from slightly different angles would have different codes and thus not be matched. And even then, when you upload photos to your iCloud account, you could get several positive matches and nothing will happen. Once your photo uploads reach a specific number of flagged images, only then does Apple examine the photo's security voucher to see if it's an image that should be looked at closer. If it passes that threshold, they look at a blurred version of that photo to see if it appears to be what it got flagged for. Only then do they actually report the person. As for the flagging of images your child sends or receives, that only happens for children under the age of 12. Anyone older does not have notices sent to their parents. Again, Apple does not see the images, and neither does anybody else. And again, they have been behind the competition on how this kind of thing works. But noooo, it's panic time and people everywhere have their hair on fire. If you don't believe me, and you shouldn't, check out the source where I got all this information.
John Gruber has a very thorough examination with quotes from experts in the worlds of encryption and tracking of digital data. These people know what they are talking about. And Apple is being very careful here not to allow what the hair-on-fire pundits say will happen. Educate yourself before you sell your Apple gear and get into a system that respects your privacy much less and has been doing this exact same thing for years, only with way less care being taken.
  • You should educate yourself before telling others to educate themselves. Google is NOT doing the same thing. Google waits until you upload a photo to its servers; Apple, Minority Report style, is going onto the device you bought and paid for and doing this all on their own. Tell me, what's to stop Saudi Arabia from doing this to LGBTIQA+ people? Or China and their detractors? I have educated myself. I have talked to actual tech experts. I mean, John Gruber? Really? I'm selling my iPhone SE 2020 as fast as I can.
  • Apple intends to install software on American iPhones to scan for child abuse imagery, according to people briefed on its plans, raising alarm among security researchers who warn that it could open the door to surveillance of millions of people's personal devices. Apple detailed its proposed system — known as "neuralMatch" — to some US academics earlier this week, according to two security researchers briefed on the virtual meeting. The automated system would proactively alert a team of human reviewers if it believes illegal imagery is detected, who would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US. Apple confirmed its plans in a blog post, saying the scanning technology is part of a new suite of child protection systems that would "evolve and expand over time". The features will be rolled out as part of iOS 15, expected to be released next month. "This innovative new technology allows Apple to provide valuable and actionable information to the National Center for Missing and Exploited Children and law enforcement regarding the proliferation of known CSAM [child sexual abuse material]," the company said. "And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users' photos if they have a collection of known CSAM in their iCloud Photos account." The proposals are Apple's attempt to find a compromise between its own promise to protect customers' privacy and demands from governments, law enforcement agencies and child safety campaigners for more assistance in criminal investigations, including terrorism and child pornography.
The tension between tech companies such as Apple and Facebook, which have defended their increasing use of encryption in their products and services, and law enforcement has only intensified since the iPhone maker went to court with the FBI in 2016 over access to a terror suspect's iPhone following a shooting in San Bernardino, California. Security researchers, while supportive of efforts to combat child abuse, are concerned that Apple risks enabling governments around the world to seek access to their citizens' personal data, potentially far beyond its original intent. "It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of . . . our phones and laptops," said Ross Anderson, professor of security engineering at the University of Cambridge. Although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple's precedent could also increase pressure on other tech companies to use similar techniques. "This will break the dam — governments will demand it from everyone," said Matthew Green, a security professor at Johns Hopkins University, who is believed to be the first researcher to post a tweet about the issue. Apple's system is less invasive in that the screening is done on the phone, and "only if there is a match is notification sent back to those searching", said Alan Woodward, a computer security professor at the University of Surrey. "This decentralised approach is about the best approach you could adopt if you do go down this route." Apple's neuralMatch algorithm will continuously scan photos that are stored on a US user's iPhone and have also been uploaded to its iCloud back-up system. Users' photos, converted into a string of numbers through a process known as "hashing", will be compared with those on a database of known images of child sexual abuse.
The system has been trained on 200,000 sex abuse images collected by the NCMEC. According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a "safety voucher" saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.
  • Need to delete this