AI tool detects child abuse images with 99% accuracy

A new AI-powered tool is claimed to detect child abuse images with around 99 percent accuracy.

The tool, called Safer, was developed by the non-profit Thorn to assist businesses that do not have in-house filtering systems to detect and remove such images.

According to the Internet Watch Foundation in the UK, reports of child abuse images surged 50 percent during the COVID-19 lockdown. In the 11 weeks starting on 23rd March, its hotline logged 44,809 reports of images, compared with 29,698 over the same period last year. Many of these images came from children who had spent more time online and been coerced into sharing images of themselves.

Andy Burrows, head of child safety online at the NSPCC, recently told the BBC: “Harm could have been lessened if social networks had done a better job of investing in technology, investing in safer design features heading into this crisis.”

Safer is one tool that could help flag child abuse content quickly and limit the harm caused.

The detection services of Safer include:

  • Image Hash Matching: The flagship service that generates cryptographic and perceptual hashes for images and compares those hashes to known CSAM hashes. At the time of publishing, the database includes 5.9M hashes. Hashing happens in the client’s infrastructure to maintain user privacy (a simplified sketch of this matching approach appears after this list).
  • CSAM Image Classifier: Machine learning classification model developed by Thorn and leveraged within Safer that returns a prediction for whether a file is CSAM. The classifier has been trained on datasets totaling hundreds of thousands of images, including adult pornography, CSAM, and various benign imagery, and can aid in the identification of potentially new and unknown CSAM.
  • Video Hash Matching: Service that generates cryptographic and perceptual hashes for video scenes and compares them to hashes representing scenes of suspected CSAM. At the time of publishing, the database includes over 650k hashes of suspected CSAM scenes.
  • SaferList for Detection: Service for Safer customers to leverage the knowledge of the broader Safer community by matching against hash sets contributed by other Safer customers to broaden detection efforts. Customers can customise what hash sets they would like to include.
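Thorn has not published Safer’s hashing internals, but the general pattern of pairing exact cryptographic matching with perceptual matching is well established. The sketch below is a minimal illustration under assumptions, not Safer’s actual code: it uses SHA-256 for exact matches and a simple 64-bit average hash (computed with Pillow) compared by Hamming distance for near-duplicate matches; the hash databases and threshold are placeholders.

```python
# Minimal illustration of the two matching steps described above.
# NOT Safer's implementation: the hash sets and the Hamming-distance
# threshold below are placeholders for demonstration only.
import hashlib
from PIL import Image

KNOWN_SHA256 = set()        # exact cryptographic hashes of known images (placeholder)
KNOWN_PERCEPTUAL = set()    # 64-bit perceptual hashes of known images (placeholder)
HAMMING_THRESHOLD = 8       # max differing bits to count as a perceptual match (assumed)

def sha256_hash(path: str) -> str:
    """Exact cryptographic hash: only byte-identical files match."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str) -> int:
    """Simple 64-bit perceptual hash: tolerant of resizing and re-encoding."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def is_match(path: str) -> bool:
    """Flag a file if it matches the known set exactly or perceptually."""
    if sha256_hash(path) in KNOWN_SHA256:
        return True
    h = average_hash(path)
    return any(bin(h ^ known).count("1") <= HAMMING_THRESHOLD
               for known in KNOWN_PERCEPTUAL)
```

The perceptual step is what lets a hash list catch copies that have been resized or re-compressed, which an exact cryptographic hash alone would miss.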

However, the problem doesn’t stop with flagging content. It’s been documented that moderators for social media platforms often require therapy, and some have even died by suicide, after being exposed day in, day out to some of the most disturbing content posted online.

Thorn claims Safer is built with the wellness of moderators in mind. To this end, flagged content is automatically blurred (the company says this currently works only for images).
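Thorn hasn’t described how the blurring is implemented; the snippet below is just one way a platform might present flagged images to reviewers in an obscured form, using Pillow’s Gaussian blur. The function name and radius are illustrative choices, not part of Safer.

```python
# One possible way to obscure a flagged image before a moderator sees it.
# Not Thorn's code; the blur radius is an arbitrary example value.
from PIL import Image, ImageFilter

def blur_for_review(src_path: str, dst_path: str, radius: int = 30) -> None:
    """Save a heavily blurred copy of the image for the review queue."""
    img = Image.open(src_path)
    img.filter(ImageFilter.GaussianBlur(radius)).save(dst_path)

# Example: blur_for_review("flagged_upload.jpg", "flagged_upload_blurred.jpg")
```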

Safer has APIs available for developers that “are built to broaden the shared knowledge of child abuse content by contributing hashes, scanning against other industry hashes, and sending feedback on false positives.”
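Thorn’s API documentation isn’t reproduced here, so the sketch below only illustrates the three interactions the quote describes – contributing hashes, scanning against shared hash sets, and reporting false positives. The base URL, endpoint paths, payload fields and authentication header are all assumptions made for this example and do not reflect Thorn’s actual API.

```python
# Hypothetical illustration of the three API interactions described above.
# The base URL, endpoints, payload fields and auth header are invented for
# this sketch and do not reflect Thorn's actual Safer API.
import requests

BASE_URL = "https://api.example-safer-deployment.com/v1"   # placeholder
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}          # placeholder

def contribute_hash(sha256_hex: str, perceptual_hash: str) -> None:
    """Contribute hashes of confirmed CSAM to the shared hash sets."""
    requests.post(f"{BASE_URL}/hashes", headers=HEADERS, json={
        "sha256": sha256_hex,
        "perceptual": perceptual_hash,
    }, timeout=10)

def scan_hash(perceptual_hash: str) -> bool:
    """Check a hash against industry and community hash sets."""
    resp = requests.post(f"{BASE_URL}/scan", headers=HEADERS,
                         json={"perceptual": perceptual_hash}, timeout=10)
    return resp.json().get("match", False)

def report_false_positive(match_id: str) -> None:
    """Send feedback when a flagged file turns out not to be CSAM."""
    requests.post(f"{BASE_URL}/feedback", headers=HEADERS,
                  json={"match_id": match_id, "false_positive": True}, timeout=10)
```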

One of Thorn’s most high-profile clients so far is Flickr. Using Safer, Flickr found an image of child abuse hosted on its platform which, following a law enforcement investigation, led to the recovery of 21 children, ranging in age from 18 months to 14 years, and the arrest of the perpetrator.

Safer is currently available for any company operating in the US. Thorn plans to expand to other countries next year after customising the tool for each country’s national reporting requirements.

You can find out more about the tool and how to get started here.