AI tool detects child abuse images with 99% accuracy
AI News – Fri, 31 Jul 2020

A new AI-powered tool claims to detect child abuse images with around 99 percent accuracy.

The tool, called Safer, was developed by the non-profit Thorn to help businesses that lack in-house filtering systems detect and remove such images.

According to the UK's Internet Watch Foundation, reports of child abuse images surged 50 percent during the COVID-19 lockdown. In the 11 weeks from 23rd March, its hotline logged 44,809 reports of images, compared with 29,698 over the same period last year. Many of these images came from children who spent more time online and were coerced into releasing images of themselves.

Andy Burrows, head of child safety online at the NSPCC, recently told the BBC: “Harm could have been lessened if social networks had done a better job of investing in technology, investing in safer design features heading into this crisis.”

Safer is one tool which could help with quickly flagging child abuse content to limit the harm caused.

The detection services of Safer include:

  • Image Hash Matching: The flagship service that generates cryptographic and perceptual hashes for images and compares those hashes to known CSAM hashes. At the time of publishing, the database includes 5.9M hashes. Hashing happens in the client’s infrastructure to maintain user privacy.
  • CSAM Image Classifier: Machine learning classification model developed by Thorn and leveraged within Safer that returns a prediction for whether a file is CSAM. The classifier has been trained on datasets totaling hundreds of thousands of images, including adult pornography, CSAM, and various benign imagery, and can aid in identifying potentially new and unknown CSAM.
  • Video Hash Matching: Service that generates cryptographic and perceptual hashes for video scenes and compares them to hashes representing scenes of suspected CSAM. At the time of publishing, the database includes over 650k hashes of suspected CSAM scenes.
  • SaferList for Detection: Service for Safer customers to leverage the knowledge of the broader Safer community by matching against hash sets contributed by other Safer customers to broaden detection efforts. Customers can customise what hash sets they would like to include.
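Thorn has not published Safer's internals, but perceptual hashing in general works by reducing an image to a compact fingerprint that survives minor edits, then comparing fingerprints by Hamming distance against a set of known hashes. A minimal sketch using a toy "average hash" on a pre-computed grayscale grid (the helper names and the distance threshold are illustrative, not Thorn's):

```python
def average_hash(pixels):
    """Compute a simple perceptual 'average hash' from a grayscale grid.

    pixels: flat list of grayscale values (0-255). In a real pipeline the
    image would first be resized to a small fixed grid (e.g. 8x8) and
    converted to grayscale before hashing.
    """
    avg = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        # One bit per pixel: is it brighter than the average?
        bits = (bits << 1) | (1 if value >= avg else 0)
    return bits

def hamming_distance(hash_a, hash_b):
    """Number of differing bits; small distances mean similar images."""
    return bin(hash_a ^ hash_b).count("1")

def matches_known(candidate, known_hashes, threshold=5):
    """Match against a hash database: a near-duplicate lookup with a
    distance threshold, rather than an exact equality check."""
    return any(hamming_distance(candidate, h) <= threshold for h in known_hashes)
```

A cryptographic hash such as SHA-256 only matches byte-identical files, whereas a perceptual hash tolerates resizing and re-encoding, which is presumably why Safer generates both kinds.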

The problem doesn't stop at flagging content, however. It is well documented that moderators on social media platforms often require therapy, and some have died by suicide, after day-in, day-out exposure to some of the most disturbing content posted online.

Thorn claims Safer is built with moderator wellness in mind. To this end, flagged content is automatically blurred (the company says this currently works only for images).
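The article doesn't say how the blurring is implemented. As a rough illustration of the idea, a box blur replaces each pixel with the mean of its neighbourhood, destroying the fine detail a moderator would otherwise be exposed to while keeping coarse shape (pure-Python sketch on a grayscale grid; real tools would use an image library):

```python
def box_blur(grid, radius=1):
    """Naive box blur over a 2D grayscale grid (list of lists of 0-255).

    Each output pixel is the integer mean of the input pixels within
    `radius` in both directions, clipped at the image borders.
    """
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += grid[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out
```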

Safer has APIs available for developers that “are built to broaden the shared knowledge of child abuse content by contributing hashes, scanning against other industry hashes, and sending feedback on false positives.”
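Thorn's actual API surface isn't described beyond those three capabilities, so any concrete call shapes are guesswork. A hypothetical client-side sketch, with endpoint paths and field names invented purely for illustration:

```python
import json

# Hypothetical endpoint paths for the three capabilities the quote lists;
# none of these names come from Thorn's documentation.
ENDPOINTS = {
    "contribute": "/v1/hashes",    # contribute hashes to the shared set
    "scan": "/v1/scan",            # scan a hash against industry hash sets
    "feedback": "/v1/feedback",    # send feedback on false positives
}

def build_request(action, payload):
    """Return an (endpoint, JSON body) pair for a Safer-like API call."""
    if action not in ENDPOINTS:
        raise ValueError(f"unsupported action: {action}")
    return ENDPOINTS[action], json.dumps(payload)

# e.g. flagging a match as a false positive:
path, body = build_request("feedback", {"hash": "ab12", "false_positive": True})
```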

One of Thorn’s most high-profile clients so far is Flickr. Using Safer, Flickr found an image of child abuse hosted on its platform which, following a law enforcement investigation, led to the recovery of 21 children ranging in age from 18 months to 14 years, and the arrest of the perpetrator.

Safer is currently available for any company operating in the US. Thorn plans to expand to other countries next year after customising for each country’s national reporting requirements.

You can find out more about the tool and how to get started here.

Aussie police use Clearview AI’s facial recognition to fight child exploitation
AI News – Wed, 15 Apr 2020

Police in Australia have used Clearview AI’s controversial facial recognition technology to tackle child exploitation.

The Australian Federal Police (AFP) admitted to using Clearview AI’s system despite not having a legislative framework in place for the technology.

Deputy commissioner Karl Kent said the AFP trialled the facial recognition system but has not entered into any formal arrangement with Clearview AI to procure its technology.

In a statement, opposition party Labor called for Home Affairs Minister Peter Dutton to explain whether the AFP’s investigations into child exploitation were jeopardised by the use of Clearview AI’s technology without legal authorisation:

“Peter Dutton must immediately explain what knowledge he had of Australian Federal Police officers using the Clearview AI facial recognition tool despite the absence of any legislative framework in relation to the use of identity-matching services.”

Clearview AI’s facial recognition was used specifically by the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) to support its investigations.

“The trial was to assess the capability of the Clearview AI system in the context of countering child exploitation,” wrote the AFP.

ACCCE’s testing took place between 2nd November 2019 and 22nd January 2020.

“Searches included images of known individuals, and unknown individuals related to current or past investigations relating to child exploitation,” the AFP said. “Outside of the ACCCE Operational Command there was no visibility that this trial had commenced.”

Clearview AI’s facial recognition has come under stiff opposition due to its controversial practices and extensive links to the far-right.

Hoan Ton-That, founder of Clearview AI, claims to have disassociated from far-right views, movements, and individuals. Ton-That told Huffington Post recently that growing up on the internet did not “serve him well” and “there was a period when I explored a range of ideas—not out of belief in any of them, but out of a desire to search for self and place in the world.”

Clearview AI’s facial recognition system uses a large database consisting of billions of scraped images from across the web. Activists believe the system infringes on people’s right to privacy as they never gave permission for their images to be stored and used in such a way.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”

(Photo by Joey Csunyo on Unsplash)
