Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’

Detroit Police chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties...

The ACLU uncovers the first known wrongful arrest due to AI error

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after Robert Williams, a Black man, was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from...

ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy

The American Civil Liberties Union (ACLU) is suing controversial facial recognition provider Clearview AI over privacy concerns.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“The ACLU is taking its fight to defend privacy rights against the growing threat of this unregulated surveillance technology to the courts, even as we...

Amazon joins calls to establish facial recognition standards

Amazon has put its weight behind the growing number of calls from companies, individuals, and rights groups to establish facial recognition standards.

Michael Punke, VP of Global Public Policy at Amazon Web Services, said:

"Over the past several months, we've talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks.

It's critical that any legislation protect...

Microsoft warns its AI offerings ‘may result in reputational harm’

Microsoft has warned investors that its AI offerings could damage the company’s reputation in a bid to prepare them for the worst.

AI can be unpredictable, and Microsoft already has experience of this. Back in 2016, a Microsoft chatbot named Tay became a racist, sexist, and generally rather unsavoury character after internet users took advantage of its machine learning capabilities.

The chatbot made headlines around the world and was bound to have caused Microsoft some...

AI is sentencing people based on their ‘risk’ assessment

AI-powered tools that assess an individual’s ‘risk’ are being used to make incarceration and sentencing decisions.

During the Data for Black Lives conference last weekend, several experts shared how AI is evolving America’s controversial prison system.

America imprisons more people than any other nation. This is not just a result of the country’s population; its incarceration rate is the highest in the world at ~716 per 100,000 of the national...

Amazon expert suggests AI regulation after ACLU’s bias findings

An expert from Amazon has suggested the government should implement a minimum confidence level for the use of facial recognition in law enforcement. Dr. Matt Wood, GM of Deep Learning and AI at Amazon Web Services, made the suggestion in a blog post responding to the ACLU’s (American Civil Liberties Union) findings of a racial bias in the ‘Rekognition’ facial recognition algorithm by Amazon. In their findings, the ACLU found Rekognition erroneously labelled those with darker skin...
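A minimum confidence level of the kind Wood describes amounts to discarding any candidate match whose score falls below a set threshold before a human ever reviews it. A minimal sketch of such a filter, using hypothetical match data and a hypothetical 99% threshold (real services such as Rekognition return a confidence/similarity score alongside each candidate):

```python
# Hypothetical illustration of a minimum confidence threshold for
# facial recognition matches. The match records and the 99.0 figure
# are assumptions for the example, not output from any real service.

MIN_CONFIDENCE = 99.0  # assumed threshold for law-enforcement use


def filter_matches(matches, threshold=MIN_CONFIDENCE):
    """Keep only candidate matches at or above the confidence threshold."""
    return [m for m in matches if m["confidence"] >= threshold]


candidates = [
    {"subject_id": "A", "confidence": 99.4},
    {"subject_id": "B", "confidence": 87.2},  # would be discarded
    {"subject_id": "C", "confidence": 99.1},
]

surviving = filter_matches(candidates)
print([m["subject_id"] for m in surviving])  # → ['A', 'C']
```

The point of a mandated threshold is precisely that low-confidence candidates like subject B never reach an investigator as a purported match.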

ACLU finds Amazon’s facial recognition AI is racially biased

A test of Amazon’s facial recognition technology by the ACLU has found it erroneously labelled those with darker skin colours as criminals more often. Bias in AI technology, when used by law enforcement, has raised concerns of infringing on civil rights by automated racial profiling. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western...