Amazon expert suggests AI regulation after ACLU’s bias findings

An expert from Amazon has suggested the government should set a minimum confidence level for the use of facial recognition in law enforcement.

Dr. Matt Wood, GM of Deep Learning and AI at Amazon Web Services, made the suggestion in a blog post responding to the American Civil Liberties Union's (ACLU) finding of racial bias in Amazon's 'Rekognition' facial recognition service.

The ACLU found that, when members of Congress were matched against a database of 25,000 arrest photos, Rekognition erroneously identified those with darker skin tones as criminals more often.

Amazon argued the ACLU left Rekognition's confidence threshold at its default of 80 percent, when the company suggests 95 percent or higher for law enforcement use.

Commenting on the ACLU’s findings, Wood wrote:

“The default confidence threshold for facial recognition APIs in Rekognition is 80%, which is good for a broad set of general use cases (such as identifying celebrities on social media or family members who look alike in photos apps), but it’s not the right setting for public safety use cases.

The 80% confidence threshold used by the ACLU is far too low to ensure the accurate identification of individuals; we would expect to see false positives at this level of confidence.”
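To make concrete where this threshold lives, here is a minimal sketch of a pairwise comparison using the boto3 SDK's compare_faces call; the region, image file names, and threshold value shown are illustrative assumptions, not code from either Amazon's or the ACLU's test.

```python
import boto3

# Rekognition client; assumes AWS credentials are configured locally.
client = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical image files used purely for illustration.
with open("person_a.jpg", "rb") as source, open("person_b.jpg", "rb") as target:
    response = client.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        # Defaults to 80 when omitted; Wood argues this is far too low
        # for public safety use cases.
        SimilarityThreshold=99,
    )

# Only matches at or above the threshold are returned.
for match in response["FaceMatches"]:
    print(f"Face matched at {match['Similarity']:.1f}% similarity")
```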

Wood described Amazon's own test in which, using a dataset of over 850,000 faces commonly used in academia, the company searched against public photos of all members of the US Congress 'in a similar way' to the ACLU.

Using a 99 percent confidence threshold, the misidentification rate dropped to zero, despite the dataset being around 30 times larger than the one used in the ACLU's test.
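Matching one photo against a large gallery, as both tests did, uses a different Rekognition call: the gallery is first indexed into a collection and then queried with search_faces_by_image, where the same confidence cut-off appears as the FaceMatchThreshold parameter. A hedged sketch, with an invented collection name and query image:

```python
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical collection and query photo; in practice the gallery images
# would first be indexed into the collection with index_faces.
with open("query_photo.jpg", "rb") as image:
    response = client.search_faces_by_image(
        CollectionId="example-arrest-photos",
        Image={"Bytes": image.read()},
        FaceMatchThreshold=99,  # only return matches at 99% confidence or above
        MaxFaces=5,
    )

for match in response["FaceMatches"]:
    print(f"{match['Face']['FaceId']} matched at {match['Similarity']:.1f}%")
```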

Amazon is naturally keen to highlight the positive uses of its technology. The company says Rekognition has been used for things such as fighting human trafficking and reuniting lost children with their families.

However, the ACLU’s test shows the technology’s potential to be misused to disastrous effect. Without oversight, it could erode civil liberties and lead to increased persecution of minorities.

To help prevent this from happening, Wood calls it “a very reasonable idea” for “the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work.”

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at recognising Caucasians.

While such a clear bias problem remains in AI algorithms, it’s little wonder there’s concern about the use of inaccurate facial recognition in applications such as police body cameras.

Should a minimum confidence level be set for law enforcement?
