San Francisco hopes AI will prevent bias in prosecutions

San Francisco will soon use AI in a bid to prevent bias when deciding whether to prosecute a suspect.

Even subconscious human biases can impact courtroom decisions. Racial bias in the legal system is particularly well-documented and often leads to people with darker skin being prosecuted more frequently, or sentenced more harshly, than people with lighter skin accused of similar crimes.

Speaking during a press briefing today, SF District Attorney George Gascón said: “When you look at the people incarcerated in this country, they’re going to be disproportionately men and women of colour.”

To combat this, San Francisco will use a ‘bias mitigation tool’ which automatically redacts any information from a police report that could identify a suspect’s race.

Information stripped from reports will include not only explicit descriptions of race but also details such as hair and eye colour. The bias mitigation tool will even remove neighbourhood names and people's names that may hint at an individual's racial background.

San Francisco’s bias-reducing AI even strips out information that identifies specific police officers, such as badge numbers. Removing this data helps to ensure the prosecutor isn’t swayed by familiarity with a particular officer.

The AI tool is being developed by Alex Chohlas-Wood of the Stanford Computational Policy Lab. Several machine learning algorithms recognise identifying words and replace them with generic equivalents like “Officer #2” or “Associate #1”.
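
The lab hasn’t published the tool’s internals, but the substitution idea can be sketched. Below is a minimal, hypothetical Python illustration using simple pattern matching rather than the tool’s actual machine learning models; the word lists, badge-number format, and function names are all invented for the example.

```python
import re
from itertools import count

# Illustrative word lists and badge format -- assumptions for this sketch.
# The real tool reportedly learns to recognise such terms automatically.
RACE_AND_FEATURES = [r"\bwhite\b", r"\bblack\b", r"\bblue eyes\b",
                     r"\bbrown hair\b"]
NEIGHBOURHOODS = [r"\bBayview\b", r"\bTenderloin\b"]


def redact(report: str, civilian_names: list[str]) -> str:
    """Strip race cues and replace names/badge numbers with generic labels."""
    # Remove racial descriptors and neighbourhood references outright.
    for pattern in RACE_AND_FEATURES + NEIGHBOURHOODS:
        report = re.sub(pattern, "[REDACTED]", report, flags=re.IGNORECASE)

    # Map each distinct badge number to a consistent "Officer #n" label.
    officer_labels: dict[str, str] = {}
    officer_ids = count(1)

    def label_officer(match: re.Match) -> str:
        badge = match.group(0)
        if badge not in officer_labels:
            officer_labels[badge] = f"Officer #{next(officer_ids)}"
        return officer_labels[badge]

    report = re.sub(r"Badge #\d+", label_officer, report)

    # Replace each named individual with a numbered "Associate" label.
    for i, name in enumerate(civilian_names, start=1):
        report = report.replace(name, f"Associate #{i}")
    return report


print(redact(
    "Badge #4021 stopped a white male with blue eyes in the Tenderloin. "
    "Witness Jane Romero said Badge #4021 arrived first.",
    civilian_names=["Jane Romero"],
))
# -> Officer #1 stopped a [REDACTED] male with [REDACTED] in the [REDACTED].
#    Witness Associate #1 said Officer #1 arrived first.
```

Note the consistent mapping: the same badge number always becomes the same “Officer #n” label, so a prosecutor can still follow who did what in the report without learning who the officer is.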

San Francisco hopes to start using the bias mitigation tool in early July. Hopefully, it will help to address the problem of bias in the legal system while also reducing the perception that AI only introduces bias.
