IBM releases tool for tackling scourge of bias in AI algorithms

Bias and prejudice remain serious issues across many societies, and taking human input out of decisions could make the results disastrous.

IBM is stepping in with a tool it calls ‘Fairness 360’, which scans algorithms for signs of bias and recommends adjustments to correct them.

AIs already have a well-documented bias problem. It’s rarely intentional, but typically results from developers belonging to the predominant group in their society, whose blind spots find their way into the software.

Take facial recognition software, for example.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asian faces, while those designed in Western countries are more accurate at recognising Caucasian faces.

The ACLU (American Civil Liberties Union) recently tested Amazon’s facial recognition technology by comparing photos of members of Congress against a database of criminal mugshots. The test produced 28 false matches, which disproportionately affected members of the Congressional Black Caucus.

Humans have natural biases. Political stances, for example, are, for the most part, fine to hold on an individual basis. However, if an AI starts acting on or spreading the views of its developers, it creates a problem.

A problem today is that developers often don’t know exactly what decisions their AI is making, or why; the models operate in what’s known as a ‘black box’.

IBM’s tool aims to make these decisions more transparent so that developers can see which factors their AIs are using.

A recent study conducted by IBM’s Institute for Business Value found 82 percent of enterprises are considering AI deployments. However, 60 percent fear liability issues.

The software will be cloud-based and open source. It will also work with a range of common AI frameworks, including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.
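For a sense of what bias scanning looks like in practice, here is a minimal sketch using the open-source aif360 Python package that accompanies the toolkit. The toy DataFrame, the choice of ‘sex’ as the protected attribute, and the use of reweighing as the mitigation step are this sketch’s assumptions, not a prescription from IBM; the package offers many other metrics and algorithms.

```python
# A rough sketch of bias detection and mitigation with the open-source
# aif360 package (pip install aif360). The toy data and the choice of
# 'sex' as the protected attribute are assumptions for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.explainers import MetricTextExplainer
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'label' is the favourable outcome (e.g. loan approved),
# 'sex' encodes the protected attribute (1 = privileged group).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [0.9, 0.8, 0.7, 0.4, 0.8, 0.5, 0.4, 0.3],
    'label': [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=['label'],
                             protected_attribute_names=['sex'])

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Detect: statistical parity difference compares the favourable-outcome
# rate between groups; values near 0 indicate parity.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print(metric.statistical_parity_difference())        # -0.5 for this toy data

# Explain: the package can render the same metric as plain text,
# addressing the 'black box' transparency concern noted above.
print(MetricTextExplainer(metric).statistical_parity_difference())

# Mitigate: reweighing adjusts instance weights so that favourable
# outcomes are balanced across groups before a model is trained.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(transformed,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print(metric_after.statistical_parity_difference())  # near 0 after reweighing
```

In this toy example the statistical parity difference is negative before reweighing (the unprivileged group receives the favourable outcome less often) and close to zero afterwards, because the algorithm rebalances instance weights rather than changing any labels.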

You can find out more about Fairness 360 on IBM’s website, or find the initial code on GitHub.

What are your thoughts on IBM’s tool for detecting AI bias?