Lack of STEM diversity is causing AI to have a ‘white male’ bias

A report from New York University's AI Now Institute has found a predominantly white male coding workforce is causing bias in algorithms.

The report highlights that – while gradually narrowing – the lack of diverse representation at major technology companies such as Microsoft, Google, and Facebook is causing AIs to cater more towards white males.

For example, at Facebook just 15 percent of the company's AI staff are women. The problem is even more substantial at...

EU AI Expert Group: Ethical risks are ‘unimaginable’

The EU Commission’s AI expert group has published its assessment of the rapidly advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, the tracking of individuals, and the ‘scoring’ of people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes...

UK government investigates AI bias in decision-making

The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people's lives.

A browse through our ‘ethics’ category here on AI News will highlight the serious problem of bias in today's algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous...

Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini walked through an analysis of today’s popular facial recognition algorithms.

Here were the overall accuracy results when...

AI is sentencing people based on their ‘risk’ assessment

AI-powered tools that assess the risk posed by an individual are being used to make incarceration and sentencing decisions.

During the Data for Black Lives conference last weekend, several experts shared how AI is reshaping America’s controversial prison system.

America imprisons more people than any other nation. This is not simply a result of its population size; its incarceration rate is the highest in the world at ~716 per 100,000 of the national...

AI tags potential criminals before they’ve done anything

British police want to use AI to highlight who is at risk of becoming a criminal before they’ve actually committed any crime. Although it sounds like a dystopian nightmare, there are clear benefits. Resources and outreach programmes can be allocated to attempt to prevent a crime, stop anyone from becoming a victim, and remove the costs associated with prosecuting and jailing someone. With prisons overburdened and space limited, reducing the need to lock someone up is a win for everyone....

AI-powered lie detector will question travellers at EU borders

The EU is experimenting with an AI-powered lie detector in a bid to help ease its border control and policing demands. For some, gaining EU citizenship is a dream as passports allow free movement between all member states. However, for those ineligible, this also makes them a prime target for criminals. Just earlier this week, Bulgarian officials were arrested for selling fake EU passports ‘to 30 people a week’ for £4,445 each. According to The Times:

Chinese AI darling SenseTime wants facial recognition standards

The CEO of Chinese AI darling SenseTime wants to see facial recognition standards established for a ‘healthier’ industry. SenseTime is among China’s most renowned AI companies. Back in April, we reported it had become the world’s most funded AI startup. Part of the company’s monumental success is the popularity of facial recognition in China where it’s used in many aspects of citizens’ lives. Just yesterday, game developer Tencent announced it’s testing facial recognition to...

Amazon expert suggests AI regulation after ACLU’s bias findings

An expert from Amazon has suggested the government should implement a minimum confidence level for the use of facial recognition in law enforcement. Dr. Matt Wood, GM of Deep Learning and AI at Amazon Web Services, made the suggestion in a blog post responding to the American Civil Liberties Union’s (ACLU) findings of racial bias in Amazon’s ‘Rekognition’ facial recognition algorithm. The ACLU found Rekognition erroneously labelled those with darker skin...
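To illustrate what a minimum confidence level means in practice, here is a minimal sketch of filtering candidate face matches by a confidence threshold. The `Match` structure, the function names, and the sample scores are all illustrative assumptions, not Amazon’s API; the 99 percent figure reflects Amazon’s publicly stated guidance for law enforcement use, while Rekognition’s default threshold in the ACLU test was 80 percent.

```python
# Hypothetical sketch: enforcing a minimum confidence level on
# face-match candidates before any human review. The data structures
# and numbers here are illustrative, not the Rekognition API.
from dataclasses import dataclass


@dataclass
class Match:
    subject_id: str
    confidence: float  # similarity score as a percentage, 0-100


def filter_matches(matches, min_confidence=99.0):
    """Keep only candidate matches at or above the minimum confidence."""
    return [m for m in matches if m.confidence >= min_confidence]


candidates = [
    Match("person-a", 99.3),
    Match("person-b", 81.7),  # would pass an 80% default, fails at 99%
]
strong = filter_matches(candidates)
print([m.subject_id for m in strong])  # ['person-a']
```

The point of a higher threshold is that weaker matches, which disproportionately misidentify people with darker skin, never reach an officer’s screen in the first place.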

ACLU finds Amazon’s facial recognition AI is racially biased

A test of Amazon’s facial recognition technology by the ACLU has found it erroneously labelled those with darker skin colours as criminals more often. Bias in AI technology, when used by law enforcement, has raised concerns that civil rights could be infringed through automated racial profiling. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western...