Google fires ethical AI researcher Timnit Gebru after critical email

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase...

MIT has removed a dataset which leads to misogynistic, racist AI models

MIT has apologised for, and taken offline, a dataset which trains AI models to exhibit misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained using these images and their labels. An image of a street – when fed into an AI trained...
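To illustrate why flawed labels matter, below is a minimal, hypothetical sketch (in Python with PyTorch) of how a classifier learns from image–label pairs. The random placeholder images, the ten-class label set, and the tiny architecture are illustrative assumptions, not the actual 80 Million Tiny Images pipeline; only the 32×32 image size matches the dataset described above.

```python
# Minimal sketch: training an image classifier on (image, label) pairs.
# Illustrative only -- not the actual 80 Million Tiny Images pipeline.
import torch
import torch.nn as nn

# Stand-in for a labelled dataset: 32x32 RGB images (the Tiny Images size)
# paired with one of 10 hypothetical object labels each.
images = torch.randn(256, 3, 32, 32)            # random pixels as placeholders
labels = torch.randint(0, 10, (256,))           # placeholder label indices

model = nn.Sequential(                          # a deliberately tiny classifier
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),                          # one output score per label
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)       # penalise disagreement with labels
    loss.backward()                             # the labels drive every update,
    optimizer.step()                            # so biased labels yield a biased model
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Because the optimiser only ever minimises disagreement with the supplied labels, any slurs or stereotypes baked into those labels are reproduced faithfully by the trained model.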

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’

Detroit Police chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties...

San Francisco hopes AI will prevent bias in prosecutions

San Francisco will soon implement AI in a bid to prevent bias when deciding whether to prosecute a suspect.

Even subconscious human biases can impact courtroom decisions. Racial bias in the legal system is particularly well-documented and often leads to individuals with darker skin being prosecuted more often, or sentenced more harshly, than people with lighter skin tones accused of similar crimes.

Speaking during a press briefing today, SF District Attorney George Gascón...

UN: AI voice assistants fuel stereotype women are ‘subservient’

A report from the UN claims AI voice assistants like Alexa and Siri are fuelling the stereotype that women are ‘subservient’.

Published by UNESCO (United Nations Educational, Scientific and Cultural Organization), the 146-page report, titled “I’d blush if I could”, highlights that the market is dominated by female-voiced assistants.

According to the researchers, the almost exclusive use of female voice assistants fuels stereotypes that women are "obliging, docile and...

AI-conducted study highlights ‘massive gender bias’ in the UK

A first-of-its-kind study conducted by an AI highlights the ‘massive gender bias’ which continues to plague the UK workforce.

The research was published by the Royal Statistical Society but conducted by Glass AI, a startup which uses artificial intelligence to analyse every UK website.

In a blog post, the company explained its unique approach:

“Previous related studies created for economists, policy-makers, or business analysts have tended to underuse or...

Amnesty International warns of AI ‘nightmare scenarios’

Human rights campaigners Amnesty International have warned of the potential ‘nightmare scenarios’ arising from AI if left unchecked.

In a blog post, Amnesty outlined one scenario it foresees: autonomous systems choosing military targets with little-to-no human oversight.

Military AI Fears

The development of AI has been likened to another arms race. Much like nuclear weapons, there is the argument that if a nation doesn’t develop its capabilities then others will. Furthermore,...