MIT has removed a dataset which leads to misogynistic, racist AI models

MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained on these images and their labels. Feed an image of a street into a model trained on such a dataset and it can identify the things the image contains, such as cars, streetlights, pedestrians, and bikes.
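
To make the mechanism concrete, here is a minimal sketch of how such (image, label) pairs drive training. The tiny linear model, the label list, and the random stand-in data are illustrative assumptions, not the actual Tiny Images pipeline:

```python
# Minimal sketch: a labelled image dataset trains a classifier.
# The labels, model, and random data are illustrative assumptions.
import torch
import torch.nn as nn

labels = ["car", "streetlight", "pedestrian", "bike"]

# Stand-in for a labelled dataset: random 32x32 RGB images plus labels.
images = torch.rand(64, 3, 32, 32)           # a batch of tiny images
targets = torch.randint(len(labels), (64,))  # one label index per image

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, len(labels)),     # tiny linear classifier
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(5):                           # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()

# Whatever strings appear in `labels` are what the model learns to emit:
# offensive label text propagates directly into its predictions.
```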

Two researchers – Vinay Prabhu, chief scientist at UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland – analysed the images and found thousands of concerning labels.

MIT’s training set was found to label women as “bitches” or “whores,” and people from BAME communities with the kind of derogatory terms I’m sure you don’t need me to write. The Register notes the dataset also contained close-up images of female genitalia labelled with the C-word.

The Register alerted MIT to Prabhu and Birhane’s findings and the institute promptly took the dataset offline. MIT went a step further, urging anyone with a copy to stop using it and to delete it.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”
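
That explanation points at the collection procedure itself. As a hedged sketch of how WordNet-driven labelling can sweep in derogatory terms (this uses NLTK’s WordNet interface for illustration, not MIT’s actual code):

```python
# Hedged sketch: enumerate WordNet nouns as candidate class labels,
# roughly the kind of automated collection MIT describes. Assumes
# `pip install nltk` and `nltk.download("wordnet")` have been run.
from nltk.corpus import wordnet as wn

noun_labels = {
    lemma.name().replace("_", " ")
    for synset in wn.all_synsets(pos="n")
    for lemma in synset.lemmas()
}
print(f"{len(noun_labels):,} candidate labels")
# WordNet holds a huge inventory of nouns, slurs included, so without
# a human-curated filter they flow straight into the dataset's labels.
```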

The statement goes on to explain that, because the dataset contains 80 million images at just 32×32 pixels each, manual inspection would be almost impossible and could not guarantee that every offensive image is removed.
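
Some rough arithmetic supports that claim; the one-image-per-second reviewing pace below is an assumption for illustration:

```python
# Back-of-envelope: how long would manual inspection actually take?
images = 80_000_000
seconds_per_image = 1                # optimistic pace (assumption)
hours = images * seconds_per_image / 3600
person_years = hours / 2000          # ~2,000 working hours per year
print(f"{hours:,.0f} hours ≈ {person_years:.0f} person-years")
# -> 22,222 hours ≈ 11 person-years, before accounting for how hard
#    it is to judge a 32x32 thumbnail at all.
```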

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community – precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data,” wrote Antonio Torralba, Rob Fergus, and Bill Freeman from MIT.

“Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”

You can find a full pre-print copy of Prabhu and Birhane’s paper here (PDF).

(Photo by Clay Banks on Unsplash)

Microsoft’s AI editor publishes stories about its own racist error

Microsoft’s replacement of human editors with artificial intelligence has faced its first big embarrassment.

In late May, Microsoft decided to fire many of its human editors for MSN News and replace them with an AI.

Earlier this week, a news story appeared about Little Mix band member Jade Thirlwall’s experience facing racism. The story appeared innocent enough until you realise Microsoft’s AI had confused two of the band’s mixed-race members. Thirlwall quickly pointed out the error.

In an Instagram story, Thirlwall wrote: “@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”

She added: “This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke … It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

Microsoft’s human editors were reportedly told to watch for the AI subsequently republishing stories about its own racist error and to remove them manually.

The Microsoft News app ended up being flooded with stories about the incident. It’s clear that the remaining human editors couldn’t move fast enough against their automated counterpart.

According to The Guardian’s media editor Jim Waterson, the recently-sacked human staff from MSN were told to stop telling him what the AI was doing.

This isn’t the first time an AI-powered solution from Microsoft has come under fire for racism.

An infamous Twitter chatbot developed by Microsoft, called Tay, ended up spouting racist and misogynistic vitriol back in 2016. The chatbot obviously wasn’t designed to be such an unsavoury character, but Microsoft, for some reason, thought it would be a good idea to allow internet denizens to train it.

One of the most pressing concerns in this increasingly draconian world we live in is that of mass surveillance and facial recognition. While IBM announced this week it wants nothing more to do with the technology, Microsoft remains a key player.

An experiment by the Algorithmic Justice League last year found serious disparities in the performance of facial recognition algorithms depending on gender and skin colour.

Microsoft’s algorithm actually performed the best of those tested, managing 100 percent accuracy when detecting lighter-skinned males. However, it was just 79.2 percent accurate when used on darker-skinned females.

If that version of Microsoft’s facial recognition system were used for surveillance, almost two in every ten darker-skinned women would risk being falsely flagged. In busy areas, that could mean hundreds if not thousands of people facing automated profiling each day.
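
The “two in ten” figure follows directly from the reported accuracy. A quick check, assuming classification accuracy maps one-to-one onto flagging errors and using a hypothetical crowd size for scale:

```python
# Back-of-envelope: reported accuracy -> expected misidentifications.
accuracy = 0.792                 # darker-skinned females (reported)
error_rate = 1 - accuracy        # ≈ 0.208, i.e. ~2 in 10
scans_per_day = 1_000            # hypothetical crowd size (assumption)
print(f"{error_rate:.1%} -> ~{error_rate * scans_per_day:.0f} "
      f"misidentifications per {scans_per_day:,} scans")
# 20.8% -> ~208 misidentifications per 1,000 scans
```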

While ideally algorithms wouldn’t have any biases or issues, incidents like these show exactly why humans should almost always be involved in final decisions. That way, when things go wrong, at least accountability rests with a specific person rather than just being blamed on an AI error.

(Image Credit: Little Mix by vagueonthehow under CC BY 2.0 license)
