MIT has removed a dataset which leads to misogynistic, racist AI models

MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained on these images and their labels. Feed an image of a street into an AI trained on such a dataset, and it could identify the things the scene contains, such as cars, streetlights, pedestrians, and bikes.
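For readers who want a concrete picture, here is a minimal sketch of that training process. Nothing below comes from MIT's dataset or tooling; it uses PyTorch and the public CIFAR-10 collection (another labelled 32×32-pixel dataset) purely as a stand-in:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download a small public dataset of labelled 32x32 images (a stand-in
# for 80 Million Tiny Images, which is no longer distributed).
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A deliberately tiny classifier: flatten each 3x32x32 image and map it
# to one of the ten labels CIFAR-10 provides.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One pass over the data: the model learns to associate pixel patterns
# with whatever labels the dataset supplies -- in a flawed dataset,
# offensive labels are learned just as readily as benign ones.
for images, labels in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()

The point of the sketch is that the model never questions its labels; whatever terms a dataset attaches to pixels become the vocabulary of the trained system.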

Two researchers – Vinay Prabhu, chief scientist at UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland – analysed the images and found thousands of concerning labels.

MIT’s training set was found to label women as “bitches” or “whores,” and people from BAME communities with the kind of derogatory terms I’m sure you don’t need me to write. The Register notes the dataset also contained close-up images of female genitalia labelled with the C-word.

The Register alerted MIT to the issues Prabhu and Birhane had found with the dataset, and the college promptly took it offline. MIT went a step further, urging anyone with a copy of the dataset to stop using it and to delete it.

A statement on MIT’s website claims it was unaware of the offensive labels and that they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”
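WordNet is a lexical database that indexes offensive English nouns alongside benign ones, so a collector that scrapes images for every noun inherits the slurs automatically. A hedged sketch of that general approach (an assumption on my part about how such collection works, not MIT's actual pipeline) using NLTK:

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

# Enumerate every noun lemma in WordNet -- exactly the kind of list an
# automated collector might feed into an image-search scraper.
nouns = {lemma.name() for synset in wn.all_synsets("n")
         for lemma in synset.lemmas()}
print(f"{len(nouns)} noun lemmas")  # over 100,000, slurs included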

The statement goes on to explain that, with 80 million images at just 32×32 pixels each, manual inspection of the dataset is all but impossible and could never guarantee that every offensive image had been removed.
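A quick back-of-the-envelope check (my arithmetic, not MIT's) shows the scale problem:

# Even at one second per image, with no breaks, reviewing the full
# dataset would take a single person roughly two and a half years.
images = 80_000_000
seconds_per_day = 86_400
print(f"{images / seconds_per_day:.0f} days")  # ~926 days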

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community – precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data,” wrote Antonio Torralba, Rob Fergus, and Bill Freeman from MIT.

“Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”

You can find a full pre-print copy of Prabhu and Birhane’s paper here (PDF).

(Photo by Clay Banks on Unsplash)
