MIT has removed a dataset which leads to misogynistic, racist AI models
MIT has apologised for, and taken offline, a dataset which trains AI models to exhibit misogynistic and racist tendencies.
The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.
Machine-learning models are trained using these images and their labels. An image of a street – when fed into an AI trained on such a dataset – could then be labelled as containing the objects it depicts, such as cars, streetlights, and pedestrians.
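To make that training loop concrete, here is a minimal, hypothetical sketch in PyTorch. It is not code from the dataset's authors: it trains a small classifier on placeholder 32x32 images (the same size as those in 80 Million Tiny Images) with made-up labels, purely to show how label choices in the training data end up baked into the model's predictions.

```python
# Illustrative sketch only: a tiny supervised image classifier trained on
# labelled 32x32 images. The data here is random placeholder data, not the
# withdrawn dataset, and the label set is hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 10                                  # hypothetical label set
images = torch.randn(256, 3, 32, 32)              # placeholder RGB images
labels = torch.randint(0, NUM_CLASSES, (256,))    # placeholder integer labels
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A small convolutional network mapping an image to a score per label.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The model learns by comparing its predictions to the human-supplied labels.
for epoch in range(3):
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()

# Once trained, the model assigns one of the labels to any new image it sees,
# which is why offensive labels in the training data propagate into the model.
prediction = model(torch.randn(1, 3, 32, 32)).argmax(dim=1)
```

The key point of the sketch is the loss function: the network is rewarded for reproducing whatever labels humans attached to the pictures, so a dataset containing slurs as labels produces a model that applies those slurs to new images.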