Researchers create AI bot to protect the identities of BLM protesters

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest and, when doing so lawfully, to do it without fear that their future job prospects will be ruined because they were photographed at a demonstration where a select few went on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

“Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity,” the researchers explain.

Software to blur faces has been available for some time, but recent AI advancements have proved that it’s possible to deblur such images.
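
As a toy illustration of why blurring alone is a weak safeguard, even classic Wiener deconvolution can recover detail from a blurred image when the blur kernel is known. The sketch below assumes SciPy and scikit-image are available; it is not the learning-based deblurring research referred to above, which goes much further:

```python
# Toy demonstration that blurring is partially reversible. Assumes SciPy
# and scikit-image are installed; modern learning-based deblurring goes
# well beyond this classic technique.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

image = data.camera() / 255.0              # sample grayscale photo
psf = np.ones((5, 5)) / 25                 # 5x5 box-blur kernel
blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Wiener deconvolution recovers an approximation of the original image
# from the blurred version when the blur kernel (psf) is known.
recovered = restoration.wiener(blurred, psf, balance=0.01)
print(f"mean absolute error vs. original: {np.abs(recovered - image).mean():.4f}")
```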

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot.

Rather than blurring the faces, the bot automatically covers them with the black fist emoji that has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built into social media platforms, but admit it’s unlikely.
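
For a sense of how such a tool can work, here is a minimal sketch of the general approach: detect faces, then paste an opaque emoji over each bounding box. It assumes the facenet-pytorch and Pillow libraries and a local fist-emoji image; these are illustrative choices, not BLMPrivacyBot’s actual stack:

```python
# Minimal sketch of the idea: detect faces, cover each with an emoji.
# Assumes facenet-pytorch (MTCNN face detector) and Pillow, plus a local
# "fist.png" image; illustrative choices, not BLMPrivacyBot's actual stack.
from PIL import Image
from facenet_pytorch import MTCNN

def cover_faces(photo_path: str, emoji_path: str = "fist.png") -> Image.Image:
    img = Image.open(photo_path).convert("RGB")
    detector = MTCNN(keep_all=True)        # keep_all=True returns every face
    boxes, _ = detector.detect(img)
    if boxes is None:                      # no faces detected; return unchanged
        return img
    emoji = Image.open(emoji_path).convert("RGBA")
    for x1, y1, x2, y2 in boxes:
        w, h = int(x2 - x1), int(y2 - y1)
        # Oversize the emoji slightly so the whole face is hidden.
        overlay = emoji.resize((int(w * 1.2), int(h * 1.2)))
        img.paste(overlay, (int(x1 - w * 0.1), int(y1 - h * 0.1)), overlay)
    return img

cover_faces("protest.jpg").save("protest_covered.jpg")
```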

The researchers trained the model behind their bot on QNRF, a crowd dataset containing around 1.2 million people. However, they warn it’s not foolproof, as an individual could still be identified through other means, such as the clothing they’re wearing.

To use BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo via its web interface. The open-source repo is available if you want to look at the inner workings.

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’

Detroit Police chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties Union) lodged a complaint against the Detroit police following the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3,800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

Current AI algorithms are known to have a racism problem. Extensive studies have repeatedly shown that facial recognition algorithms are almost 100 percent accurate when used on white males, but are significantly less accurate on people with darker skin and on women.

This racism issue was shown again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, was applied to images of people from BAME communities.

One particularly famous example saw a pixelated photo of Barack Obama upscaled into the face of a white man.

Last week, Boston followed in the footsteps of an increasing number of cities, such as San Francisco and Oakland in California, in banning facial recognition technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

On the other side of the pond, facial recognition trials in the UK have so far been nothing short of a failure. An initial trial at the 2016 Notting Hill Carnival failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that the technology was verifiably accurate in just 19 percent of cases.

The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’, over 1,000 experts signed an open letter last week opposing the use of AI for such purposes.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.

The acknowledgement from Detroit’s police chief that current facial recognition technology fails in around 96 percent of cases should be reason enough to halt its use, especially by law enforcement, at least until serious improvements are made.

(Photo by Joshua Hoehne on Unsplash)

Microsoft’s AI editor publishes stories about its own racist error

Microsoft’s replacement of human editors with artificial intelligence has faced its first big embarrassment.

In late May, Microsoft decided to fire many of its human editors for MSN News and replace them with an AI.

Earlier this week, a news story appeared about Little Mix member Jade Thirlwall’s experience of racism. The story seemed innocent enough until you realise that Microsoft’s AI had confused two of the group’s mixed-race members. Thirlwall quickly pointed out the error.

In an Instagram story, Thirlwall wrote: “@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”

She added: “This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke … It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

Microsoft’s human editors were reportedly told to be aware that the AI might subsequently publish stories about its own racist error, and to manually remove them.

The Microsoft News app ended up being flooded with stories about the incident. It’s clear that the remaining human editors couldn’t move fast enough against their automated counterpart.

According to The Guardian’s media editor Jim Waterson, the recently sacked human staff from MSN have been told to stop reporting to him what the AI is doing.

This isn’t the first time an AI-powered solution from Microsoft has come under fire for racism.

An infamous Twitter chatbot developed by Microsoft, called Tay, ended up spouting racist and misogynistic vitriol back in 2016. The chatbot obviously wasn’t designed to be such an unsavoury character, but Microsoft, for some reason, thought it would be a good idea to let internet denizens train it.

One of the most pressing concerns in this increasingly draconian world is mass surveillance and facial recognition. While IBM announced this week that it wants nothing more to do with the technology, Microsoft remains a key player.

An experiment by the Algorithmic Justice League last year found serious disparities in the performance of facial recognition algorithms depending on gender and skin colour.

Microsoft’s algorithm actually performed the best of those tested, achieving 100 percent accuracy when detecting lighter-skinned males. However, it was just 79.2 percent accurate when used on darker-skinned females.

If that version of Microsoft’s facial recognition system were used for surveillance, almost two in every ten darker-skinned women would risk being falsely flagged. In busy areas, that could mean hundreds if not thousands of people facing automated profiling each day.
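
As a rough back-of-the-envelope check of that figure, assuming the reported 79.2 percent accuracy applies uniformly to each face scanned (a simplification for illustration, not a claim from the study):

```python
# Back-of-the-envelope arithmetic behind "almost two in every ten",
# assuming the reported accuracy applies uniformly to each scan.
def expected_false_flags(faces_scanned: int, accuracy: float = 0.792) -> float:
    """Expected number of darker-skinned women misidentified."""
    return faces_scanned * (1.0 - accuracy)

print(expected_false_flags(10))     # ~2 out of every 10 scans
print(expected_false_flags(5000))   # ~1040 in a busy area's daily scans
```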

While ideally algorithms wouldn’t have any biases or issues, all of these incidents show exactly why humans should almost always be involved in final decisions. That way, when things go wrong, there is at least accountability to a specific person rather than just an AI error to blame.

(Image Credit: Little Mix by vagueonthehow under CC BY 2.0 license)
