Police use of Clearview AI’s facial recognition increased 26% after Capitol raid
https://news.deepgeniusai.com/2021/01/11/police-use-clearview-ai-facial-recognition-increased-26-capitol-raid/
Mon, 11 Jan 2021 17:12:08 +0000

The post Police use of Clearview AI’s facial recognition increased 26% after Capitol raid appeared first on AI News.

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping people’s data from across the web without their explicit consent – a practice which has naturally raised eyebrows, including the ACLU’s, which called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for Clearview AI’s system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Whether you call them protesters or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew bipartisan condemnation.

In comments to The New York Times, Clearview AI CEO Hoan Ton-That claimed the company witnessed “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.

Given the number of individuals involved, law enforcement has a gargantuan task to identify and locate the people that went far beyond exercising their right to peaceful protest and invaded a federal building, caused huge amounts of damage, and threatened elected representatives and staff.

The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

While Clearview AI’s facial recognition tech remains in widespread use across the US, some police departments have independently banned officers from using such systems due to well-documented inaccuracies which disproportionately affect minority communities.

Researchers create AI bot to protect the identities of BLM protesters
https://news.deepgeniusai.com/2020/07/29/researchers-create-ai-bot-protect-identities-blm-protesters/
Wed, 29 Jul 2020 16:09:37 +0000

The post Researchers create AI bot to protect the identities of BLM protesters appeared first on AI News.

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest and, when done lawfully, to do so without fear that their future job prospects will be ruined simply because they were photographed at a demonstration – even one at which a select few went on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

“Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity,” the researchers explain.

Software to blur faces has been available for some time, but recent AI advancements have shown that it’s possible to deblur such images.

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot:

Rather than blurring the faces, the bot automatically covers them with the black fist emoji which has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built into social media platforms, but admit it’s unlikely.
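As a rough illustration of the covering step – a minimal, hypothetical sketch rather than the team’s actual implementation – the core operation is simply filling each detected face region with an opaque overlay. This assumes bounding boxes have already been produced by a separate face detector, and uses a plain grid of pixel values in place of a real image and emoji graphic:

```python
# Hypothetical sketch: blank out detected face regions in an image.
# `image` is a 2-D grid of pixel values; `boxes` are (x, y, w, h)
# rectangles assumed to come from an upstream face detector.
COVER = "X"  # stand-in for the opaque fist-emoji overlay

def cover_faces(image, boxes):
    """Return a copy of `image` with each bounding box blanked out."""
    out = [row[:] for row in image]  # don't mutate the original
    for x, y, w, h in boxes:
        for r in range(max(y, 0), min(y + h, len(out))):
            for c in range(max(x, 0), min(x + w, len(out[r]))):
                out[r][c] = COVER
    return out

# Usage: a 4x6 "image" with one detected face at (1, 1), 2x2 pixels.
img = [["." for _ in range(6)] for _ in range(4)]
covered = cover_faces(img, [(1, 1, 2, 2)])
```

In a real pipeline the overlay would be an emoji image pasted over the region (irreversibly replacing the pixels, unlike blurring, which can sometimes be undone), but the bookkeeping is the same.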

The researchers trained the model for their AI bot on a dataset called QNRF, consisting of around 1.2 million people. However, they warn it’s not foolproof, as an individual could still be identified through other means such as the clothing they’re wearing.

To use BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo to the project’s web interface. The open-source repo is available if you want to look at the inner workings.

Amazon is next to face employee protest over government contracts
https://news.deepgeniusai.com/2018/06/22/amazon-employee-protest-government/
Fri, 22 Jun 2018 11:19:39 +0000

The post Amazon is next to face employee protest over government contracts appeared first on AI News.

Mere days after Google and Microsoft staff protested their employers’ controversial government contracts, Amazon is facing its own internal revolt.

Amazon employees are not all too pleased with their company’s sale of facial recognition software and other services to US government bodies.

Echoing the demands of Google and Microsoft employees that their companies never undertake work that may cause social or physical harm, a similar letter was posted on Amazon’s internal wiki.

The letter, addressed to Amazon CEO Jeff Bezos, said: “We refuse to build the platform that powers ICE [Immigration and Customs Enforcement], and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used.”

ICE has come under increased scrutiny in recent weeks due to the separation and detainment of children from their families at the US border.

“In the face of this immoral US policy, and the US’s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS,” they wrote.

Another worry for the Amazonians is that marginalised groups – whom facial recognition technology already identifies less accurately – may face further discrimination through use of the tech.

The full letter can be found below:

Dear Jeff,

We are troubled by the recent report from the ACLU exposing our company’s practice of selling AWS Rekognition, a powerful facial recognition technology, to police departments and government agencies. We don’t have to wait to find out how these technologies will be used. We already know that in the midst of historic militarization of police, renewed targeting of Black activists, and the growth of a federal deportation force currently engaged in human rights abuses — this will be another powerful tool for the surveillance state, and ultimately serve to harm the most marginalized. We are not alone in this view: over 40 civil rights organizations signed an open letter in opposition to the governmental use of facial recognition, while over 150,000 individuals signed another petition delivered by the ACLU.

We also know that Palantir runs on AWS. And we know that ICE relies on Palantir to power its detention and deportation programs. Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents. Since April 19, 2018 the Department of Homeland Security has sent nearly 2,000 children to mass detention centers. This treatment goes against U.N. Refugee Agency guidelines that say children have the right to remain united with their parents, and that asylum-seekers have a legal right to claim asylum. In the face of this immoral U.S. policy, and the U.S.’s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS.

Technology like ours is playing an increasingly critical role across many sectors of society. What is clear to us is that our development and sales practices have yet to acknowledge the obligation that comes with this. Focusing solely on shareholder value is a race to the bottom, and one that we will not participate in.

We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights.

As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used. We learn from history, and we understand how IBM’s systems were employed in the 1940s to help Hitler. IBM did not take responsibility then, and by the time their role was understood, it was too late. We will not let that happen again. The time to act is now.

We call on you to:

  • Stop selling facial recognition services to law enforcement
  • Stop providing infrastructure to Palantir and any other Amazon partners who enable ICE.
  • Implement strong transparency and accountability measures that include enumerating which law enforcement agencies and companies supporting law enforcement agencies are using Amazon services, and how.

Our company should not be in the surveillance business; we should not be in the policing business; we should not be in the business of supporting those who monitor and oppress marginalized populations.

Sincerely,

Amazonians

Microsoft also came under fire this week from its employees, who called on the company to drop its own contract with ICE, worth over $19 million. The company’s CEO, Satya Nadella, said on Tuesday that Microsoft isn’t directly involved with the government’s policy of separating families at the US-Mexico border.

The leaderships of many technology giants argue that most technological developments could be used for harm. Even the computer itself might never have been sold if every potential harm of its use had been accounted for.

It all comes down to whether the societal benefits outweigh the risks, and right now it seems many agree the risks are too great.

Read more: Amnesty International warns of AI ‘nightmare scenarios’

What are your thoughts on the protests over government contracts?
