Police use of Clearview AI’s facial recognition increased 26% after Capitol raid

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping the data of people from across the web without their explicit consent – a practice which has raised serious concerns, including from the ACLU, which called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for Clearview AI’s system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Whether you call them protesters or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew condemnation from both parties.

In comments to the New York Times, Clearview AI CEO Hoan Ton-That claimed the company witnessed “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.

Given the number of individuals involved, law enforcement faces a gargantuan task in identifying and locating the people who went far beyond exercising their right to peaceful protest – invading a federal building, causing huge amounts of damage, and threatening elected representatives and staff.

The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

While Clearview AI’s facial recognition tech continues to have widespread use in the US, some police departments have taken the independent decision to ban officers from using such systems due to the well-documented inaccuracies which particularly affect minority communities.

University College London: Deepfakes are the ‘most serious’ AI crime threat

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first compiled a list of 20 ways AI is expected to be used by criminals within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today still has to be created by humans, such as those working in Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often exhibit patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The deepfake of Pelosi was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences a fake video of the president announcing an imminent strike on somewhere like North Korea could have.

Deepfakes also have obvious potential for fraud – impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat to release such videos – and the embarrassment caused – could lead to some paying a ransom to keep it from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services falsely marketed as AI-powered – for example, security screening and targeted advertising solutions. The researchers believe that leading people to believe a product is AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots” which could get in through a property’s access points to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be prevented through methods such as letterbox cages.

Similarly, the researchers note that while AI-based stalking is damaging for individuals, it isn’t considered a major threat because it cannot operate at scale.

You can find the researchers’ full paper in the journal Crime Science here.

(Photo by Bill Oxford on Unsplash)

Researchers create AI bot to protect the identities of BLM protesters

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest – and, if done legally, to do so without fear of having things like their future job prospects ruined because they’ve been snapped at a demonstration from which a select few may have gone on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

“Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity,” the researchers explain.

Software has been available for some time to blur faces, but recent AI advancements have proved that it’s possible to deblur such images.

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot.

Rather than blur the faces, the bot automatically covers them up with the black fist emoji which has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built-in to social media platforms, but admit it’s unlikely.
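
For illustration, here’s a minimal sketch of the general approach – detect each face, then paste an opaque emoji over it so the underlying pixels are overwritten rather than merely blurred, which means no deblurring model can recover them. This is not the Stanford team’s actual pipeline; the `face_recognition` library, Pillow, and the file names are assumptions for the example.

```python
import face_recognition  # dlib-based face detector (assumed dependency)
from PIL import Image

def cover_faces(photo_path: str, emoji_path: str, out_path: str) -> None:
    # face_locations returns a (top, right, bottom, left) box per face
    boxes = face_recognition.face_locations(
        face_recognition.load_image_file(photo_path)
    )

    photo = Image.open(photo_path).convert("RGB")
    emoji = Image.open(emoji_path).convert("RGBA")  # hypothetical fist-emoji PNG

    for top, right, bottom, left in boxes:
        size = max(right - left, bottom - top)
        overlay = emoji.resize((size, size))
        # Paste using the emoji's alpha channel as the mask; the face
        # pixels underneath are destroyed and cannot be recovered.
        photo.paste(overlay, (left, top), overlay)

    photo.save(out_path)

cover_faces("protest.jpg", "fist_emoji.png", "protest_covered.jpg")
```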

The researchers trained the model for their AI bot on QNRF, a dataset containing around 1.2 million people. However, they warn it’s not foolproof, as an individual could still be identified through other means such as the clothing they’re wearing.

To use the BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo to the web interface here. The open-source repo is available if you want to look at the inner workings.

UK and Australia launch joint probe into Clearview AI’s mass data scraping

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective than Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’

Detroit Police chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties Union) lodged a complaint against the Detroit police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

Current AI algorithms are known to suffer from racial bias. Extensive studies have repeatedly shown that facial recognition algorithms are almost 100 percent accurate when used on white males, but have serious problems when it comes to darker skin tones – and are also less accurate for women.

This bias was demonstrated again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, was applied to images of people from BAME communities.


Last week, Boston followed in the footsteps of an increasing number of cities – like San Francisco and Oakland in California – in banning facial recognition technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

Over the other side of the pond, facial recognition tests in the UK so far have also been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival led to not a single person being identified. A follow-up trial the following year led to no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that it was only verifiably accurate in just 19 percent of cases.

The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ over 1000 experts signed an open letter last week opposing the use of AI for such purposes.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.

The acknowledgement from Detroit’s police chief that current facial recognition technologies do not work in around 96 percent of cases should be reason enough to halt its use, especially for law enforcement, at least until serious improvements are made.

(Photo by Joshua Hoehne on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after black male Robert Williams was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.
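
To see how such a misidentification can happen, here’s a hedged sketch of how one-to-many face matching typically works – not Detroit Police’s actual system, whose vendor pipeline isn’t public. Faces are reduced to embedding vectors and compared by distance against a gallery such as driver’s license photos; a blurry probe image can land within the match threshold of the wrong person. The `face_recognition` library and the file names are assumptions for the example.

```python
import face_recognition  # dlib-based face embeddings (assumed dependency)

# Hypothetical inputs: a blurry CCTV still and one gallery photo
probe = face_recognition.load_image_file("cctv_still.jpg")
gallery = face_recognition.load_image_file("license_photo.jpg")

# Assumes exactly one face is found in each image
probe_enc = face_recognition.face_encodings(probe)[0]
gallery_enc = face_recognition.face_encodings(gallery)[0]

# Euclidean distance between 128-dimensional embeddings;
# this library's conventional match threshold is 0.6.
distance = face_recognition.face_distance([gallery_enc], probe_enc)[0]
print(f"distance={distance:.3f} ->", "MATCH" if distance < 0.6 else "no match")
```

A low-quality probe blurs exactly the features the embedding depends on, which is why such a match is supposed to be treated as an investigative lead rather than evidence.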

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to the New York Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” claiming witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms the department used facial recognition to identify Williams using the security footage and an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows an increasing number of cities – like San Francisco and Oakland in California – which have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step, crime prediction.

(Photo by ev on Unsplash)

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include employees working on AI from tech giants including Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are consistently more accurate when detecting white males, and incorrectly flag members of the BAME community as criminals more often when used in a law enforcement setting.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

Amazon uses AI-powered displays to enforce social distancing in warehouses

Amazon has turned to an AI-powered solution to help maintain social distancing in its vast warehouses.

Companies around the world are having to look at new ways of safely continuing business as we adapt to the “new normal” of life with the coronavirus.

Amazon has used its AI expertise to create what it calls the Distance Assistant. Using a time-of-flight sensor, often found in modern smartphones, the AI measures the distance between employees.

The AI differentiates people from their background, and what it sees is displayed on a 50-inch screen so workers can quickly see whether they’re keeping a safe distance.

Augmented reality is used to overlay either a green or red circle underneath each employee. As you can probably guess – a green circle means that the employee is a safe distance from others, while a red circle indicates that person needs to give others some personal space.
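
The core distance check is simple to picture. Below is a minimal sketch under stated assumptions – Amazon hasn’t published its implementation, and the person IDs, floor-plane coordinates, and two-metre threshold are illustrative: given each detected person’s position from the time-of-flight camera, flag any pair standing too close.

```python
from itertools import combinations

SAFE_DISTANCE_M = 2.0  # assumed social-distancing threshold

def flag_people(positions: dict[str, tuple[float, float]]) -> dict[str, str]:
    """Map each person ID to 'green' (safe) or 'red' (too close)."""
    colours = {pid: "green" for pid in positions}
    for (a, pa), (b, pb) in combinations(positions.items(), 2):
        # Euclidean distance on the floor plane, in metres
        dist = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
        if dist < SAFE_DISTANCE_M:
            colours[a] = colours[b] = "red"
    return colours

# Two workers 1.1 m apart and a third standing further away:
print(flag_people({"p1": (0.0, 0.0), "p2": (1.1, 0.0), "p3": (6.0, 4.0)}))
# -> {'p1': 'red', 'p2': 'red', 'p3': 'green'}
```

In the real system the positions would come from the depth camera’s person detections each frame, with the resulting colour driving the circle rendered under each worker on the display.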

The whole solution is run locally and does not require access to the cloud to function. Amazon says it’s only deployed Distance Assistant in a handful of facilities so far but plans to roll out “hundreds” more “over the next few weeks.”

While the solution appears rather draconian, it’s a clever – and arguably necessary – way of helping to keep people safe until a vaccine for the virus is hopefully found. However, it will strengthen concerns that the coronavirus will be used to normalise increased surveillance and erode privacy.

Amazon claims it will be making Distance Assistant open-source to help other companies adapt to the coronavirus pandemic and keep their employees safe.

The EU’s privacy watchdog takes aim at Clearview AI’s facial recognition

The European Data Protection Board (EDPB) believes use of Clearview AI’s controversial facial recognition system would be illegal.

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

The EDPB has now ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime.”

Furthermore, the watchdog “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI scrapes billions of photos from across the internet for its powerful system, a practice which has come under fire by privacy campaigners. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI last month after calling it a “nightmare scenario” for privacy.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

Backlash over Clearview AI forced the company to announce it will no longer offer its services to private companies. The EU’s ruling will limit Clearview AI’s potential customers even further.

Concerns have grown in recent weeks about facial recognition services amid protests over racial discrimination. Facial recognition services have been repeatedly found to falsely flag minorities, stoking fears they’ll lead to automated racial profiling.

IBM and Amazon have both announced this week they’ll no longer provide facial recognition services to law enforcement and have called on Congress to increase regulation to help ensure future deployments meet ethical standards.

(Photo by Christian Lue on Unsplash)

ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy

The American Civil Liberties Union (ACLU) is suing controversial facial recognition provider Clearview AI over privacy concerns.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“The ACLU is taking its fight to defend privacy rights against the growing threat of this unregulated surveillance technology to the courts, even as we double down on our work in legislatures and city councils nationwide.”

Clearview AI has repeatedly come under fire due to its practice of scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently.

The company’s facial recognition system is used by over 2,200 law enforcement agencies around the world – and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

In a press release, the ACLU wrote:

“The New York Times revealed the company was secretly capturing untold numbers of biometric identifiers for purposes of surveillance and tracking, without notice to the individuals affected.

The company’s actions embodied the nightmare scenario privacy advocates long warned of, and accomplished what many companies — such as Google — refused to try due to ethical concerns.”

However, even more concerning is Clearview AI’s extensive ties with the far-right.

Clearview AI founder Hoan Ton-That claims to have since disassociated from far-right views, movements, and individuals. Ekeland, meanwhile, has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

The ACLU says its lawsuit represents the first “to force any face recognition surveillance company to answer directly to groups representing survivors of domestic violence and sexual assault, undocumented immigrants, and other vulnerable communities uniquely harmed by face recognition surveillance.”

Facial recognition technologies have become a key focus for the ACLU.

Back in March, AI News reported the ACLU was suing the US government for blocking a probe into the use of facial recognition technology at airports. In 2018, the union caught our attention for highlighting the inaccuracy of Amazon’s facial recognition algorithm – especially when identifying people of colour and women.

“Clearview’s actions represent one of the largest threats to personal privacy by a private company our country has faced,” said Jay Edelson of Edelson PC, lead counsel handling this case on a pro bono basis.

“If a well-funded, politically connected company can simply amass information to track all of us, we are living in a different America.”
