privacy – AI News

Google is telling its scientists to give AI a ‘positive’ spin
AI News – Thu, 24 Dec 2020

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with its legal, policy, and public relations teams before publishing anything on topics which could be deemed sensitive, such as sentiment analysis and the categorisation of people by race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one party’s word against the other’s, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom – but it increasingly appears that this is not the case.

(Photo by Mitchell Luo on Unsplash)

University College London: Deepfakes are the ‘most serious’ AI crime threat
AI News – Thu, 06 Aug 2020

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. 31 experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today must, at least, be created by humans – such as the workers in the likes of Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often exhibit patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile cases so far involved US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The Pelosi video was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences of a fake video of the president announcing an imminent strike on somewhere like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat to release such videos – and the embarrassment caused – could lead some to pay a ransom to keep them from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services falsely marketed as AI, such as security screening and targeted advertising solutions. The researchers believe leading people to believe such products are AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots” which could enter through a property’s access points to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be prevented through methods such as letterbox cages.

Similarly, the researchers note that AI-based stalking is damaging for individuals but isn’t considered a major threat as it cannot operate at scale.

You can find the researchers’ full paper in the journal Crime Science.

(Photo by Bill Oxford on Unsplash)

Researchers create AI bot to protect the identities of BLM protesters
AI News – Wed, 29 Jul 2020

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest – and, if done legally, to do so without fear of having things like their future job prospects ruined because they’ve been snapped at a demonstration from which a select few may have gone on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

“Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity,” the researchers explain.

Software has been available for some time to blur faces, but recent AI advancements have proved that it’s possible to deblur such images.

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot.

Rather than blur the faces, the bot automatically covers them up with the black fist emoji which has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built-in to social media platforms, but admit it’s unlikely.
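The bot’s core operation – detect faces, then paint an opaque marker over each bounding box – can be sketched as below. This is an illustrative sketch, not the researchers’ code: the face detector is assumed (any off-the-shelf detector returning bounding boxes would do), and a plain black square stands in for the fist emoji.

```python
def cover_faces(image, faces, marker=(0, 0, 0)):
    """Overwrite each face bounding box with an opaque marker colour.

    image: H x W grid (list of rows) of (r, g, b) pixel tuples.
    faces: list of (x, y, w, h) boxes, e.g. from a face detector.
    """
    covered = [row[:] for row in image]  # copy so the original is untouched
    for (x, y, w, h) in faces:
        for row in range(y, min(y + h, len(covered))):
            for col in range(x, min(x + w, len(covered[0]))):
                covered[row][col] = marker
    return covered

# Tiny 4x4 white "photo" with one hypothetical detected face at (1, 1, 2, 2).
img = [[(255, 255, 255)] * 4 for _ in range(4)]
out = cover_faces(img, [(1, 1, 2, 2)])
```

Because the marker is opaque rather than a blur, no pixel information from the face survives for a deblurring model to recover.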

The researchers trained the model for their AI bot on QNRF, a dataset consisting of around 1.2 million people. However, they warn it’s not foolproof, as an individual could still be identified through other means such as their clothing.

To use BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo via its web interface. The open-source repo is available if you want to look at the inner workings.

UK and Australia launch joint probe into Clearview AI’s mass data scraping
AI News – Fri, 10 Jul 2020

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective than Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error
AI News – Thu, 25 Jun 2020

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after Robert Williams, a Black man, was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3,800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.
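Automated systems of this kind typically reduce each photo to an embedding vector and return the enrolled identity with the highest similarity score – and they return a “best” match even when the probe image is too blurry for the score to mean much. A minimal sketch of that failure mode (the embeddings below are made-up toy vectors, not the output of any real face-recognition model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def best_match(probe, gallery):
    """Return (identity, score) for the closest gallery embedding.

    Note: this always returns *some* identity, however poor the best
    score is -- which is why confidence thresholds and human review matter.
    """
    return max(((name, cosine(probe, emb)) for name, emb in gallery.items()),
               key=lambda pair: pair[1])

# Hypothetical gallery of enrolled driver's-licence embeddings.
gallery = {"person_a": [0.9, 0.1, 0.2], "person_b": [0.1, 0.8, 0.5]}
blurry_probe = [0.4, 0.4, 0.4]  # stands in for a low-quality CCTV still
name, score = best_match(blurry_probe, gallery)
```

Without a similarity threshold and corroborating evidence, the top-ranked candidate can end up treated as a positive identification even when the underlying match is weak.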

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to the NY Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” and claims witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms the department used facial recognition to identify Williams using the security footage and an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows an increasing number of cities – including San Francisco and Oakland in California – which have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step: crime prediction.

(Photo by ev on Unsplash)

Amazon uses AI-powered displays to enforce social distancing in warehouses
AI News – Wed, 17 Jun 2020

Amazon has turned to an AI-powered solution to help maintain social distancing in its vast warehouses.

Companies around the world are having to look at new ways of safely continuing business as we adapt to the “new normal” of life with the coronavirus.

Amazon has used its AI expertise to create what it calls the Distance Assistant. Using a time-of-flight sensor, often found in modern smartphones, the AI measures the distance between employees.

The AI is used to differentiate people from their background and what it sees is displayed on a 50-inch screen for workers to quickly see whether they’re adhering to keeping a safe distance.

Augmented reality is used to overlay either a green or red circle underneath each employee. As you can probably guess – a green circle means that the employee is a safe distance from others, while a red circle indicates that person needs to give others some personal space.
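The underlying decision logic can be sketched simply: given each detected person’s position from the depth sensor, flag anyone standing closer than the distancing threshold to someone else. This is a hypothetical illustration – the coordinates and the two-metre threshold are assumptions, not details Amazon has published:

```python
import math

SAFE_DISTANCE_M = 2.0  # assumed distancing threshold

def flag_people(positions, threshold=SAFE_DISTANCE_M):
    """Return 'green'/'red' per person: red if anyone else is too close.

    positions: list of (x, y) floor coordinates in metres, as might be
    derived from a time-of-flight depth sensor.
    """
    flags = []
    for i, (xi, yi) in enumerate(positions):
        too_close = any(
            math.hypot(xi - xj, yi - yj) < threshold
            for j, (xj, yj) in enumerate(positions) if j != i
        )
        flags.append("red" if too_close else "green")
    return flags

# Two workers 1.5 m apart and a third standing 5 m away.
flags = flag_people([(0.0, 0.0), (1.5, 0.0), (5.0, 0.0)])
```

Each frame, the flags would drive the coloured circle rendered under the corresponding person on the 50-inch display.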

The whole solution is run locally and does not require access to the cloud to function. Amazon says it’s only deployed Distance Assistant in a handful of facilities so far but plans to roll out “hundreds” more “over the next few weeks.”

While the solution appears rather draconian, it’s a clever – and arguably necessary – way of helping to keep people safe until a vaccine for the virus is hopefully found. However, it will strengthen concerns that the coronavirus will be used to normalise increased surveillance and erode privacy.

Amazon claims it will be making Distance Assistant open-source to help other companies adapt to the coronavirus pandemic and keep their employees safe.

The EU’s privacy watchdog takes aim at Clearview AI’s facial recognition
AI News – Thu, 11 Jun 2020

The European Data Protection Board (EDPB) believes use of Clearview AI’s controversial facial recognition system would be illegal.

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

The EDPB has now ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime.”

Furthermore, the watchdog “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI scrapes billions of photos from across the internet for its powerful system, a practice which has come under fire by privacy campaigners. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI last month after calling it a “nightmare scenario” for privacy.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

Backlash over Clearview AI forced the company to announce it will no longer offer its services to private companies. The EU’s ruling will limit Clearview AI’s potential customers even further.

Concerns have grown in recent weeks about facial recognition services amid protests over racial discrimination. Facial recognition services have been repeatedly found to falsely flag minorities, stoking fears they’ll lead to automated racial profiling.

IBM and Amazon have both announced this week they’ll no longer provide facial recognition services to law enforcement and have called on Congress to increase regulation to help ensure future deployments meet ethical standards.

(Photo by Christian Lue on Unsplash)

ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy
AI News – Fri, 29 May 2020

The American Civil Liberties Union (ACLU) is suing controversial facial recognition provider Clearview AI over privacy concerns.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“The ACLU is taking its fight to defend privacy rights against the growing threat of this unregulated surveillance technology to the courts, even as we double down on our work in legislatures and city councils nationwide.”

Clearview AI has repeatedly come under fire due to its practice of scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently.

The company’s facial recognition system is used by over 2,200 law enforcement agencies around the world – and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

In a press release, the ACLU wrote:

“The New York Times revealed the company was secretly capturing untold numbers of biometric identifiers for purposes of surveillance and tracking, without notice to the individuals affected.

The company’s actions embodied the nightmare scenario privacy advocates long warned of, and accomplished what many companies — such as Google — refused to try due to ethical concerns.”

However, even more concerning is Clearview AI’s extensive ties with the far-right.

Clearview AI founder Hoan Ton-That claims to have since disassociated from far-right views, movements, and individuals. Ekeland, meanwhile, has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

The ACLU says its lawsuit represents the first “to force any face recognition surveillance company to answer directly to groups representing survivors of domestic violence and sexual assault, undocumented immigrants, and other vulnerable communities uniquely harmed by face recognition surveillance.”

Facial recognition technologies have become a key focus for the ACLU.

Back in March, AI News reported the ACLU was suing the US government for blocking a probe into the use of facial recognition technology at airports. In 2018, the union caught our attention for highlighting the inaccuracy of Amazon’s facial recognition algorithm – especially when identifying women and people of colour.

“Clearview’s actions represent one of the largest threats to personal privacy by a private company our country has faced,” said Jay Edelson of Edelson PC, lead counsel handling this case on a pro bono basis.

“If a well-funded, politically connected company can simply amass information to track all of us, we are living in a different America.”

Clearview AI has been found to have extensive far-right ties
AI News – Wed, 08 Apr 2020

Controversial facial recognition firm Clearview AI has been found to have extensive ties to far-right individuals and movements.

Clearview AI has come under scrutiny for scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services. Privacy activists criticise the practice as the people in those images never gave their consent.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”

The company’s facial recognition system is used by over 600 law enforcement agencies. Furthermore, a recent leak revealed its client list also includes commercial businesses like Best Buy and Macy’s.

As if the company’s system wasn’t dystopian enough, an extensive investigation by The Huffington Post has revealed extensive links to some rather unsavoury people and movements.

Clearview AI founder Hoan Ton-That reportedly attended a 2016 dinner with white supremacist Richard Spencer that was organised by Jeff Giesea, a financier of the “alt-right” and associate of Palantir founder Peter Thiel.

Ton-That was also part of a Slack channel run by far-right activist Chuck Johnson, known for running crowdfunding platform WeSearchr that was predominately used by white supremacists. The Slack channel also included the webmaster of neo-Nazi website Daily Stormer, conspiracy theorist Mike Cernovich, and self-avowed “internet troll” Andrew Auernheimer (Auernheimer was among the first clients of Clearview AI lawyer Ekeland).

In January 2017, Chuck Johnson bragged on Facebook that he was “building algorithms to ID all the illegal immigrants for the deportation squads.” A source for the Huffington Post said they’d seen Johnson discussing that project with a “bunch of really important people” at Trump’s hotel in DC and introducing them to a man who was likely Ton-That.

According to ex-Breitbart editor and former alt-right member Katie McHugh, Johnson asked to be put in touch with Trump advisor Stephen Miller to pitch a “way to identify every illegal alien in the country.”

Back when Clearview AI was known as Smartcheckr, the firm contracted Douglass Mackey, who pitched the company’s technology to anti-Semitic congressional candidate Paul Nehlen for extreme opposition research on campaign rivals. Mackey was later found to be the overseer of a racist propaganda operation under the pseudonym Ricky Vaughn. Ton-That told the Huffington Post that Mackey was only contracted for three weeks and wasn’t authorised to make the offer to Nehlen.

Clearview AI employee Marko Jukic, who marketed the company’s technology to police departments, “published many thousands of extremist words on neoreactionary blogs,” according to Huffington Post.

Jukic’s writings advocated the segregation of Jews, the “generous use” of racial profiling, using military force to “pacify” the “ghettos,” normalising the use of racist terminology, the replacement of democracy with authoritarianism, and the assassination of journalists; they also praised the ethnonationalism of Putin’s Russia while musing on the collapse of the US because of “America’s diversity problem”.

Ton-That, Clearview AI’s founder, claims to have disassociated himself from far-right views, movements, and individuals. He told Huffington Post that growing up on the internet did not “serve him well” and that “there was a period when I explored a range of ideas—not out of belief in any of them, but out of a desire to search for self and place in the world. I have finally found it, and the mission to help make America a safer place.”

You can read Huffington Post’s full investigation into Clearview AI’s far-right links here.

The post Clearview AI has been found to have extensive far-right ties appeared first on AI News.

Met Police commissioner dismisses critics of facial recognition systems https://news.deepgeniusai.com/2020/02/25/met-police-commissioner-critics-facial-recognition-systems/ https://news.deepgeniusai.com/2020/02/25/met-police-commissioner-critics-facial-recognition-systems/#respond Tue, 25 Feb 2020 17:09:27 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=6426 The chief commissioner of the Metropolitan Police has dismissed critics of law enforcement using facial recognition systems. Met Commissioner Cressida Dick was speaking at the Royal United Services Institute think tank on Monday. Much of Dick’s speech was spent on making the case for British police to use modern technologies to tackle crime. Dick accused... Read more »

The post Met Police commissioner dismisses critics of facial recognition systems appeared first on AI News.

The chief commissioner of the Metropolitan Police has dismissed critics of law enforcement using facial recognition systems.

Met Commissioner Cressida Dick was speaking at the Royal United Services Institute think tank on Monday. Much of Dick’s speech was spent on making the case for British police to use modern technologies to tackle crime.

Dick accused critics of police facial recognition technology of being “highly inaccurate or highly ill-informed.”

Needless to say, this angered said critics, who believe Dick is the one who is ill-informed for ignoring an independent report suggesting the technology in question works in just 19 percent of cases.

“I would say it is for critics to justify to the victims of crimes why police should not be allowed to use tech lawfully and proportionally to catch criminals,” Dick argued.

Dick says she welcomes a public debate about facial recognition but attacked organisations such as Big Brother Watch and Liberty, which brought the issue to the attention of the wider public.

“It’s unhelpful for the Met to reduce a serious debate on facial recognition to unfounded accusations of ‘fake news’,” Big Brother Watch tweeted. “Dick would do better to acknowledge and engage with the real, serious concerns – including those in the damning independent report that she ignored.”

Liberty tweeted a similar response: “Fact: Met started using facial recognition after ignoring its own review of two-year trial that said its use of the tech didn’t respect human rights. Another fact: scaremongering and deriding criticisms instead of engaging shows how flimsy their basis for using it really is.”

Met Police tests of facial recognition technology so far have been nothing short of a complete failure.

An initial trial, at the 2016 Notting Hill Carnival, failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

Ironically, the legality of the trials themselves has been called into question. An independent report by Professor Peter Fussey and Dr Daragh Murray last year concluded that the six trials they were given access to were probably illegal, since they had not accounted for human rights compliance.

Dr Murray said: “This report raises significant concerns regarding the human rights law compliance of the trials.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR.

“Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset, and was not an integral part of the process.”

You can find a copy of the full report here (PDF).

(Image Credit: Met police helmet by Matt Brown under CC BY 2.0 license)

