Surveillance – AI News

Police use of Clearview AI’s facial recognition increased 26% after Capitol raid
11 January 2021

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping people’s data from across the web without their explicit consent, a practice that has naturally raised some eyebrows. The ACLU has called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for Clearview AI’s system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Whether you call them protesters or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew condemnation from both parties.

In comments to The New York Times, Clearview AI CEO Hoan Ton-That claimed the company witnessed “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.

Given the number of individuals involved, law enforcement faces a gargantuan task in identifying and locating the people who went far beyond exercising their right to peaceful protest and invaded a federal building, caused huge amounts of damage, and threatened elected representatives and staff.

The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

While Clearview AI’s facial recognition tech continues to have widespread use in the US, some police departments have taken the independent decision to ban officers from using such systems due to the well-documented inaccuracies which particularly affect minority communities.

UK and Australia launch joint probe into Clearview AI’s mass data scraping
10 July 2020

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective than Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’
30 June 2020

Detroit Police Chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments come just days after the ACLU (American Civil Liberties Union) lodged a complaint against the Detroit police following the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3,800 from a store in October 2018. A facial recognition algorithm matched a blurry CCTV image to Williams’ driver’s license photo.

Current AI algorithms are known to have a racism problem. Extensive studies have repeatedly shown that facial recognition algorithms are almost 100 percent accurate when used on white males, but have serious problems with darker skin tones and with women.

This racism issue was shown again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, was applied to images of people from BAME communities and frequently reconstructed them with white faces.

Last week, Boston followed in the footsteps of a growing number of cities – including San Francisco and Oakland in California – in banning facial recognition technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

On the other side of the pond, facial recognition tests in the UK have so far been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that it was verifiably accurate in just 19 percent of cases.
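
To put a figure like that in perspective, here is a minimal worked example of the arithmetic behind it. The counts below are the widely reported figures from the Fussey and Murray review (8 of 42 computer-generated matches could be verified as correct); reading “verifiably accurate” as precision is an interpretive assumption, not the report’s own terminology.

```python
# Worked example: precision of a face-matching system (illustrative sketch).
# Counts are the widely reported figures from the Fussey & Murray review.
verified_correct = 8   # matches confirmed to be the right person
total_matches = 42     # all matches the system generated

precision = verified_correct / total_matches
print(f"Verifiable accuracy: {precision:.0%}")  # -> Verifiable accuracy: 19%
```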

The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ over 1,000 experts signed an open letter last week opposing the use of AI for such purposes.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.

The acknowledgement from Detroit’s police chief that current facial recognition technology fails in around 96 percent of cases should be reason enough to halt its use, especially in law enforcement, at least until serious improvements are made.

(Photo by Joshua Hoehne on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error
25 June 2020

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after Robert Williams, a Black man, was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held overnight in a “crowded and filthy” cell without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3,800 from a store in October 2018. A facial recognition algorithm matched a blurry CCTV image to Williams’ driver’s license photo.

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to The New York Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition” and claimed witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms that the department used facial recognition to identify Williams from the security footage, and that an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows a growing number of cities – including San Francisco and Oakland in California – that have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step, crime prediction.

(Photo by ev on Unsplash)

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI
24 June 2020

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include employees working on AI from tech giants including Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Every current facial recognition system is more accurate when detecting white males, and incorrectly flags members of the BAME community as criminals more often when used in a law enforcement setting.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

Amazon uses AI-powered displays to enforce social distancing in warehouses
17 June 2020

Amazon has turned to an AI-powered solution to help maintain social distancing in its vast warehouses.

Companies around the world are having to look at new ways of safely continuing business as we adapt to the “new normal” of life with the coronavirus.

Amazon has used its AI expertise to create what it calls the Distance Assistant. Using a time-of-flight sensor, often found in modern smartphones, the AI measures the distance between employees.

The AI differentiates people from their background, and what it sees is displayed on a 50-inch screen so workers can tell at a glance whether they’re keeping a safe distance.

Augmented reality is used to overlay either a green or red circle underneath each employee. As you can probably guess – a green circle means that the employee is a safe distance from others, while a red circle indicates that person needs to give others some personal space.

The whole solution is run locally and does not require access to the cloud to function. Amazon says it’s only deployed Distance Assistant in a handful of facilities so far but plans to roll out “hundreds” more “over the next few weeks.”
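
Amazon hasn’t published the implementation details, but the core classification logic is easy to picture. Below is a minimal sketch of the distance check, assuming people’s floor positions have already been extracted from the time-of-flight depth data and assuming a two-metre threshold; the function name, coordinate format, and threshold are illustrative, not Amazon’s actual code.

```python
# Illustrative sketch of a Distance Assistant-style check (not Amazon's code).
from itertools import combinations
from math import dist

SAFE_DISTANCE_M = 2.0  # assumed social-distancing threshold

def classify_people(positions):
    """Label each detected person 'green' (safe) or 'red' (too close).

    `positions` is a list of (x, z) floor coordinates in metres, assumed
    to come from a person detector running on the depth-camera feed.
    """
    labels = ["green"] * len(positions)
    for i, j in combinations(range(len(positions)), 2):
        if dist(positions[i], positions[j]) < SAFE_DISTANCE_M:
            labels[i] = labels[j] = "red"  # both people get a red circle
    return labels

print(classify_people([(0.0, 0.0), (1.5, 0.0), (6.0, 2.0)]))
# -> ['red', 'red', 'green']
```

Each label would then drive the colour of the augmented reality circle rendered beneath the corresponding person on the monitor.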

While the solution appears rather draconian, it’s a clever – and arguably necessary – way of helping to keep people safe until a vaccine for the virus is hopefully found. However, it will strengthen concerns that the coronavirus will be used to normalise increased surveillance and erode privacy.

Amazon claims it will be making Distance Assistant open-source to help other companies adapt to the coronavirus pandemic and keep their employees safe.

World’s oldest defence think tank concludes British spies need AI
28 April 2020

The Royal United Services Institute (RUSI) says in an intelligence report that British spies will need to use AI to counter threats.

Based in Westminster, the RUSI is the world’s oldest think tank on international defence and security. Founded in 1831 by the first Duke of Wellington, Sir Arthur Wellesley, the RUSI remains a highly respected institution that’s as relevant today as ever.

AI is rapidly advancing the capabilities of adversaries. In its report, the RUSI says that hackers – both state-sponsored and independent – are likely to use AI for cyberattacks on the web and political systems.

Adversaries “will undoubtedly seek to use AI to attack the UK”, the RUSI notes.

Threats could emerge in a variety of ways. Deepfakes, which use neural networks to generate convincing fake videos and images, are one example of a threat already posed today. With the US elections coming up, there are concerns that deepfakes of political figures could be used for voter manipulation.

AI could also be used for powerful new malware which mutates to avoid detection. Such malware could even infect and take control of emerging technologies such as driverless cars, smart city infrastructure, and drones.

The RUSI believes that humans will struggle to counter AI threats alone and will need the assistance of automation.

“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload,” said Alexander Babuta, one of the report’s authors. “It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures.”

GCHQ, the UK’s signals intelligence service, commissioned the RUSI’s independent report. Ken McCallum, the new head of MI5 – the UK’s domestic counter-intelligence and security agency – has already said that greater use of AI will be one of his priorities.

The RUSI believes AI will be of little value for “predictive intelligence”, such as predicting when a terrorist act is likely to occur before it happens. Highlighting counter-terrorism specifically, the RUSI says such cases are too infrequent to look for patterns compared with other criminal acts, and the motivations for terrorist acts can change very quickly depending on world events.

All of this raises concerns about the automation of discrimination. The RUSI calls for more of an “augmented” intelligence – whereby technology assists sifting through large amounts of data, but decisions are ultimately taken by humans – rather than leaving it all up to the machines.

In terms of global positioning, the RUSI recognises the UK’s strength in AI, with talent emerging from the country’s world-leading universities and capabilities within GCHQ, bodies like the Alan Turing Institute and the Centre for Data Ethics and Innovation, and the private sector.

While it’s widely acknowledged that countries like the US and China have far more resources overall to throw at AI advancements, the RUSI believes the UK has the potential to be a leader in the technology within a much-needed ethical framework. However, the authors say it’s important not to become too preoccupied with the possible downsides.

“There is a risk of stifling innovation if we become overly-focused on hypothetical worst-case outcomes and speculations over some dystopian future AI-driven surveillance network,” argues Babuta.

“Legitimate ethical concerns will be overshadowed unless we focus on likely and realistic uses of AI in the short-to-medium term.”

You can find a copy of the RUSI’s full report here (PDF).

(Photo by Chris Yang on Unsplash)

Aussie police use Clearview AI’s facial recognition to fight child exploitation
15 April 2020

Police in Australia have used Clearview AI’s controversial facial recognition technology to tackle child exploitation.

The Australian Federal Police (AFP) admitted to using Clearview AI’s system despite not having a legislative framework in place for the technology.

Deputy commissioner Karl Kent said that the AFP trialled the facial recognition system but had not entered into any formal arrangements with Clearview AI to procure its technology.

In a statement, opposition party Labor called for Home Affairs Minister Peter Dutton to explain whether the AFP’s investigations into child exploitation were jeopardised by the use of Clearview AI’s technology without legal authorisation:

“Peter Dutton must immediately explain what knowledge he had of Australian Federal Police officers using the Clearview AI facial recognition tool despite the absence of any legislative framework in relation to the use of identity-matching services.”

Clearview AI’s facial recognition was used specifically by the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) to support their vital work.

“The trial was to assess the capability of the Clearview AI system in the context of countering child exploitation,” wrote the AFP.

ACCCE’s testing took place between 2nd November 2019 and 22nd January 2020.

“Searches included images of known individuals, and unknown individuals related to current or past investigations relating to child exploitation,” the AFP said. “Outside of the ACCCE Operational Command there was no visibility that this trial had commenced.”

Clearview AI’s facial recognition has come under stiff opposition due to its controversial practices and extensive links to the far-right.

Hoan Ton-That, founder of Clearview AI, claims to have disassociated from far-right views, movements, and individuals. Ton-That told Huffington Post recently that growing up on the internet did not “serve him well” and “there was a period when I explored a range of ideas—not out of belief in any of them, but out of a desire to search for self and place in the world.”

Clearview AI’s facial recognition system uses a large database consisting of billions of scraped images from across the web. Activists believe the system infringes on people’s right to privacy as they never gave permission for their images to be stored and used in such a way.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”

(Photo by Joey Csunyo on Unsplash)

Clearview AI has been found to have extensive far-right ties
8 April 2020

Controversial facial recognition firm Clearview AI has been found to have extensive ties to far-right individuals and movements.

Clearview AI has come under scrutiny for scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services. Privacy activists criticise the practice as the people in those images never gave their consent.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”

The company’s facial recognition system is used by over 600 law enforcement agencies. Furthermore, a recent leak revealed its client list also includes commercial businesses like Best Buy and Macy’s.

As if the company’s system wasn’t dystopian enough, an extensive investigation by The Huffington Post has revealed links to some rather unsavoury people and movements.

Clearview AI founder Hoan Ton-That reportedly attended a 2016 dinner with white supremacist Richard Spencer that was organised by Jeff Giesea, a financier of the “alt-right” and associate of Palantir founder Peter Thiel.

Ton-That was also part of a Slack channel run by far-right activist Chuck Johnson, known for running the crowdfunding platform WeSearchr, which was predominantly used by white supremacists. The Slack channel also included the webmaster of the neo-Nazi website Daily Stormer, conspiracy theorist Mike Cernovich, and self-avowed “internet troll” Andrew Auernheimer (Auernheimer was among the first clients of Clearview AI lawyer Ekeland).

In January 2017, Chuck Johnson bragged on Facebook that he was “building algorithms to ID all the illegal immigrants for the deportation squads.” A source for Huffington Post said they’d seen Johnson discussing that project with a “bunch of really important people” at Trump’s hotel in DC and introducing them to a man that was likely Ton-That.

According to ex-Breitbart editor and former alt-right member Katie McHugh, Johnson asked to be put in touch with Trump advisor Stephen Miller to pitch a “way to identify every illegal alien in the country.”

Back when Clearview AI was known as Smartcheckr, the firm contracted Douglass Mackey, who pitched the company’s technology to anti-Semitic congressional candidate Paul Nehlen for extreme opposition research during his campaign. Mackey was later found to be running a racist propaganda operation under the pseudonym Ricky Vaughn. Ton-That told Huffington Post that Mackey was only contracted for three weeks and wasn’t authorised to make the offer to Nehlen.

An employee of Clearview AI, Marko Jukic, marketed the company’s technology to police departments. Jukic “published many thousands of extremist words on neoreactionary blogs,” according to Huffington Post.

Jukic’s writings advocated the segregation of Jews, the “generous use” of racial profiling, using military force to “pacify” the “ghettos”, normalising the use of racist terminology, replacing democracy with authoritarianism, and assassinating journalists, while praising the ethnonationalism of Putin’s Russia and musing about the collapse of the US because of “America’s diversity problem”.

As the founder of Clearview AI, Ton-That claims to have disassociated from far-right views, movements, and individuals. He told Huffington Post that growing up on the internet did not “serve him well” and “there was a period when I explored a range of ideas—not out of belief in any of them, but out of a desire to search for self and place in the world. I have finally found it, and the mission to help make America a safer place.”

You can read Huffington Post’s full investigation into Clearview AI’s far-right links here.

Met Police commissioner dismisses critics of facial recognition systems
25 February 2020

The chief commissioner of the Metropolitan Police has dismissed critics of law enforcement using facial recognition systems.

Met Commissioner Cressida Dick was speaking at the Royal United Services Institute think tank on Monday. Much of Dick’s speech was spent on making the case for British police to use modern technologies to tackle crime.

Dick accused critics of police facial recognition technology of being “highly inaccurate or highly ill-informed.”

Needless to say, this angered said critics, who believe Dick is the one who is ill-informed for ignoring an independent report suggesting the technology in question works in just 19 percent of cases.

“I would say it is for critics to justify to the victims of crimes why police should not be allowed to use tech lawfully and proportionally to catch criminals,” Dick argued.

Dick says she welcomes a public debate about facial recognition but attacked organisations such as Big Brother Watch and Liberty, which brought the issue to the attention of the wider public.

“It’s unhelpful for the Met to reduce a serious debate on facial recognition to unfounded accusations of ‘fake news’,” Big Brother Watch tweeted. “Dick would do better to acknowledge and engage with the real, serious concerns – including those in the damning independent report that she ignored.”

Liberty tweeted a similar response: “Fact: Met started using facial recognition after ignoring its own review of two-year trial that said its use of the tech didn’t respect human rights. Another fact: scaremongering and deriding criticisms instead of engaging shows how flimsy their basis for using it really is.”

Met Police tests of facial recognition technology so far have been nothing short of a complete failure.

An initial trial, at the 2016 Notting Hill Carnival, failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

Ironically, the legality of the trials has been called into question. An independent report by Professor Peter Fussey and Dr Daragh Murray last year concluded the six trials they were given access to were probably illegal since they had not accounted for human rights compliance.

Dr Murray said: “This report raises significant concerns regarding the human rights law compliance of the trials.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition].

“Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset, and was not an integral part of the process.”

You can find a copy of the full report here (PDF).

(Image Credit: Met police helmet by Matt Brown under CC BY 2.0 license)
