surveillance – AI News
https://news.deepgeniusai.com

Researchers create AI bot to protect the identities of BLM protesters
Wed, 29 Jul 2020
https://news.deepgeniusai.com/2020/07/29/researchers-create-ai-bot-protect-identities-blm-protesters/

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest. Those protesting lawfully should be able to do so without fear that being photographed at a demonstration will harm their future job prospects, even if a select few at that demonstration went on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

“Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity,” the researchers explain.

Software to blur faces has been available for some time, but recent AI advancements have shown that such blurring can often be reversed.

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot:

Rather than blur the faces, the bot automatically covers them with the black fist emoji which has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built into social media platforms, but admit it’s unlikely.
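The covering step itself is simple once face locations are known. Below is a minimal, hypothetical sketch of the idea (not the researchers’ actual code): given bounding boxes from any off-the-shelf face detector, it irreversibly overwrites the pixels inside each box. In practice an emoji image would be alpha-composited with a library such as Pillow; a solid fill keeps the sketch dependency-free.

```python
def cover_faces(image, boxes, cover=(0, 0, 0)):
    """Return a copy of `image` with each face region overwritten.

    image: list of rows, each row a list of (r, g, b) pixel tuples.
    boxes: (x, y, w, h) face bounding boxes, assumed to come from a
           separate face detector (detection is out of scope here).
    The real bot pastes a fist emoji; a solid fill is used in this
    sketch so it runs without any imaging dependencies.
    """
    out = [row[:] for row in image]          # copy so input is untouched
    for (x, y, w, h) in boxes:
        for row in out[y:y + h]:             # rows covered by the box
            row[x:x + w] = [cover] * w       # overwrite, don't blur
    return out
```

Because the original pixel values are discarded rather than smoothed, no deblurring model can recover the face from the output, which is the point of covering rather than blurring.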

The researchers trained the model for their AI bot on QNRF, a crowd dataset containing around 1.2 million people. However, they warn it’s not foolproof, as an individual could still be identified through other means such as the clothing they’re wearing.

To use the BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo to the web interface here. The open source repo is available if you want to look at the inner workings.

UK and Australia launch joint probe into Clearview AI’s mass data scraping
Fri, 10 Jul 2020
https://news.deepgeniusai.com/2020/07/10/uk-australia-probe-clearview-ai-mass-data-scraping/

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia evidently take a different view from Ekeland’s and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’
Tue, 30 Jun 2020
https://news.deepgeniusai.com/2020/06/30/detroit-police-chief-ai-face-recognition/

Detroit Police chief James Craig has acknowledged that AI-powered face recognition fails the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties Union) lodged a complaint against the Detroit police following the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

Current AI algorithms are known to have a racial bias problem. Extensive studies have repeatedly shown that facial recognition algorithms are almost 100 percent accurate when used on white males, but perform significantly worse on people with darker skin and on women.

This bias was demonstrated again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, produced distorted results when applied to a variety of people from BAME communities.


Last week, Boston followed an increasing number of cities – including San Francisco and Oakland in California – in banning facial recognition technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

On the other side of the pond, facial recognition tests in the UK have so far been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that it was verifiably accurate in just 19 percent of cases.
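The trial figures reported above map directly onto the standard precision metric: the fraction of a system’s alerts that identify the right person. A small illustrative calculation (the function name and the 19-percent breakdown are assumptions for illustration, not taken from the report):

```python
def alert_precision(true_matches, false_matches):
    """Fraction of alerts that correctly identified someone."""
    total = true_matches + false_matches
    return 0.0 if total == 0 else true_matches / total

# 2017 Carnival trial as reported: no legitimate matches, 35 false positives.
carnival = alert_precision(0, 35)     # every alert was wrong -> 0.0

# A system "verifiably accurate in just 19 percent of cases" corresponds
# to, e.g., 19 correct alerts out of every 100 raised.
met_trials = alert_precision(19, 81)  # 0.19
```

Put this way, four out of every five people flagged by such a system are flagged wrongly, which is the core of the critics’ objection.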

The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’, over 1,000 experts signed an open letter last week opposing the use of AI for such purposes.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.

The acknowledgement from Detroit’s police chief that current facial recognition technologies fail in around 96 percent of cases should be reason enough to halt their use, especially in law enforcement, at least until serious improvements are made.

(Photo by Joshua Hoehne on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error
Thu, 25 Jun 2020
https://news.deepgeniusai.com/2020/06/25/aclu-uncovers-wrongful-arrest-ai-error/

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after black male Robert Williams was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to the NY Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” and claims witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms the department used facial recognition to identify Williams from the security footage, and that an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows an increasing number of cities – including San Francisco and Oakland in California – which have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step: crime prediction.

(Photo by ev on Unsplash)

The EU’s privacy watchdog takes aim at Clearview AI’s facial recognition
Thu, 11 Jun 2020
https://news.deepgeniusai.com/2020/06/11/eu-privacy-watchdog-aim-clearview-ai-facial-recognition/

The European Data Protection Board (EDPB) believes use of Clearview AI’s controversial facial recognition system would be illegal.

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

The EDPB has now ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime.”

Furthermore, the watchdog “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI scrapes billions of photos from across the internet for its powerful system, a practice which has come under fire by privacy campaigners. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI last month after calling it a “nightmare scenario” for privacy.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

Backlash over Clearview AI forced the company to announce it will no longer offer its services to private companies. The EU’s ruling will limit Clearview AI’s potential customers even further.

Concerns have grown in recent weeks about facial recognition services amid protests over racial discrimination. Facial recognition services have repeatedly been found to falsely flag minorities, stoking fears they’ll lead to automated racial profiling.

IBM and Amazon have both announced this week that they’ll no longer provide facial recognition services to law enforcement, and have called on Congress to increase regulation to help ensure future deployments meet ethical standards.

(Photo by Christian Lue on Unsplash)

World’s oldest defence think tank concludes British spies need AI
Tue, 28 Apr 2020
https://news.deepgeniusai.com/2020/04/28/world-oldest-defence-think-tank-british-spies-ai/

The Royal United Services Institute (RUSI) says in an intelligence report that British spies will need to use AI to counter threats.

Based in Westminster, the RUSI is the world’s oldest think tank on international defence and security. Founded in 1831 by the first Duke of Wellington, Sir Arthur Wellesley, the RUSI remains a highly respected institution that’s as relevant today as ever.

AI is rapidly advancing the capabilities of adversaries. In its report, the RUSI says that hackers – both state-sponsored and independent – are likely to use AI for cyberattacks on the web and political systems.

Adversaries “will undoubtedly seek to use AI to attack the UK”, the RUSI notes.

Threats could emerge in a variety of ways. Deepfakes, which use a neural network to generate convincing fake videos and images, are one example of a threat already being posed today. With the US elections coming up, there are concerns that deepfakes of political figures could be used for voter manipulation.

AI could also be used for powerful new malware which mutates to avoid detection. Such malware could even infect and take control of emerging technologies such as driverless cars, smart city infrastructure, and drones.

The RUSI believes that humans will struggle to counter AI threats alone and will need the assistance of automation.

“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload,” said Alexander Babuta, one of the report’s authors. “It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures.”

GCHQ, the UK’s signals intelligence service, commissioned the RUSI’s independent report. Ken McCallum, the new head of MI5 – the UK’s domestic counter-intelligence and security agency – has already said that greater use of AI will be one of his priorities.

The RUSI believes AI will be of little value for ‘predictive intelligence’ – predicting, for example, when a terrorist act is likely to occur. Highlighting counter-terrorism specifically, the RUSI says such acts are too infrequent to yield reliable patterns compared with other criminal acts, and the motivations behind them can change quickly in response to world events.

All of this raises concerns about the automation of discrimination. The RUSI calls for more of an “augmented” intelligence – whereby technology assists sifting through large amounts of data, but decisions are ultimately taken by humans – rather than leaving it all up to the machines.

In terms of global positioning, the RUSI recognises the UK’s strength in AI, with talent emerging from the country’s world-leading universities, capabilities within GCHQ, bodies like the Alan Turing Institute and the Centre for Data Ethics and Innovation, and still more in the private sector.

While it’s widely acknowledged that countries like the US and China have far more resources overall to throw at AI advancements, the RUSI believes the UK has the potential to be a leader in the technology within a much-needed ethical framework. However, it says it’s important not to become too preoccupied with the possible downsides.

“There is a risk of stifling innovation if we become overly-focused on hypothetical worst-case outcomes and speculations over some dystopian future AI-driven surveillance network,” argues Babuta.

“Legitimate ethical concerns will be overshadowed unless we focus on likely and realistic uses of AI in the short-to-medium term.”

You can find a copy of the RUSI’s full report here (PDF).

(Photo by Chris Yang on Unsplash)

Clearview AI has been found to have extensive far-right ties
Wed, 08 Apr 2020
https://news.deepgeniusai.com/2020/04/08/clearview-ai-found-extensive-ties-far-right/

Controversial facial recognition firm Clearview AI has been found to have extensive ties to far-right individuals and movements.

Clearview AI has come under scrutiny for scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services. Privacy activists criticise the practice as the people in those images never gave their consent.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”

The company’s facial recognition system is used by over 600 law enforcement agencies. Furthermore, a recent leak revealed its client list also includes commercial businesses like Best Buy and Macy’s.

As if the company’s system wasn’t dystopian enough, an extensive investigation by The Huffington Post has revealed extensive links to some rather unsavoury people and movements.

Clearview AI founder Hoan Ton-That reportedly attended a 2016 dinner with white supremacist Richard Spencer that was organised by Jeff Giesea, a financier of the “alt-right” and associate of Palantir founder Peter Thiel.

Ton-That was also part of a Slack channel run by far-right activist Chuck Johnson, known for running crowdfunding platform WeSearchr that was predominately used by white supremacists. The Slack channel also included the webmaster of neo-Nazi website Daily Stormer, conspiracy theorist Mike Cernovich, and self-avowed “internet troll” Andrew Auernheimer (Auernheimer was among the first clients of Clearview AI lawyer Ekeland).

In January 2017, Chuck Johnson bragged on Facebook that he was “building algorithms to ID all the illegal immigrants for the deportation squads.” A source for Huffington Post said they’d seen Johnson discussing that project with a “bunch of really important people” at Trump’s hotel in DC and introducing them to a man that was likely Ton-That.

According to ex-Breitbart editor and former alt-right member Katie McHugh, Johnson asked to be put in touch with Trump advisor Stephen Miller to pitch a “way to identify every illegal alien in the country.”

Back when Clearview AI was known as Smartcheckr, the firm contracted Douglass Mackey who pitched the company’s technology to anti-Semitic congressional candidate Paul Nehlen for extreme campaign opposition research. Mackey was later found to be the overseer of a racist propaganda operation under the pseudonym of Ricky Vaughn. Ton-That told Huffington Post that Mackey was only contracted for three weeks and wasn’t authorised to make the offer to Nehlen.

An employee of Clearview AI, Marko Jukic, marketed the company’s technology to police departments. Jukic “published many thousands of extremist words on neoreactionary blogs,” according to Huffington Post.

Jukic’s writings advocated the segregation of Jews, the “generous use” of racial profiling, using military force to “pacify” the “ghettos,” normalising the use of racist terminology, replacing democracy with authoritarianism, and the assassination of journalists, while praising the ethnonationalism of Putin’s Russia and musing on the collapse of the US because of “America’s diversity problem”.

As the founder of Clearview AI, Ton-That claims to have disassociated himself from far-right views, movements, and individuals. He told Huffington Post that growing up on the internet did not “serve him well” and that “there was a period when I explored a range of ideas—not out of belief in any of them, but out of a desire to search for self and place in the world. I have finally found it, and the mission to help make America a safer place.”

You can read Huffington Post’s full investigation into Clearview AI’s far-right links here.

Met Police commissioner dismisses critics of facial recognition systems
Tue, 25 Feb 2020
https://news.deepgeniusai.com/2020/02/25/met-police-commissioner-critics-facial-recognition-systems/

The chief commissioner of the Metropolitan Police has dismissed critics of law enforcement using facial recognition systems.

Met Commissioner Cressida Dick was speaking at the Royal United Services Institute think tank on Monday. Much of Dick’s speech was spent on making the case for British police to use modern technologies to tackle crime.

Dick accused critics of police facial recognition technology of being “highly inaccurate or highly ill-informed.”

Needless to say, this angered said critics, who argue that Dick is the one who is ill-informed, having ignored an independent report which suggests the technology in question works in just 19 percent of cases.

“I would say it is for critics to justify to the victims of crimes why police should not be allowed to use tech lawfully and proportionally to catch criminals,” Dick argued.

Dick says she welcomes a public debate about facial recognition but attacked organisations such as Big Brother Watch and Liberty, which brought the issue to wider public attention.

“It’s unhelpful for the Met to reduce a serious debate on facial recognition to unfounded accusations of ‘fake news’,” Big Brother Watch tweeted. “Dick would do better to acknowledge and engage with the real, serious concerns – including those in the damning independent report that she ignored.”

Liberty tweeted a similar response: “Fact: Met started using facial recognition after ignoring its own review of two-year trial that said its use of the tech didn’t respect human rights. Another fact: scaremongering and deriding criticisms instead of engaging shows how flimsy their basis for using it really is.”

Met Police tests of facial recognition technology have so far been nothing short of a complete failure.

An initial trial, at the 2016 Notting Hill Carnival, failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

Ironically, the legality of the trials themselves has been called into question. An independent report by Professor Peter Fussey and Dr Daragh Murray last year concluded that the six trials they were given access to were probably illegal, since they had not accounted for human rights compliance.

Dr Murray said: “This report raises significant concerns regarding the human rights law compliance of the trials.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR.

“Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset, and was not an integral part of the process.”

You can find a copy of the full report here (PDF).

(Image Credit: Met police helmet by Matt Brown under CC BY 2.0 license)

UK police are concerned AI will lead to bias and over-reliance on automation
Tue, 17 Sep 2019
https://news.deepgeniusai.com/2019/09/17/uk-police-concerned-ai-bias-automation/

British police have expressed concern that using AI in their operations may lead to increased bias and an over-reliance on automation.

A study commissioned by UK government advisory body the Centre for Data Ethics and Innovation warned that police felt AI may “amplify” prejudices.

Fifty experts were interviewed by the Royal United Services Institute (RUSI) for the research, including senior police officers.

Racial profiling continues to be a huge problem: young black men are stopped far more often than young white men. The experts interviewed by the RUSI are worried these human prejudices could make their way into algorithms if they're trained on existing police data.

The report also notes that individuals from disadvantaged backgrounds tend to use public transport more. With data likely to be collected from public transport use, this increases the likelihood of those individuals being flagged.

The accuracy of facial recognition algorithms has often been questioned. Earlier this year, the Algorithmic Justice League tested the major facial recognition systems and found that the algorithms particularly struggled with darker-skinned females.

A similar report published by the American Civil Liberties Union focused on Amazon's Rekognition facial recognition system. When tested against members of Congress, it incorrectly flagged those with darker skin more often.

Both findings show the potentially devastating societal impact if such technology were rolled out publicly today. It's good to hear British authorities are at least aware of the potential complications.
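The kind of disparity the ACLU measured can be illustrated with a minimal audit sketch: run the matcher against people known not to be in the database, then compare false-match rates across demographic groups. Everything here (the function name, the toy data, the group labels) is invented for illustration and is not the ACLU's actual methodology or data.

```python
from collections import defaultdict

def false_match_rates(results):
    """Compute the false-match rate per demographic group.

    `results` is a list of (group, predicted_match, is_true_match) tuples,
    e.g. outcomes of matching photos of people known NOT to be in the
    database. A fair system should show similar rates across groups.
    """
    trials = defaultdict(int)
    false_matches = defaultdict(int)
    for group, predicted_match, is_true_match in results:
        if not is_true_match:  # only true non-matches can become false matches
            trials[group] += 1
            if predicted_match:
                false_matches[group] += 1
    return {g: false_matches[g] / trials[g] for g in trials}

# Toy data mimicking the ACLU-style finding: 4/20 false matches for one
# group versus 1/20 for the other.
results = (
    [("darker-skinned", True, False)] * 4
    + [("darker-skinned", False, False)] * 16
    + [("lighter-skinned", True, False)] * 1
    + [("lighter-skinned", False, False)] * 19
)
rates = false_match_rates(results)
# rates == {"darker-skinned": 0.2, "lighter-skinned": 0.05}
```

A gap like the 0.2 versus 0.05 above is exactly the signal such audits look for before any deployment decision.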

The RUSI reports that experts in the study want to see clearer guidelines established for acceptable use of the technology. They hope this will provide confidence to police forces to adopt such potentially beneficial new technologies, but in a safe and responsible way.

“For many years police forces have looked to be innovative in their use of technology to protect the public and prevent harm and we continue to explore new approaches to achieve these aims,” Assistant Chief Constable Jonathan Drake told BBC News.

“But our values mean we police by consent, so anytime we use new technology we consult with interested parties to ensure any new tactics are fair, ethical and producing the best results for the public.”

You can find the full results of the RUSI’s study here.

The post UK police are concerned AI will lead to bias and over-reliance on automation appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/09/17/uk-police-concerned-ai-bias-automation/feed/ 0
Spy connected with targets on LinkedIn using AI-generated pic https://news.deepgeniusai.com/2019/06/14/spy-targets-linkedin-ai-generated-pic/ https://news.deepgeniusai.com/2019/06/14/spy-targets-linkedin-ai-generated-pic/#respond Fri, 14 Jun 2019 14:41:27 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=5757 A spy used a profile pic generated by AI to connect with targets on LinkedIn, further showing how blurred the lines between real and fake are becoming. Katie Jones was supposed to be a thirty-something redhead who worked at a leading think tank and had serious connections. Connections included people with political weight like a... Read more »

The post Spy connected with targets on LinkedIn using AI-generated pic appeared first on AI News.

]]>
A spy used a profile pic generated by AI to connect with targets on LinkedIn, further showing how blurred the lines between real and fake are becoming.

Katie Jones was supposed to be a thirty-something redhead who worked at a leading think tank and had serious connections.

Her connections included people with political weight, such as a deputy assistant secretary of state, and spanned groups ranging from the centrist Brookings Institution to the right-wing Heritage Foundation. It's the connections not on Miss Jones' profile, however, that people should be concerned about.

The Associated Press (AP) found the profile was entirely fake and typical of espionage campaigns on the networking site. “It smells a lot like some sort of state-run operation,” said Jonas Parello-Plesner, program director at Denmark-based think tank Alliance of Democracies Foundation.

Parello-Plesner was targeted in an espionage attack over LinkedIn a few years ago, showing it’s not a new phenomenon. However, advancements in AI are making the creation of convincing fake profiles easier than ever.

A closer examination of Jones' alleged profile pic, conducted by AP's Raphael Satter, highlighted that there are still some telltale signs of a fake image.

The issue of deepfakes is becoming ever more apparent. Earlier this week, AI News reported Facebook CEO Mark Zuckerberg had become the victim of a deepfake video just a month after the social media site refused to remove a doctored video of House Speaker Nancy Pelosi.

In the deepfake of Zuckerberg, he is portrayed to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

While many people are learning not to believe everything they read, it will take some time for them to stop trusting their eyes. In a world where politicians can convincingly be made to say anything using just a computer, the ramifications for society are major.


The post Spy connected with targets on LinkedIn using AI-generated pic appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/06/14/spy-targets-linkedin-ai-generated-pic/feed/ 0