University College London: Deepfakes are the ‘most serious’ AI crime threat

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. 31 experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today still has to be created by humans, such as those working in the likes of Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often follow patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a deepfake video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The deepfake of Pelosi was unsophisticated and likely created to amuse rather than to cause harm, but it’s an early warning of how such fakes could be used to inflict reputational damage – or worse. Just imagine the potential consequences of a fake video showing the president announcing an imminent strike on somewhere like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else in order to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat to release such videos – and the embarrassment caused – could lead to some paying a ransom to keep it from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear as though he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services fraudulently marketed as AI, such as security screening and targeted advertising solutions. The researchers believe that convincing people such products are AI-powered could prove lucrative.

Among the lesser concerns are things such as so-called “burglar bots” – small robots which could enter through access points of a property to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be defeated through methods such as letterbox cages.

Similarly, the researchers note that while the potential for AI-based stalking is damaging for individuals, it isn’t considered a major threat because it cannot operate at scale.

You can find the researchers’ full paper in the journal Crime Science.

(Photo by Bill Oxford on Unsplash)

UK and Australia launch joint probe into Clearview AI’s mass data scraping

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of photos of people from across the web.
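
At its core, a face-search service like this converts every scraped photo into a numerical embedding and then finds the stored embedding closest to that of a probe image. Below is a minimal sketch of that matching step only – the array sizes and random data are purely illustrative stand-ins, not Clearview’s actual system:

    import numpy as np

    def best_match(probe, gallery):
        # Normalise the embeddings and rank the gallery by cosine similarity,
        # the standard way of comparing face embeddings.
        probe = probe / np.linalg.norm(probe)
        gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        scores = gallery @ probe
        idx = int(np.argmax(scores))
        return idx, float(scores[idx])

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(100_000, 128))              # stand-in embeddings of scraped photos
    probe = gallery[42] + rng.normal(scale=0.1, size=128)  # a noisy new photo of "person 42"
    print(best_match(probe, gallery))                      # -> (42, ~0.99)

The privacy concern follows directly from the design: the larger the scraped gallery, the more likely any face photographed anywhere can be matched back to an identity.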

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective from Ekeland’s, and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

California introduces legislation to stop political and porn deepfakes

Deepfake videos have the potential to do unprecedented amounts of harm, so California has introduced two bills designed to limit them.

For those unaware, deepfakes use machine learning to make a person appear to be convincingly doing or saying things which they never did or said.

There are two main concerns about deepfake videos:

  • Personal defamation – An individual is made to appear in a sexual and/or humiliating scene, either for blackmail purposes or to tarnish their image.
  • Manipulation – An influential person, typically a politician, is made to appear to have said something in order to sway public opinion, and perhaps even how people vote.
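
Technically, the classic face-swap approach trains a single shared encoder with one decoder per identity; at inference time, one person’s face is routed through the other person’s decoder. The PyTorch sketch below shows the idea with deliberately tiny, hypothetical layer sizes – real deepfake tools are far more elaborate:

    import torch
    import torch.nn as nn

    # Shared encoder: learns a face representation common to both identities.
    encoder = nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    )

    def make_decoder():
        # One decoder per identity, trained to reconstruct that person's face.
        return nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    decoder_a, decoder_b = make_decoder(), make_decoder()

    # Training minimises reconstruction error of person A through decoder_a and
    # of person B through decoder_b. The swap happens at inference: encode a
    # frame of person A, then decode it with person B's decoder.
    frame_of_a = torch.rand(1, 3, 64, 64)    # placeholder video frame
    swapped = decoder_b(encoder(frame_of_a))  # B's face with A's pose and expression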

Many celebrities have become victims of deepfake porn. One of the bills signed into law by the state of California last week allows victims to sue anyone who puts their image into a pornographic video without consent.

Earlier this year, Facebook CEO Mark Zuckerberg became the victim of a deepfake. Zuckerberg was portrayed to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Clearly, to most of us, the Zuckerberg video is a fake. The video was actually created by Israeli startup Canny AI as part of a commissioned art installation called Spectre, which was on display at the Sheffield Doc/Fest in the UK.

A month prior to the Zuckerberg video, Facebook refused to remove a deepfake video of House Speaker Nancy Pelosi which aimed to portray her as intoxicated. If deepfakes are allowed to go viral on huge social media platforms like Facebook, they will pose huge societal problems.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

California’s second bill legislates against posting any manipulated video of a political candidate, albeit only within 60 days of an election.

California Assembly representative Marc Berman said:

“Voters have a right to know when video, audio, and images that they are being shown, to try to influence their vote in an upcoming election, have been manipulated and do not represent reality.

[That] makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters.”

While many people now know not to trust everything they read, most of us are still accustomed to believing what we see with our eyes. That’s what poses the biggest threat with deepfake videos.

AI Experts: Dear Amazon, stop selling facial recognition to law enforcement

A group of AI experts have signed an open letter to Amazon demanding the company stop selling facial recognition to law enforcement, following findings of bias.

Back in January, AI News reported on findings by Algorithmic Justice League founder Joy Buolamwini who researched some of the world’s most popular facial recognition algorithms.

Buolamwini found most of the algorithms were biased and misidentified subjects with darker skin colours and/or females more often.

Here are the results, in descending order of accuracy:

Microsoft

  • Lighter Males (100 percent)
  • Lighter Females (98.3 percent)
  • Darker Males (94 percent)
  • Darker Females (79.2 percent)

Face++

  • Darker Males (99.3 percent)
  • Lighter Males (99.2 percent)
  • Lighter Females (94 percent)
  • Darker Females (65.5 percent)

IBM

  • Lighter Males (99.7 percent)
  • Lighter Females (92.9 percent)
  • Darker Males (88 percent)
  • Darker Females (65.3 percent)
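
Numbers like these come from a disaggregated evaluation: instead of reporting one overall accuracy figure, the test set is split by demographic group and accuracy is computed per group. A minimal sketch of that calculation, with a hypothetical record format and group labels:

    from collections import defaultdict

    def accuracy_by_group(records):
        # records: iterable of (group, predicted_label, true_label) tuples.
        correct, total = defaultdict(int), defaultdict(int)
        for group, predicted, actual in records:
            total[group] += 1
            correct[group] += int(predicted == actual)
        return {group: correct[group] / total[group] for group in total}

    results = accuracy_by_group([
        ("lighter_male", "male", "male"),
        ("darker_female", "male", "female"),    # a misclassification
        ("darker_female", "female", "female"),
    ])
    print(results)  # {'lighter_male': 1.0, 'darker_female': 0.5}

An aggregate score over those three toy records would read 66.7 percent and hide the disparity entirely – which is exactly why per-group figures matter in audits like Buolamwini’s.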

Amazon executives rejected the findings and claimed the study used a lower confidence threshold than the company recommends for law enforcement use.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, GM of AI for Amazon’s cloud-computing division, wrote in a January blog post.

Signatories of the open letter came to Buolamwini’s defence, including AI pioneer Yoshua Bengio, a recent winner of the Turing Award.

“In contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications,” they wrote.

Despite having the most accurate facial recognition in the study, Microsoft has not been content to leave it there and has further improved its accuracy since Buolamwini’s work. The firm also supports a policy requiring signs to be visible wherever facial recognition is used.

IBM has also made huge strides in improving the accuracy of its algorithms across all parts of society. Earlier this year, the company unveiled a new one-million-image dataset that better represents the diversity of society.

When Buolamwini reassessed IBM’s algorithm, the accuracy for darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, and for lighter females from 92.9 percent to 97.6 percent, while for lighter males it stayed the same at 99.7 percent.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

Aside from potentially automating societal problems like racial profiling, inaccurate facial recognition could be the difference between life and death. For example, a recent study (PDF) found that driverless cars observing the road for pedestrians had a more difficult time detecting individuals with darker skin colours.

Everyone, not just AI experts, should be pressuring companies to ensure biases are kept well away from algorithms.

deepgeniusai.com/">AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, , & .

The post AI Experts: Dear Amazon, stop selling facial recognition to law enforcement appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/04/04/ai-experts-amazon-stop-facial-recognition-law/feed/ 0
AI is sentencing people based on their ‘risk’ assessment

AI-powered tools which assess an individual’s ‘risk’ are being used to make incarceration and sentencing decisions.

During the Data for Black Lives conference last weekend, several experts shared how AI is evolving America’s controversial prison system.

America imprisons more people than any other nation. This is not simply a result of its population size: at roughly 716 per 100,000 people, the US incarceration rate is the highest in the world. Russia, in second place, incarcerates around 455 per 100,000.

Black males are, by far, America’s most incarcerated demographic.

AI has been proven to have bias problems. Last year, the American Civil Liberties Union found that Amazon’s facial recognition technology disproportionately misidentified those with darker skin colours as criminals.

The bias is not intentional but a result of a wider problem in STEM career diversity. In the West, the fields are dominated by white males.

A 2010 study by researchers at NIST and the University of Texas in Dallas found (PDF) algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Deploying such inherently biased AIs is bound to exacerbate societal problems. Most concerning, US courtrooms are using AI tools for ‘risk’ assessments to make sentencing decisions.

Using a defendant’s profile, the AI generates a recidivism score – a number which aims to estimate if an individual will reoffend. A judge then uses that score to make decisions such as the severity of their sentence, what services the individual should be provided, and if a person should be held in jail before trial.
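
Tools of this kind are typically proprietary, but the underlying idea can be illustrated with a toy classifier. The sketch below – with entirely fabricated features and data, and no relation to any real tool such as COMPAS – trains a logistic regression whose predicted probability serves as the ‘risk score’. Note how any bias present in the historical training data is baked straight into the scores:

    from sklearn.linear_model import LogisticRegression

    # Hypothetical defendant profiles: [age, prior_offences, employed (0 or 1)]
    X_train = [[22, 3, 0], [45, 0, 1], [31, 1, 1],
               [19, 5, 0], [52, 0, 1], [27, 2, 0]]
    y_train = [1, 0, 0, 1, 0, 1]  # 1 = reoffended within two years (historical data)

    model = LogisticRegression().fit(X_train, y_train)

    # The 'risk score' a judge would see is the predicted probability of reoffence.
    defendant = [[30, 2, 1]]
    risk = model.predict_proba(defendant)[0][1]
    print(f"Recidivism risk score: {risk:.2f}")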

Last July, a statement (PDF) was signed by over 100 civil rights organisations – including the ACLU – calling for AI to be kept clear of risk assessments.

If the bias problem with AI is ever solved, its use in the justice system could improve trust in decisions by reducing current questions over whether a judge was prejudiced in their sentencing. However, we are nowhere near that point.

deepgeniusai.com/">AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, , & .

The post AI is sentencing people based on their ‘risk’ assessment appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/01/22/ai-sentencing-people-risk-assessment/feed/ 0