California introduces legislation to stop political and porn deepfakes
https://news.deepgeniusai.com/2019/10/07/california-introduces-legislation-stop-political-porn-deepfakes/
AI News – Mon, 07 Oct 2019

Deepfake videos have the potential to do unprecedented amounts of harm, so California has introduced two bills designed to limit them.

For those unaware, deepfakes use machine learning to make it appear, convincingly, that a person is doing or saying things they never did.

There are two main concerns about deepfake videos:

  • Personal defamation – An individual is made to appear in a sexual and/or humiliating scene, either for blackmail purposes or to tarnish that person’s image.
  • Manipulation – An influential person, typically a politician, is made to appear to have said something in order to sway public opinion and perhaps even votes.

Many celebrities have become victims of deepfake porn. One of the bills signed into law by the state of California last week allows victims to sue anyone who puts their image into a pornographic video without consent.

Earlier this year, Facebook CEO Mark Zuckerberg became the victim of a deepfake. Zuckerberg was portrayed to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Clearly, to most of us, the Zuckerberg video is a fake. The video was actually created by Israeli startup Canny AI as part of a commissioned art installation called Spectre, displayed at Sheffield Doc/Fest in the UK.

A month prior to the Zuckerberg video, Facebook refused to remove a deepfake video of House Speaker Nancy Pelosi which aimed to portray her as intoxicated. If deepfakes are allowed to go viral on huge social media platforms like Facebook, they will pose huge societal problems.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

California’s second bill legislates against posting any manipulated video of a political candidate, albeit only within 60 days of an election.

California Assembly representative Marc Berman said:

“Voters have a right to know when video, audio, and images that they are being shown, to try to influence their vote in an upcoming election, have been manipulated and do not represent reality.

[That] makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters.”

While many people now know not to trust everything they read, most of us are still accustomed to believing what we see with our eyes. That’s what poses the biggest threat with deepfake videos.


Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’
https://news.deepgeniusai.com/2019/01/24/joy-buolamwini-algorithmic-bias-priority/
AI News – Thu, 24 Jan 2019

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher whose 2016 TED Talk, ‘How I’m fighting bias in algorithms’, went somewhat viral.

Her latest speech included a presentation in which Buolamwini walked through an analysis of currently popular facial recognition algorithms.

Here are the overall accuracy results when classifying the gender of a face:

  • Microsoft: 93.7 percent
  • Face++: 90 percent
  • IBM: 87.9 percent

Presented this way, the results suggest little problem. But society is a lot more diverse, and algorithms need to be accurate for everyone.

When separated between males and females, a greater disparity becomes apparent:

  • Microsoft: 89.3 percent (females), 97.4 percent (males)
  • Face++: 78.7 percent (females), 99.3 percent (males)
  • IBM: 79.7 percent (females), 94.4 percent (males)

Here the underrepresentation of females in STEM careers begins to show its effect. China-based Face++ fares worst, likely a result of the country’s more severe gender gap (PDF) compared to the US.

Splitting between skin type also increases the disparity:

  • Microsoft: 87.1 percent (darker), 99.3 percent (lighter)
  • Face++: 83.5 percent (darker), 95.3 percent (lighter)
  • IBM: 77.6 percent (darker), 96.8 percent (lighter)

The difference here is again likely due to a racial disparity in STEM careers. A gap of 12-19 percentage points is observed between darker and lighter skin tones.

So far, the results are in line with a 2010 study by researchers at NIST and the University of Texas at Dallas. The researchers found (PDF) that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

“We did something that hadn’t been done in the field before, which was doing intersectional analysis,” explains Buolamwini. “If we only do single axis analysis – we only look at skin type, only look at gender… – we’re going to miss important trends.”

Here is where the results get most concerning. Results are listed from most to least accurate:

Microsoft:

  • Lighter Males (100 percent)
  • Lighter Females (98.3 percent)
  • Darker Males (94 percent)
  • Darker Females (79.2 percent)

Face++:

  • Darker Males (99.3 percent)
  • Lighter Males (99.2 percent)
  • Lighter Females (94 percent)
  • Darker Females (65.5 percent)

IBM:

  • Lighter Males (99.7 percent)
  • Lighter Females (92.9 percent)
  • Darker Males (88 percent)
  • Darker Females (65.3 percent)

The lack of accuracy for females with darker skin tones is of particular note: two of the three algorithms get it wrong in roughly one in three cases.
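Buolamwini’s point about single-axis versus intersectional analysis can be illustrated with a short sketch. The records below are hypothetical, loosely shaped to echo the pattern in the figures above, not her actual dataset: each axis on its own can look tolerable while one intersectional subgroup performs far worse.

```python
from collections import defaultdict

# Hypothetical per-face evaluation records: (gender, skin_type, prediction_correct).
# Illustrative only; numbers are invented to mirror the kind of gap reported above.
records = (
    [("male", "lighter", True)] * 99 + [("male", "lighter", False)] * 1 +
    [("female", "lighter", True)] * 93 + [("female", "lighter", False)] * 7 +
    [("male", "darker", True)] * 88 + [("male", "darker", False)] * 12 +
    [("female", "darker", True)] * 65 + [("female", "darker", False)] * 35
)

def accuracy(group_key):
    """Accuracy per subgroup, where group_key maps (gender, skin) to a group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for gender, skin, correct in records:
        k = group_key((gender, skin))
        totals[k] += 1
        hits[k] += correct  # True counts as 1
    return {k: hits[k] / totals[k] for k in totals}

# Single-axis views: each looks merely "somewhat worse" for one group.
print(accuracy(lambda r: r[0]))          # by gender only
print(accuracy(lambda r: r[1]))          # by skin type only
# Intersectional view: the darker-female subgroup is far below either axis average.
print(accuracy(lambda r: (r[0], r[1])))
```

In this invented data, accuracy for females overall is 79 percent and for darker skin overall 76.5 percent, yet the darker-female intersection sits at 65 percent, which is exactly the trend a single-axis breakdown would miss.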

Just imagine these algorithms being used for surveillance. Lighter-skinned males would be recognised correctly in most cases, while darker-skinned females would frequently be misidentified. That could mean a lot of mistakes in areas with high footfall, such as airports.

Prior to making her results public, Buolamwini sent the results to each company. IBM responded the same day and said their developers would address the issue.

When she reassessed IBM’s algorithm, the accuracy when assessing darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, for lighter females from 92.9 percent to 97.6 percent, and for lighter males it stayed the same at 97 percent.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

You can watch Buolamwini’s full presentation at the WEF here.

