University College London: Deepfakes are the ‘most serious’ AI crime threat (AI News, 6 August 2020)

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. 31 experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today still has to be created by humans, such as those working in Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often contain patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile cases so far involved US House Speaker Nancy Pelosi. In 2019, a video circulated on social media which was slowed down to make Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The doctored Pelosi video was unsophisticated and likely created to amuse rather than to cause harm, but it’s an early warning of how such fakes could be used to inflict reputational damage – or worse. Imagine the potential consequences of a fake video of the president announcing an imminent strike on somewhere like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat to release such videos – and the embarrassment caused – could lead some to pay a ransom to keep them from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

The researchers identified four other major AI crime threats: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of products and services falsely marketed as AI, such as security screening and targeted advertising solutions. The researchers believe leading people to think such offerings are AI-powered could prove lucrative.

Among the lesser concerns are things such as so-called “burglar bots” which could get in through a property’s access points to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be prevented through methods such as letterbox cages.

Similarly, the researchers note that AI-based stalking, while damaging for individuals, isn’t considered a major threat because it cannot operate at scale.

The researchers’ full paper is published in the journal Crime Science.

(Photo by Bill Oxford on Unsplash)

Deepfake app puts your face on GIFs while limiting data collection (AI News, 14 January 2020)

A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology.

In the name of research, here’s one I made earlier:

Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name.
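RefaceAI’s actual architecture isn’t public, but the core GAN idea – a generator and a discriminator trained against each other – can be sketched at toy scale. Below, a one-parameter “generator” learns to mimic real data drawn from a normal distribution with mean 4, while a logistic-regression “discriminator” tries to tell real samples from fakes. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

b = 0.0            # generator parameter: G(z) = z + b, should learn b near 4
w, c = 0.0, 0.0    # discriminator: D(x) = sigmoid(w * x + c)
batch = 64

for _ in range(300):
    # Train the discriminator for several steps (it must stay ahead of G).
    for _ in range(20):
        real = rng.normal(4.0, 1.0, batch)
        fake = rng.normal(0.0, 1.0, batch) + b
        d_real = sigmoid(w * real + c)
        d_fake = sigmoid(w * fake + c)
        # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c.
        w -= 0.1 * np.mean(-(1 - d_real) * real + d_fake * fake)
        c -= 0.1 * np.mean(-(1 - d_real) + d_fake)

    # One generator step on the non-saturating loss -log D(fake).
    fake = rng.normal(0.0, 1.0, batch) + b
    d_fake = sigmoid(w * fake + c)
    b -= 0.02 * np.mean(-(1 - d_fake) * w)   # chain rule: dfake/db = 1

print(round(b, 2))  # b has moved from 0 toward the real data's mean of 4
```

A production face-swapping GAN replaces these scalars with deep convolutional networks, but the adversarial training loop has the same shape.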

RefaceAI was previously used in a face swapping app called Reflect. Elon Musk once used Reflect to put his face on Dwayne Johnson’s body. 

The app is a lot of fun, but – after concerns about viral Russian app FaceApp – many will be wondering just how much data is being collected in return.

Doublicat’s developers are upfront in asking for consent to store your photos when you first open the app, and this is confirmed in their privacy policy: “We may collect the photos, that you take with your camera while using our application.”

However, Doublicat says that photos are only stored on their server for 24 hours before they’re deleted. “The rest of the time your photos used in Doublicat application are stored locally on your mobile device and may be removed any time by either deleting these photos from your mobile device’s file system.”

The app also collects data about facial features, but only the vector representations of each person’s face are stored. Doublicat assures that the facial recognition data collected “is not biometric data” and is deleted from their servers within 30 calendar days.
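As an aside on what a “vector representation” of a face is: modern systems reduce a face to a fixed-length embedding and compare embeddings numerically, for example with cosine similarity. Doublicat’s actual embedding format isn’t public; the values below are invented for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: near 1.0 means similar.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical face embeddings (made-up numbers, not a real model's output).
same_person_a = [0.12, -0.48, 0.33, 0.71]
same_person_b = [0.10, -0.45, 0.35, 0.69]
other_person  = [-0.60, 0.22, -0.10, 0.41]

print(round(cosine_similarity(same_person_a, same_person_b), 3))  # close to 1.0
print(round(cosine_similarity(same_person_a, other_person), 3))   # much lower
```

Storing only such vectors, rather than the photos themselves, is what limits what can be reconstructed from a breach.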

“In no way will Doublicat use your uploaded content for face recognition as Doublicat does not introduce the face recognition technologies or other technical means for processing biometric data for the unique identification or authentication of a user.”

The amount of data Doublicat can collect is limited compared to some alternatives. Apps such as Zao require users to 3D model their face whereas Doublicat only takes a front-facing picture.

RefaceAI is now looking to launch an app which can make deepfake videos rather than just GIFs. The move is likely to be controversial given the concerns around deepfakes and how such videos could be used for things such as political manipulation.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake in this case) could be used to cause reputational damage and even swing votes.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections. One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”.

Earlier this month, Facebook announced new policies around deepfakes. Any deepfake video that is designed to be misleading will be banned. The problem with the rules is they don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to anyone wanting a firm stance against manipulation.

Doublicat is available for Android and iOS.


Facebook pledges crackdown on deepfakes ahead of the US presidential election (AI News, 8 January 2020)

Facebook has pledged to crack down on misleading deepfakes ahead of the US presidential election later this year.

Voter manipulation is a concern for any functioning democracy and deepfakes provide a whole new challenge for social media platforms.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake) could be used to cause reputational damage and swing votes.

Facebook refused to remove the video of Nancy Pelosi and instead said it would display an article from a third-party fact-checking website highlighting that it had been edited, and take measures to limit its reach. The approach, of course, was heavily criticised.

Facebook’s new rules state that deepfake videos designed to be misleading will be banned. The problem is that the rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to those wanting a firm stance against manipulation.

In the age of “fake news,” many people have learned not to believe everything they read. Likewise, an increasing number of people know how easily images can be manipulated. Deepfake videos pose such a concern because the wider public is not yet aware enough of their existence or of how to spot them.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections.

One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”. Another prediction is that Iran and China may join Russia as sources of disinformation, the former perhaps now being even more likely given recent escalations between the US and Iran and the desire for non-military retaliation.

Legislation is being introduced to criminalise the production of deepfakes that don’t disclose they’ve been modified, but the best approach is to stop them from being widely shared in the first place.

“A better approach, and one that avoids the danger of overreaching government censorship, would be for the social media platforms to improve their AI-screening technology, enhance human review, and remove deepfakes before they can do much damage,” the report suggests.

The month after Facebook refused to remove the edited video of Pelosi, a deepfake created by Israeli startup Canny AI aimed to raise awareness of the issue by making it appear like Facebook CEO Mark Zuckerberg said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Canny AI’s deepfake was designed to be clearly fake but it shows how easy it’s becoming to manipulate people’s views. In a tense world, it’s not hard to imagine what devastation could be caused simply by releasing a deepfake of a political leader declaring war or planning to launch a missile.


Experts discuss the current biggest threats posed by AI (AI News, 4 December 2019)

Several experts have given their thoughts on what threats AI poses, and unsurprisingly fake content is the current biggest danger.

The experts, who were speaking on Tuesday at the WSJ Pro Cybersecurity Executive Forum in New York, believe that AI-generated content is of pressing concern to our societies.

Camille François, chief innovation officer at social media analytics firm Graphika, says that deepfake articles pose the greatest danger.

We’ve already seen what human-generated “fake news” and disinformation campaigns can do, so it won’t be of much surprise to many that involving AI in that process is a leading threat.

François highlights that fake articles and disinformation campaigns today rely on a lot of manual work to create and spread a false message.

“When you look at disinformation campaigns, the amount of manual labour that goes into creating fake websites and fake blogs is gigantic,” François said.

“If you can just simply automate believable and engaging text, then it’s really flooding the internet with garbage in a very automated and scalable way. So that I’m pretty worried about.”

In February, OpenAI unveiled its GPT-2 tool, which generates convincing fake text. The model was trained on 40 gigabytes of text spanning eight million web pages.
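GPT-2’s transformer architecture is far beyond a snippet, but the underlying idea of generating text by repeatedly predicting a plausible next word can be sketched with a toy bigram Markov chain. The corpus here is invented purely for illustration:

```python
import random
from collections import defaultdict

# A toy bigram model: for each word, remember which words followed it.
# (This is vastly simpler than GPT-2, but shares the core next-word idea.)
corpus = ("the senator said the vote was rigged and the vote was stolen "
          "and the senator said the election was rigged").split()

model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# Generate text by sampling a successor of the current word, repeatedly.
random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(model[word])
    output.append(word)
print(" ".join(output))
```

Even this trivial model shows how plausible-sounding text can be churned out automatically and at scale, which is exactly the flooding-the-internet-with-garbage concern François raises.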

OpenAI initially decided against publicly releasing the full GPT-2 model, fearing the damage it could do. However, in August, two graduates recreated OpenAI’s text generator.

The graduates said they do not believe their work currently poses a risk to society and released it to show the world what was possible without being a company or government with huge amounts of resources.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” said Vanya Cohen, one of the graduates, to Wired.

Speaking on the same panel as François at the WSJ event, Celeste Fralick, chief data scientist and senior principal engineer at McAfee, recommended that companies partner with firms specialising in detecting deepfakes.

Among the scariest AI-related cybersecurity threats are “adversarial machine learning attacks”, whereby a hacker finds and exploits a vulnerability in an AI system.

Fralick provides the example of an experiment by Dawn Song, a professor at the University of California, Berkeley, in which a driverless car was fooled into believing a stop sign was a 45 MPH speed limit sign just by using stickers.

According to Fralick, McAfee itself has performed similar experiments and discovered further vulnerabilities. In one, a 35 MPH speed limit sign was modified to fool a driverless car’s AI.

“We extended the middle portion of the three, so the car didn’t recognise it as 35; it recognised it as 85,” she said.
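The sticker attacks described above exploit the same weakness as classic adversarial examples: a small, targeted change to the input flips the model’s decision. A minimal sketch, using a made-up linear classifier rather than any real traffic-sign model:

```python
import numpy as np

# Hypothetical "trained" linear classifier (weights invented for illustration).
w = np.array([1.0, -2.0, 0.5])
bias = 0.1

def predict(x):
    # Class 1 if the decision score is positive, else class 0.
    return 1 if x @ w + bias > 0 else 0

x = np.array([0.30, 0.05, 0.20])   # clean input, classified as class 1

# FGSM-style step: nudge each feature by eps in the direction that increases
# the loss for the current label, i.e. against the sign of the weights.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

Real attacks on image classifiers work the same way in thousands of pixel dimensions, which is why a few well-placed stickers can turn a “3” into an “8” in the model’s eyes.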

Both panellists believe entire workforces need to be educated about the threats posed by AI in addition to employing strategies for countering attacks.

There is “a great urgency to make sure people have basic AI literacy,” François concludes.


California introduces legislation to stop political and porn deepfakes (AI News, 7 October 2019)

Deepfake videos have the potential to do unprecedented amounts of harm so California has introduced two bills designed to limit them.

For those unaware, deepfakes use machine learning technology in order to make a person appear like they’re convincingly doing or saying things which they’re not.

There are two main concerns about deepfake videos:

  • Personal defamation – An individual is made to appear in a sexual and/or humiliating scene either for blackmail purposes or to stain that person’s image.
  • Manipulation – An influential person, typically a politician, is made to appear to say something they never said in order to sway public opinion and perhaps even votes.

Many celebrities have become victims of deepfake porn. One of the bills signed into law by the state of California last week allows victims to sue anyone who puts their image into a pornographic video without consent.

Earlier this year, Facebook CEO Mark Zuckerberg became the victim of a deepfake. Zuckerberg was portrayed to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Clearly, to most of us, the Zuckerberg video is a fake. It was actually created by Israeli startup Canny AI as part of a commissioned art installation called Spectre, which was on display at Sheffield Doc/Fest in the UK.

A month prior to the Zuckerberg video, Facebook refused to remove a manipulated video of House Speaker Nancy Pelosi which aimed to portray her as intoxicated. If deepfakes are allowed to go viral on huge social media platforms like Facebook, they will pose huge societal problems.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

California’s second bill legislates against posting any manipulated video of a political candidate, albeit only within 60 days of an election.

California Assembly representative Marc Berman said:

“Voters have a right to know when video, audio, and images that they are being shown, to try to influence their vote in an upcoming election, have been manipulated and do not represent reality.

[That] makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters.”

While many people now know not to trust everything they read, most of us are still accustomed to believing what we see with our eyes. That’s what poses the biggest threat with deepfake videos.


Adobe has trained an AI to detect photoshopped images (AI News, 17 June 2019)

Adobe has trained an AI to detect when images have been photoshopped, which could help in the fight against deepfakes.

The software giant partnered with researchers from the University of California on the AI. A convolutional neural network (CNN) was trained to spot changes made to images using Photoshop’s Face-Aware Liquify feature.

Face-Aware Liquify is a feature designed to adjust and exaggerate facial features. Photoshop automatically detects facial features to provide an easy way to adjust them as required.

[Image: an example of Face-Aware Liquify in action. © Adobe]

Features like Face-Aware Liquify are relatively harmless when used for purposes like making someone appear happier in a photo or ad. However, such features can also be exploited – for example, making a political opponent appear to express emotions like anger or disgust in a bid to sway voter opinion.
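Adobe’s detector is a trained CNN that works on a single image with no original to compare against. As a much simpler illustration of the localisation idea, a per-pixel difference heatmap can flag where a Liquify-style edit changed an image, assuming (unlike the real system) that the original is available. All values here are synthetic:

```python
import numpy as np

# Synthetic 8x8 grayscale "photo" and an edited copy with a local change.
rng = np.random.default_rng(3)
original = rng.uniform(0, 1, (8, 8))
edited = original.copy()
edited[2:4, 5:7] += 0.4            # simulate a small localised "warp" artefact

# Difference heatmap: large values mark where the edit was applied.
heatmap = np.abs(edited - original)
suspect = np.argwhere(heatmap > 0.1)
print(suspect.tolist())            # exactly the edited 2x2 patch of pixels
```

The hard research problem Adobe tackles is producing a map like this from the edited image alone, by learning the statistical fingerprints that warping tools leave behind.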

“This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations,” Adobe wrote in a blog post on Friday.

Last week, AI News reported that a deepfake video of Facebook CEO Mark Zuckerberg had gone viral. Zuckerberg appeared to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

The deepfake wasn’t intended to be malicious but rather part of a commissioned art installation by Canny AI designed to highlight the dangers of such fake videos. Other videos on display included President Trump and Kim Kardashian, individuals with huge amounts of influence.

A month prior to the release of the Zuckerberg deepfake, a video of House Speaker Nancy Pelosi was being spread on Facebook which portrayed her as being intoxicated. Facebook was criticised for refusing to remove the video.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

In another example of the growing dangers of deepfakes, a spy was caught using an AI-generated profile picture on LinkedIn to connect with unsuspecting targets.

Humans are pretty much hardwired to believe what they see with their eyes; that’s what makes deepfakes so dangerous. AIs like the one created by Adobe and the University of California team will be vital in countering the deepfake threat.


McAfee: Keep an eye on the humans pulling the levers, not the AIs (AI News, 6 March 2019)

Security firm McAfee has warned that it’s more likely humans will use AI for malicious purposes than that AI will go rogue itself.

It’s become a cliché, but people are still concerned that a self-thinking killer AI like Skynet from the Terminator films will be created.

McAfee CTO Steve Grobman spoke at this year’s RSA conference in San Francisco and warned the wrong humans in control of powerful AIs are his company’s primary concern.

To provide an example of how AIs could be used for good or bad purposes, Grobman handed over to McAfee Chief Data Scientist Dr Celeste Fralick.

Fralick explained how McAfee has attempted to predict crime in San Francisco using historic data combined with a machine learning model. The AI recommends where police could be deployed to have the best chance of apprehending criminals.

Most law-abiding citizens would agree this is a positive use of AI. However, in the hands of criminals it could be used to pinpoint where to commit a crime and have the best chance of avoiding capture.

In another demo at the conference, Fralick showed a video in which her words were being spoken by Grobman – an example of a deepfake.

“I used freely available, recorded public comments by you to create and train a machine learning model that let me develop a deepfake video with my words coming out of your mouth,” Fralick explained. “It just shows one way that AI and machine learning can be used to create massive chaos.”

Deepfakes are opening up a wide range of new threats, including fraud through impersonation. Another is the potential for blackmail, with threats to release sexually explicit fakes to embarrass an individual.

“We can’t allow fear to impede our progress, but it’s how we manage the innovation that is the real story,” Grobman concluded.

