deepfake – AI News
https://news.deepgeniusai.com

Deepfake shows Nixon announcing the moon landing failed
https://news.deepgeniusai.com/2020/02/06/deepfake-nixon-moon-landing-failed/
Thu, 06 Feb 2020 16:42:59 +0000


In the latest creepy deepfake, former US President Richard Nixon appears to announce that the first moon landing failed.

Nixon was a divisive figure, but certainly a recognisable one. The video shows him in the Oval Office, surrounded by flags, giving a presidential address to an eagerly awaiting world.

However, unlike the actual first moon landing – unless you’re a subscriber to conspiracy theories – this one failed.

“These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery,” Nixon says in his trademark growl. “But they also know that there is hope for mankind in their sacrifice.”

Here are some excerpts from the full video:

What makes the video more haunting is that the speech itself is real. Although never broadcast, it was written for Nixon by speechwriter William Safire in case the moon landing did fail.

The deepfake was created by a team from MIT’s Center for Advanced Virtuality and put on display at the IDFA documentary festival in Amsterdam.

To recreate Nixon’s famous voice, the MIT team partnered with technicians from Ukraine and Israel and used advanced machine learning techniques.

We’ve covered many deepfakes here on AI News. While many are amusing, there are serious concerns that deepfakes could be used for malicious purposes such as blackmail or manipulation.

Ahead of the US presidential election, some campaigners have worked to raise awareness of deepfakes and to get social media platforms to help tackle dangerous videos.

In 2019, House Speaker Nancy Pelosi was the victim of a manipulated video (not a true deepfake, but a crudely edited clip) which went viral across social media and made her appear drunk, slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

As part of a bid to persuade the social media giant to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg – making it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Last month, Facebook pledged to crack down on deepfakes ahead of the US presidential election. However, the new rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will do little to reassure those wanting a firm stance against potential voter manipulation.


The post Deepfake shows Nixon announcing the moon landing failed appeared first on AI News.

Deepfake app puts your face on GIFs while limiting data collection
https://news.deepgeniusai.com/2020/01/14/deepfake-app-face-gifs-data-collection/
Tue, 14 Jan 2020 15:11:41 +0000

A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology.

In the name of research, here’s one I made earlier:

Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name.
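RefaceAI’s internals aren’t public, but the adversarial setup that “GAN” refers to can be pictured in a few lines: a generator network maps random noise towards plausible outputs, while a discriminator network scores how real those outputs look, and the two are trained against each other. The toy below (pure NumPy, untrained, with made-up dimensions) illustrates only that structure, not RefaceAI itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGAN:
    """Illustrative sketch only: real face-swapping GANs use deep
    convolutional networks; here each network is a single matrix."""

    def __init__(self, noise_dim=8, data_dim=4):
        self.G = rng.normal(0, 0.1, (noise_dim, data_dim))  # generator weights
        self.D = rng.normal(0, 0.1, (data_dim, 1))          # discriminator weights

    def generate(self, n):
        # Generator: random noise in, candidate "fake samples" out.
        z = rng.normal(size=(n, self.G.shape[0]))
        return z @ self.G

    def discriminate(self, x):
        # Discriminator: probability that a sample is real (1) vs fake (0).
        return sigmoid(x @ self.D)

gan = TinyGAN()
fake = gan.generate(5)
scores = gan.discriminate(fake)
print(fake.shape)    # (5, 4)
print(scores.shape)  # (5, 1)
```

In a real face-swapping model, the generator would additionally be conditioned on a source face and a target GIF frame, and both networks would be trained on large face datasets until the discriminator can no longer tell the swaps from genuine frames.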

RefaceAI was previously used in a face swapping app called Reflect. Elon Musk once used Reflect to put his face on Dwayne Johnson’s body. 

The app is a lot of fun, but – after concerns about viral Russian app FaceApp – many will be wondering just how much data is being collected in return.

Doublicat’s developers are upfront about asking for consent to store your photos upon first opening the app, and this is confirmed in the privacy policy: “We may collect the photos, that you take with your camera while using our application.”

However, Doublicat says that photos are only stored on their server for 24 hours before they’re deleted. “The rest of the time your photos used in Doublicat application are stored locally on your mobile device and may be removed any time by either deleting these photos from your mobile device’s file system.”

The app also collects data about facial features, but only the vector representations of each person’s face are stored. Doublicat assures that the facial recognition data collected “is not biometric data” and is deleted from its servers within 30 calendar days.

“In no way will Doublicat use your uploaded content for face recognition as Doublicat does not introduce the face recognition technologies or other technical means for processing biometric data for the unique identification or authentication of a user.”
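To picture what a stored “vector representation” is: a neural network reduces a face photo to a fixed-length list of numbers (an embedding), and two photos of the same person map to nearby vectors. The sketch below uses tiny, made-up four-dimensional vectors purely for illustration; real systems use embeddings with hundreds of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embeddings: 1.0 = same direction, < 0 = very different."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# All values below are invented for illustration only.
face_a = np.array([0.12, 0.85, -0.33, 0.47])   # embedding of photo 1
face_b = np.array([0.10, 0.80, -0.30, 0.50])   # same face, different photo
face_c = np.array([-0.60, 0.05, 0.90, -0.20])  # a different face

# Photos of the same person score far closer than photos of strangers.
print(cosine_similarity(face_a, face_b) > cosine_similarity(face_a, face_c))  # True
```

The privacy point is that a vector like this lets the app line features up for a face swap without retaining the photo itself, though whether it could identify a person depends on the system built around it.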

The amount of data Doublicat can collect is limited compared to some alternatives. Apps such as Zao require users to 3D model their face whereas Doublicat only takes a front-facing picture.

RefaceAI is now looking to launch an app which can make deepfake videos rather than just GIFs. The move is likely to be controversial given the concerns around deepfakes and how such videos could be used for things such as political manipulation.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake in this case) could be used to cause reputational damage and even swing votes.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections. One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”.

Earlier this month, Facebook announced new policies around deepfakes. Any deepfake video designed to be misleading will be banned. The problem is that the rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will do little to reassure anyone wanting a firm stance against manipulation.

Doublicat is available for Android and iOS.


The post Deepfake app puts your face on GIFs while limiting data collection appeared first on AI News.

Facebook pledges crackdown on deepfakes ahead of the US presidential election
https://news.deepgeniusai.com/2020/01/08/facebook-crackdown-deepfakes-us-presidential-election/
Wed, 08 Jan 2020 18:04:20 +0000

Facebook has pledged to crack down on misleading deepfakes ahead of the US presidential election later this year.

Voter manipulation is a concern for any functioning democracy and deepfakes provide a whole new challenge for social media platforms.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake) could be used to cause reputational damage and swing votes.

Facebook refused to remove the video of Nancy Pelosi and instead said it would display an article from a third-party fact-checking website highlighting that it’s been edited and take measures to limit its reach. The approach, of course, was heavily criticised.

Under Facebook’s new rules, deepfake videos designed to be misleading will be banned. The problem is that the rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will do little to reassure those wanting a firm stance against manipulation.

In the age of “fake news,” many people have learned not to believe everything they read. Likewise, an increasing number of people know how easily images can be manipulated. Deepfake videos are a particular concern because the wider public is not yet aware enough of their existence, or of how to spot them.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections.

One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”. Another prediction is that Iran and China may join Russia as sources of disinformation, the former perhaps now being even more likely given recent escalations between the US and Iran and the desire for non-military retaliation.

Legislation is being introduced to criminalise the production of deepfakes without disclosing that they’ve been modified, but the best approach is to limit them from being widely shared in the first place.

“A better approach, and one that avoids the danger of overreaching government censorship, would be for the social media platforms to improve their AI-screening technology, enhance human review, and remove deepfakes before they can do much damage,” the report suggests.

The month after Facebook refused to remove the edited video of Pelosi, a deepfake created by Israeli startup Canny AI aimed to raise awareness of the issue by making it appear like Facebook CEO Mark Zuckerberg said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Canny AI’s deepfake was designed to be clearly fake but it shows how easy it’s becoming to manipulate people’s views. In a tense world, it’s not hard to imagine what devastation could be caused simply by releasing a deepfake of a political leader declaring war or planning to launch a missile.


The post Facebook pledges crackdown on deepfakes ahead of the US presidential election appeared first on AI News.

Deepfake has Johnson and Corbyn advocating each other for Britain’s next PM
https://news.deepgeniusai.com/2019/11/12/deepfake-johnson-corbyn-britain-next-pm/
Tue, 12 Nov 2019 14:16:24 +0000

A think tank has released two deepfake videos which appear to show election rivals Boris Johnson and Jeremy Corbyn endorsing each other for Britain’s top job.

The clips were produced by Future Advocacy and are intended to show that people can no longer necessarily trust what they see in videos, just as they have learned to question what they read and hear.

Here’s the Johnson video:

And here’s the Corbyn video:

In the era of fake news, people are becoming increasingly aware not to believe everything they read. Training the general population not to always believe what they can see with their own eyes is a lot more challenging.

At the same time, it’s also important in a democracy that media plurality is maintained and that too much influence is not centralised in a handful of “trusted” outlets. Similarly, people cannot be allowed to simply call something fake news to avoid scrutiny.

Future Advocacy highlights four key challenges:

  1. Detecting deepfakes – whether society can create the means for detecting a deepfake directly at the point of upload or once it has become widely disseminated.
  2. Liar’s dividend – a phenomenon in which genuine footage of controversial content can be dismissed by the subject as a deepfake, despite it being true.
  3. Regulation – what should the limitations be with regards to the creation of deepfakes and can these be practically enforced?
  4. Damage limitation – managing the impacts of deepfakes when regulation fails and the question of where responsibility should lie for damage limitation.

Areeq Chowdhury, Head of Think Tank at Future Advocacy, said:

“Deepfakes represent a genuine threat to democracy and society more widely. They can be used to fuel misinformation and totally undermine trust in audiovisual content.

Despite warnings over the past few years, politicians have so far collectively failed to address the issue of disinformation online. Instead, the response has been to defer to tech companies to do more. The responsibility for protecting our democracy lies in the corridors of Westminster not the boardrooms of Silicon Valley.

By releasing these deepfakes, we aim to use shock and humour to inform the public and put pressure on our lawmakers. This issue should be put above party politics. We urge all politicians to work together to update our laws and protect society from the threat of deepfakes, fake news, and micro-targeted political adverts online.”

Journalists are going to have to become experts in spotting fake content to maintain trust and integrity. Social media companies will also have to take some responsibility for the content they allow to spread on their platforms.

Social media moderation

Manual moderation of every piece of content that’s posted to a network like Facebook or Twitter is simply unfeasible, so automation is going to become necessary to at least flag potentially offending content.

But what constitutes offending content? That is the question social media giants are battling as they try to strike the right balance between free speech and expression and protecting their users from manipulation.

Just last night, Twitter released its draft policy on deepfakes and is currently accepting feedback on it.

The social network proposes the following steps for tweets it detects as featuring potentially manipulated content:

  • Place a notice next to tweets that share synthetic or manipulated media.
  • Warn people before they share or like tweets with synthetic or manipulated media.
  • Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.

Twitter defines deepfakes as “any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.”

Twitter’s current definition sounds like it could end up flagging the internet’s favourite medium, memes, as deepfakes. However, there’s a compelling argument that memes often should at least be flagged as modified from their original intent.

Take the infamous “This is fine” meme that was actually part of a larger comic by KC Green before it was manipulated for individual purposes.

In this Vulture piece, Green gives his personal stance that he’s mostly fine with people using his work as a meme so long as they’re not monetising it for themselves or using it for political purposes.

On July 25th 2016, the official Republican Party Twitter account used Green’s work and added “Well ¯\_(ツ)_/¯ #DemsInPhilly #EnoughClinton”. Green later tweeted: “Everyone is in their right to use this is fine on social media posts, but man o man I personally would like @GOP to delete their stupid post.”

Raising awareness of deepfakes

Bill Posters is a UK artist known for creating subversive deepfakes of famous celebrities, including Donald Trump and Kim Kardashian. Posters was behind the viral deepfake of Mark Zuckerberg for the Spectre project which AI News reported on earlier this year.

Posters commented on his activism using deepfakes:

“We’ve used the biometric data of famous UK politicians to raise awareness to the fact that without greater controls and protections concerning personal data and powerful new technologies, misinformation poses a direct risk to everyone’s human rights including the rights of those in positions of power.

It’s staggering that after 3 years, the recommendations from the DCMS Select Committee enquiry into fake news or the Information Commissioner’s Office enquiry into the Cambridge Analytica scandals have not been applied to change UK laws to protect our liberty and democracy.

As a result, the conditions for computational forms of propaganda and misinformation campaigns to be amplified by social media platforms are still in effect today. We urge all political parties to come together and pass measures which safeguard future elections.”

As the UK heads towards its next major election, there is sure to be much debate around potential voter manipulation. Many have pointed towards Russian interference in Western democracies but there’s yet to be any solid evidence of that being the case.

Opposition parties, however, have criticised the incumbent government in the UK as refusing to release a report into Russian interference. Former US presidential candidate Hillary Clinton branded it “inexplicable and shameful” that the UK government has not yet published the report.

Allegations of interference and foul play will likely increase in the run-up to the election, but Future Advocacy is doing a great job in highlighting to the public that not everything you see can be believed.


The post Deepfake has Johnson and Corbyn advocating each other for Britain’s next PM appeared first on AI News.

Adobe has trained an AI to detect photoshopped images
https://news.deepgeniusai.com/2019/06/17/adobe-ai-detect-photoshopped-images/
Mon, 17 Jun 2019 13:56:51 +0000

Adobe has trained an AI to detect when images have been photoshopped, which could help in the fight against deepfakes.

The software giant partnered with researchers from the University of California on the AI. A convolutional neural network (CNN) was trained to spot changes made to images using Photoshop’s Face-Aware Liquify feature.

Face-Aware Liquify is a feature designed to adjust and exaggerate facial features. Photoshop automatically detects facial features to provide an easy way to adjust them as required.

Here’s an example of Face-Aware Liquify in action:

© Adobe

Features like Face-Aware Liquify are relatively harmless when used for purposes like making someone appear happier in a photo or ad. However, such features can also be exploited – for example, making a political opponent appear to express emotions like anger or disgust in a bid to sway voter opinion.
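Adobe’s actual detector is a deep CNN whose filters are learned from pairs of original and Liquify-warped images. As a crude, illustrative stand-in for what one learned filter might respond to, the sketch below measures high-frequency residual energy, which local warping and resampling tend to disturb; everything here is an assumption for illustration, not Adobe’s method:

```python
import numpy as np

# A fixed Laplacian-style high-pass kernel stands in for a learned filter.
HIGH_PASS = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation (kernel is symmetric, so flipping is moot)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def residual_energy(img):
    """Mean absolute high-frequency residual; local edits tend to change it."""
    return float(np.mean(np.abs(conv2d(img, HIGH_PASS))))

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 1, 16), (16, 1))    # smooth gradient "photo"
warped = smooth + rng.normal(0, 0.2, smooth.shape)  # locally perturbed copy
print(residual_energy(warped) > residual_energy(smooth))  # True
```

A trained CNN replaces the single hand-set filter with many learned ones and a classifier on top, which is what lets it localise which pixels of a face were pushed around rather than merely flag the image.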

“This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations,” Adobe wrote in a blog post on Friday.

Last week, AI News reported that a deepfake video of Facebook CEO Mark Zuckerberg had gone viral. Zuckerberg appeared to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

The deepfake wasn’t intended to be malicious but rather part of a commissioned art installation by Canny AI designed to highlight the dangers of such fake videos. Other videos on display included President Trump and Kim Kardashian, individuals with huge amounts of influence.

A month prior to the release of the Zuckerberg deepfake, a video of House Speaker Nancy Pelosi was being spread on Facebook which portrayed her as being intoxicated. Facebook was criticised for refusing to remove the video.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

In another example of the growing dangers of deepfakes, a spy was caught using an AI-generated profile pic on LinkedIn to connect with unsuspecting targets.

Humans are pretty much hardwired to believe what they see with their own eyes, which is what makes deepfakes so dangerous. AIs like the one created by Adobe and the University of California team will be vital in countering the deepfake threat.

Interested in hearing industry leaders discuss subjects like this? Attend the AI & Big Data Expo events, with upcoming shows in Silicon Valley, London, and Amsterdam, to learn more. Co-located with the IoT Tech Expo.

The post Adobe has trained an AI to detect photoshopped images appeared first on AI News.

Spy connected with targets on LinkedIn using AI-generated pic
https://news.deepgeniusai.com/2019/06/14/spy-targets-linkedin-ai-generated-pic/
Fri, 14 Jun 2019 14:41:27 +0000

A spy used a profile pic generated by AI to connect with targets on LinkedIn, further showing how blurred the lines between real and fake are becoming.

Katie Jones was supposed to be a thirty-something redhead who worked at a leading think tank and had serious connections.

Her connections included people with political weight, such as a deputy assistant secretary of state, and figures from groups ranging from the centrist Brookings Institution to the right-wing Heritage Foundation. It’s the connections not on Miss Jones’ profile, however, that people should be concerned about.

The Associated Press (AP) found the profile was entirely fake and typical of espionage campaigns on the networking site. “It smells a lot like some sort of state-run operation,” said Jonas Parello-Plesner, program director at Denmark-based think tank Alliance of Democracies Foundation.

Parello-Plesner was targeted in an espionage attack over LinkedIn a few years ago, showing it’s not a new phenomenon. However, advancements in AI are making the creation of convincing fake profiles easier than ever.

A closer examination of Jones’ alleged profile pic conducted by AP’s Raphael Satter highlighted there are still some telltale signs of a fake image:

The issue of deepfakes is becoming ever more apparent. Earlier this week, AI News reported Facebook CEO Mark Zuckerberg had become the victim of a deepfake video just a month after the social media site refused to remove a deepfake video of House Speaker Nancy Pelosi.

In the deepfake of Zuckerberg, he is portrayed to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

While many people are learning not to believe everything they read, it will take longer for them to stop trusting their own eyes. In a world where politicians can be convincingly made to appear to say anything using just a computer, the ramifications for society are major.


The post Spy connected with targets on LinkedIn using AI-generated pic appeared first on AI News.

Zuckerberg is deepfaked a month after Facebook refused to remove others
https://news.deepgeniusai.com/2019/06/12/zuckerberg-deepfake-facebook-refused/
Wed, 12 Jun 2019 09:04:24 +0000

A deepfake of Facebook CEO Mark Zuckerberg is making the rounds a month after his company refused to remove similar videos.

Last month, Facebook refused to remove a deepfake video of House Speaker Nancy Pelosi. Rather than making Pelosi appear to say things she never did, the video aimed to portray her as being intoxicated.

Deepfakes have the potential to spread misinformation and damage the reputation of individuals. Particularly in the world of politics and increasingly sophisticated state disinformation campaigns, it’s easy to imagine the danger they pose.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

Governments and activists have called on social networks to assist in the detection and deletion of such videos. Now it seems that campaigners are targeting social network executives with deepfakes in a bid to make them take such videos more seriously.

In the deepfake of Zuckerberg, the Facebook CEO is portrayed to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Fortunately for Zuckerberg, the deepfake is not malicious, and most people will know it’s not real. The video both pressures social networks to do more to tackle deepfakes and raises public awareness of them.

Society is going through a period of serious change when it comes to knowing what’s real and fake. Many now question what they read because anyone could have written it and the issue of fake news is well-publicised. Most people, however, are still accustomed to believing that someone is saying what they are when they can see them doing it.

Zuckerberg’s deepfake was created by Israeli startup Canny AI. The firm has also debuted fake videos of the likes of President Trump and Kim Kardashian as part of a commissioned art installation called Spectre, which was on display at the Sheffield Doc/Fest in the UK.


The post Zuckerberg is deepfaked a month after Facebook refused to remove others appeared first on AI News.

Trump speech ‘DeepFake’ shows a present AI threat
https://news.deepgeniusai.com/2019/01/14/trump-speech-deepfake-ai-threat/
Mon, 14 Jan 2019 12:19:09 +0000

A so-called ‘DeepFake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network, showing a very present AI threat.

The station, Q13, broadcast a doctored Trump speech in which he somehow appeared even more orange and pulled amusing faces.

You can see a side-by-side comparison with the original below:

https://www.youtube.com/watch?v=UZLs11uSg-A&feature=youtu.be

Following the broadcast, a Q13 employee was sacked. It’s unclear whether the worker created the clip or simply allowed it to air.

The video could be the first DeepFake to be televised, but it won’t be the last. Social media has even less filtering and enables fake clips to spread with ease.

We’ve heard much about sophisticated disinformation campaigns. At one point, the US was arguably the most prominent creator of such campaigns to influence foreign decisions.

Russia, in particular, has been linked to vast disinformation campaigns. These have primarily targeted social media with things such as their infamous Twitter bots.

According to Pew Research, just five percent of Americans have “a lot of trust” in the information they get from social media, much lower than the trust they place in national and local news organisations.

It’s not difficult to imagine an explosion in doctored videos that appear like they’re coming from trusted outlets. Combining the reach of social media with the increased trust Americans have in traditional news organisations is a dangerous concept.

While the Trump video appears to be a bit of fun, the next could be used to influence an election or big policy decision. It’s a clear example of how AI is already creating new threats.


The post Trump speech ‘DeepFake’ shows a present AI threat appeared first on AI News.
