Deepfakes – AI News

University College London: Deepfakes are the ‘most serious’ AI crime threat
6 August 2020

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 ways they expect criminals to use AI within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today still has to be created by humans, such as those working in Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often contain patterns which make them easier to trace.

Automating the production of fake content en masse – to influence things such as democratic votes and public opinion – takes us into new and dangerous territory.

One of the most high-profile cases so far was that of US House Speaker Nancy Pelosi. In 2019, a doctored video – not strictly a deepfake, but illustrative of the danger – circulated on social media which made Pelosi appear drunk, slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The Pelosi video was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences of a convincing fake video of the president announcing an imminent strike on somewhere like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat to release such videos – and the embarrassment caused – could lead some to pay a ransom to keep them from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were threats such as the sale of items and services fraudulently marketed as AI, such as security screening and targeted advertising solutions. The researchers believe that leading people to think a product is AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots” which could enter a property through small access points to unlock doors or search for data. The researchers believe these pose less of a threat because they can be easily prevented through methods such as letterbox cages.

Similarly, the researchers note that AI-based stalking is damaging for individuals but isn’t considered a major threat because it cannot operate at scale.

You can find the researchers’ full paper in the journal Crime Science here.

(Photo by Bill Oxford on Unsplash)

AI bot had to unlearn English grammar to decipher Trump speeches
13 May 2020

A developer had to recalibrate his artificial intelligence bot to account for the unconventional grammar and syntax found in President Trump’s speeches.

As originally reported by the Los Angeles Times, Bill Frischling noticed in 2017 that his AI bot, Margaret, was struggling to transcribe part of the President’s speech from May 4 that year commemorating the 75th anniversary of the Battle of the Coral Sea. In particular, Margaret crashed after this 127-word section, featuring a multitude of sub-clauses and tense shifts:

“I know there are many active duty service personnel from both nations with us in the audience, and I want to express our gratitude to each and every one of you. We are privileged to be joined by many amazing veterans from our two countries, as well – and for really from so many different conflicts, there are so many conflicts that we fought on and worked on together – and by the way in all cases succeeded on – it’s nice to win.

“It’s nice to win, and we’ve won a lot, haven’t we Mr. Prime Minister? We’ve won a lot. We’re going to keep it going, by the way. You’ve given your love and loyalty to your nations, and tonight a room of grateful patriots says thank you.”

Frischling, in the words of the Times, ‘hired a computer expert with a PhD in machine punctuation to unteach Margaret normal grammar and syntax – and teach it to decipher Trump-speak instead.’ “It was still trying to punctuate it like it was English, versus trying to punctuate it like it was Trump,” he said.
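
The Times piece doesn’t detail how Margaret was retrained, but one plausible way to “unteach” standard grammar is to treat punctuation restoration as per-token classification and train it only on transcripts of the speaker in question. A hypothetical Python sketch of the data-preparation step – every name here is invented for illustration:

```python
# Hypothetical sketch: turn a punctuated transcript into (word, label)
# pairs for a punctuation-restoration classifier. Training on Trump
# transcripts rather than standard prose makes the model learn his
# cadence instead of textbook English grammar.
PUNCT = ",.?!"

def to_training_pairs(transcript):
    """Return (word, following-punctuation) pairs; "O" means none."""
    pairs = []
    for token in transcript.split():
        word = token.rstrip(PUNCT)
        label = token[len(word):] or "O"
        pairs.append((word.lower(), label))
    return pairs

print(to_training_pairs("It's nice to win, and we've won a lot, haven't we?"))
# [("it's", 'O'), ('nice', 'O'), ('to', 'O'), ('win', ','), ...]
```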

Once Margaret could transcribe the President’s speeches unhindered, its job became not just keeping a database of these remarks but analysing behavioural patterns. According to Frischling, behaviours Margaret has spotted include Trump being ‘more comfortable’ telling falsehoods when talking quickly, as well as signs of when Trump is genuinely angry, as opposed to putting it on for show.

One example came at the White House coronavirus briefing on April 23, where Trump – against all medical advice – suggested patients should be injected with disinfectant to kill the virus. When a Washington Post reporter questioned this suggestion, Trump’s response, according to Margaret, was borne out of genuine anger. Yet Frischling added that in many of the President’s more premeditated attacks on ‘fake news’ – of which the Washington Post has been a frequent target – there is little palpable anger on show.

As this publication has previously reported, the lines between real and fake news continue to be blurred – with the President himself an obvious target. In January last year, a ‘deepfake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network, with an employee later sacked for the error. In February, the President outlined an executive order, titled ‘Maintaining American Leadership in Artificial Intelligence’, exploring five key principles.

President Trump is by no means the only world leader whose sentence construction could be considered off-beat. As reported in the Times (h/t @arusbridger) last week, UK Prime Minister Boris Johnson answered a question on coronavirus testing from Keir Starmer, the leader of the opposition, thus:

“As I think is readily apparent, Mr Speaker, to everybody who has studied the, er, the situation, and I think the scientists would, er, confirm, the difficulty in mid-March was that, er, the, er, tracing capacity that we had – it had been useful… in the containment phase of the epidemic er, that capacity was no longer useful or relevant since the, er, transmission from individuals within the UK um meant that it exceeded our capacity.

“As we get the new cases down, er, we will have a team that will genuinely be able to track and, er, trace hundreds of thousands of people across the country, and thereby to drive down the epidemic. And so, er, I mean, to put it in a nutshell, it is easier, er, to do now – now that we have built up the team on the, on the way out – than it was as er, the epidemic took off.”

One can only imagine what Margaret would have made of that transcription job.

(Photo by Charles Deluvio on Unsplash)

World’s oldest defence think tank concludes British spies need AI
28 April 2020

The Royal United Services Institute (RUSI) says in an intelligence report that British spies will need to use AI to counter threats.

Based in Westminster, the RUSI is the world’s oldest think tank on international defence and security. Founded in 1831 by the first Duke of Wellington, Sir Arthur Wellesley, the RUSI remains a highly respected institution that’s as relevant today as ever.

AI is rapidly advancing the capabilities of adversaries. In its report, the RUSI says that hackers – both state-sponsored and independent – are likely to use AI for cyberattacks on the web and political systems.

Adversaries “will undoubtedly seek to use AI to attack the UK”, the RUSI notes.

Threats could emerge in a variety of ways. Deepfakes, which use a neural network to generate convincing fake videos and images, are one example of a threat already being posed today. With the US elections coming up, there are concerns that deepfakes of political figures could be used for voter manipulation.

AI could also be used for powerful new malware which mutates to avoid detection. Such malware could even infect and take control of emerging technologies such as driverless cars, smart city infrastructure, and drones.

The RUSI believes that humans will struggle to counter AI threats alone and will need the assistance of automation.

“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload,” said Alexander Babuta, one of the report’s authors. “It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures.”

GCHQ, the UK’s signals intelligence service, commissioned the RUSI’s independent report. Ken McCallum, the new head of MI5 – the UK’s domestic counter-intelligence and security agency – has already said that greater use of AI will be one of his priorities.

The RUSI believes AI will be of little value for “predictive intelligence”, such as forecasting a terrorist act before it occurs. Highlighting counter-terrorism specifically, the RUSI says such acts are too infrequent to provide patterns, in contrast to other criminal acts. The motivations behind terrorist acts can also change very quickly in response to world events.

All of this raises concerns about the automation of discrimination. The RUSI calls for more of an “augmented” intelligence – whereby technology assists in sifting through large amounts of data, but decisions are ultimately taken by humans – rather than leaving it all up to the machines.

In terms of global positioning, the RUSI recognises the UK’s strength in AI, with talent emerging from the country’s world-leading universities, capabilities within GCHQ, bodies like the Alan Turing Institute and the Centre for Data Ethics and Innovation, and yet more in the private sector.

While it’s widely acknowledged that countries like the US and China have far more resources overall to throw at AI advancements, the RUSI believes the UK has the potential to be a leader in the technology within a much-needed ethical framework. However, they say it’s important not to be too preoccupied with the possible downsides.

“There is a risk of stifling innovation if we become overly-focused on hypothetical worst-case outcomes and speculations over some dystopian future AI-driven surveillance network,” argues Babuta.

“Legitimate ethical concerns will be overshadowed unless we focus on likely and realistic uses of AI in the short-to-medium term.”

You can find a copy of the RUSI’s full report here (PDF).

(Photo by Chris Yang on Unsplash)

Deepfake shows Nixon announcing the moon landing failed
6 February 2020

In the latest creepy deepfake, former US President Richard Nixon is shown announcing that the first moon landing failed.

Nixon was known to be a divisive figure, but certainly a recognisable one. The video shows Nixon in the Oval Office, surrounded by flags, giving a presidential address to an eagerly awaiting world.

However, unlike the actual first moon landing – unless you’re a subscriber to conspiracy theories – this one failed.

“These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery,” Nixon says in his trademark growl. “But they also know that there is hope for mankind in their sacrifice.”

What makes the video more haunting is that the speech itself is real. Although never broadcast, it was written for Nixon by speechwriter William Safire in case the moon landing did fail.

The deepfake was created by a team from MIT’s Center for Advanced Virtuality and put on display at the IDFA documentary festival in Amsterdam.

In order to recreate Nixon’s famous voice, the MIT team partnered with technicians from Ukraine and Israel and used advanced machine learning techniques.

We’ve covered many deepfakes here on AI News. While many are amusing, there are serious concerns that deepfakes could be used for malicious purposes such as blackmail or manipulation.

Ahead of the US presidential elections, some campaigners have worked to increase the awareness of deepfakes and get social media platforms to help tackle any dangerous videos.

Back in 2019, House Speaker Nancy Pelosi was the victim of a doctored video – not strictly a deepfake – which went viral across social media and made her appear drunk, slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

As part of a bid to persuade the social media giant to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg – making it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Last month, Facebook pledged to crack down on deepfakes ahead of the US presidential elections. However, the new rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to those wanting a firm stance against potential voter manipulation.

Deepfake app puts your face on GIFs while limiting data collection
14 January 2020

A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology.

Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name.

RefaceAI was previously used in a face swapping app called Reflect. Elon Musk once used Reflect to put his face on Dwayne Johnson’s body. 

The app is a lot of fun, but – after concerns about viral Russian app FaceApp – many will be wondering just how much data is being collected in return.

Doublicat’s developers are upfront in asking for consent to store your photos upon first opening the app, and this is confirmed in their privacy policy: “We may collect the photos, that you take with your camera while using our application.”

However, Doublicat says that photos are only stored on their server for 24 hours before they’re deleted. “The rest of the time your photos used in Doublicat application are stored locally on your mobile device and may be removed any time by either deleting these photos from your mobile device’s file system.”

The app also collects data about facial features, but only the vector representations of each person’s face are stored. Doublicat assures that the facial recognition data collected “is not biometric data” and is deleted from their servers within 30 calendar days.

“In no way will Doublicat use your uploaded content for face recognition as Doublicat does not introduce the face recognition technologies or other technical means for processing biometric data for the unique identification or authentication of a user.”
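
Doublicat hasn’t published its pipeline, but the open-source face_recognition library illustrates what a “vector representation” of a face typically means in practice: a small numeric embedding rather than the photo itself. A minimal sketch – the file names are placeholders:

```python
import numpy as np
import face_recognition  # pip install face_recognition

# Reduce a photo to a 128-dimensional embedding (assumes a face is found).
image = face_recognition.load_image_file("selfie.jpg")  # placeholder path
embedding = face_recognition.face_encodings(image)[0]   # numpy array, shape (128,)

# Storing only the vector still allows similarity checks without the photo:
other = face_recognition.face_encodings(
    face_recognition.load_image_file("another_selfie.jpg"))[0]
distance = np.linalg.norm(embedding - other)  # below ~0.6 usually means a match
print(embedding.shape, round(distance, 3))
```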

The amount of data Doublicat can collect is limited compared to some alternatives. Apps such as Zao require users to 3D model their face whereas Doublicat only takes a front-facing picture.

RefaceAI is now looking to launch an app which can make deepfake videos rather than just GIFs. The move is likely to be controversial given the concerns around deepfakes and how such videos could be used for things such as political manipulation.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake in this case) could be used to cause reputational damage and even swing votes.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections. One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”.

Earlier this month, Facebook announced new policies around deepfakes. Any deepfake video that is designed to be misleading will be banned. The problem with the rules is they don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to anyone wanting a firm stance against manipulation.

Doublicat is available for Android and iOS.

Facebook pledges crackdown on deepfakes ahead of the US presidential election
8 January 2020

Facebook has pledged to crack down on misleading deepfakes ahead of the US presidential election later this year.

Voter manipulation is a concern for any functioning democracy and deepfakes provide a whole new challenge for social media platforms.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake) could be used to cause reputational damage and swing votes.

Facebook refused to remove the video of Nancy Pelosi and instead said it would display an article from a third-party fact-checking website highlighting that it had been edited, and would take measures to limit its reach. The approach, of course, was heavily criticised.

Under Facebook’s new rules, deepfake videos designed to be misleading will be banned. The problem with the rules is they don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to those wanting a firm stance against manipulation.

In the age of “fake news,” many people have learned not to believe everything they read. Likewise, an increasing number of people know how easily images can be manipulated. Deepfake videos pose such a concern because the wider public is not yet sufficiently aware of their existence or of how to spot them.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections.

One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”. Another prediction is that Iran and China may join Russia as sources of disinformation, the former perhaps now being even more likely given recent escalations between the US and Iran and the desire for non-military retaliation.

Legislation is being introduced to criminalise the production of deepfakes that don’t disclose they’ve been modified, but the best approach is to stop them from being widely shared in the first place.

“A better approach, and one that avoids the danger of overreaching government censorship, would be for the social media platforms to improve their AI-screening technology, enhance human review, and remove deepfakes before they can do much damage,” the report suggests.

The month after Facebook refused to remove the edited video of Pelosi, a deepfake created by Israeli startup Canny AI aimed to raise awareness of the issue by making it appear like Facebook CEO Mark Zuckerberg said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Canny AI’s deepfake was designed to be clearly fake but it shows how easy it’s becoming to manipulate people’s views. In a tense world, it’s not hard to imagine what devastation could be caused simply by releasing a deepfake of a political leader declaring war or planning to launch a missile.

Experts discuss the current biggest threats posed by AI
4 December 2019

Several experts have given their thoughts on what threats AI poses, and unsurprisingly fake content is the current biggest danger.

The experts, who were speaking on Tuesday at the WSJ Pro Cybersecurity Executive Forum in New York, believe that AI-generated content is of pressing concern to our societies.

Camille François, chief innovation officer at social media analytics firm Graphika, says that deepfake articles pose the greatest danger.

We’ve already seen what human-generated “fake news” and disinformation campaigns can do, so it won’t come as much of a surprise that involving AI in that process is seen as a leading threat.

François highlights that fake articles and disinformation campaigns today rely on a lot of manual work to create and spread a false message.

“When you look at disinformation campaigns, the amount of manual labour that goes into creating fake websites and fake blogs is gigantic,” François said.

“If you can just simply automate believable and engaging text, then it’s really flooding the internet with garbage in a very automated and scalable way. So that I’m pretty worried about.”

In February, OpenAI unveiled its GPT-2 tool which generates convincing fake text. The AI was trained on 40 gigabytes of text spanning eight million websites.

OpenAI decided against publicly releasing GPT-2 fearing the damage it could do. However, in August, two graduates decided to recreate OpenAI’s text generator.

The graduates said they do not believe their work currently poses a risk to society and released it to show the world what was possible without being a company or government with huge amounts of resources.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” said Vanya Cohen, one of the graduates, to Wired.
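
The graduates’ own code isn’t needed to see François’s point: with GPT-2 weights now publicly available through the Hugging Face transformers library, generating plausible text at scale takes only a few lines. A minimal sketch – the prompt is illustrative:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A single prompt can be expanded into any number of plausible passages.
prompt = "Scientists announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(
    input_ids,
    max_length=80,           # length of each generated passage
    do_sample=True,          # sample rather than pick the most likely token
    top_p=0.95,              # nucleus sampling keeps the text coherent
    num_return_sequences=3,  # three different fakes from one prompt
    pad_token_id=tokenizer.eos_token_id,
)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True), "\n---")
```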

Speaking on the same panel as François at the WSJ event, Celeste Fralick, chief data scientist and senior principal engineer at McAfee, recommended that companies partner with firms specialising in detecting deepfakes.

Among the scariest AI-related cybersecurity threats are “adversarial machine learning attacks”, whereby a hacker finds and exploits a vulnerability in an AI system.

Fralick provides the example of an experiment by Dawn Song, a professor at the University of California, Berkeley, in which a driverless car was fooled into believing a stop sign was a 45 MPH speed limit sign just by using stickers.

According to Fralick, McAfee itself has performed similar experiments and discovered further vulnerabilities. In one, a 35 MPH speed limit sign was once again modified to fool a driverless car’s AI.

“We extended the middle portion of the three, so the car didn’t recognise it as 35; it recognised it as 85,” she said.
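
The sign experiments are physical-world attacks, but the underlying principle is easiest to see in its digital form. Below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch – an illustration of the general technique, not McAfee’s or Song’s actual method:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge each pixel in the direction that most increases the
    classifier's loss: imperceptible to humans, misleading to the model."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in valid range
```

Sticker and patch attacks apply the same idea under a physical constraint: the perturbation must be confined to a small printable region of the scene.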

Both panellists believe entire workforces need to be educated about the threats posed by AI in addition to employing strategies for countering attacks.

There is “a great urgency to make sure people have basic AI literacy,” François concludes.

Deepfake has Johnson and Corbyn advocating each other for Britain’s next PM
12 November 2019

A think tank has released two deepfake videos which appear to show election rivals Boris Johnson and Jeremy Corbyn advocating each other for Britain’s top role.

The clips were produced by Future Advocacy and intend to show that people can no longer necessarily trust what they see in videos – not just question what they read and hear.

In the era of fake news, people are becoming increasingly aware not to believe everything they read. Training the general population not to always believe what they can see with their own eyes is a lot more challenging.

At the same time, it’s also important in a democracy that media plurality is maintained and not too much influence is centralised to a handful of “trusted” outlets. Similarly, people cannot be allowed to just call something fake news to avoid scrutiny.

Future Advocacy highlights four key challenges:

  1. Detecting deepfakes – whether society can create the means for detecting a deepfake directly at the point of upload or once it has become widely disseminated.
  2. Liar’s dividend – a phenomenon in which genuine footage of controversial content can be dismissed by the subject as a deepfake, despite it being true.
  3. Regulation – what should the limitations be with regards to the creation of deepfakes and can these be practically enforced?
  4. Damage limitation – managing the impacts of deepfakes when regulation fails and the question of where responsibility should lie for damage limitation.

Areeq Chowdhury, Head of Think Tank at Future Advocacy, said:

“Deepfakes represent a genuine threat to democracy and society more widely. They can be used to fuel misinformation and totally undermine trust in audiovisual content.

Despite warnings over the past few years, politicians have so far collectively failed to address the issue of disinformation online. Instead, the response has been to defer to tech companies to do more. The responsibility for protecting our democracy lies in the corridors of Westminster not the boardrooms of Silicon Valley.

By releasing these deepfakes, we aim to use shock and humour to inform the public and put pressure on our lawmakers. This issue should be put above party politics. We urge all politicians to work together to update our laws and protect society from the threat of deepfakes, fake news, and micro-targeted political adverts online.”

Journalists are going to have to become experts in spotting fake content to maintain trust and integrity. Social media companies will also have to take some responsibility for the content they allow to spread on their platforms.

Social media moderation

Manual moderation of every piece of content that’s posted to a network like Facebook or Twitter is simply unfeasible, so automation is going to become necessary to at least flag potentially offending content.

But what constitutes offending content? That is the question social media giants are battling with in order to strike the right balance between free speech and expression while protecting their users from manipulation.

Just last night, Twitter released its draft policy on deepfakes and is currently accepting feedback on it.

The social network proposes the following steps for tweets it detects as featuring potentially manipulated content:

  • Place a notice next to tweets that share synthetic or manipulated media.
  • Warn people before they share or like tweets with synthetic or manipulated media.
  • Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.

Twitter defines deepfakes as “any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.”

Twitter’s current definition sounds like it could end up flagging the internet’s favourite medium, memes, as deepfakes. However, there’s a compelling argument that memes often should at least be flagged as modified from their original intent.

Take the infamous “This is fine” meme that was actually part of a larger comic by KC Green before it was manipulated for individual purposes.

In this Vulture piece, Green gives his personal stance that he’s mostly fine with people using his work as a meme so long as they’re not monetising it for themselves or using it for political purposes.

On July 25th 2016, the official Republican Party Twitter account used Green’s work and added “Well ¯\_(ツ)_/¯ #DemsInPhilly #EnoughClinton”. Green later tweeted: “Everyone is in their right to use this is fine on social media posts, but man o man I personally would like @GOP to delete their stupid post.”

Raising awareness of deepfakes

Bill Posters is a UK artist known for creating subversive deepfakes of famous celebrities, including Donald Trump and Kim Kardashian. Posters was behind the viral deepfake of Mark Zuckerberg for the Spectre project which AI News reported on earlier this year.

Posters commented on his activism using deepfakes:

“We’ve used the biometric data of famous UK politicians to raise awareness to the fact that without greater controls and protections concerning personal data and powerful new technologies, misinformation poses a direct risk to everyone’s human rights including the rights of those in positions of power.

It’s staggering that after 3 years, the recommendations from the DCMS Select Committee enquiry into fake news or the Information Commissioner’s Office enquiry into the Cambridge Analytica scandals have not been applied to change UK laws to protect our liberty and democracy.

As a result, the conditions for computational forms of propaganda and misinformation campaigns to be amplified by social media platforms are still in effect today. We urge all political parties to come together and pass measures which safeguard future elections.”

As the UK heads towards its next major election, there is sure to be much debate around potential voter manipulation. Many have pointed towards Russian interference in Western democracies but there’s yet to be any solid evidence of that being the case.

Opposition parties, however, have criticised the incumbent government in the UK as refusing to release a report into Russian interference. Former US presidential candidate Hillary Clinton branded it “inexplicable and shameful” that the UK government has not yet published the report.

Allegations of interference and foul play will likely increase in the run-up to the election, but Future Advocacy is doing a great job in highlighting to the public that not everything you see can be believed.

California introduces legislation to stop political and porn deepfakes
7 October 2019

Deepfake videos have the potential to do unprecedented amounts of harm so California has introduced two bills designed to limit them.

For those unaware, deepfakes use machine learning to make a person appear to be convincingly doing or saying things which they never did or said.

There are two main concerns about deepfake videos:

  • Personal defamation – An individual is made to appear in a sexual and/or humiliating scene either for blackmail purposes or to stain that person’s image.
  • Manipulation – An influential person, typically a politician, is made to appear to have said something in order to sway public opinion and perhaps even swing votes.

Many celebrities have become victims of deepfake porn. One of the bills signed into law by the state of California last week allows victims to sue anyone who puts their image into a pornographic video without consent.

Earlier this year, Facebook CEO Mark Zuckerberg became the victim of a deepfake. Zuckerberg was portrayed as saying: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Clearly, to most of us, the Zuckerberg video is a fake. The video was actually created by Israeli startup Canny AI as part of a commissioned art installation called Spectre that was on display at the Sheffield Doc/Fest in the UK.

A month prior to the Zuckerberg video, Facebook refused to remove a doctored video of House Speaker Nancy Pelosi which aimed to portray her as intoxicated. If such fakes are allowed to go viral on huge social media platforms like Facebook, they will pose huge societal problems.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

California’s second bill legislates against posting any manipulated video of a political candidate, albeit only within 60 days of an election.

California Assembly representative Marc Berman said:

“Voters have a right to know when video, audio, and images that they are being shown, to try to influence their vote in an upcoming election, have been manipulated and do not represent reality.

[That] makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters.”

While many people now know not to trust everything they read, most of us are still accustomed to believing what we see with our eyes. That’s what poses the biggest threat with deepfake videos.

Don’t believe your eyes: Exploring the positives and negatives of deepfakes
5 August 2019

In 2018 the Reddit community r/deepfakes gained international attention thanks to a piece of investigative journalism by Samantha Cole, deputy editor at VICE.

Members of the forum had been using a burgeoning technology to superimpose celebrities’ faces onto pornographic videos. For the general public – and no doubt the unwitting stars – it was a shock. Most were unaware this technology existed. Very few believed it was possible to produce such realistic footage.

The videos had been created with a Generative Adversarial Network (GAN), a machine learning model in which two neural networks – a generator and a discriminator – compete until the generator learns to mimic the distribution of real data. In this case, that meant superimposing one human face onto another, complete with realistic movement.
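
That adversarial dynamic is compact enough to sketch in code. A deliberately toy PyTorch version follows – real face-swap models are vastly larger and operate on video frames, but the generator-versus-discriminator loop is the same:

```python
import torch
import torch.nn as nn

# Generator: maps 64-dimensional noise to a fake 28x28 (flattened) "image".
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
# Discriminator: scores how "real" a flattened image looks.
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, 784)     # stand-in for a batch of real images
    fake = G(torch.randn(32, 64))  # the generator's forgeries

    # Train D to label real images 1 and forgeries 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train G to make D score its forgeries as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```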

Before long the term ‘deepfake’ became part of the common lexicon, both in porn circles and beyond, and its unceremonious introduction to the world set the tone for its negative connotations.

But are the origins and misuses of deepfakery causing us to take an unnecessarily dim view of a potentially useful technology?

Deep trouble

Fast forward to 2019 and the quality of deepfakes is improving at an exponential rate. Their potential for producing revenge porn, malicious hoaxes, fraud, misinformation, and blackmail is often the focus of news articles and think pieces – and not without reason.

Recently, deepfake videos of notable figures like Mark Zuckerberg, US House Speaker Nancy Pelosi, and former President Barack Obama have all made the news. And it’s becoming increasingly difficult to tell them apart from the real thing.

After President Trump retweeted a manipulated video of Ms Pelosi, presumably under the impression it was real, it became clear that the political implications of this technology could be phenomenal – especially in the age of ‘fake news’ and social media manipulation.  

In fact, the US Intelligence Committee recently issued a warning ahead of the 2020 elections, claiming: “Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies […] to influence campaigns directed against the United States and our allies and partners.”

Ten years ago, this would have seemed like sci-fi fantasy. Today, in the wake of Cambridge Analytica, it is our reality.

Beyond the knee-jerk

The issue surrounding deepfakes is closely aligned to the pertinent question of our age – what happens when technology advances beyond our understanding of its implications?

People are concerned, and rightly so. But what’s often overlooked in favour of knee-jerk reactionism is the potential of deepfakes to do good.

Recently, GANs have been used for educational purposes, like the Dali Museum’s resurrection of Salvador Dali (it’s less terrifying than it sounds), and in medical training, where deepfake images are helping trainee doctors, nurses, and surgeons practise their profession.

GANs can also be used across a range of industries to improve personalisation and immersion. Retail is an obvious example. Before long it may be possible for customers to see exactly how they’d look in the products they’re browsing online.

Likewise, in entertainment, we may not be so far away from becoming the stars of our own summer blockbusters. Or at the very least, chief casting directors.

Really, the possibilities are endless. And what happens next will determine whether we ever get to explore them.    

Striking the right balance

With Congress taking steps to potentially criminalize deepfakes altogether, GAN technology is about to face a defining moment. But legislation that’s too punitive may be a mistake when there’s potentially so much to gain – which is why policy-makers and technologists need to work together to find a solution.

At JED.ai, we believe in striking a balance: finding a way to protect vulnerable people from nefarious deepfake use, while still creating an environment that encourages technical exploration.

Current solutions revolve around technology that can identify deepfakes. But as the technology used to create these videos improves, these countermeasures will have to keep pace.

In response, we’re taking a different approach – working on a concept that would see all deepfakes registered on a secure, decentralised registry. We believe this will help to create the right environment to explore the positive applications of GAN technology, while protecting it – and the wider public – from exploitation.
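
No technical design for such a registry has been published, so the following Python sketch is purely illustrative of the concept: fingerprint each deepfake and append it to a tamper-evident log. Every name and structure below is hypothetical:

```python
import hashlib
import json
import time

class DeepfakeRegistry:
    """Toy append-only registry: each entry chains to the previous one's
    hash, so earlier registrations can't be silently altered."""

    def __init__(self):
        self.entries = []

    def register(self, video_path, creator):
        with open(video_path, "rb") as f:
            fingerprint = hashlib.sha256(f.read()).hexdigest()
        record = {
            "fingerprint": fingerprint,
            "creator": creator,
            "timestamp": time.time(),
            "prev": self.entries[-1]["entry_hash"] if self.entries else "genesis",
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record

    def is_registered(self, video_path):
        with open(video_path, "rb") as f:
            fingerprint = hashlib.sha256(f.read()).hexdigest()
        return any(e["fingerprint"] == fingerprint for e in self.entries)
```

A platform could then check uploads against such a registry and automatically label synthetic media that its creators had registered.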

With the right legislation and security solutions in place, there’s no reason deepfakes can’t become synonymous with more positive headlines – and we can’t wait to see what the future might bring.

About the author: Jedidiah Francis is the founder of Jed.ai Labs, a Startup Studio dedicated to using Machine Learning to improve the way we live, work and play.
