University College London: Deepfakes are the ‘most serious’ AI crime threat
AI News – Thu, 06 Aug 2020

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. 31 experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today still has to be created by humans, such as workers in the likes of Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often contain patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The deepfake of Pelosi was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences of a fake video of the president announcing an imminent strike on somewhere like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to superimpose the faces of celebrities onto adult performers. While fake, the threat of releasing such videos – and the embarrassment they would cause – could lead some to pay a ransom to keep them from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services wrongly labelled as AI, such as security screening and targeted advertising solutions. The researchers believe that leading people to believe such products are AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots”, which could enter a property through access points to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be prevented through methods such as letterbox cages.

Similarly, the researchers note that while AI-based stalking is damaging for individuals, it isn’t considered a major threat because it cannot operate at scale.

You can find the researchers’ full paper in the journal Crime Science.

(Photo by Bill Oxford on Unsplash)

Experts discuss the current biggest threats posed by AI
AI News – Wed, 04 Dec 2019

Several experts have given their thoughts on what threats AI poses, and unsurprisingly fake content is the current biggest danger.

The experts, who were speaking on Tuesday at the WSJ Pro Cybersecurity Executive Forum in New York, believe that AI-generated content is of pressing concern to our societies.

Camille François, chief innovation officer at social media analytics firm Graphika, says that deepfake articles pose the greatest danger.

We’ve already seen what human-generated “fake news” and disinformation campaigns can do, so it won’t be of much surprise to many that involving AI in that process is a leading threat.

François highlights that fake articles and disinformation campaigns today rely on a lot of manual work to create and spread a false message.

“When you look at disinformation campaigns, the amount of manual labour that goes into creating fake websites and fake blogs is gigantic,” François said.

“If you can just simply automate believable and engaging text, then it’s really flooding the internet with garbage in a very automated and scalable way. So that I’m pretty worried about.”
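The scale problem François describes can be illustrated with a deliberately crude sketch: even a bigram Markov chain – far weaker than GPT-2, and no part of any system mentioned here – can churn out endless superficially fluent variations at near-zero cost. The corpus and function names below are invented purely for illustration.

```python
# Toy illustration only: a bigram Markov chain showing how trivial
# automation can mass-produce plausible-looking text. This is not GPT-2,
# nor anyone's production system; the corpus is invented.
import random
from collections import defaultdict

corpus = ("the election was rigged and the media is lying and "
          "the truth is hidden and the people are angry").split()

# Map each word to the words observed to follow it.
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate up to `length` words by walking the chain from `start`."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < length and chain[words[-1]]:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

# Each call is near-free: flooding a platform with variations costs nothing.
print(generate("the", 12, seed=1))
print(generate("the", 12, seed=2))
```

The point is not the quality of the output – a model like GPT-2 is vastly more fluent – but the economics: once generation is automated, volume stops being the bottleneck.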

In February, OpenAI unveiled its GPT-2 tool which generates convincing fake text. The AI was trained on 40 gigabytes of text spanning eight million websites.

OpenAI initially decided against publicly releasing the full version of GPT-2, fearing the damage it could do. However, in August, two graduates recreated OpenAI’s text generator.

The graduates said they do not believe their work currently poses a risk to society and released it to show the world what was possible without being a company or government with huge amounts of resources.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” said Vanya Cohen, one of the graduates, to Wired.

Speaking on the same panel as François at the WSJ event, Celeste Fralick, chief data scientist and senior principal engineer at McAfee, recommended that companies partner with firms specialising in detecting deepfakes.

Among the scariest AI-related cybersecurity threats are “adversarial machine learning” attacks, whereby a hacker finds and exploits a vulnerability in an AI system.

Fralick provides the example of an experiment by Dawn Song, a professor at the University of California, Berkeley, in which a driverless car was fooled into believing a stop sign was a 45 MPH speed limit sign just by using stickers.

According to Fralick, McAfee itself has performed similar experiments and discovered further vulnerabilities. In one, a 35 MPH speed limit sign was once again modified to fool a driverless car’s AI.

“We extended the middle portion of the three, so the car didn’t recognise it as 35; it recognised it as 85,” she said.
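The mechanism behind such attacks can be sketched in miniature. The following is a toy illustration, not McAfee’s or Professor Song’s actual experiment: a hand-built two-feature “sign classifier” whose weights and labels are entirely invented, attacked with an FGSM-style perturbation (nudging the input against the sign of the model’s gradient).

```python
# Toy FGSM-style illustration. The "classifier", its weights, and the
# labels are invented for demonstration; real attacks perturb image pixels
# against the gradients of a deep network.
import math

w = [2.0, -1.5]   # hypothetical trained weights
b = 0.1

def score(x):
    # Linear decision function: positive -> "35 mph", negative -> "85 mph".
    return w[0] * x[0] + w[1] * x[1] + b

def predict(x):
    return "35 mph" if score(x) > 0 else "85 mph"

x = [1.0, 0.5]    # a benign input, correctly read as 35 mph

# FGSM: move each feature a small step in the direction that most
# reduces the correct class's score: x' = x - eps * sign(dscore/dx).
eps = 0.8
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(predict(x))      # 35 mph
print(predict(x_adv))  # 85 mph: a small, targeted change flips the label
```

In the physical world, the stickers on the sign play the role of these eps-sized perturbations: changes small enough to look innocuous to a human, but chosen precisely to push the model across its decision boundary.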

Both panellists believe entire workforces need to be educated about the threats posed by AI in addition to employing strategies for countering attacks.

There is “a great urgency to make sure people have basic AI literacy,” François concludes.

Avast: AI, IoT, and fake apps top 2019 cybersecurity threats
AI News – Fri, 04 Jan 2019

According to Avast’s annual Threat Landscape Report, the biggest cybersecurity threats in 2019 will be AI, IoT, and fake apps.

Those who follow cybersecurity will likely be unsurprised at the list, but Avast goes into the specifics of each threat.

“This year, we celebrated the 30th anniversary of the World Wide Web. Fast forward thirty years and the threat landscape is exponentially more complex, and the available attack surface is growing faster than it has at any other point in the history of technology,” commented Ondrej Vlcek, President of Consumer at Avast.

“People are acquiring more and varied types of connected devices, meaning every aspect of our lives could be compromised by an attack. Looking ahead to 2019, these trends point to a magnification of threats through these expanding threat surfaces.”

Adversarial AI

AI has primarily been used to aid in general tasks or, in the cybersecurity realm, to recognise and defend against evolving threats. That is now changing as AI goes on the offence.

Avast predicts a greater number of ‘DeepAttacks’ in 2019. These attacks, which first emerged last year, use AI to generate convincing media designed to evade security controls or fool human users.

One example of a DeepAttack was a fake video showing former President Obama delivering sentences he never actually said. The video was created for demonstration purposes by BuzzFeed, without malicious intent.

Some will use DeepAttacks to pretend to be people they’re not, potentially convincing unaware victims to hand over bank details or perform tasks.

As seen with the ‘DeepFakes’ trend of using AI to create adult videos featuring celebrity faces, similar videos could also be used to blackmail or embarrass people from all walks of society.

Evolving IoT threats

The Internet of Things (IoT) has already caused major problems – from botnets such as Mirai, to hackers virtually entering people’s homes.

Manufacturers often continue to prioritise getting new products out the door before competitors, with security remaining a dangerous afterthought.

Avast’s research has found that manufacturers also overlook security to keep their costs low. In the coming year, Avast believes we’ll see IoT malware evolve much as PC and mobile malware did.

Fake Mobile Apps

Speaking of mobile threats, Avast foresees a continued growth of fake apps containing malware attempting to make their way onto users’ devices.

With some developers choosing to avoid official app stores – as we saw with Epic Games’ decision to distribute Fortnite on Android outside Google Play – there is even greater potential for hackers to infect devices.

That doesn’t mean sticking to official stores guarantees safety. Avast flagged several fake apps which appeared even on the Google Play Store.
