University College London: Deepfakes are the ‘most serious’ AI crime threat
https://news.deepgeniusai.com/2020/08/06/university-college-london-experts-deepfakes-ai-crime-threat/
Thu, 06 Aug 2020 – AI News

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. 31 experts were then asked to rank them by potential severity.
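The study's exact aggregation method isn't described here, but one simple way to combine per-expert severity rankings into an overall ordering is by mean rank. The threat names and rank values below are invented for illustration; they are not the study's data.

```python
from statistics import mean

# Hypothetical example: each threat maps to the rank (1 = most severe)
# assigned by each of several experts. Values are made up.
rankings = {
    "deepfakes": [1, 1, 2, 1],
    "driverless car attacks": [2, 3, 1, 2],
    "spear phishing": [3, 2, 3, 3],
}

# A lower mean rank means the threat was judged more severe overall.
overall = sorted(rankings, key=lambda threat: mean(rankings[threat]))
print(overall)  # deepfakes comes out on top
```

In practice a study like this would also weight factors such as achievability and profit, but mean rank captures the basic idea of turning 31 individual orderings into one list.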

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today still has to be created by humans, such as those working in Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often have patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US house speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The fake of Pelosi was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences of a fake video of the president announcing an imminent strike on a country like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat to release such videos – and the embarrassment caused – could lead to some paying a ransom to keep it from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services falsely marketed as AI, such as security screening and targeted advertising solutions. The researchers believe leading people to think a product is AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots”, which could enter through access points of a property to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be thwarted through methods such as letterbox cages.

Similarly, the researchers note that AI-based stalking is damaging for individuals, but it isn’t considered a major threat as it could not operate at scale.

You can find the researchers’ full paper in the journal Crime Science.

(Photo by Bill Oxford on Unsplash)

AI tags potential criminals before they’ve done anything
https://news.deepgeniusai.com/2018/11/28/ai-tags-potential-criminals/
Wed, 28 Nov 2018 – AI News

British police want to use AI to highlight who is at risk of becoming a criminal before they’ve actually committed any crime.

Although it sounds like a dystopian nightmare, there are clear benefits.

Resources and outreach programmes can be allocated to attempt to prevent a crime, stop anyone from becoming a victim, and avoid the costs associated with prosecuting and jailing someone.

With prisons overburdened and space limited, reducing the need to lock someone up is a win for everyone. Courts can also prioritise other matters to improve the efficiency of the whole legal infrastructure.

The proposed system is called the National Data Analytics Solution (NDAS) and uses data from local and national police databases.

According to NDAS’ project leader, over a terabyte of data has been collected from the aforementioned databases. This data includes logs of committed crimes in addition to records on around five million identifiable people.

There are over 1,400 indicators within the data which the AI uses to calculate an individual’s risk of committing a crime. Such indicators could include past offences, whether the person had assistance, and whether those in their network are criminals.
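NDAS’ actual model hasn’t been published, but a risk score over weighted indicators can be sketched in a few lines. The indicator names and weights below are invented purely to illustrate the idea, and a real system would learn its weights from data rather than hard-code them.

```python
import math

# Hypothetical indicator weights -- NOT from NDAS, purely illustrative.
WEIGHTS = {
    "past_offences": 0.6,
    "known_criminal_contacts": 0.3,
    "prior_victimisation": 0.1,
}

def risk_score(indicators: dict) -> float:
    """Weighted sum of indicator values, squashed to 0-1 with a logistic."""
    z = sum(WEIGHTS[k] * v for k, v in indicators.items() if k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Someone with three past offences and one known criminal contact:
print(round(risk_score({"past_offences": 3, "known_criminal_contacts": 1}), 2))
```

With 1,400+ indicators the real system would be far larger, but the principle of mapping many signals to one probability-like score is the same.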

Alleviating some fears, there are no plans to arrest someone before they’ve committed a crime based on their potential. The system is designed as a preventative measure.


INTERPOL investigates how AI will impact crime and policing
https://news.deepgeniusai.com/2018/07/17/interpol-ai-impact-crime-policing/
Tue, 17 Jul 2018 – AI News

INTERPOL hosted an event in Singapore bringing leading experts together with the aim of examining how AI will affect crime and prevention.

The event, organised by INTERPOL and the UNICRI Centre for AI and Robotics, was held at the former’s Global Complex for Innovation. Experts from across industries gathered to discuss issues and several private sector companies gave live demonstrations of related projects.

Some technological advances in AI pose a threat. In a recent interview, Irakli Beridze, head of the UNICRI Centre for AI and Robotics, provided us with an example of AI potentially being used for impersonation. This could eventually lead to completely automated fraud.

Speaking about the Singapore event, Beridze said:

“I believe that we are taking critical first steps to building a platform for ‘future-proofing’ law enforcement.

Initiatives such as this will help us to prepare for potential future types of crime and capitalize on technological advancements to develop new and effective tools for law enforcement.”

Bringing policing up-to-date on these emerging threats is vital. Some 50 participants in law enforcement from 13 countries attended the event to exchange their expertise with the private sector and academia.

Some of the potential use cases for AI in law enforcement were fascinating. Discussions were held about things such as conducting virtual autopsies, predicting crime to optimise resources, detecting suspicious behaviour, combining AI with blockchain technology for traceability, and the automation of patrol vehicles.

Anita Hazenberg, Director of INTERPOL’s Innovation Centre, commented:

“Innovation is not a matter for police alone. Strong partnerships between all stakeholders with expertise is necessary to ensure police can quickly adapt to future challenges and formulate inventive solutions.”

Naturally, there are many obstacles to overcome both technologically and socially before such ideas can be used.

One major concern is that of AI bias. Especially where things such as facial recognition and behaviour detection are concerned, there’s potential for automated racial profiling.

A 2010 study by researchers at NIST and the University of Texas at Dallas has already found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in western countries are more accurate at detecting Caucasians.
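The kind of disparity that study describes is exactly what a per-group accuracy audit surfaces. The sketch below uses made-up match results, not the study’s data, to show how comparing accuracy across demographic groups works.

```python
from collections import defaultdict

# (group, predicted_match, true_match) -- fabricated results for illustration.
results = [
    ("east_asian", True, True), ("east_asian", True, True),
    ("east_asian", False, True),
    ("caucasian", True, True), ("caucasian", False, True),
    ("caucasian", False, True),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += predicted == actual  # bool counts as 0 or 1

# Per-group accuracy: a gap between groups indicates biased performance.
accuracy = {g: correct[g] / total[g] for g in total}
print(accuracy)
```

Auditing a deployed system would use far larger, properly sampled test sets, but the per-group comparison itself is this simple, which is why it is a standard first check for bias.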

Private sector companies gave several live demonstrations at the event, covering virtual communications, facial recognition, and incident prediction and response optimisation systems.

Police forces are planning to invest heavily in AI. Singapore Police, for example, have deployed patrol robots and shared their experience with them during the conference.

Next on INTERPOL’s agenda is drones. The organisation will be holding a drone expert forum in August to further assist police in understanding how drones can be a tool, a threat, and a source of evidence.

What impact do you think AI will have on crime and policing?

