infosec – AI News
https://news.deepgeniusai.com

F-Secure details nature-inspired AI project harnessing ‘swarm intelligence’
https://news.deepgeniusai.com/2019/11/21/fsecure-nature-ai-project-swarm-intelligence/
Thu, 21 Nov 2019 12:53:59 +0000

Cybersecurity giant F-Secure has detailed Project Blackfin, an AI initiative which harnesses nature-inspired “swarm intelligence” techniques.

The concept sounds similar to Fetch.ai in that decentralised autonomous AI agents will collaborate in order to achieve common goals.

Cambridge-based Fetch.ai is focusing its efforts on autonomous AI agents for IoT purposes. Naturally, F-Secure is seeking to use such agents to further improve its detection and response capabilities.

Matti Aksela, F-Secure’s VP of AI, believes there’s a common misconception that “advanced” AI should mimic human intelligence (known as AGI, or Artificial General Intelligence).

“People’s expectations that ‘advanced’ machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do,” says Aksela.

“Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do.”

On average, experts surveyed in 2017 estimated there’s a 50 percent chance AGI will be achieved by 2060. However, there’s a significant difference of opinion based on geography: Asian respondents expect AGI within 30 years, whereas North Americans expect it in 74 years.

The development of autonomous agents, like those pursued by F-Secure and Fetch.ai, should happen at a much faster pace.

F-Secure believes its own project will take several years to reach its full potential but some on-device intelligence mechanisms are already being used for the company’s breach-detection solutions.

While it’s not quite AGI, the traits each individual agent possesses should still provide very advanced capabilities when combined, much like a team of humans working towards a common goal.

Indeed, Project Blackfin takes inspiration from natural phenomena: swarm intelligence can be observed throughout nature, in schools of fish and ant colonies alike.

Rather than create a single centralised AI model to provide instructions, F-Secure says the AI agents would be intelligent and powerful enough to communicate and work together.

“Essentially, you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone,” Aksela explains.

In F-Secure’s case, each agent learns by observing its local host and network. These observations are then augmented by the wider network of agents spanning various industries and organisations.

F-Secure highlights that another benefit of this approach is that it also helps organisations avoid sharing confidential, potentially sensitive information via the cloud or product telemetry.
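As a rough illustration of the idea, here is a minimal, hypothetical sketch (not F-Secure’s implementation; all names are invented) in which each agent learns a baseline from its own host and shares only aggregate statistics with the swarm, so raw observations never leave the machine:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class LocalAgent:
    """An agent that learns a baseline from its own host's activity."""
    observations: list = field(default_factory=list)

    def observe(self, value: float) -> None:
        self.observations.append(value)

    def summary(self) -> dict:
        # Only aggregate statistics leave the host -- never raw data.
        return {"mean": mean(self.observations), "count": len(self.observations)}

def swarm_baseline(agents) -> float:
    """Combine per-agent summaries into a shared baseline (weighted mean)."""
    total = sum(a.summary()["count"] for a in agents)
    return sum(a.summary()["mean"] * a.summary()["count"] for a in agents) / total

def is_anomalous(agent, value, swarm_mean, tolerance=3.0) -> bool:
    # Flag values that deviate from both the local and swarm-wide baselines.
    local_mean = agent.summary()["mean"]
    return abs(value - local_mean) > tolerance and abs(value - swarm_mean) > tolerance
```

Each host keeps its own fast local model while the swarm-wide baseline provides the shared context, mirroring the “colony of fast local AIs” Aksela describes.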

“Looking beyond detecting breaches and attacks, we can envision these fleets of AI agents monitoring the overall health, efficiency, and usefulness of computer networks, or even systems like power grids or self-driving cars,” says Mikko Hypponen, F-Secure Chief Research Officer.

“But most of all, I think this research can help us see AI as something more than just a threat to our jobs and livelihoods.”

F-Secure plans to publish research, findings, and updates as they occur. More information on Project Blackfin is available here.


The post F-Secure details nature-inspired AI project harnessing ‘swarm intelligence’ appeared first on AI News.

McAfee: Keep an eye on the humans pulling the levers, not the AIs
https://news.deepgeniusai.com/2019/03/06/mcafee-keep-eye-humans-ais/
Wed, 06 Mar 2019 17:14:56 +0000

Security firm McAfee has warned that it’s more likely humans will use AI for malicious purposes rather than it going rogue itself.

It’s become a cliché metaphor, but people are still concerned that a self-thinking killer AI like Skynet from the Terminator films will be created.

McAfee CTO Steve Grobman spoke at this year’s RSA conference in San Francisco and warned that the wrong humans in control of powerful AIs are his company’s primary concern.

To provide an example of how AIs could be used for good or bad purposes, Grobman handed over to McAfee Chief Data Scientist Dr Celeste Fralick.

Fralick explained how McAfee has attempted to predict crime in San Francisco using historic data combined with a machine learning model. The AI recommends where police could be deployed to have the best chance of apprehending criminals.

Most law-abiding citizens would agree this is a positive use of AI. In the hands of criminals, however, it could be used to pinpoint where to commit a crime with the best chance of avoiding capture.
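As a toy illustration of how historical data can drive patrol recommendations, the sketch below simply ranks grid cells by past incident counts. This crude frequency baseline is an assumption for illustration only, not McAfee’s actual model, and the data is invented:

```python
from collections import Counter

def recommend_patrols(incidents, k=2):
    """Rank grid cells by historical incident count (a crude stand-in for
    a trained model) and return the top-k hotspots for patrol deployment."""
    counts = Counter(cell for cell, _ in incidents)
    return [cell for cell, _ in counts.most_common(k)]

# Hypothetical historical data: (grid_cell, incident_type) pairs.
history = [
    ("A1", "theft"), ("A1", "assault"), ("A1", "theft"),
    ("B2", "theft"), ("B2", "vandalism"),
    ("C3", "theft"),
]

print(recommend_patrols(history))  # → ['A1', 'B2']
```

The dual-use problem is visible even here: the same ranking tells police where to patrol, and a criminal where *not* to offend.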

In another demo at the conference, Fralick showed a video in which her words were being spoken by Grobman, an example of a ‘deepfake’.

“I used freely available, recorded public comments by you to create and train a machine learning model that let me develop a deepfake video with my words coming out of your mouth,” Fralick explained. “It just shows one way that AI and machine learning can be used to create massive chaos.”

Deepfakes are opening up a wide range of new threats, including fraud through impersonation. Another is blackmail, with the threat of releasing sexually-explicit fakes to embarrass an individual.

“We can’t allow fear to impede our progress, but it’s how we manage the innovation that is the real story,” Grobman concluded.

Interested in hearing industry leaders discuss subjects like this? Attend the AI & Big Data Expo events, with upcoming shows in Silicon Valley, London, and Amsterdam, to learn more. Co-located with the IoT Tech Expo.

The post McAfee: Keep an eye on the humans pulling the levers, not the AIs appeared first on AI News.

Microsoft wants AI to predict when a PC is going to be infected
https://news.deepgeniusai.com/2018/12/14/microsoft-ai-predict-pc-infected/
Fri, 14 Dec 2018 16:25:36 +0000

Microsoft wants to harness AI’s incredible prediction abilities to detect PC malware attacks before they even happen.

The company has sponsored a competition on Kaggle which challenges data scientists to create models that predict whether a device is likely to become infected with malware given its current machine state.

In a blog post, Microsoft wrote:

“The competition provides academics and researchers with varied backgrounds a fresh opportunity to work on a real-world problem using a fresh set of data from Microsoft.

Results from the contest will help us identify opportunities to further improve Microsoft’s layered defenses, focusing on preventative protection.

Not all machines are equally likely to get malware; competitors will help build models for identifying devices that have a higher risk of getting malware so that preemptive action can be taken.”

Participants are supplied with 9.4GB of anonymised data gathered from 16.8M devices to build their models.
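A machine-state-to-risk model of the kind the competition asks for can be sketched as a simple logistic scorer. The features, weights, and bias below are invented purely for illustration and bear no relation to the real Kaggle entries or Microsoft’s telemetry schema:

```python
import math

# Hypothetical feature weights -- illustrative only, not from any
# actual competition model trained on Microsoft's dataset.
WEIGHTS = {"av_disabled": 2.0, "os_outdated": 1.5, "firewall_off": 1.0}
BIAS = -3.0

def infection_risk(machine_state: dict) -> float:
    """Logistic-regression-style score: a probability in [0, 1] that a
    device with this state will encounter malware."""
    z = BIAS + sum(WEIGHTS[f] for f, present in machine_state.items() if present)
    return 1 / (1 + math.exp(-z))

hardened = {"av_disabled": False, "os_outdated": False, "firewall_off": False}
exposed = {"av_disabled": True, "os_outdated": True, "firewall_off": True}
assert infection_risk(hardened) < infection_risk(exposed)
```

A score like this is what enables the “preemptive action” Microsoft describes: devices above a risk threshold can be patched or hardened before any infection occurs.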

A $25,000 prize pot will incentivise participation, split as follows:

  • 1st Place – $12,000
  • 2nd Place – $7,000
  • 3rd Place – $3,000
  • 4th Place – $2,000
  • 5th Place – $1,000

The best-performing entry thus far has achieved 68.9 percent accuracy, though it’s likely this will be improved upon before the end.

Entries must be submitted before the competition closes on March 13, 2019.

You can find out more and enter on Kaggle here.


The post Microsoft wants AI to predict when a PC is going to be infected appeared first on AI News.

Information Commissioner targets intrusive facial recognition
https://news.deepgeniusai.com/2018/05/15/information-commissioner-facial-recognition/
Tue, 15 May 2018 11:04:43 +0000

Facial recognition offers huge opportunities, but the Information Commissioner is more concerned about how it could impact privacy.

In a post on the ICO blog, Information Commissioner Elizabeth Denham highlights the advantages and disadvantages of facial recognition.

“I have identified FRT by law enforcement as a priority area for my office and I recently wrote to the Home Office and the NPCC setting out my concerns,” Denham wrote. “Should my concerns not be addressed, I will consider what legal action is needed to ensure the right protections are in place for the public.”

One advantage many would appreciate is the ability to speed up passport control. However, how such data is collected and used is of great concern to many.

Facial recognition is not a new technology, but advances in AI are making it more powerful than ever. In the privacy-conscious Western world, its use is still relatively novel. In the East, it’s long been a fairly accepted part of society.

Last month, AI News reported Chinese facial recognition provider SenseTime became the most funded AI startup in history.

SenseTime’s technology is used by the Chinese government and its ‘Viper’ system is aiming to process and analyse over 100,000 simultaneous real-time streams from traffic cameras, ATMs, and more — for tagging and keeping track of individuals.

It’s easy to see how a system like SenseTime’s can be used to detect criminals. In fact, last month a wanted suspect was apprehended after facial recognition picked him out among 60,000 concertgoers.

Here in the UK, tests of facial recognition for detecting criminals have been less effective.

Last week, South Wales Police announced it used NEC’s NeoFace Watch facial recognition software at the Champions League Final in Cardiff. Its success rate was just eight percent, and it raised 2,297 false positives.

Rather than improve efficiency, such a poor result would increase police work substantially.
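Those two figures imply just how much manual review such a system creates. A back-of-the-envelope calculation (assuming the eight percent figure is the precision over all flagged matches; the exact true-positive count wasn’t reported):

```python
false_positives = 2297
success_rate = 0.08  # reported: roughly 8% of flagged matches were genuine

# Precision = TP / (TP + FP); solve for the implied true positives:
# success_rate = TP / (TP + FP)  =>  TP = success_rate * FP / (1 - success_rate)
true_positives = round(success_rate * false_positives / (1 - success_rate))

print(true_positives)                    # → 200 (implied genuine matches)
print(true_positives + false_positives)  # → 2497 (total alerts to review)
```

Roughly 2,500 alerts for around 200 genuine matches means officers must triage about a dozen false alarms for every real hit, which is the extra workload described above.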

“Police forces must have clear evidence to demonstrate that the use of FRT in public spaces is effective in resolving the problem that it aims to address and that no less intrusive technology or methods are available to address that problem,” wrote Denham.

The Information Commissioner is ‘deeply concerned’ about the lack of national-level coordination in assessing the privacy risks of facial recognition, and about the absence of a comprehensive governance framework.

Civil society organisations, including Big Brother Watch and the Electronic Frontier Foundation in the US, have expressed similar concerns about facial recognition in recent reports. In the wrong hands, it could be very dangerous.

Do you agree with the use of facial recognition?

 

The post Information Commissioner targets intrusive facial recognition appeared first on AI News.

Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns
https://news.deepgeniusai.com/2017/08/02/experts-believe-ai-will-weaponised-next-12-months-attacks-unslowed-dark-web-shutdowns/
Wed, 02 Aug 2017 16:16:22 +0000

The majority of cybersecurity experts believe AI will be weaponised for use in cyberattacks within the next 12 months, and the shutting down of dark web markets will not decrease malware activity.

Cylance posted the results of its survey of Black Hat USA 2017 attendees yesterday: 62 percent of the infosec experts believe cyberattacks will become far more advanced over the course of the next year due to artificial intelligence.

Interestingly, 32 percent said there was no chance of AI being used for attacks in the next 12 months. The remaining six percent were uncertain.

Following an increasing number of high-profile and devastating cyberattacks in recent years, law enforcement agencies have been cracking down on the dark web marketplaces where strains of malware are often sold. Just last month, two dark web marketplaces known as AlphaBay and Hansa were seized following an international operation between Europol, the FBI, the US Drug Enforcement Administration, and the Dutch National Police.

Despite these closures, 80 percent of the surveyed cybersecurity experts believe they will not slow down cyberattacks. Seven percent said they were uncertain, which leaves just 13 percent believing the closures will have an impact.

With regard to who poses the biggest cybersecurity threat to the United States, Russia came out number one (34%), which is perhaps no surprise considering the ongoing investigations into allegations of the nation’s involvement in the US presidential election. This was closely followed by organised cybercriminals (33%), then China (20%), North Korea (11%), and Iran (2%).

On a more positive note, while AI poses a threat to cybersecurity, it’s also improving defences and the ability to respond more proactively when attacks occur, limiting the potential damage.

“Based on our findings, it is clear that infosec professionals are worried about a mix of advanced threats and negligence on the part of their organizations, with little consensus with regards to which groups (nation-states or general cybercriminals) pose the biggest threat to our security,” wrote the Cylance team in a blog post. “As such, a combination of advanced defensive solutions and general education initiatives is needed, in order to ensure we begin moving towards a more secure future.”

Are you concerned about AI being weaponised? Share your thoughts in the comments.

To learn more about AI in the enterprise, register for your pass to the AI Exhibition and Conference this fall in Santa Clara, CA today!

 

The post Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns appeared first on AI News.
