World’s oldest defence think tank concludes British spies need AI
28 April 2020 – https://news.deepgeniusai.com/2020/04/28/world-oldest-defence-think-tank-british-spies-ai/

The Royal United Services Institute (RUSI) says in an intelligence report that British spies will need to use AI to counter threats.

Based in Westminster, the RUSI is the world’s oldest think tank on international defence and security. Founded in 1831 by the first Duke of Wellington, Sir Arthur Wellesley, the RUSI remains a highly respected institution that’s as relevant today as ever.

AI is rapidly advancing the capabilities of adversaries. In its report, the RUSI says that hackers – both state-sponsored and independent – are likely to use AI for cyberattacks on the web and political systems.

Adversaries “will undoubtedly seek to use AI to attack the UK”, the RUSI notes.

Threats could emerge in a variety of ways. Deepfakes, which use neural networks to generate convincing fake videos and images, are one example of a threat already posed today. With the US elections approaching, there are concerns that deepfakes of political figures could be used for voter manipulation.

AI could also be used for powerful new malware which mutates to avoid detection. Such malware could even infect and take control of emerging technologies such as driverless cars, smart city infrastructure, and drones.

The RUSI believes that humans will struggle to counter AI threats alone and will need the assistance of automation.

“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload,” said Alexander Babuta, one of the report’s authors. “It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures.”

GCHQ, the UK’s signals intelligence agency, commissioned the RUSI’s independent report. Ken McCallum, the new head of MI5 – the UK’s domestic counter-intelligence and security agency – has already said that greater use of AI will be one of his priorities.

The RUSI believes AI will be of little value for “predictive intelligence” – anticipating, say, when a terrorist act is likely to occur. Highlighting counter-terrorism specifically, the RUSI says such acts are too infrequent to yield the patterns found in other types of crime, and the motivations behind them can change quickly in response to world events.

All of this raises concerns about the automation of discrimination. The RUSI calls instead for “augmented” intelligence – whereby technology assists in sifting through large amounts of data, but decisions are ultimately taken by humans – rather than leaving everything to the machines.
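
To make the distinction concrete, here is a minimal sketch of such an augmented workflow, in which a model ranks reports but an analyst makes every final call. All names, scores, and the top-n value are illustrative assumptions rather than details from the RUSI report:

```python
# Illustrative sketch of "augmented" intelligence: a model ranks items,
# a human makes every final call. All names, fields, and the top_n value
# are assumptions for the example, not details from the RUSI report.

def triage(reports, score_fn, top_n=10):
    """Rank reports by model score and surface only the most relevant."""
    return sorted(reports, key=score_fn, reverse=True)[:top_n]

def analyst_review(report):
    """The human step: the machine recommends, it never decides."""
    answer = input(f"Escalate '{report['title']}'? [y/n] ")
    return answer.strip().lower() == "y"

reports = [
    {"title": "Routine network scan", "score": 0.20},
    {"title": "Credential phishing campaign", "score": 0.90},
]
for report in triage(reports, score_fn=lambda r: r["score"], top_n=2):
    if analyst_review(report):
        print(f"Escalated by analyst: {report['title']}")
```

The design choice is that the model only reorders the queue; nothing reaches an outcome without passing through `analyst_review`.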

In terms of global positioning, the RUSI recognises the UK’s strength in AI, with talent emerging from the country’s world-leading universities, capabilities within GCHQ, bodies like the Alan Turing Institute and the Centre for Data Ethics and Innovation, and even more in the private sector.

While it’s widely acknowledged that countries like the US and China have far more resources overall to throw at AI advancements, the RUSI believes the UK has the potential to be a leader in the technology within a much-needed ethical framework. However, the report’s authors say it’s important not to become too preoccupied with the possible downsides.

“There is a risk of stifling innovation if we become overly-focused on hypothetical worst-case outcomes and speculations over some dystopian future AI-driven surveillance network,” argues Babuta.

“Legitimate ethical concerns will be overshadowed unless we focus on likely and realistic uses of AI in the short-to-medium term.”

You can find a copy of the RUSI’s full report here (PDF)

(Photo by Chris Yang on Unsplash)

Palantir took over Project Maven defense contract after Google backed out
12 December 2019 – https://news.deepgeniusai.com/2019/12/12/palantir-project-maven-defense-contract-google-out/

Surveillance firm Palantir took up a Pentagon defense contract known as Project Maven after Google dropped out due to backlash.

Project Maven is a Pentagon initiative aiming to use AI technologies for deploying and monitoring unmanned aerial vehicles (UAVs).

Naturally, Google’s involvement with the initiative received plenty of backlash both internally and externally. At least a dozen employees quit Google, while many others threatened to walk out if the firm continued building military products.

The pressure forced Google to abandon the lucrative Pentagon contract. That simply meant, however, that it was picked up by another company.

According to Business Insider, which broke the news, the company that stepped in to develop Project Maven was Palantir – a firm founded by Peter Thiel, the serial entrepreneur, venture capitalist, and cofounder of PayPal.

Business Insider reporter Becky Peterson wrote:

“Palantir is working with the Defense Department to build artificial intelligence that can analyze video feeds from aerial drones … Internally at Palantir, where names of clients are kept close to the vest, the project is referred to as ‘Tron,’ after the 1982 Steven Lisberger film.”

In June 2018, Thiel famously said that Google’s decision to pull out of Project Maven while pushing ahead with Project Dragonfly (a search project for China) amounted to “treason” and should be investigated as such.

Project Maven/Tron is described as being capable of extensive tracking and monitoring of UAVs without human input, but the unclassified information available indicates it will not be able to fire upon targets. This is broadly in line with the norms being established around the use of AI in the military.

Many experts accept that AI will increasingly be used in the military but are seeking to establish acceptable practices. One of the key principles is that, while an AI can track targets and offer advice to human operators, it should never make decisions by itself that could lead to loss of life.

The speed with which the Project Maven contract was picked up by another company gives credence to the argument made by some tech giants that, rather than pulling out of such contracts altogether – and potentially handing them to less ethical companies – it is better to help shape them from the inside.

US defense department outlines its AI strategy
14 February 2019 – https://news.deepgeniusai.com/2019/02/14/us-defense-department-ai-strategy/

Shortly after President Trump issued his vague AI executive order, the US Defense Department outlined a more comprehensive strategy.

“The impact of artificial intelligence will extend across the entire department, spanning from operations and training to recruiting and healthcare,” DoD CIO Dana Deasy said.

A 17-page document outlines how the DoD intends to advance its AI prowess with five key steps:

  1. Delivering AI-enabled capabilities that address key missions.
  2. Scaling AI’s impact across DoD through a common foundation that enables decentralized development and experimentation.
  3. Cultivating a leading AI workforce.
  4. Engaging with commercial, academic, and international allies and partners.
  5. Leading in military ethics and AI safety.

Given the concerns about the so-called AI ‘arms race’, that final point will prompt a sigh of relief from some – at least those who believe it.

The DoD will rapidly prototype new innovations, increase research and development, and boost training and recruitment.

Rather than AI replacing jobs, the DoD believes it will empower those currently serving: “The women and men in the US armed forces remain our enduring source of strength; we will use AI-enabled information, tools, and systems to empower, not replace, those who serve.”

Prior to his resignation as US Secretary of Defense, General James Mattis implored the president to create a national strategy for AI. With his defense background, Mattis was concerned the US was not keeping pace with the likes of China.

Here are the example areas in which the DoD believes AI can improve day-to-day operations:

  • Improving situational awareness and decision-making.
  • Increasing the safety of operating equipment.
  • Implementing predictive maintenance and supply.
  • Streamlining business processes (e.g. reducing time spent on highly manual, repetitive, and frequent tasks).

“The present moment is pivotal: we must act to protect our security and advance our competitiveness,” the DoD document states. “But we must embrace change if we are to reap the benefits of continued security and prosperity for the future.”

Google funding ‘good’ AI may help some forget that military fiasco
30 October 2018 – https://news.deepgeniusai.com/2018/10/30/google-funding-good-ai-military-fiasco/

Google has launched an initiative to fund ‘good’ AI which may help some forget about the questionable military contracts it was involved with.

The new initiative, called AI for Social Good, is a joint effort between the company’s philanthropic subsidiary Google.org and its own experts.

Kicking off the initiative is the ‘AI Impact Challenge’, which is set to award $25 million in funding to non-profits while giving them access to Google’s vast resources.

As part of the initiative, Google partnered with the Pacific Islands Fisheries Science Center of the US National Oceanic and Atmospheric Administration (NOAA) to develop algorithms to identify humpback whale calls.

The algorithms were created using 15 years’ worth of data and provide vital information about humpback whale presence, seasonality, daily calling behaviour, and population structure.
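
As a rough illustration of how detectors of this kind are commonly structured – long recordings sliced into windows, each converted to a spectrogram and scored by a model – consider the following sketch. The window length, threshold, and stand-in scoring function are assumptions for the example, not details of Google’s or NOAA’s actual pipeline:

```python
# Hedged sketch of the general shape of an audio-event detector like
# whale-call identification: slice recordings into windows, convert each
# to a spectrogram, score it with a model. Window length, threshold, and
# the stand-in scoring function are assumptions, not the real pipeline.
import numpy as np
from scipy.signal import spectrogram

def windows(audio, rate, seconds=2.0):
    """Yield fixed-length chunks of a mono audio signal."""
    step = int(rate * seconds)
    for start in range(0, len(audio) - step + 1, step):
        yield audio[start:start + step]

def to_spectrogram(chunk, rate):
    """Log-compressed time-frequency representation for the classifier."""
    _, _, spec = spectrogram(chunk, fs=rate, nperseg=256)
    return np.log1p(spec)

def detect_calls(audio, rate, classify, threshold=0.5):
    """Return indices of windows the model scores above the threshold."""
    return [i for i, chunk in enumerate(windows(audio, rate))
            if classify(to_spectrogram(chunk, rate)) > threshold]

rate = 16_000
audio = np.random.randn(rate * 10).astype(np.float32)  # 10 s of noise
hits = detect_calls(audio, rate, classify=lambda s: float(s.mean()))
print(f"windows flagged: {hits}")
```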

While it’s great to see Google funding and lending its expertise to important AI projects, it comes against a wider backdrop of Silicon Valley tech giants’ involvement in controversial areas such as defence.

Google itself was embroiled in a backlash over ‘Project Maven’, its defence contract to supply drone-analysing AI to the Pentagon. The contract received both internal and external criticism.

Back in April, Google’s infamous ‘Don’t be evil’ motto was removed from its code of conduct’s preface. Now, in the final line, it says: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Google’s employees spoke up. Over 4,000 signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Here are what Google says are the company’s key objectives for AI development:

    1. Be socially beneficial.
    2. Avoid creating or reinforcing unfair bias.
    3. Be built and tested for safety.
    4. Be accountable to people.
    5. Incorporate privacy design principles.
    6. Uphold high standards of scientific excellence.
    7. Be made available for uses that accord with these principles.

That first objective, “be socially beneficial”, is what Google is aiming for with its latest initiative. The company says it’s not against future government contracts as long as they’re ethical.

“We’re entirely happy to work with the US government and other governments in ways that are consistent with our principles,” Google’s AI chief Jeff Dean told reporters Monday.

DARPA introduces ‘third wave’ of artificial intelligence
28 September 2018 – https://news.deepgeniusai.com/2018/09/28/darpa-third-wave-artificial-intelligence/

The Pentagon is launching a new artificial intelligence push it calls ‘AI Next’ which aims to improve the relationship between machines and humans.

As part of the multi-year initiative, the US Defense Advanced Research Projects Agency (DARPA) is set to invest more than $2bn in the programme.

In promo material for the programme, DARPA says AI Next will accelerate “the Third Wave” which enables machines to adapt to changing situations.

For instance, adaptive reasoning will enable computer algorithms to discern the difference between the use of ‘principal’ and ‘principle’ based on the analysis of surrounding words to help determine context.
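
DARPA’s materials don’t specify an algorithm, but a toy version of that kind of context-based disambiguation might look like the sketch below, where the cue words and window size are invented for illustration:

```python
# Illustrative sketch of the contextual reasoning described above:
# choosing between "principal" and "principle" from surrounding words.
# The cue lists and scoring are toy assumptions, not DARPA's method.
CUES = {
    "principal": {"school", "loan", "dancer", "interest", "amount"},
    "principle": {"moral", "guiding", "basic", "first", "ethical"},
}

def disambiguate(tokens, index, window=3):
    """Pick the likelier word for the blank at `index` from its context."""
    lo, hi = max(0, index - window), index + window + 1
    context = {t.lower() for t in tokens[lo:hi]}
    scores = {word: len(context & cues) for word, cues in CUES.items()}
    return max(scores, key=scores.get)

tokens = "the school ___ approved the new budget".split()
print(disambiguate(tokens, tokens.index("___")))  # -> "principal"
```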

Dr Steven Walker, Director of DARPA, said:

“Today, machines lack contextual reasoning capabilities and their training must cover every eventuality – which is not only costly – but ultimately impossible.

We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognise new situations and environments and adapt to them.”

DARPA defines the first wave of AI as enabling “reasoning over narrowly defined problems,” but with a poor level of certainty. The second wave, it claims, enables “creating statistical models and training them on big data,” albeit with minimal reasoning.

Moving away from scripted responses is the next aim for AI. A survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence found that 37 percent of respondents believe human-like artificial intelligence will be achieved within five to 10 years.

AI Next will also involve DARPA’s Artificial Intelligence Exploration (AIE) programme announced back in July.

AIE is DARPA’s initiative for the development of AI concepts that it considers high-risk but high-payoff. The aim is to establish the feasibility of such projects within an 18-month timescale.

Experts warn of AI disasters leading to research lockdown
13 September 2018 – https://news.deepgeniusai.com/2018/09/13/experts-warn-ai-disasters-research/

Experts from around the world have warned of potential AI disasters that could lead to a subsequent lockdown of research.

Andrew Moore, the new head of AI at Google Cloud, is one such expert who has warned of scenarios that would lead to public backlash and restrictions that would prevent AI from reaching its full potential.

Back in November, Moore spoke at the Artificial Intelligence and Global Security Initiative. In his keynote, he said:

“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US.

There are some even more horrible scenarios — which I don’t want to talk about on the stage, which we’re really worried about — that will cause the complete lockdown of robotics research.”

Autonomous vehicles have indeed already been involved in accidents.

Back in March, just four months after Moore’s warning, an Uber self-driving vehicle was involved in a fatal collision. The subsequent investigation found that the car’s sensors detected Elaine Herzberg and her bicycle but flagged them as a ‘false positive’ and dismissed them.

Following years of sci-fi movies featuring out-of-control AI robots, it’s unsurprising the public are on edge about the pace of recent developments. There’s a lot of responsibility on researchers to conduct their work safely and ethically.

Professor Jim al-Khalili, the incoming president of the British Science Association, told the Financial Times:

“It is quite staggering to consider that until a few years ago AI was not taken seriously, even by AI researchers.

We are now seeing an unprecedented level of interest, investment and technological progress in the field, which many people, including myself, feel is happening too fast.”

In the race between world powers to become AI leaders, many fear the pressure will lead to rushed and dangerous results. This is of particular concern with regard to AI militarisation.

Many researchers believe AI should not be used for military purposes. Several Google employees recently left the company over its contract with the Pentagon to develop recognition software for its drones.

Over 4,000 other employees signed a petition demanding that Google’s management cease the project and promise to never again ‘build warfare technology.’

Google has since decided not to renew its Pentagon contract when it expires. However, the episode has already caused ripples across Silicon Valley, with employees at companies such as Microsoft and Amazon demanding not to be involved with military contracts.

Much like the development of nuclear weapons, however, AI being developed for military purposes seems inevitable and there will always be players willing to step in. Last month, AI News reported Booz Allen secured an $885 million Pentagon AI contract.

From a military standpoint, maintaining capabilities similar to those of a potential adversary is seen as necessary. Back in July, China announced plans to upgrade its naval power with unmanned AI submarines intended to provide an edge over rival fleets.

Russian President Vladimir Putin, meanwhile, recently said: “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Few dispute that AI will have a huge impact on the world, but the debate rages on about whether it will be primarily good or bad. Beyond the potential dangers of rogue AIs, there’s also the argument over the impact on jobs.

Al-Khalili wants to see AI added to school curriculums – and public information programmes launched – to teach good practices, prepare the workforce, and reduce fears created by sci-fi.

What are your thoughts on AI fears?

EFF offers guidance to militaries seeking AI implementation
15 August 2018 – https://news.deepgeniusai.com/2018/08/15/eff-guidance-militaries-ai/

The EFF (Electronic Frontier Foundation) has released a whitepaper offering guidance on the implementation of military AI projects.

AI being used for military purposes is a scary thought, but it’s ultimately inevitable. The best that can be hoped for is that it’s used in a sensible way that addresses people’s concerns.

The publishing of the whitepaper arrives in the wake of Google employees resigning over the company’s defense contract to provide AI knowledge to the US military’s drone project. Google has since decided against renewing the contract.

Some military planners and defense contractors struggle to understand the concerns of employees from Silicon Valley giants like Google, and the EFF is hoping to ‘bridge the gap’ to help them.

The EFF wants three core questions to be considered:

  • What are the major technical and strategic risks of applying current machine learning methods in weapons systems or military command and control?
  • What are the appropriate responses that states and militaries can adopt in response?
  • What kinds of AI are safe for military use, and what kinds aren’t?

One concept which has a lot of support is that any decision to kill must ultimately be made by a human operator, even if an AI recommends it. This ensures human compassion plays a part and that accountability rests with a person rather than with faulty programming when mistakes occur.
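
In software terms, that principle amounts to an authorisation gate: the system may recommend, but cannot act without an explicit, logged human sign-off. A minimal sketch follows; the names and fields are illustrative assumptions, not anything specified by the EFF:

```python
# Minimal sketch of the human-in-the-loop principle: the system can
# recommend a high-consequence action but cannot execute one without an
# explicit, logged human authorisation. Names and fields are illustrative
# assumptions, not anything specified in the EFF whitepaper.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    action: str
    confidence: float  # the model's confidence, advisory only

def execute(rec: Recommendation, human_approval: bool, operator_id: str) -> str:
    """Refuse any action lacking human sign-off; log who authorised it."""
    if not human_approval:
        return f"BLOCKED: '{rec.action}' requires a human operator's authorisation"
    stamp = datetime.now(timezone.utc).isoformat()
    return f"{stamp}: '{rec.action}' authorised by {operator_id}"

rec = Recommendation(action="flag convoy for inspection", confidence=0.97)
print(execute(rec, human_approval=False, operator_id="op-114"))
```

Wiring `human_approval` to anything the model itself controls would break the guarantee – which is precisely the point of the design.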

In a blog post, the EFF wrote:

“Machine learning technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation.

They also lack the basic forms of common sense and judgment on which humans usually rely.”

At this time, the EFF highlights these points as reasons to keep AI away from target selection, fire control, and most command, control, and intelligence (C2I) roles, at least for the foreseeable future.

Part I identifies how military use of AI could lead to unexpected dangers and risks:

  • Machine learning systems can be easily fooled or subverted: neural networks are vulnerable to a range of novel attacks including adversarial examples, model stealing, and data poisoning (see the sketch after this list).
  • The current balance of power in cybersecurity significantly favours attackers over defenders.
  • Many of the recently lauded AI accomplishments have come from the field of reinforcement learning (RL), but current state-of-the-art RL systems are unpredictable, hard to control, and unsuited to complex real-world deployment.
  • Interactions between the systems deployed will be extremely complex, impossible to model, and subject to catastrophic forms of failure that are hard to mitigate. As a result, there is a serious risk of accidental conflict or escalation.
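
To make the first of those risks concrete, the following hedged sketch shows an adversarial perturbation against a simple linear classifier, in the spirit of the fast gradient sign method. The dimensions and epsilon are arbitrary, and real attacks target deep networks rather than this toy model:

```python
# Hedged illustration of adversarial examples against a linear classifier,
# in the spirit of the fast gradient sign method (FGSM). Dimensions and
# epsilon are arbitrary; real attacks target deep networks, but the linear
# case shows how tiny per-feature noise produces a large score shift.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)   # weights of a "trained" linear model
x = rng.normal(size=1000)   # a legitimate input

score = float(w @ x)        # raw decision score (sign = predicted class)
eps = 0.05                  # imperceptibly small per-feature budget

# Nudge every feature against the current prediction.
x_adv = x - eps * np.sign(w) * np.sign(score)

print(f"clean score: {score:+.2f}, adversarial score: {float(w @ x_adv):+.2f}")
# Each feature moved by only ±0.05, yet the score shifts by
# eps * sum(|w|) ≈ 40 here — larger than a typical clean score (std ≈ 32).
```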

Part II offers and elaborates on an agenda for mitigating these risks:

  • Support and establish international institutions and agreements for managing AI, and AI-related risks, in military contexts.
  • Focus on machine learning applications that lie outside of the “kill chain,” including logistics, system diagnostics and repair, and defensive cybersecurity.
  • Focus R&D effort on increasing the predictability, robustness, and safety of ML systems.
  • Share predictability and safety research with the wider academic and civilian research community.
  • Focus on defensive cybersecurity (including fixing vulnerabilities in widespread platforms and civilian infrastructure) as a major strategic objective.
  • Engage in military-to-military dialogue, and pursue memoranda of understanding and other instruments, agreements, or treaties to prevent the risks of accidental conflict, and accidental escalation.

Finally, Part III provides strategic questions to consider in the future that are intended to help the defense community contribute to building safe and controllable AI systems, rather than making vulnerable systems and processes that lead to regret in decades to come.

The full white paper can be found here (PDF)

What are your thoughts on the EFF’s whitepaper?

Booz Allen secures $885m Pentagon defense AI contract
1 August 2018 – https://news.deepgeniusai.com/2018/08/01/booz-allen-pentagon-defense-ai/

While many private companies are shying away from providing their AI expertise for defense purposes, Booz Allen has secured an $885m Pentagon contract.

Under the contract, Booz Allen will help the Department of Defense and the intelligence community to “rapidly employ artificial intelligence, neural, and deep neural networks.”

The company notes how today’s intelligence environment necessitates that the US government maintain unprecedented amounts of Intelligence, Surveillance, and Reconnaissance (ISR) data and information to thwart potential attacks and threats around the globe.

Judi Dotson, Executive Vice President of Booz Allen, comments:

“The high volume, variety and velocity of intelligence acquired across the U.S. government cannot be harnessed by people alone.

Our team of expert data scientists and engineers will apply cutting-edge solutions to deliver integrated eMAPS support to unlock the value of artificial intelligence (AI) and analytics, which will give warfighters positioned around the world the tools they need to drive U.S. national security forward.”

The use of AI for defense purposes is a controversial topic. Some believe it should be kept out of the military entirely, while others argue it’s important to keep pace with other nations likely to pursue similar technological advancements.

Earlier this year, employees from Google resigned in protest of its ‘Project Maven’ plans to develop AI for the US military to help airmen sift through hours and hours of drone surveillance video.

Over 4,000 other employees signed a petition demanding that Google’s management cease the project and promise to never again “build warfare technology.”

The company has since withdrawn its plans and said that it would not be renewing its contract with the Department of Defense when it expires next year.

In a further step, Google CEO Sundar Pichai wrote in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

Employees from other Silicon Valley companies leading in AI development, such as Microsoft and Amazon, made similar demands from their leadership.

As a government contracting giant, Booz Allen is unlikely to receive the same backlash as those consumer-focused companies. However, fears over AI and defense will remain for many observers.

What are your thoughts on the defense AI contract?

DSTL will run Ministry of Defence’s AI research lab
29 May 2018 – https://news.deepgeniusai.com/2018/05/29/dstl-ministry-of-defence-ai-research-lab/

UK Defence Secretary Gavin Williamson has announced the creation of an AI research lab based at the Defence Science and Technology Laboratory (DSTL) in Porton Down, as part of an initiative by the Ministry of Defence.

The lab will focus on advancing the use of AI for defence purposes to ensure the UK remains able to counter evolving threats.

DSTL currently delivers more than £20 million of AI research. Specific focuses of the new facility will include autonomous vehicles, countering fake news, and the development of enhanced computer network defences.

Williamson made the announcement during the first meeting of the joint UK-US Defence Innovation Board.

At the meeting, Williamson said:

“The relationship we have with our American partners is indispensable to both our nations. In the face of evolving global threats, we must harness new technologies and approaches to stay ahead of our adversaries and keep us safe.

Today’s meeting of military and scientific minds from both sides of the Atlantic encourages our best and brightest to develop new capabilities in everything from artificial intelligence and autonomous vehicles, to advanced cyber and robotics.”

The team of American experts includes notable figures such as Dr Eric Schmidt, former executive chairman of Google; Dr J. Michael McQuade, Senior Vice President for Science and Technology at United Technologies; and Sally Donnelly, former Senior Advisor to the Secretary of Defense.

Following the meeting, a reciprocal team from the UK Defence Innovation Board will visit the US later this year to develop joint recommendations based on the needs of the MoD and its American partners.

What are your thoughts on the defence AI research lab?