EFF offers guidance to militaries seeking AI implementation

The Electronic Frontier Foundation (EFF) has released a whitepaper offering guidance on the implementation of military AI projects.

The use of AI for military purposes is a scary prospect, but it's ultimately inevitable. The best we can hope for is that it's deployed sensibly, in a way that addresses people's concerns.

The whitepaper's publication comes in the wake of Google employees resigning over the company's contract to provide AI expertise for the US military's drone analysis programme, Project Maven. Google has since decided against renewing the contract.

Some military planners and defense contractors struggle to understand the concerns of employees at Silicon Valley giants like Google, and the EFF is hoping to ‘bridge the gap’ between the two sides.

The EFF wants three core questions to be considered:

  • What are the major technical and strategic risks of applying current machine learning methods in weapons systems or military command and control?
  • What are the appropriate responses that states and militaries can adopt in response?
  • What kinds of AI are safe for military use, and what kinds aren’t?

One concept with broad support is that any decision to kill must ultimately be made by a human operator, even when an AI recommends it. This ensures human judgment and compassion play a part, and that a person, rather than faulty programming, is accountable when mistakes occur.
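
The whitepaper doesn't prescribe an implementation, but the pattern is straightforward to make concrete. Below is a minimal, purely illustrative Python sketch of such an approval gate; every name in it (Recommendation, review, the operator string) is hypothetical rather than drawn from the EFF's text. The key property is structural: the model can only produce a recommendation, and any action requires a logged human decision.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of the human-in-the-loop pattern described above;
# none of these names come from the EFF whitepaper. The model may only
# *recommend* -- nothing happens until a named human operator decides,
# which preserves both human judgment and an accountability trail.

class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Recommendation:
    target_id: str
    confidence: float      # model confidence, informative only
    rationale: str         # why the model flagged this target

@dataclass
class AuditRecord:
    operator: str
    recommendation: Recommendation
    decision: Decision

audit_log: list[AuditRecord] = []

def review(operator: str, rec: Recommendation, approve: bool) -> Decision:
    """The only path to action: a human decision, logged by name."""
    decision = Decision.APPROVED if approve else Decision.REJECTED
    audit_log.append(AuditRecord(operator, rec, decision))
    return decision

# The AI proposes; it has no method that can act on its own.
rec = Recommendation(target_id="T-042", confidence=0.87,
                     rationale="pattern match against surveillance feed")

# A human makes the call, and the audit log records who decided what.
if review(operator="Lt. Example", rec=rec, approve=False) is Decision.APPROVED:
    print("action authorized by a human operator")
else:
    print("recommendation rejected; no action taken")
```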

In a blog post, the EFF wrote:

“Machine learning technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation.

They also lack the basic forms of common sense and judgment on which humans usually rely.”

The EFF highlights these points as reasons to keep AI away from target selection, fire control, and most command, control, and intelligence (C2I) roles, at least for the foreseeable future.

Part I identifies how military use of AI could lead to unexpected dangers and risks:

  • Machine learning systems can be easily fooled or subverted: neural networks are vulnerable to a range of novel attacks, including adversarial examples, model stealing, and data poisoning (see the sketch after this list).
  • The current balance of power in cybersecurity significantly favours attackers over defenders.
  • Many of the recent lauded AI accomplishments have come from the field of reinforcement learning (RL) but current state-of-the-art RL systems are unpredictable, hard to control, and unsuited to complex real-world deployment.
  • Interactions between deployed systems will be extremely complex, impossible to model, and subject to catastrophic forms of failure that are hard to mitigate. As a result, there is a serious risk of accidental conflict or unintended escalation.
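
To make the first of these risks concrete, here is a minimal sketch of an adversarial example in the style of the fast gradient sign method (FGSM), using a toy logistic-regression "model" with made-up weights; none of this is from the EFF paper. Shifting every pixel by just 1% of its range is enough to flip a confident prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d = 784                                  # e.g. a 28x28 "image", flattened
x = rng.uniform(0.0, 1.0, size=d)        # a benign input, pixels in [0, 1]

# Hypothetical "trained" linear model: the bias is chosen so the model
# classifies x as class 1 with a comfortable margin of 4.
w = rng.normal(size=d)
b = 4.0 - w @ x

print(f"clean:       P(class 1) = {sigmoid(w @ x + b):.3f}")

# FGSM: for logistic loss with true label y = 1, the gradient of the
# loss w.r.t. the input is (p - y) * w; stepping each pixel by epsilon
# along sign(gradient) maximally hurts the margin.
p = sigmoid(w @ x + b)
grad_x = (p - 1.0) * w
epsilon = 0.01                           # 1% of the pixel range
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print(f"adversarial: P(class 1) = {sigmoid(w @ x_adv + b):.3f}")
# In high dimensions a tiny per-pixel change (here 0.01) can flip a
# confident prediction -- the brittleness the EFF is warning about.
```

Model stealing and data poisoning exploit similar blind spots, but at query time and training time respectively, rather than at inference.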

Part II offers and elaborates on an agenda for mitigating these risks:

  • Support and establish international institutions and agreements for managing AI, and AI-related risks, in military contexts.
  • Focus on machine learning applications that lie outside of the “kill chain,” including logistics, system diagnostics and repair, and defensive cybersecurity.
  • Focus R&D effort on increasing the predictability, robustness, and safety of ML systems.
  • Share predictability and safety research with the wider academic and civilian research community.
  • Focus on defensive cybersecurity (including fixing vulnerabilities in widespread platforms and civilian infrastructure) as a major strategic objective.
  • Engage in military-to-military dialogue, and pursue memoranda of understanding and other instruments, agreements, or treaties to prevent accidental conflict and inadvertent escalation.

Finally, Part III provides strategic questions to consider in the future, intended to help the defense community contribute to building safe and controllable AI systems rather than vulnerable systems and processes that we may come to regret in the decades ahead.

The full whitepaper can be found here (PDF).

What are your thoughts on the EFF’s whitepaper?
