EFF offers guidance to militaries seeking AI implementation
AI News, 15 August 2018 (https://news.deepgeniusai.com/2018/08/15/eff-guidance-militaries-ai/)

The EFF (Electronic Frontier Foundation) has released a whitepaper offering guidance on the implementation of military AI projects.

AI being used for military purposes is a scary thought, but it’s ultimately inevitable. The best that can be hoped for is that it’s used in a sensible way that addresses people’s concerns.

The whitepaper's publication comes in the wake of Google employees resigning over the company's defense contract to provide AI expertise to the US military's drone project. Google has since decided against renewing the contract.

Some military planners and defense contractors struggle to understand the concerns of employees from Silicon Valley giants like Google, and the EFF is hoping to ‘bridge the gap’ to help them.

The EFF wants three core questions to be considered:

  • What are the major technical and strategic risks of applying current machine learning methods in weapons systems or military command and control?
  • What are the appropriate responses that states and militaries can adopt in response?
  • What kinds of AI are safe for military use, and what kinds aren’t?

One concept which has a lot of support is that any decision to kill must ultimately be made by a human operator, even if an AI recommends it. This ensures human compassion plays a part, and that accountability for mistakes rests with a person rather than being blamed on faulty programming.
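As a rough illustration of that human-in-the-loop principle, the pattern can be sketched in a few lines of Python. Everything here is invented for illustration (the class, field names, and operator IDs do not come from the whitepaper or any real system); the point is simply that the AI's recommendation alone never triggers an action, and the human decision is logged for accountability.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI output: a suggested target and a confidence score."""
    target_id: str
    confidence: float

class HumanInTheLoopGate:
    """Only an explicit human decision can authorise an action;
    every decision is recorded, so accountability stays with a person."""

    def __init__(self):
        self.audit_log = []

    def decide(self, rec: Recommendation, operator: str, approved: bool) -> bool:
        # The recommendation is inert by itself: the operator's identity
        # and choice are logged whether they approve or refuse.
        self.audit_log.append((operator, rec.target_id, approved))
        return approved

gate = HumanInTheLoopGate()
rec = Recommendation(target_id="T-42", confidence=0.97)

# Even a high-confidence recommendation does nothing without human approval.
engaged = gate.decide(rec, operator="op-1", approved=False)
```

The design choice the EFF argues for is exactly this inversion: the machine proposes, but only the human disposes, and the audit trail makes the human answerable.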

In a blog post, the EFF wrote:

“Machine learning technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation.

They also lack the basic forms of common sense and judgment on which humans usually rely.”

At this time, the EFF highlights these points as reasons to keep AI away from areas such as target selection, fire control, and most command, control, and intelligence (C2I) roles, at least for the foreseeable future.

Part I identifies how military use of AI could lead to unexpected dangers and risks:

  • Machine learning systems can be easily fooled or subverted: neural networks are vulnerable to a range of novel attacks including adversarial examples, model stealing, and data poisoning.
  • The current balance of power in cybersecurity significantly favours attackers over defenders.
  • Many of the recently lauded AI accomplishments have come from the field of reinforcement learning (RL), but current state-of-the-art RL systems are unpredictable, hard to control, and unsuited to complex real-world deployment.
  • Interactions between deployed systems will be extremely complex, impossible to model, and subject to catastrophic forms of failure that are hard to mitigate. As a result, there is a serious risk of accidental conflict or unintended escalation.

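The first bullet's claim about adversarial examples can be made concrete with a toy sketch. All numbers below are invented for illustration: the "model" is a single linear classifier rather than a neural network, and the perturbation is exaggerated so the effect shows up in three dimensions (against real deep networks, far smaller perturbations suffice). The gradient-sign step mirrors the fast gradient sign method, where for a linear model the gradient direction with respect to the input is simply the sign of the weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier: weights and bias chosen for illustration.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Score > 0.5 means the model outputs class 1."""
    return sigmoid(w @ x + b)

# An input the model confidently labels as class 1.
x = np.array([1.0, -0.5, 0.2])
clean_score = predict(x)        # confidently above 0.5

# FGSM-style perturbation: step each coordinate against the score's
# gradient w.r.t. the input, which for a linear model is sign(w).
eps = 0.7
x_adv = x - eps * np.sign(w)
adv_score = predict(x_adv)      # the same model now outputs class 0
```

A bounded tweak to the input flips the model's decision without changing anything a human would consider meaningful, which is the brittleness the EFF is warning about in a weapons context.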
Part II offers and elaborates on an agenda for mitigating these risks:

  • Support and establish international institutions and agreements for managing AI, and AI-related risks, in military contexts.
  • Focus on machine learning applications that lie outside of the “kill chain,” including logistics, system diagnostics and repair, and defensive cybersecurity.
  • Focus R&D effort on increasing the predictability, robustness, and safety of ML systems.
  • Share predictability and safety research with the wider academic and civilian research community.
  • Focus on defensive cybersecurity (including fixing vulnerabilities in widespread platforms and civilian infrastructure) as a major strategic objective.
  • Engage in military-to-military dialogue, and pursue memoranda of understanding and other instruments, agreements, or treaties to reduce the risks of accidental conflict and accidental escalation.

Finally, Part III provides strategic questions for the future, intended to help the defense community contribute to building safe and controllable AI systems rather than vulnerable systems and processes that will be regretted in decades to come.

The full whitepaper can be found here (PDF).

What are your thoughts on the EFF’s whitepaper?

Bill forcing AI bots to reveal themselves faces EFF opposition
AI News, 24 May 2018 (https://news.deepgeniusai.com/2018/05/24/bill-ai-bot-reveal-eff/)

A bill that would force AI bots to reveal themselves as not being human is facing opposition from the EFF over free speech concerns.

Many were slightly disturbed by Google's demo of its Duplex AI conducting a phone call without the other participant realising they weren't speaking to a human. Less than a month later, Microsoft demonstrated that it had the same capability.

There are clearly big changes ahead in how we interact, and not everyone is going to be happy speaking to a robot without being aware of it. The B.O.T. Act (SB 1001) intends to make it illegal for a computer to speak to someone in California without revealing it's not human.

The summary of the bill reads:

“This bill would make it unlawful for any person to use a bot, as defined, to communicate or interact with natural persons in California online with the intention of misleading and would provide that a person using a bot is presumed to act with the intent to mislead unless the person discloses that the bot is not a natural person.

The bill would require an online platform to enable users to report violations of this prohibition, to respond to the reports, and to provide the Attorney General with specified related information.”

Google and Microsoft have both said their respective AIs would reveal themselves not to be human regardless of legislation.

The B.O.T. Act is facing stiff opposition from the Electronic Frontier Foundation (EFF), which appears to be setting itself up as a champion of rights for machines.

In a post, the EFF wrote: “Why does it matter that a bot (instead of a human) is speaking such that we should have a government mandate to force disclosure?”

The digital rights non-profit argues the law raises ‘significant free speech concerns’ and could mark the start of a long debate over what rights machines should have.

Do you think AIs should be forced to reveal themselves as not human?
