pentagon – AI News
https://news.deepgeniusai.com

Pentagon is ‘falling behind’ in military AI, claims former NSWC chief (23 October 2019)
https://news.deepgeniusai.com/2019/10/23/pentagon-military-ai-former-nswc-chief/

The former head of US Naval Special Warfare Command (NSWC) has warned that the Pentagon is falling behind adversaries in military AI development.

Speaking on Tuesday, Rear Adm. Brian Losey said AI is able to provide tactical guidance as well as anticipate enemy actions and mitigate threats. Adversaries with such technology will have a significant advantage.

Losey has since retired from the military and is now a partner at San Diego-based Shield AI.

Shield AI specialises in building artificial intelligence systems for the national security sector. The company’s flagship Hivemind AI enables autonomous robots to “see”, “reason”, and “search” the world. Nova, Shield AI’s first Hivemind-powered robot, autonomously searches buildings while streaming video and generating maps.

During a panel discussion at The Promise and The Risk of the AI Revolution conference, Losey said:

“We’re losing a lot of folks because of encounters with the unknown. Not knowing when we enter a house whether hostiles will be there and not really being able to adequately discern whether there are threats before we encounter them. And that’s how we incurred most of our casualties.

“The idea is: can we use autonomy, can we use edge AI, can we use AI for manoeuvre to mitigate risk to operators to reduce casualties?”

AI has clear benefits today for soldiers on the battlefield, national policing, and even areas such as firefighting. In the future, it may be vital for national defense against ever more sophisticated weapons.

Some of the US’ historic adversaries, such as Russia, have already shown off developments such as killer robots and hypersonic missiles. AI will be vital to equalising these capabilities and will hopefully act as a deterrent against the use of such weaponry.

“If you’re concerned about national security in the future, then it is imperative that the United States lead AI so that we can unfold the best practices so that we’re not driven by secure AI to assume additional levels of risk when it comes to lethal actions,” Losey said.

Meanwhile, Nobel Peace Prize winner Jody Williams has warned against robots making life-and-death decisions on the battlefield. Williams called such decisions ‘unethical and immoral’, noting they can never be undone.

Williams was speaking at the UN in New York following the US military’s announcement of Project Quarterback, which uses AI to make decisions on what human soldiers should target and destroy.

“We need to step back and think about how artificial intelligence robotic weapons systems would affect this planet and the people living on it,” said Williams during a panel discussion.

It’s almost inevitable that AI will be used for military purposes. Arguably, the best we can hope for is to quickly establish international norms for its development and usage to minimise the potential damage.

One such norm, backed by many researchers, is that AI should only make recommendations on actions to take, while a human takes accountability for any decision made.
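As a rough illustration of that norm in software, the sketch below gates every AI recommendation behind an explicit human sign-off. All names and types here are hypothetical; this shows only the shape of a human-in-the-loop control, not any real military system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """Hypothetical AI output: a suggested action, never an executed one."""
    action: str
    confidence: float
    rationale: str

def request_authorisation(rec: Recommendation) -> bool:
    """Route the recommendation to a human operator for explicit sign-off."""
    print(f"AI recommends: {rec.action} (confidence {rec.confidence:.0%})")
    print(f"Rationale: {rec.rationale}")
    return input("Authorise? [y/N] ").strip().lower() == "y"

def act_on(rec: Recommendation) -> None:
    # The accountable decision is the operator's, not the model's.
    if request_authorisation(rec):
        print(f"Operator authorised: {rec.action}")
    else:
        print("Recommendation declined; no action taken.")
```

The key design choice is that the model can only ever return a Recommendation; nothing in the system executes an action without the operator’s answer.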

A 2017 report by Human Rights Watch chillingly concluded that no one is currently accountable when a robot unlawfully kills someone in the heat of battle.

DARPA introduces ‘third wave’ of artificial intelligence (28 September 2018)
https://news.deepgeniusai.com/2018/09/28/darpa-third-wave-artificial-intelligence/

The Pentagon is launching a new artificial intelligence push, called ‘AI Next’, which aims to improve the relationship between machines and humans.

As part of the multi-year initiative, the US Defense Advanced Research Projects Agency (DARPA) is set to invest more than $2bn in the programme.

In promotional material for the programme, DARPA says AI Next will accelerate “the Third Wave” of AI, which enables machines to adapt to changing situations.

For instance, adaptive reasoning will enable computer algorithms to discern the difference between the use of ‘principal’ and ‘principle’ based on the analysis of surrounding words to help determine context.
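To make that concrete, here is a deliberately naive sketch of context-based disambiguation: each candidate word is scored by how many of its typical neighbouring words appear in the sentence. The cue lists are invented for illustration; a genuine ‘third wave’ system would acquire and adapt such context rather than rely on fixed lists.

```python
# Invented cue lists: words that typically appear near each candidate.
CONTEXT_CUES = {
    "principal": {"school", "headteacher", "loan", "investment", "chief"},
    "principle": {"moral", "ethical", "scientific", "theory", "rule"},
}

def disambiguate(sentence: str) -> str:
    """Pick the candidate whose typical context best matches the sentence."""
    words = set(sentence.lower().split())
    scores = {cand: len(words & cues) for cand, cues in CONTEXT_CUES.items()}
    return max(scores, key=scores.get)

print(disambiguate("the school ___ spoke to the parents"))  # principal
print(disambiguate("it is a matter of moral ___"))          # principle
```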

Dr Steven Walker, Director of DARPA, said:

“Today, machines lack contextual reasoning capabilities and their training must cover every eventuality – which is not only costly, but ultimately impossible.

“We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognise new situations and environments and adapt to them.”

DARPA defines the first wave of AI as enabling “reasoning over narrowly defined problems,” but with poor handling of uncertainty. The second wave, it claims, enables “creating statistical models and training them on big data,” albeit with minimal reasoning.
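A toy contrast of those first two waves, using an invented spam-filtering example (not anything from DARPA’s material): a handcrafted rule reasons over a narrow problem but never learns, while a statistical model learns from data but offers minimal reasoning about its answers.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# First wave: a handcrafted rule; narrow, transparent, incapable of learning.
def first_wave_is_spam(subject: str) -> bool:
    return "winner" in subject.lower() or "free money" in subject.lower()

# Second wave: a statistical model trained on (toy) data; it generalises,
# but says little about *why* a subject line was classified as spam.
subjects = ["free money inside", "team meeting at noon",
            "you are a winner", "quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectoriser = CountVectorizer()
model = MultinomialNB().fit(vectoriser.fit_transform(subjects), labels)
print(model.predict(vectoriser.transform(["winner of free money"])))  # [1]
```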

Moving away from scripted responses is the next aim for AI. A survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence found that 37 percent of respondents believe human-like artificial intelligence will be achieved within five to 10 years.

AI Next will also involve DARPA’s Artificial Intelligence Exploration (AIE) programme announced back in July.

AIE is DARPA’s initiative for the development of AI concepts that it considers high-risk but high-payoff. The aim is to establish the feasibility of such projects within an 18-month timescale.

Experts warn of AI disasters leading to research lockdown (13 September 2018)
https://news.deepgeniusai.com/2018/09/13/experts-warn-ai-disasters-research/

Experts from around the world have warned of potential AI disasters that could lead to a lockdown of research.

Andrew Moore, the new head of AI at Google Cloud, is one such expert; he has warned of scenarios that could trigger a public backlash and restrictions preventing AI from reaching its full potential.

Back in November, Moore spoke at the Artificial Intelligence and Global Security Initiative. In his keynote, he said:

“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US.

“There are some even more horrible scenarios — which I don’t want to talk about on the stage, which we’re really worried about — that will cause the complete lockdown of robotics research.”

Autonomous vehicles have indeed already been involved in accidents.

Back in March, just four months after Moore’s warning, an Uber self-driving vehicle caused a fatality. The subsequent investigation found that Elaine Herzberg and her bicycle were detected by the car’s sensors but then flagged as a ‘false positive’ and dismissed.
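The sketch below is a generic illustration (not Uber’s actual pipeline) of how an over-aggressive false-positive filter can silently discard a genuine detection; the labels, confidences, and threshold are all invented.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

# A high cut-off suppresses noisy sensor returns, at the cost of silently
# dropping real obstacles that score below it.
FALSE_POSITIVE_THRESHOLD = 0.8

def filter_detections(frame: list[Detection]) -> list[Detection]:
    return [d for d in frame if d.confidence >= FALSE_POSITIVE_THRESHOLD]

frame = [
    Detection("plastic bag", 0.30),
    Detection("cyclist", 0.65),   # a real hazard, but below the cut-off
    Detection("vehicle", 0.95),
]
for d in filter_detections(frame):
    print("kept:", d.label)       # only 'vehicle' survives; the cyclist is dismissed
```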

Following years of sci-fi movies featuring out-of-control AI robots, it’s unsurprising the public are on edge about the pace of recent developments. There’s a lot of responsibility on researchers to conduct their work safely and ethically.

Professor Jim Al-Khalili, the incoming president of the British Science Association, told the Financial Times:

“It is quite staggering to consider that until a few years ago AI was not taken seriously, even by AI researchers.

“We are now seeing an unprecedented level of interest, investment and technological progress in the field, which many people, including myself, feel is happening too fast.”

In the race between world powers to become AI leaders, many fear it will lead to rushed and dangerous results. This is of particular concern with regard to AI militarisation.

Many researchers believe AI should not be used for military purposes. Several Google employees recently left the company over its contract with the Pentagon to develop image recognition software for the Pentagon’s drones.

Over 4,000 other employees signed a petition demanding that Google’s management cease the project and promise to never again ‘build warfare technology.’

Google has since made the decision not to renew its Pentagon contract when it expires. However, it’s already caused ripples across Silicon Valley with many employees for companies such as Microsoft and Amazon demanding not to be involved with military contracts.

Much like the development of nuclear weapons, however, AI being developed for military purposes seems inevitable and there will always be players willing to step in. Last month, AI News reported Booz Allen secured an $885 million Pentagon AI contract.

From a military standpoint, maintaining similar capabilities to a potential adversary is necessary. Back in July, China announced plans to upgrade its naval power with unmanned AI submarines that provide an edge over the fleets of its global counterparts.

Russian President Vladimir Putin, meanwhile, recently said: “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Few dispute that AI will have a huge impact on the world, but the debate rages on about whether it will be primarily good or bad. Beyond the potential dangers of rogue AIs, there’s also the argument over the impact on jobs.

Al-Khalili wants to see AI added to school curriculums – and public information programmes launched – to teach good practices, prepare the workforce, and reduce fears created by sci-fi.

What are your thoughts on AI fears?
Booz Allen secures $885m Pentagon defense AI contract (1 August 2018)
https://news.deepgeniusai.com/2018/08/01/booz-allen-pentagon-defense-ai/

While many private companies are shying away from providing their AI expertise for defense purposes, Booz Allen has secured an $885m Pentagon contract.

Under the contract, Booz Allen will help the Department of Defense and the intelligence community to “rapidly employ artificial intelligence, neural, and deep neural networks.”

The company notes how today’s intelligence environment necessitates that the US government maintain unprecedented amounts of Intelligence, Surveillance, and Reconnaissance (ISR) data and information to thwart potential attacks and threats around the globe.

Judi Dotson, Executive Vice President of Booz Allen, comments:

“The high volume, variety and velocity of intelligence acquired across the U.S. government cannot be harnessed by people alone.

“Our team of expert data scientists and engineers will apply cutting-edge solutions to deliver integrated eMAPS support to unlock the value of artificial intelligence (AI) and analytics, which will give warfighters positioned around the world the tools they need to drive U.S. national security forward.”

The use of AI for defense purposes is a controversial topic. Some believe AI should be kept out of military hands entirely, while others argue it’s important to keep pace with other nations who are likely to pursue similar technological advancements.

Earlier this year, employees from Google resigned in protest at its ‘Project Maven’ plans to develop AI for the US military to help airmen sift through hours and hours of drone surveillance video.

Over 4,000 other employees signed a petition demanding that Google’s management cease the project and promise to never again “build warfare technology.”

The company has since withdrawn its plans and said that it would not be renewing its contract with the Department of Defense when it expires next year.

In a further step, Google CEO Sundar Pichai wrote in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

Employees from other Silicon Valley companies leading in AI development, such as Microsoft and Amazon, made similar demands of their leadership.

As a government contracting giant, it’s unlikely Booz Allen will receive the same backlash as those consumer-focused companies. However, the fears over AI and defense will remain in the hearts of many observers.

What are your thoughts on the defense AI contract?
Don’t Be Evil: Google publishes its AI ethical principles following backlash (8 June 2018)
https://news.deepgeniusai.com/2018/06/08/google-ai-ethical-code/

Following the backlash over its Project Maven plans to develop AI for the US military, Google has withdrawn from the project and published its ethical principles.

Project Maven was Google’s collaboration with the US Department of Defense. In March, leaks indicated that Google supplied AI technology to the Pentagon to help analyse drone footage.

The following month, over 4,000 employees signed a petition demanding that Google’s management cease work on Project Maven and promise to never again “build warfare technology.”

In April 2018, Google’s infamous ‘Don’t be evil’ motto was removed from the preface of its code of conduct — but retained in the final line, which now says: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Google’s employees saw something that wasn’t right and did speak up. In fact, Gizmodo reported a dozen or so employees resigned in protest.

The company listened and told its employees last week that it would not be renewing its contract with the Department of Defense when it expires next year.

In a bid to further quell fears about the development of its AI technology and how the company intends it to be used, Google has today published its ethical principles.

Google CEO Sundar Pichai wrote in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

Some observers are concerned that the clauses about ‘accepted norms’ provide grounds to push the boundaries of what’s considered acceptable.

Gizmodo also reported that Google sought to help build systems that enabled the Pentagon to perform surveillance on entire cities. In China, that is something that is widely accepted and in use today.

Here are what Google says are the company’s key objectives for AI development:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Pichai promised the company “will work to limit potentially harmful or abusive applications” and will block the use of its technology should Google “become aware of uses that are inconsistent” with the principles set out today.

What are your thoughts on Google’s AI ethical principles?