military – AI News

DARPA’s AI-powered jet fight will be held virtually due to COVID-19 (10 August 2020)

An upcoming event to display and test AI-powered jet fighters will now be held virtually due to COVID-19.

“We are still excited to see how the AI algorithms perform against each other as well as a Weapons School-trained human and hope that fighter pilots from across the Air Force, Navy, and Marine Corps, as well as military leaders and members of the AI tech community will register and watch online,” said Col. Dan Javorsek, program manager in DARPA’s Strategic Technology Office.

“It’s been amazing to see how far the teams have advanced AI for autonomous dogfighting in less than a year.”

DARPA (the Defense Advanced Research Projects Agency) is using the AlphaDogfight Trials event to recruit more AI developers for its Air Combat Evolution (ACE) program.

The upcoming event is the final in a series of three and will finish with a bang as the AI-powered F-16 fighters virtually take on a human pilot.

“Regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI,” Javorsek added.

“If the champion AI earns the respect of an F-16 pilot, we’ll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program.”

The first event was held in November 2019 and featured the teams’ early algorithms.

A second event, held in January 2020, demonstrated the vast improvements made to the algorithms over a relatively short period of time. The algorithms took on adversaries created by the Johns Hopkins University Applied Physics Laboratory.

The third and final event will be streamed live from the Applied Physics Lab (APL) from August 18th-20th.

Eight teams will fly against five APL-developed adversary AI algorithms on day one. On day two, teams will fly against each other in a round-robin tournament.

Day three is when things get most exciting, with the top four teams competing in a single-elimination tournament for the AlphaDogfight Trials Championship. The winning team’s AI will then fly against a real F-16 pilot to test the AI’s abilities against a human.
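
For those curious how such a bracket works mechanically, the format is simple to express in code. Below is a minimal Python sketch of the day-two and day-three pairing logic; the team labels and coin-flip results are invented purely to make the example runnable and bear no relation to DARPA’s actual tournament software.

    import random
    from itertools import combinations

    # Hypothetical team labels -- the real competitor names are not listed here.
    teams = [f"Team {chr(65 + i)}" for i in range(8)]

    # Day two: a round-robin means every team meets every other team exactly once.
    round_robin = list(combinations(teams, 2))  # C(8, 2) = 28 match-ups

    # Coin-flip results purely to make the sketch runnable; in the trials the
    # outcome of each virtual dogfight decides the winner.
    wins = {t: 0 for t in teams}
    for a, b in round_robin:
        wins[random.choice((a, b))] += 1

    # Day three: the top four records seed a single-elimination bracket (1v4, 2v3).
    seeds = sorted(teams, key=wins.get, reverse=True)[:4]
    semi_finals = [(seeds[0], seeds[3]), (seeds[1], seeds[2])]
    print("Semi-finals:", semi_finals)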

ACE envisions future air combat eventually being conducted without putting human pilots at risk. In the meantime, DARPA hopes the initiative will help improve human pilots’ trust in fighting alongside AI.

Prior registration is required to view the event. Non-US citizens must register prior to August 11th while Americans have until August 17th.

You can register for the event here.

(Image Credit: DARPA)

Palantir took over Project Maven defense contract after Google backed out (12 December 2019)

Surveillance firm Palantir took up a Pentagon defense contract known as Project Maven after Google dropped out due to backlash.

Project Maven is a Pentagon initiative aiming to use AI technologies for deploying and monitoring unmanned aerial vehicles (UAVs).

Naturally, Google’s involvement with the initiative received plenty of backlash both internally and externally. At least a dozen employees quit Google while many others threatened to walk out if the firm continued building military products.

The pressure forced Google to abandon the lucrative Pentagon contract. However, that simply meant the contract was happily picked up by another company.

According to Business Insider, which broke the news, the company that stepped in to develop Project Maven was Palantir – a company founded by Peter Thiel, serial entrepreneur, venture capitalist, and cofounder of PayPal.

Business Insider reporter Becky Peterson wrote:

“Palantir is working with the Defense Department to build artificial intelligence that can analyze video feeds from aerial drones … Internally at Palantir, where names of clients are kept close to the vest, the project is referred to as ‘Tron,’ after the 1982 Steven Lisberger film.”

In July 2019, Thiel famously said that Google’s decision to pull out of Project Maven while pushing ahead with Project Dragonfly (a censored search project for China) amounted to ‘treason’ and should be investigated as such.

Project Maven/Tron is described as being capable of extensive tracking and monitoring via UAVs without human input, but the unclassified information available indicates that it will not be able to fire upon targets. This is somewhat in line with the norms being established around the use of AI in the military.

Many experts accept that AI will increasingly be used in the military but are seeking to establish acceptable practices. One of the key principles is that, while an AI can track and offer advice to human operators, it should never be able to make decisions by itself which could lead to loss of life.
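
That principle is easiest to see as a control-flow rule: the AI may recommend, but nothing executes without an explicit, attributable human decision. The Python sketch below is a hypothetical illustration of such a gate – all names and fields are invented, and it describes no real military system.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        """An AI-generated suggestion -- advisory only, never self-executing."""
        target_id: str
        confidence: float
        rationale: str

    def ai_recommend(track: dict) -> Recommendation:
        # Placeholder scoring; a real system would run a trained model here.
        return Recommendation(
            target_id=track["id"],
            confidence=track.get("score", 0.0),
            rationale="matched movement profile",
        )

    def act_on(rec: Recommendation, human_approved: bool, operator: str) -> str:
        # The gate: no action is taken without an accountable human decision.
        if not human_approved:
            return f"{rec.target_id}: recommendation logged; no action taken."
        return f"{rec.target_id}: action authorised by {operator}, who is accountable."

    rec = ai_recommend({"id": "track-42", "score": 0.91})
    print(act_on(rec, human_approved=False, operator="operator-1"))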

The speed with which the Project Maven contract was picked up by another company gives credence to arguments made by some tech giants that, rather than pulling out of such contracts altogether – and potentially handing them to less ethical companies – it’s better to help shape them from the inside.

Pentagon is ‘falling behind’ in military AI, claims former NSWC chief (23 October 2019)

The former head of US Naval Special Warfare Command (NSWC) has warned that the Pentagon is falling behind adversaries in military AI development.

Speaking on Tuesday, Rear Adm. Brian Losey said AI is able to provide tactical guidance as well as anticipate enemy actions and mitigate threats. Adversaries with such technology will have a significant advantage.

Losey has retired from the military and is now a partner at San Diego-based Shield AI.

Shield AI specialises in building artificial intelligence systems for the national security sector. The company’s flagship Hivemind AI enables autonomous robots to “see”, “reason”, and “search” the world. Nova is Shield AI’s first Hivemind-powered robot which autonomously searches buildings while streaming video and generating maps.
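
Shield AI has not published how Hivemind works internally, but autonomous building search is commonly built on frontier exploration over an occupancy grid: the robot repeatedly heads for the nearest mapped free cell that borders unexplored space. The Python sketch below illustrates that general idea under those assumptions; the toy map is invented.

    from collections import deque

    # Toy occupancy grid: 0 = free, 1 = wall, -1 = unexplored.
    grid = [
        [0, 0, -1, -1],
        [0, 1, -1, -1],
        [0, 0,  0, -1],
    ]
    MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def frontiers(grid):
        """Free cells adjacent to unexplored space -- candidate places to fly next."""
        cells = set()
        for r, row in enumerate(grid):
            for c, value in enumerate(row):
                if value != 0:
                    continue
                for dr, dc in MOVES:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < len(grid) and 0 <= nc < len(row) and grid[nr][nc] == -1:
                        cells.add((r, c))
                        break
        return cells

    def nearest_frontier(grid, start):
        """Breadth-first search from the robot's cell to the closest frontier."""
        targets = frontiers(grid)
        queue, seen = deque([start]), {start}
        while queue:
            r, c = queue.popleft()
            if (r, c) in targets:
                return (r, c)
            for dr, dc in MOVES:
                nr, nc = r + dr, c + dc
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] == 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return None  # no frontiers left: the building is fully mapped

    print(nearest_frontier(grid, (0, 0)))  # -> (0, 1) on this toy map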

During a panel discussion at The Promise and The Risk of the AI Revolution conference, Losey said:

“We’re losing a lot of folks because of encounters with the unknown. Not knowing when we enter a house whether hostiles will be there and not really being able to adequately discern whether there are threats before we encounter them. And that’s how we incurred most of our casualties.

The idea is: can we use autonomy, can we use edge AI, can we use AI for manoeuvre to mitigate risk to operators to reduce casualties?”

AI has clear benefits today for soldiers on the battlefield, national policing, and even areas such as firefighting. In the future, it may be vital for national defense against ever more sophisticated weapons.

Some of the US’ historic adversaries, such as Russia, have already shown off developments such as killer robots and hypersonic missiles. AI will be vital for equalising capabilities and will hopefully act as a deterrent to the use of such weaponry.

“If you’re concerned about national security in the future, then it is imperative that the United States lead AI so that we can unfold the best practices so that we’re not driven by secure AI to assume additional levels of risk when it comes to lethal actions,” Losey said.

Meanwhile, Nobel Peace Prize winner Jody Williams has warned against robots making life-and-death decisions on the battlefield. Williams said it is ‘unethical and immoral’ and can never be undone.

Williams was speaking at the UN in New York following the announcement of the US military’s Project Quarterback, which uses AI to make decisions on what human soldiers should target and destroy.

“We need to step back and think about how artificial intelligence robotic weapons systems would affect this planet and the people living on it,” said Williams during a panel discussion.

It’s almost inevitable AI will be used for military purposes. Arguably, the best we can hope for is to quickly establish international norms for its development and usage to minimise the potential for unthinkable damage.

One such norm that many researchers have backed is that AI should only make recommendations on actions to take, but a human should take accountability for any decision made.

A 2017 report by Human Rights Watch chillingly concluded that no-one is currently accountable for a robot unlawfully killing someone in the heat of battle.

EU AI Expert Group: Ethical risks are ‘unimaginable’ (11 April 2019)

The EU Commission’s AI expert group has published its assessment of the rapidly-advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, the tracking of individuals, and ‘scoring’ people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes to tracking individuals, the experts foresee people’s biometric data being used involuntarily, such as for “lie detection [or] personality assessment through micro expressions”.

Citizen scoring is on some people’s minds after being featured in an episode of the dystopian series Black Mirror. The experts note that scoring criteria must be transparent and fair, with scores being challengeable.

The guidelines have been several years in the making and have launched alongside a pilot project for testing how they work in practice.

Experts from various fields across Europe sit on the group, including academic lawyers from Birmingham and Oxford universities.

They concluded: “it is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”

The EU as a whole is looking to invest €20bn (£17bn) every year for the next decade to close the current gap between European developments and those in Asia and North America.

US defense department outlines its AI strategy (14 February 2019)

Shortly after President Trump issued his vague AI executive order, the US Defense Department outlined a more comprehensive strategy.

“The impact of artificial intelligence will extend across the entire department, spanning from operations and training to recruiting and healthcare,” DoD CIO Dana Deasy said.

A 17-page document outlines how the DoD intends to advance its AI prowess with five key steps:

  1. Delivering AI-enabled capabilities that address key missions.
  2. Scaling AI’s impact across DoD through a common foundation that enables decentralized development and experimentation.
  3. Cultivating a leading AI workforce.
  4. Engaging with commercial, academic, and international allies and partners.
  5. Leading in military ethics and AI safety.

Given the concerns about the so-called AI ‘arms race’, that final point will draw a sigh of relief from some – at least from those who believe it.

The DoD will rapidly prototype new innovations, increase research and development, and boost training and recruitment.

Rather than AI replacing jobs, the DoD believes it will empower those currently serving: “The women and men in the US armed forces remain our enduring source of strength; we will use AI-enabled information, tools, and systems to empower, not replace, those who serve.”

Prior to his resignation as US Secretary of Defense, General James Mattis implored the president to create a national strategy for AI. With his defense background, Mattis was concerned the US is not keeping pace with the likes of China.

Here are example areas in which the DoD believes AI can improve day-to-day operations:

  • Improving situational awareness and decision-making.
  • Increasing the safety of operating equipment.
  • Implementing predictive maintenance and supply (see the sketch after this list).
  • Streamlining business processes (e.g. reducing the time spent on highly manual, repetitive, and frequent tasks).
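
To make the predictive maintenance item concrete: the usual approach is to fit a trend to historical sensor readings and schedule service before a projected wear threshold is crossed. The Python sketch below shows the simplest possible version of that idea; every reading and threshold in it is invented for illustration.

    # Toy predictive maintenance: fit a linear trend to vibration readings and
    # estimate when the part will cross its service threshold.
    hours = [100, 200, 300, 400, 500]       # operating hours at each inspection
    vibration = [2.1, 2.4, 2.8, 3.1, 3.5]   # sensor readings (mm/s), invented
    THRESHOLD = 5.0                         # service the part before this level

    n = len(hours)
    mean_x, mean_y = sum(hours) / n, sum(vibration) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, vibration))
             / sum((x - mean_x) ** 2 for x in hours))
    intercept = mean_y - slope * mean_x

    # Projected operating hour at which vibration reaches the threshold.
    hours_at_threshold = (THRESHOLD - intercept) / slope
    print(f"Schedule maintenance before ~{hours_at_threshold:.0f} operating hours")

Real deployments replace the straight-line fit with models over many sensors, but the scheduling logic – predict, compare with a threshold, act early – is the same.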

“The present moment is pivotal: we must act to protect our security and advance our competitiveness,” the DoD document states. “But we must embrace change if we are to reap the benefits of continued security and prosperity for the future.”

Chinese university recruits ‘patriotic’ students to build AI weapons (9 November 2018)

A university in China has recruited 27 boys and four girls to become the world’s youngest AI weapons scientists.

All of the students are under 18 and were picked from a list of 5,000 candidates by the Beijing Institute of Technology (BIT).

Beyond academic prowess, BIT sought other qualities in the candidates.

“We are looking for qualities such as creative thinking, willingness to fight, a persistence when facing challenges,” a BIT professor told the South China Morning Post.

The recruitment of students from such a young age marks a new point in the race to weaponise AI, primarily led by the US and China.

Students on the ‘Experimental Program for Intelligent Weapons Systems’ course will be mentored by two senior weapons scientists.

Following their first semester, the students will be asked to choose a speciality field in order to be assigned to a relevant defence laboratory for hands-on experience.

The course is four years long and students will be expected to progress onto a PhD at the university to lead China’s AI weapons initiatives.

Last year, Chinese President Xi Jinping emphasised his country will be putting a much greater focus on military AI research.

AI News reported back in July that China is planning for a new era of sea power with unmanned AI-powered submarines. The country hopes to have them operational by the early 2020s to patrol areas home to disputed military bases.

“The AI has no soul. It is perfect for this kind of job,” said Lin Yang, Chief Scientist on the project. “[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike.”

Of particular concern is that China’s subs are being designed not to seek input during the course of a mission. The international norm being promoted by AI researchers is that any weaponised AI system ultimately requires human input to make decisions.

If China is prepared to fully automate their submarines, it’s likely they’re willing to do so for other weapons systems.

There’s the famous story of Soviet officer Stanislav Petrov, who decided not to launch the country’s nuclear warheads after a computer glitch made it appear that five Minuteman intercontinental ballistic missiles had been launched by the US towards the Soviet Union.

Human instinct averted a nuclear disaster that day.

“We are wiser than the computers,” Petrov said in a 2010 interview with the German magazine Der Spiegel. “We created them.”

Had it been an AI instead of Petrov making the decision in 1983, the outcome would likely have been very different. China’s apparent willingness to fully automate weapons should be a concern to us all.

Google funding ‘good’ AI may help some forget that military fiasco (30 October 2018)

Google has launched an initiative to fund ‘good’ AI which may help some forget about the questionable military contracts it was involved with.

The new initiative, called AI for Social Good, is a joint effort between the company’s philanthropic subsidiary Google.org and its own experts.

Kicking off the initiative is the ‘AI Impact Challenge’, which will provide $25 million in funding to non-profits along with access to Google’s vast resources.

As part of the initiative, Google partnered with the Pacific Islands Fisheries Science Center of the US National Oceanic and Atmospheric Administration (NOAA) to develop algorithms to identify humpback whale calls.

The algorithms were created using 15 years’ worth of data and provide vital information about humpback whale presence, seasonality, daily calling behaviour, and population structure.
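
Google has described the underlying approach as treating audio like images: recordings are converted into spectrograms and fed to a classifier. The Python sketch below shows that pipeline in skeletal form on synthetic audio; a production system trains a deep network on labelled recordings rather than the simple logistic regression used here for brevity.

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.linear_model import LogisticRegression

    FS = 4_000  # sample rate (Hz); humpback calls sit well below 2 kHz
    rng = np.random.default_rng(0)

    def make_clip(has_call: bool) -> np.ndarray:
        """Synthesise a one-second clip: ocean noise, plus a tone if a 'call' is present."""
        t = np.arange(FS) / FS
        clip = rng.normal(0.0, 1.0, FS)               # background noise
        if has_call:
            clip += 3 * np.sin(2 * np.pi * 300 * t)   # toy stand-in for a whale call
        return clip

    def features(clip: np.ndarray) -> np.ndarray:
        """Time-averaged log spectrogram -- the 'image' a real classifier would see."""
        _, _, sxx = spectrogram(clip, fs=FS)
        return np.log1p(sxx).mean(axis=1)

    labels = [bool(i % 2) for i in range(40)]
    X = np.array([features(make_clip(y)) for y in labels])
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    print("training accuracy:", model.score(X, labels))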

While it’s great to see Google funding and lending its expertise to important AI projects, it’s set against a wider backdrop of Silicon Valley tech giants’ involvement in controversial areas such as defence.

Google itself was embroiled in a backlash over its ‘Project Maven’ defence contract to supply drone-analysing AI to the Pentagon. The contract received both internal and external criticism.

Back in April, Google’s infamous ‘Don’t be evil’ motto was removed from its code of conduct’s preface. Now, in the final line, it says: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Google’s employees spoke up. Over 4,000 signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Here are what Google says are the company’s key objectives for AI development:

    1. Be socially beneficial.
    2. Avoid creating or reinforcing unfair bias.
    3. Be built and tested for safety.
    4. Be accountable to people.
    5. Incorporate privacy design principles.
    6. Uphold high standards of scientific excellence.
    7. Be made available for uses that accord with these principles.

That first objective, “be socially beneficial”, is what Google is aiming for with its latest initiative. The company says it’s not against future government contracts as long as they’re ethical.

“We’re entirely happy to work with the US government and other governments in ways that are consistent with our principles,” Google’s AI chief Jeff Dean told reporters Monday.

DARPA introduces ‘third wave’ of artificial intelligence (28 September 2018)

The Pentagon is launching a new artificial intelligence push it calls ‘AI Next’ which aims to improve the relationship between machines and humans.

As part of the multi-year initiative, the US Defense Advanced Research Projects Agency (DARPA) is set to invest more than $2bn in the programme.

In promo material for the programme, DARPA says AI Next will accelerate “the Third Wave” which enables machines to adapt to changing situations.

For instance, adaptive reasoning will enable computer algorithms to discern the difference between the use of ‘principal’ and ‘principle’ by analysing the surrounding words to determine context.
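
A crude version of that idea fits in a few lines: represent each occurrence of the ambiguous word by the words around it and let a classifier learn which neighbours signal which spelling. The Python sketch below does exactly that with invented training sentences; it illustrates the statistical approach generally, not any specific DARPA system.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Context windows around the ambiguous word; the examples are invented.
    contexts = [
        "the school hired a new ___ last year",      # principal
        "she was promoted to ___ of the academy",    # principal
        "the ___ of least privilege guides design",  # principle
        "it violates a basic moral ___",             # principle
    ]
    labels = ["principal", "principal", "principle", "principle"]

    # Bag-of-words over the surrounding context feeds a naive Bayes classifier.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(contexts, labels)

    print(model.predict(["the ___ met with the teachers"]))          # -> principal
    print(model.predict(["a guiding ___ of the scientific method"])) # -> principle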

Dr Steven Walker, Director of DARPA, said:

“Today, machines lack contextual reasoning capabilities and their training must cover every eventuality – which is not only costly – but ultimately impossible.

We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognise new situations and environments and adapt to them.”

DARPA defines the first wave of AI as enabling “reasoning over narrowly defined problems,” but with a poor level of certainty. The second wave, it claims, enables “creating statistical models and training them on big data,” albeit with minimal reasoning.

Moving away from scripted responses is the next aim for AI. A survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence found that 37 percent of respondents believe human-like artificial intelligence will be achieved within five to 10 years.

AI Next will also involve DARPA’s Artificial Intelligence Exploration (AIE) programme announced back in July.

AIE is DARPA’s initiative for the development of AI concepts that it considers high-risk but high-payoff. The aim is to establish the feasibility of such projects within a one-and-a-half-year timescale.

Britain successfully trials AI in battlefield scanning experiment (24 September 2018)

Britain has successfully trialled using AI to scan for hidden attackers in a mock urban battlefield environment in Montreal, Canada.

The AI, called SAPIENT, was developed in the UK with the aim of using sensors to detect potential unseen dangers to soldiers.

SAPIENT is more efficient than manually scanning live feeds and frees up soldiers for operational duties elsewhere.

Canada and the UK maintain a close security partnership as part of the so-called ‘Five Eyes’ alliance which also includes Australia, New Zealand, and the United States.

SAPIENT was tested alongside other high-end military technologies including exoskeleton suits and new surveillance and night vision equipment.

Defence Minister Stuart Andrew said:

“This British system can act as autonomous eyes in the urban battlefield. This technology can scan streets for enemy movements so troops can be ready for combat with quicker, more reliable information on attackers hiding around the corner.

Investing millions in advanced technology like this will give us the edge in future battles.”

Trials with Five Eyes partners are due to go on for three weeks and include soldiers from each nation. A similar exercise is due to be conducted in the UK in 2020.

AI being used for military purposes is a controversial subject. Some believe it should have no role, while others feel it must be developed to keep pace with nations such as Russia and China, which are both investing heavily.

“Artificial intelligence is the future, not only for Russia, but for all humankind,” said Russian President Vladimir Putin. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Earlier this year, Google was forced to drop its contract with the Pentagon to develop AI technology for drones following backlash and resignations from employees. Many other Silicon Valley giants followed in committing not to undertake military work.

Other companies, however, were more than happy to pick up the lucrative contracts.

What are your thoughts on SAPIENT and military AI?

Experts warn of AI disasters leading to research lockdown (13 September 2018)

Experts from around the world have warned of potential AI disasters that could lead to a subsequent lockdown of research.

Andrew Moore, the new head of AI at Google Cloud, is one such expert who has warned of scenarios that would lead to public backlash and restrictions that would prevent AI from reaching its full potential.

Back in November, Moore spoke at the Artificial Intelligence and Global Security Initiative. In his keynote, he said:

“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US.

There are some even more horrible scenarios — which I don’t want to talk about on the stage, which we’re really worried about — that will cause the complete lockdown of robotics research.”

Autonomous vehicles have indeed already been involved in accidents.

Back in March, just four months after Moore’s warning, an Uber self-driving vehicle caused a fatality. The subsequent investigation found that Elaine Herzberg and her bicycle were detected by the car’s sensors but then flagged as a ‘false positive’ and dismissed.

Following years of sci-fi movies featuring out-of-control AI robots, it’s unsurprising the public are on edge about the pace of recent developments. There’s a lot of responsibility on researchers to conduct their work safely and ethically.

Professor Jim al-Khalili, the incoming president of the British Science Association, told the Financial Times:

“It is quite staggering to consider that until a few years ago AI was not taken seriously, even by AI researchers.

We are now seeing an unprecedented level of interest, investment and technological progress in the field, which many people, including myself, feel is happening too fast.”

In the race between world powers to become AI leaders, many fear the rush will produce dangerous results. This is of particular concern with regard to AI militarisation.

Many researchers believe AI should not be used for military purposes. Several Google employees recently left the company over its contract with the Pentagon to develop recognition software for its drones.

Over 4,000 other employees signed a petition demanding that Google’s management cease the project and promise to never again ‘build warfare technology.’

Google has since made the decision not to renew its Pentagon contract when it expires. However, it’s already caused ripples across Silicon Valley with many employees for companies such as Microsoft and Amazon demanding not to be involved with military contracts.

Much like the development of nuclear weapons, however, AI being developed for military purposes seems inevitable and there will always be players willing to step in. Last month, AI News reported Booz Allen secured an $885 million Pentagon AI contract.

From a military standpoint, maintaining capabilities similar to those of potential adversaries is necessary. Back in July, China announced plans to upgrade its naval power with unmanned AI submarines that provide an edge over the fleets of its global counterparts.

Russian President Vladimir Putin, meanwhile, recently said: “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Few dispute that AI will have a huge impact on the world, but the debate rages on about whether it will be primarily good or bad. Beyond the potential dangers of rogue AIs, there’s also the argument over the impact on jobs.

Al-Khalili wants to see AI added to school curriculums – as well as public information programmes launched – to teach good practices, prepare the workforce, and reduce fears created by sci-fi.

What are your thoughts on AI fears?