Pentagon is ‘falling behind’ in military AI, claims former NSWC chief

23 October 2019

The former head of US Naval Special Warfare Command (NSWC) has warned that the Pentagon is falling behind adversaries in military AI development.

Speaking on Tuesday, Rear Adm. Brian Losey said AI can provide tactical guidance, anticipate enemy actions, and mitigate threats. Adversaries that field such technology will hold a significant advantage.

Losey has since retired from the military and is now a partner at San Diego-based Shield AI.

Shield AI specialises in building artificial intelligence systems for the national security sector. The company’s flagship Hivemind AI enables autonomous robots to “see”, “reason”, and “search” the world. Nova, Shield AI’s first Hivemind-powered robot, autonomously searches buildings while streaming video and generating maps.

During a panel discussion at The Promise and The Risk of the AI Revolution conference, Losey said:

“We’re losing a lot of folks because of encounters with the unknown. Not knowing when we enter a house whether hostiles will be there and not really being able to adequately discern whether there are threats before we encounter them. And that’s how we incurred most of our casualties.

“The idea is: can we use autonomy, can we use edge AI, can we use AI for manoeuvre to mitigate risk to operators to reduce casualties?”

AI has clear benefits today for soldiers on the battlefield, national policing, and even areas such as firefighting. In the future, it may be vital for national defence against ever more sophisticated weapons.

Some of the US’ historic adversaries, such as Russia, have already shown off developments including killer robots and hypersonic missiles. AI will be vital to equalising these capabilities and will hopefully act as a deterrent against the use of such weaponry.

“If you’re concerned about national security in the future, then it is imperative that the United States lead AI so that we can unfold the best practices so that we’re not driven by secure AI to assume additional levels of risk when it comes to lethal actions,” Losey said.

Meanwhile, Nobel Peace Prize winner Jody Williams has warned against robots making life-and-death decisions on the battlefield. Williams said such decisions are ‘unethical and immoral’ and can never be undone.

Williams was speaking at the UN in New York following the US military’s announcement of Project Quarterback, which uses AI to make decisions on what human soldiers should target and destroy.

“We need to step back and think about how artificial intelligence robotic weapons systems would affect this planet and the people living on it,” said Williams during a panel discussion.

It’s almost inevitable that AI will be used for military purposes. Arguably, the best we can hope for is to quickly establish international norms for its development and use to minimise the otherwise unthinkable potential for damage.

One such norm, which many researchers have backed, is that AI should only make recommendations on actions to take, while a human takes accountability for any decision made.
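
As a rough illustration of that norm, here is a minimal human-on-the-loop sketch, assuming a hypothetical recommendation pipeline (all names, from `Recommendation` to `op-117`, are illustrative rather than any real system’s API):

```python
# A minimal sketch of the "human on the loop" norm: the model only
# proposes actions, and nothing proceeds without an explicit, logged
# sign-off from a named operator. All names here are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    action: str          # what the model proposes
    confidence: float    # the model's own confidence estimate
    rationale: str       # explanation surfaced to the human reviewer

@dataclass
class Decision:
    recommendation: Recommendation
    approved: bool
    operator_id: str     # the accountable human, recorded for audit
    timestamp: str

def review(rec: Recommendation, operator_id: str) -> Decision:
    """Present a recommendation to a human; the system never acts alone."""
    print(f"Proposed: {rec.action} (confidence {rec.confidence:.0%})")
    print(f"Rationale: {rec.rationale}")
    approved = input("Approve? [y/N] ").strip().lower() == "y"
    return Decision(rec, approved, operator_id,
                    datetime.now(timezone.utc).isoformat())

# The model recommends; the named operator decides and is logged.
rec = Recommendation("flag sector 4 for closer surveillance", 0.82,
                     "movement pattern matches prior threat profiles")
decision = review(rec, operator_id="op-117")
if decision.approved:
    print("Action authorised by", decision.operator_id)
```

The point of the design is the audit trail: every action traces back to a named human, not to the model.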

A 2017 report by Human Rights Watch chillingly concluded that no one is currently accountable when a robot unlawfully kills someone in the heat of battle.

Editorial: Stopping AI’s discrimination will be difficult, but vital

17 May 2018

Several human rights organisations have signed a declaration calling for governments and companies to help ensure AI technologies do not discriminate, but it’s going to be difficult.

Amnesty International and Access Now prepared the ‘Toronto Declaration’ (PDF), which has also been signed by Human Rights Watch and the Wikimedia Foundation. As an open declaration, other companies, governments, and organisations are being called on to add their endorsement.

In a post, Access Now wrote:

“As machine learning systems advance in capability and increase in use, we must examine the positive and negative implications of these technologies.

We acknowledge the potential for these technologies to be used for good and to promote human rights, but also the potential to intentionally or inadvertently discriminate against individuals or groups of people.

We must keep our focus on how these technologies will affect individual human beings and human rights. In a world of machine learning systems, who will bear accountability for harming human rights?”

Ethics have become a major talking point in the AI industry. However, much of the conversation so far has focused on drawing red lines when it comes to surveillance and military applications.

There’s a big debate over AI’s potential impact on jobs. Some believe automation will cause a shortage of work, while others argue most jobs will simply be enhanced by AI.

If jobs are being replaced, ideas like a universal basic income will have to be re-examined. If jobs are being enhanced, ensuring AI does not discriminate will be even more important.

AI has already shown discrimination

Technologies used in the West are typically developed by white males.

Research into the gender and race gap among Silicon Valley executives provides at least some indication of the representation problem.

What this means is that, unintentionally, products often perform better for this particular group. Today, that could mean something relatively trivial, like Siri recognising an American male voice with greater accuracy (even as a British male, I find Silicon Valley-developed products often struggle with my accent!).

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.
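
A single aggregate accuracy number can hide exactly the gap these studies found. Here is a minimal sketch of the obvious countermeasure, with purely illustrative data and placeholder labels (`group_a` and `group_b` are not figures from the studies cited):

```python
# Break accuracy out per demographic group instead of reporting one
# aggregate figure; diverging numbers reveal the kind of gap above.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

results = [
    ("group_a", "match", "match"),
    ("group_a", "match", "match"),
    ("group_a", "no_match", "match"),
    ("group_b", "no_match", "match"),
    ("group_b", "no_match", "match"),
    ("group_b", "match", "match"),
]
for group, acc in accuracy_by_group(results).items():
    print(f"{group}: {acc:.0%} accurate")
# group_a: 67% accurate
# group_b: 33% accurate -- the model is not serving everyone equally,
# even though the aggregate accuracy is a respectable-looking 50%.
```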

However, if jobs are becoming more reliant on AI, these systems need to work equally well for everyone who uses them. Failing to ensure this will give certain groups an advantage over others.

“From policing, to welfare systems, online discourse, and healthcare – to name a few examples – systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights,” wrote Access Now.

Policing is one area of particular concern. An investigative report by ProPublica revealed that computer-generated ‘risk assessment scores’ used to determine eligibility for parole are almost twice as likely to label black defendants as potential repeat offenders, despite evidence to the contrary.
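
The statistic behind ProPublica’s finding is essentially a false positive rate compared across groups: defendants labelled high-risk who did not go on to reoffend. A minimal sketch of that check, with illustrative data rather than the real figures:

```python
# Compare false positive rates across groups: the share of people who
# were labelled high-risk but did not reoffend. Data is illustrative.

def false_positive_rate(rows):
    """rows: (predicted_high_risk, reoffended) boolean pairs."""
    false_positives = sum(1 for pred, actual in rows if pred and not actual)
    negatives = sum(1 for _, actual in rows if not actual)
    return false_positives / negatives if negatives else 0.0

by_group = {
    "group_a": [(True, False), (True, False), (False, False), (True, True)],
    "group_b": [(True, False), (False, False), (False, False), (True, True)],
}
for group, rows in by_group.items():
    print(f"{group}: false positive rate {false_positive_rate(rows):.0%}")
# group_a: false positive rate 67%
# group_b: false positive rate 33% -- roughly equal rates are a minimal
# fairness requirement; ProPublica found nearly a two-to-one disparity.
```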

A 2012 IEEE study (paywall) similarly found that police surveillance cameras using facial recognition to identify suspected criminals are five to ten percent less accurate when identifying African Americans, which could lead to more innocent black people being arrested.

Machine learning models are often trained on public data, so we must be careful about which sources are used. Tay, Microsoft’s attempt at a chatbot that learns from the public, infamously ended up becoming a rather unsavoury character spouting racist and sexist remarks.
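
A real pipeline would use a trained toxicity classifier rather than a keyword list, but the minimal sketch below (with placeholder terms) illustrates where the gate sits: public data is vetted before the model ever learns from it.

```python
# Vet public data before training on it. BLOCKLIST is a placeholder;
# production systems would use a trained toxicity classifier instead.
BLOCKLIST = {"slur_1", "slur_2"}  # illustrative stand-ins, not real terms

def is_acceptable(text: str) -> bool:
    """Reject any message containing a blocklisted term."""
    return not (set(text.lower().split()) & BLOCKLIST)

raw_messages = [
    "what a lovely day",
    "you are a slur_1",  # dropped before the model ever sees it
]
training_data = [msg for msg in raw_messages if is_acceptable(msg)]
print(training_data)  # ['what a lovely day']
```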

The declaration signed today is a great start towards keeping these issues in mind as AI technologies are developed, but making those developments truly representative of the people they serve will require tackling inequalities across the whole of society.

What are your thoughts on the AI discrimination issue?