Amnesty International warns of AI ‘nightmare scenarios’

Human rights campaigners Amnesty International have warned of the potential ‘nightmare scenarios’ that could arise from AI if it is left unchecked.

In a blog post, Amnesty outlines one scenario in which autonomous systems choose military targets with little to no human oversight.

Military AI Fears

The development of AI has been likened to another arms race. Much like nuclear weapons, the argument goes that if one nation doesn’t develop its capabilities, others will. Furthermore, there is a greater incentive to use those capabilities when a nation knows it holds the upper hand.

Much progress has been made on nuclear disarmament, although the US and Russia still hold, and continue to modernise, huge arsenals (approximately 6,800 and 7,000 warheads, respectively).

This rivalry shows no signs of letting up, and Russia continues to be linked with rogue-state-like actions including hacking, interference in Western diplomatic processes, misinformation campaigns, and even assassinations.

Last week, it was revealed that New York-based artificial intelligence startup Clarifai had a server compromised while it was conducting secretive work on the U.S. Defense Department’s Project Maven.

Project Maven aims to automate the processing of drone imagery; following a backlash, Google has decided not to renew its contract to lend its expertise to the project. While it’s unclear whether the hack was state-sponsored, it allegedly originated from Russia.

AI Discrimination

The next concern on Amnesty’s list is discrimination by biased algorithms, whether intentional or not.

Unfortunately, the current under-representation problem in STEM fields is causing unintentional bias.

Here in the West, technologies are still mostly developed by white males and can often unintentionally perform better for this group.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asian faces, while those designed in Western countries are more accurate at recognising Caucasian faces.
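
To make that kind of disparity concrete, the hypothetical sketch below measures a face recognition model’s accuracy separately for each demographic group in a labelled test set. The `model.predict` interface, the sample format, and the group labels are assumptions for illustration, not the study’s actual methodology.

```python
# Hypothetical sketch: per-group accuracy of a face recognition model.
# The model interface and sample format are assumed for illustration only.
from collections import defaultdict


def accuracy_by_group(model, samples):
    """samples: iterable of (image, true_identity, demographic_group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, identity, group in samples:
        total[group] += 1
        if model.predict(image) == identity:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}


# A large gap between groups (for example 0.95 vs 0.78) is the kind of
# demographic performance skew the study describes.
```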

Digital rights campaigners Access Now recently wrote in a post:

“From policing, to welfare systems, online discourse, and healthcare – to name a few examples – systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights.”

One company, Pymetrics, recently released an open-source tool for detecting unintentional bias in algorithms. Provided such tools are actually used, they could prove vital in ensuring digital equality.
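
As a rough illustration of what such an audit involves, rather than the actual API of Pymetrics’ tool, the sketch below applies the common ‘four-fifths rule’ to a model’s selection rates across groups; the group names and rates are made up for the example.

```python
# Illustrative 'four-fifths rule' check for disparate impact.
# Group names and selection rates below are hypothetical example values.

def disparate_impact_ratio(selection_rates: dict) -> float:
    """selection_rates maps group name -> share of that group receiving a positive outcome."""
    return min(selection_rates.values()) / max(selection_rates.values())


rates = {"group_a": 0.62, "group_b": 0.48}
ratio = disparate_impact_ratio(rates)
print(f"Impact ratio: {ratio:.2f}")  # ratios below ~0.8 are commonly flagged as adverse impact
```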

Meanwhile, some companies are deliberately implementing bias in their algorithms.

Russian startup NtechLab has come under fire after building an ‘ethnicity detection’ feature into its facial recognition system. Considering the existing problem of racial profiling, the idea of it becoming automated naturally raises some concern.

In a bid to quell fears about the use of its own technology for nefarious purposes, Google published its ethical principles for AI development.

Google says it will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

Last month, Amnesty International and Access Now circulated the Toronto Declaration (PDF) which proposed a set of principles to prevent discrimination from AI and help to ensure its responsible development.

Then there’s MIT, whose researchers deliberately built a psychopathic AI inspired by a fictional serial killer.

What are your thoughts on Amnesty International’s AI concerns?

 
