Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’ – AI News, Mon, 23 Sep 2019 – https://news.deepgeniusai.com/2019/09/23/microsoft-brad-smith-killer-robots-unstoppable/

The post Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’ appeared first on AI News.

Microsoft chief Brad Smith issued a warning over the weekend that killer robots are ‘unstoppable’ and a new digital Geneva Convention is required.

Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.

While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.

As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.

Speaking to The Telegraph, Smith appears to agree, pointing to developments in the US, China, UK, Russia, Israel, South Korea, and elsewhere, where autonomous weapon systems are being developed.

Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.

Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.

There’s still no clear answer to who is responsible for deaths or injuries caused by an autonomous machine – the manufacturer, the developer, or an overseer. The same question has been a subject of much debate with regard to how insurance will work with driverless cars.

With military applications, many technologists have called for AI to never make a combat decision – especially one that would result in fatalities – on its own. While AI can make recommendations, a final decision must be made by a human.

Preventing unimaginable devastation

The story of Soviet lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight could cause unimaginable devastation.

Petrov’s computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. The Soviet Union’s strategy in such a scenario was an immediate and compulsory nuclear counter-attack against the US. Petrov trusted his instinct that the computer was wrong and decided against launching a nuclear missile – and he was right.

Had the 1983 decision on whether to deploy a nuclear missile been left solely to the computer, one would have been launched and met with retaliatory launches from the US and its allies.

Smith wants to see a new digital Geneva Convention that brings world powers together in agreement over acceptable norms for AI. “The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.”

Many companies – including thousands of Google employees, following backlash over a Pentagon contract to develop AI tech for drones – have pledged not to develop AI technologies for harmful use.

Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. “There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse.”

Last month, a report from Dutch NGO PAX said leading tech firms are putting the world ‘at risk’ of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself warned investors back in February that its AI offerings could damage the company’s reputation. 

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of PAX’s report.

A global campaign simply titled Campaign To Stop Killer Robots now includes 113 NGOs across 57 countries and has doubled in size over the past year.


Report: Companies like Amazon and Microsoft are ‘putting world at risk’ of killer AI – AI News, Thu, 22 Aug 2019 – https://news.deepgeniusai.com/2019/08/22/report-companies-amazon-microsoft-world-risk-ai/

The post Report: Companies like Amazon and Microsoft are ‘putting world at risk’ of killer AI appeared first on AI News.

A survey of major players within the industry concludes that leading tech companies like Amazon and Microsoft are putting the world ‘at risk’ of killer AI.

PAX, a Dutch NGO, ranked 50 firms based on three criteria:

  1. Whether technology they’re developing could be used for killer AI.
  2. Their involvement with military projects.
  3. Whether they’ve committed to not being involved with military applications in the future.

Microsoft and Amazon are named among the world’s ‘highest risk’ tech companies, while Google leads the way among large tech companies in implementing proper safeguards.

Google’s ranking among the safest tech companies may come as a surprise to some, given the company’s reputation for mass data collection. Mountain View was also caught up in an outcry over its controversial ‘Project Maven’ contract with the Pentagon.

Project Maven was a contract Google had with the Pentagon to supply AI technology for military drones. Several high-profile employees resigned over the contract, while over 4,000 Google staff signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Pichai’s promise not to be involved with such contracts in the future appears to have satisfied PAX in its rankings. Google has since attempted to improve its public image around its AI developments, for instance by creating a dedicated ethics panel – but that effort backfired and collapsed quickly after the panel was found to include a member of a right-wing think tank and a defence drone mogul.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Microsoft, which ranks among the highest risk tech companies in PAX’s list, warned investors back in February that its AI offerings could damage the company’s reputation. 

In a quarterly report, Microsoft wrote:

“Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

Some of Microsoft’s forays into the technology have already proven troublesome, such as chatbot ‘Tay’ which became a racist, sexist, generally-rather-unsavoury character after internet users took advantage of its machine-learning capabilities.

Microsoft and Amazon are both currently bidding for a $10 billion Pentagon contract to provide cloud infrastructure for the US military.

“Tech companies need to be aware that unless they take measures, their technology could contribute to the development of lethal autonomous weapons,” comments Daan Kayser, PAX project leader on autonomous weapons. “Setting up clear, publicly-available policies is an essential strategy to prevent this from happening.”

You can find PAX’s full risk assessment of the companies here (PDF).

