The post Neil deGrasse Tyson shares Musk’s view that AI is ‘our biggest existential crisis’ appeared first on AI News.
Musk made his now-infamous comment during the South by Southwest tech conference in Austin, Texas last year, as part of a call for regulation. He warned: “I think that’s the single biggest existential crisis that we face and the most pressing one.”
A year later, Neil deGrasse Tyson was asked what he believes to be the biggest threat to mankind during an episode of his StarTalk radio show.
Dr Tyson appeared alongside Josh Clark, host of the “Stuff You Should Know” and “The End of The World” podcasts, who was also asked the same question.
“I would say that AI is probably our biggest existential crisis,” Clark said. “The reason why is because we are putting onto the table right now the pieces for a machine to become super intelligent.”
Clark goes on to explain that we do not yet know how to fully define, let alone program, morality and friendliness.
“We make the assumption that if AI became super intelligent that friendliness would be a property of that intelligence. That is not necessarily true.”
Dr Tyson chimed in to say he initially had a different answer to what poses the greatest threat to mankind. “I had a different answer, but I like your answer better than the answer I was going to give,” he said.
“What won me over with your argument was that if you locked AI in a box, it would get out. My gosh, it gets out every time. Before I was thinking, ‘This is America, AI gets out of control, you shoot it’… but that does not work, because AI might be in a box, but it will convince you to let it out.”
Dr Tyson does not say what his previous answer was going to be, but he’s warned in the past about the dangers of huge asteroids impacting the Earth and joined calls for action on climate change.
Earlier this week, AI News reported on comments made by Pope Francis who also warned of the dangers of unregulated AI. Pope Francis believes a failure to properly consider the moral and ethical implications of the technology could risk a ‘regression to a form of barbarism’.
(Image by Thor Nielsen / NTNU under CC BY-SA 2.0 license)
The post Experts warn AI poses a ‘clear and present danger’ appeared first on AI News.
The foreboding report, titled ‘The Malicious Use of Artificial Intelligence’, was co-authored by experts from Oxford University, the Centre for the Study of Existential Risk, the Electronic Frontier Foundation, and others.
Three primary areas of risk were identified:
Digital security — The risk of AI increasing the scale and efficiency of cyberattacks, whether by automating the laborious parts of existing attacks or by enabling new ones, such as speech synthesis used to exploit human error.
Physical security — The idea that AI could be used to inflict direct harm on living beings or on physical systems and infrastructure. Examples given include connected vehicles being compromised to cause crashes, and scenarios once seen as dystopian, such as swarms of micro-drones.
Political security — The researchers highlight the possibility of AI automating the creation of propaganda, or manipulating existing content to sway opinion. Given the allegations that Russia used digital means to influence the outcome of the U.S. presidential election and other key international decisions, for many people this will be the clearest example of the present danger.
Here are some of the potential scenarios:
As with most things, it will likely take a disaster before action is taken. The researchers join previous calls for AI regulation — including a robot ethics charter and a ‘global stand’ against militarisation — in an attempt to be more proactive about countering malicious use.
In the report, the researchers wrote:
“The proposed interventions require attention and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators. The challenge is daunting and the stakes are high.”
Some of the proposals include:
The full report (PDF) makes for chilling reading, highlighting scenarios that could be straight out of Black Mirror. Hopefully, policymakers will read it and heed the experts’ warnings before a disaster forces the issue.
What are your thoughts about the warnings of malicious AI?
The post Musk warns ‘it begins’ as Putin claims the AI-leading nation rules the world appeared first on AI News.
Musk, co-chairman of OpenAI, has long warned of dire consequences should AI development be mishandled. OpenAI itself is a non-profit research company that aims to develop and promote friendly AI in a way that benefits humanity.
As with any major technological advancement, however, there will undoubtedly be those who aim to weaponise it, and to do so before their rivals. Based on Putin’s comments to Russia-based publication RT, it sounds as if the nation is among them.
“Artificial intelligence is the future, not only for Russia, but for all humankind,” said Putin, in a report from RT. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Musk tweeted his brief reaction to the news:
It begins … https://t.co/mbjw5hWC5J
— Elon Musk (@elonmusk) September 4, 2017
Further responses to his tweet highlighted concern about AI weapon systems, in particular an AI which may decide a preemptive strike is the best option to prevent a threat from developing. The lack of human involvement in the decision also makes it easier to deflect blame.
Last week, AI News reported that China is catching up to the U.S. in artificial intelligence. Part of this rapid development is due to a significant increase in government support of core AI programs. China will increase spending to $22 billion in the next few years, with plans to spend nearly $60 billion per year by 2025.
Musk has also voiced concerns about this international competition for AI superiority:
China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.
— Elon Musk (@elonmusk) September 4, 2017
These developments further highlight the pressing need for regulation and open dialogue on AI development, to ensure it benefits humanity rather than posing a threat.
See more: Experts believe AI will be weaponised in the next 12 months
Are you concerned about AI posing a threat? Share your thoughts in the comments.