robots – AI News

Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’ (23 September 2019)

Microsoft chief Brad Smith issued a warning over the weekend that killer robots are ‘unstoppable’ and a new digital Geneva Convention is required.

Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.

While it was once just a popcorn flick, Terminator now offers a dire warning of what could happen if precautions are not taken.

As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal of artificial general intelligence is a system that can learn for itself. Combine the two, and Skynet no longer seems the wild dramatisation it once did.

Speaking to The Telegraph, Smith appears to agree. He points towards developments in the US, China, the UK, Russia, Israel, South Korea, and elsewhere, where autonomous weapon systems are being developed.

Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.

Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.

It’s still unclear who should be held responsible for death or injuries caused by an autonomous machine – the manufacturer, the developer, or a human overseer. The same question has been the subject of much debate with regard to how insurance will work for driverless cars.

With military applications, many technologists have called for AI to never make a combat decision – especially one that would result in fatalities – on its own. While AI can make recommendations, a final decision must be made by a human.

Preventing unimaginable devastation

The story of Soviet lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight could cause unimaginable devastation.

Petrov’s computers reported that the US had launched an intercontinental ballistic missile towards the Soviet Union. Soviet strategy in such a scenario called for an immediate and compulsory nuclear counter-attack. Petrov trusted his instinct that the computers were wrong and decided against launching a retaliatory strike – and he was right.

Had the decision in 1983 been left solely to the computer, a missile would have been launched and met with retaliatory launches from the US and its allies.

Smith wants to see a new digital Geneva Convention to bring world powers together in agreement over acceptable norms for AI. “The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.”

Many companies – and thousands of Google employees, following a backlash over a Pentagon contract to develop AI tech for drones – have pledged not to develop AI technologies for harmful use.

Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. “There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse.”

Last month, a report from Dutch NGO PAX said leading tech firms are putting the world ‘at risk’ of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself warned investors back in February that its AI offerings could damage the company’s reputation. 

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of PAX’s report.

A global campaign, simply titled the Campaign to Stop Killer Robots, now includes 113 NGOs across 57 countries and has doubled in size over the past year.

NHS report suggests AI will give docs more patient time (11 February 2019)

A report from the NHS suggests the impending technological ‘revolution’ in healthcare will increase the amount of time doctors can spend with patients.

NHS doctors are overburdened – a problem only made worse by a growing and ageing population and insufficient funding.

The report was led by US academic Eric Topol and calls for a reskilling of NHS staff to harness new digital skills. AI and robotics can reduce the burden on healthcare professionals, but only if they’re utilised effectively.

Doctors will not be replaced by robots but instead will have their abilities “enhanced” to improve care. Around 90 percent of all NHS jobs are predicted to require digital skills within the next 20 years.

Virtual assistants such as those offered by Apple, Google, and Amazon are expected to be among the innovations closest to being ready.

Assistants can help to check whether symptoms require urgent care, a GP appointment, or no visit to a doctor at all. This would help prevent the misuse of A&E by people with trivial ailments, and the booking of GP appointments by otherwise healthy adults with something like a common cold.
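To make the idea concrete, below is a minimal sketch of the kind of rule-based triage such an assistant might perform. It is purely illustrative: the symptom lists and categories are invented for this example and are not taken from any real service.

    # Toy symptom triage: route a report to 999, a GP appointment, or self-care.
    # The symptom sets below are invented for illustration only.

    RED_FLAGS = {"central chest pain", "severe breathlessness", "sudden weakness"}
    GP_SYMPTOMS = {"persistent cough", "unexplained weight loss", "recurring headache"}

    def triage(symptoms):
        """Return the most urgent care level matched by the reported symptoms."""
        if symptoms & RED_FLAGS:
            return "Call 999 immediately"
        if symptoms & GP_SYMPTOMS:
            return "Book a GP appointment"
        return "Self-care advice"

    print(triage({"central chest pain", "sweating"}))  # Call 999 immediately
    print(triage({"common cold"}))                     # Self-care advice

Real symptom checkers are vastly more sophisticated, but as the Babylon Health example later in this article shows, getting even the red-flag rules wrong can be dangerous.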

Virtual assistants could also book appointments and send reminders, helping to reduce the number of missed appointments whose slots someone else could have used.

Yet another concept is a ‘mental health triage bot’ that engages in conversations while analysing text and voice for emotion and signs of suicidal ideation. This could help reduce the roughly 6,000 suicides per year.

The main concern preventing uptake is the potential for errors, which in healthcare could be fatal.

AI News previously reported on the findings of NHS consultant ‘Dr Murphy’ who reached out to us after using ‘GP at Hand’ from Babylon Health, an AI-powered service promoted by health secretary Matt Hancock.

Dr Murphy has since posted many examples of the service’s flaws. In one, a “48yr old obese 30/day male smoker develop[ing] sudden onset central chest pain & sweating” was advised to book a GP appointment; anyone with common sense would call 999 immediately.

That example could have been the difference between life and death, and it shows that, while such a system could one day provide huge benefits, it must first undergo rigorous testing.

Commenting on the report, Hancock said:

“Our health service is on the cusp of a technology revolution and our brilliant staff will be in the driving seat when it happens.

“Technology must be there to enhance and support clinicians. It has the potential to make working lives easier for dedicated NHS staff and free them up to use their medical expertise and do what they do best: care for patients.”

In the NHS report, it’s claimed the use of virtual assistants could save 5.7 million hours of GPs’ time across England per year.

Further AI use cases include speeding up the interpretation of scans, improving accuracy while enabling treatment to begin sooner. We’ve created a dedicated ‘healthcare’ category on AI News highlighting the incredible advances in this area.

When it comes to robotics, robots’ assistance in surgery could be expanded, in addition to their use for important but time-consuming tasks such as dispensing medicines.

Other emerging technologies such as VR also present exciting opportunities. Virtual reality could help with pain reduction and treating mental conditions such as post-traumatic stress, anxiety, and phobias.

The report’s authors conclude: “Our review of the evidence leads us to suggest that these technologies will not replace healthcare professionals, but will enhance them … giving them more time to care for patients.”

Experts warn of AI disasters leading to research lockdown (13 September 2018)

Experts from around the world have warned of potential AI disasters that could lead to a subsequent lockdown of research.

Andrew Moore, the new head of AI at Google Cloud, is one such expert who has warned of scenarios that would lead to public backlash and restrictions that would prevent AI from reaching its full potential.

Back in November, Moore spoke at the Artificial Intelligence and Global Security Initiative. In his keynote, he said:

“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US.

“There are some even more horrible scenarios – which I don’t want to talk about on the stage, which we’re really worried about – that will cause the complete lockdown of robotics research.”

Autonomous vehicles have indeed already been involved in accidents.

Back in March, just four months after Moore’s warning, an Uber self-driving vehicle caused a fatality. The subsequent investigation found that Elaine Herzberg and her bicycle were detected by the car’s sensors but then flagged as a ‘false positive’ and dismissed.

Following years of sci-fi movies featuring out-of-control AI robots, it’s unsurprising the public are on edge about the pace of recent developments. There’s a lot of responsibility on researchers to conduct their work safely and ethically.

Professor Jim al-Khalili, the incoming president of the British Science Association, told the Financial Times:

“It is quite staggering to consider that until a few years ago AI was not taken seriously, even by AI researchers.

“We are now seeing an unprecedented level of interest, investment and technological progress in the field, which many people, including myself, feel is happening too fast.”

In the race between world powers to become AI leaders, many fear it will lead to rushed and dangerous results. This is of particular concern with regards to AI militarisation.

Many researchers believe AI should not be used for military purposes. Several Google employees recently left the company over its contract with the Pentagon to develop recognition software for military drones.

Over 4,000 other employees signed a petition demanding that Google’s management cease the project and promise to never again ‘build warfare technology.’

Google has since decided not to renew its Pentagon contract when it expires. However, the episode has already caused ripples across Silicon Valley, with employees at companies such as Microsoft and Amazon demanding not to be involved with military contracts.

Much like the development of nuclear weapons, however, the military development of AI seems inevitable, and there will always be players willing to step in. Last month, AI News reported Booz Allen secured an $885 million Pentagon AI contract.

From a military standpoint, maintaining similar capabilities to a potential adversary is necessary. Back in July, China announced plans to upgrade its naval power with unmanned AI submarines intended to provide an edge over the fleets of its global counterparts.

Russian President Vladimir Putin, meanwhile, recently said: “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Few dispute that AI will have a huge impact on the world, but the debate rages on about whether it will be primarily good or bad. Beyond the potential dangers of rogue AIs, there’s also the argument over the impact on jobs.

Al-Khalili wants to see AI added to school curriculums – as well as public information programmes launched – to teach good practice, prepare the workforce, and reduce fears created by sci-fi.

What are your thoughts on AI fears?

AI robots will solve underwater infrastructure damage checks (20 July 2018)

Robots will be paired with a versatile AI that can quickly adapt to unpredictable conditions when examining underwater infrastructure.

Some of a nation’s most vital infrastructure hides beneath the water. The difficulty in accessing most of it, however, makes important damage checks infrequent.

Sending humans down requires significant training, and divers can take several weeks to recover from the often extreme depths. There are far more underwater structures than there are skilled divers to inspect them.

Robots have been designed to carry out some of these dangerous tasks. The problem is that, until now, they’ve lacked the smarts to deal with unpredictable and rapidly-changing underwater conditions.

Researchers from Stevens Institute of Technology are working on algorithms which enable these underwater robots to check and protect infrastructure.

Their work is led by Brendan Englot, Professor of Mechanical Engineering at Stevens.

“There are so many difficult disturbances pushing the robot around, and there is often very poor visibility, making it hard to give a vehicle underwater the same situational awareness that a person would have just walking around on the ground or being up in the air,” says Englot.

Englot and his team are using reinforcement learning to train their algorithms. Rather than relying on an exact mathematical model, the robot performs actions and observes whether they help it attain its goal.

Through trial and error, the algorithm is updated with the collected data to figure out the best ways of dealing with changing underwater conditions. This will enable the robot to manoeuvre and navigate successfully even in previously unmapped areas.
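The article doesn’t specify which reinforcement learning method Englot’s team uses, but a minimal tabular Q-learning loop illustrates the trial-and-error idea described above; the states, actions, and parameters here are stand-ins rather than the team’s actual formulation.

    import random

    # Minimal tabular Q-learning: try an action, observe a reward, and update
    # the value estimate -- no exact mathematical model of the environment needed.

    ACTIONS = ["forward", "left", "right", "hold"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

    q_table = {}  # maps (state, action) -> estimated long-term value

    def choose_action(state):
        """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

    def update(state, action, reward, next_state):
        """Nudge Q(state, action) towards the observed reward plus discounted future value."""
        best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
        old = q_table.get((state, action), 0.0)
        q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

Each pass through choose_action and update is one round of trial and error; over many episodes the table converges on actions that cope with the disturbances the robot actually encounters.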

A robot was recently sent on a mission to map a pier in Manhattan.

“We didn’t have a prior model of that pier,” says Englot. “We were able to just send our robot down and it was able to come back and successfully locate itself throughout the whole mission.”

The robots rely on sonar, widely regarded as the most reliable sensing method for undersea navigation. It works similarly to a dolphin’s echolocation, measuring how long it takes for high-frequency chirps to bounce off nearby structures.
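The range calculation behind this is simple time-of-flight arithmetic. A minimal sketch, assuming a typical sound speed in seawater of around 1,500 m/s (the real value varies with temperature, salinity, and depth):

    SOUND_SPEED_SEAWATER = 1500.0  # metres per second; an assumed typical value

    def range_from_echo(round_trip_seconds):
        """Distance to a structure from a sonar chirp's round-trip time.
        The pulse travels out and back, so halve the total path length."""
        return SOUND_SPEED_SEAWATER * round_trip_seconds / 2.0

    # A chirp returning after 40 ms implies a structure roughly 30 m away.
    print(range_from_echo(0.040))  # 30.0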

A pitfall of this approach is that the resulting imagery resembles a grayscale medical ultrasound. Englot and his team believe that once a structure has been mapped out, a second pass by the robot could use a camera to capture high-resolution images of critical areas.

It’s early days, but Englot’s project is an example of how AI is enabling a new era of robotics that improves efficiency while reducing the risks to humans.

What are your thoughts on the use of AI-powered robots for underwater checks?

Scientists pledge not to build AIs which kill without oversight (18 July 2018)

Thousands of scientists have signed a pledge not to have any role in building AIs which have the ability to kill without human oversight.

When many people think of AI, they give at least some passing thought to the rogue AIs seen in sci-fi movies, such as the infamous Skynet in Terminator.

In an ideal world, AI would never be used in any military capacity. However, it will almost certainly be developed one way or another because of the advantage it would provide over an adversary without similar capabilities.

Russian President Vladimir Putin, when asked his thoughts on AI, recently said: “Whoever becomes the leader in this sphere will become the ruler of the world.”

Putin’s words sparked fears of a race in AI development similar to that of the nuclear arms race, and one which could be potentially reckless.

Rather than attempting to stop military AI development, a more attainable goal is to at least ensure any AI decision to kill is subject to human oversight.

Demis Hassabis at Google DeepMind and Elon Musk from SpaceX are among the more than 2,400 scientists who signed the pledge not to develop AI or robots which kill without human oversight.

The pledge was created by The Future of Life Institute and calls on governments to agree on laws and regulations that stigmatise and effectively ban the development of killer robots.

“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” the pledge reads. It goes on to warn “lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”

Programming Humanity

Human compassion is difficult to program – we’re certainly many years away from being able to do so – yet it’s vital when it comes to life-or-death matters.

Consider a missile defence AI set up to protect a nation. Based on pure logic, it may determine that wiping out another nation which begins a missile programme is the best way to protect its own. Humans would take into account that these are people’s lives, and would seek alternatives such as a diplomatic resolution.

Robots may one day be used for policing to reduce the risk to human officers. They could be armed with firearms or tasers, but the decision to fire should always rest with a human operator.

Although it will undoubtedly improve with time, AI has been proven to have a serious bias problem. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at recognising Caucasians.

An armed robot that mistakenly identifies one person as another could end up killing that individual simply because of a flaw in its algorithms. Confirming the AI’s assessment with a human operator may be enough to prevent such a disaster.
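As a purely hypothetical sketch of what such a human-in-the-loop gate might look like in software – no real system’s interface is being described – the code path to any lethal action can be made unreachable without an operator’s explicit approval:

    # Illustrative human-in-the-loop gate: the system may only recommend an
    # action; nothing happens without an explicit operator decision.

    def request_operator_approval(target_id, confidence):
        """Show the AI's assessment to a human operator and wait for a decision."""
        answer = input(f"Engage {target_id} (match confidence {confidence:.0%})? [y/N] ")
        return answer.strip().lower() == "y"

    def engage(target_id, confidence):
        # The AI never acts on its own assessment, however confident it is.
        if request_operator_approval(target_id, confidence):
            return f"Operator authorised engagement of {target_id}"
        return "Held fire: no human authorisation"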

Read more: INTERPOL investigates how AI will impact crime and policing

Do you agree with the pledge made by the scientists?
