Military – AI News

DARPA’s AI-powered jet fight will be held virtually due to COVID-19 (10 August 2020)

An upcoming event to display and test AI-powered jet fighters will now be held virtually due to COVID-19.

“We are still excited to see how the AI algorithms perform against each other as well as a Weapons School-trained human and hope that fighter pilots from across the Air Force, Navy, and Marine Corps, as well as military leaders and members of the AI tech community will register and watch online,” said Col. Dan Javorsek, program manager in DARPA’s Strategic Technology Office.

“It’s been amazing to see how far the teams have advanced AI for autonomous dogfighting in less than a year.”

DARPA (Defense Advanced Research Projects Agency) is using the AlphaDogfight Trial event to recruit more AI developers for its Air Combat Evolution (ACE) program.

The upcoming event is the final in a series of three and will finish with a bang as the AI-powered F-16 fighter planes virtually take on a human pilot.

“Regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI,” Javorsek added.

“If the champion AI earns the respect of an F-16 pilot, we’ll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program.”

The first event was held in November last year and featured early versions of the algorithms.

A second event was held in January this year, demonstrating the vast improvements made to the algorithms over a relatively short period of time. The algorithms took on adversaries created by the Johns Hopkins University Applied Physics Lab.

The third and final event will be streamed live from the Applied Physics Lab (APL) from August 18th-20th.

Eight teams will fly against five APL-developed adversary AI algorithms on day one. On day two, teams will fly against each other in a round-robin tournament.

Day three is when things get most exciting, with the top four teams competing in a single-elimination tournament for the AlphaDogfight Trials Championship. The winning team’s AI will then fly against a real F-16 pilot to test the AI’s abilities against a human.
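
The format described above reduces to a simple structure: a round-robin stage produces seedings, and the top four seeds enter a single-elimination bracket. Below is a minimal Python sketch of that structure only; the team names and the match-outcome function are invented placeholders, as DARPA has not published a scoring API.

```python
import itertools
import random

teams = [f"Team {chr(65 + i)}" for i in range(8)]  # eight competing teams

def fly_match(a: str, b: str) -> str:
    """Placeholder for a simulated dogfight; picks a winner at random."""
    return random.choice([a, b])

# Day two: round-robin, each team flies against every other team once.
wins = {t: 0 for t in teams}
for a, b in itertools.combinations(teams, 2):
    wins[fly_match(a, b)] += 1

# Day three: the top four seeds enter a single-elimination bracket
# (1st vs 4th, 2nd vs 3rd), with the winners meeting in the final.
seeds = sorted(teams, key=wins.get, reverse=True)[:4]
finalists = [fly_match(seeds[0], seeds[3]), fly_match(seeds[1], seeds[2])]
champion = fly_match(*finalists)
print(f"{champion} advances to face the human F-16 pilot.")
```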

ACE envisions future air combat eventually being conducted without putting human pilots at risk. In the meantime, DARPA hopes the initiative will help improve human pilots’ trust in fighting alongside AI.

Prior registration is required to view the event. Non-US citizens must register prior to August 11th while Americans have until August 17th.

Registration for the event is available online.

(Image Credit: DARPA)

US Department of Defense adopts ethical principles for AI use (26 February 2020)

The US Department of Defense (DoD) has formally adopted a set of ethical principles for the use of artificial intelligence (AI) in military applications.

In October 2019, the Defense Innovation Board provided recommendations on the use of the technology to Secretary of Defense Dr. Mark T. Esper. These recommendations came after 15 months of consultation with leading AI specialists across government, academia, industry, and the public.

The move aligns with the DoD’s AI strategy objective that the US military lead in AI ethics and the lawful use of AI systems. The principles build on the US military’s existing ethics framework, which is grounded in the US Constitution, Title 10 of the US Code, the Law of War, existing international treaties, and longstanding norms and values. While that framework provides a technology-neutral and enduring foundation for ethical behaviour, AI raises new ethical uncertainties and risks, and the new principles are intended to address those challenges.

The European Union has likewise made ethics and transparency a priority, launching strategies for AI and the “data economy”. In its statement, the European Commission (EC) described a “European society powered by digital solutions that put people first, open up new opportunities for businesses, and boost the development of trustworthy technology to foster an open and democratic society and a vibrant and sustainable economy”. According to the EC, the focus will be on three key objectives in digital: “technology that works for people, a fair and competitive economy, and an open, democratic and sustainable society”.

According to a UK government report issued earlier this month, the government was ‘failing on openness’ with regard to its AI usage, although the report did not propose a dedicated regulator as the answer. It added that fears over ‘black box AI’, whereby data produces results through unexplainable methods, were largely misplaced, and advocated applying the Nolan Principles to the adoption of AI in the UK public sector, arguing they did not need reformulating.

Palantir took over Project Maven defense contract after Google backed out (12 December 2019)

Surveillance firm Palantir took up a Pentagon defense contract known as Project Maven after Google dropped out due to backlash.

Project Maven is a Pentagon initiative aiming to use AI technologies for deploying and monitoring unmanned aerial vehicles (UAVs).

Naturally, Google’s involvement with the initiative received plenty of backlash both internally and externally. At least a dozen employees quit Google while many others threatened to walk out if the firm continued building military products.

The pressure forced Google to abandon the lucrative Pentagon contract. However, this simply meant the contract was picked up by another company.

According to Business Insider, which broke the news, the company that stepped in to develop Project Maven was Palantir, founded by serial entrepreneur, venture capitalist, and PayPal cofounder Peter Thiel.

Business Insider reporter Becky Peterson wrote:

“Palantir is working with the Defense Department to build artificial intelligence that can analyze video feeds from aerial drones … Internally at Palantir, where names of clients are kept close to the vest, the project is referred to as ‘Tron,’ after the 1982 Steven Lisberger film.”

In June 2018, Thiel famously said that Google’s decision to pull out of Project Maven while pushing ahead with Project Dragonfly (a search project for China) amounted to “treason” and should be investigated as such.

Project Maven/Tron is described as being capable of extensive tracking and monitoring of UAVs without human input, but the unclassified information available indicates that it will not be able to fire upon targets. This is somewhat in line with the accepted norms being established around the use of AI in the military.

Many experts accept that AI will increasingly be used in the military but are seeking to establish acceptable practices. One of the key principles is that, while an AI can track and offer advice to human operators, it should never be able to make decisions by itself which could lead to loss of life.
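
That principle maps naturally onto software: the model only ever produces a recommendation, and nothing executes without explicit human sign-off. The sketch below is a minimal illustration of such a human-in-the-loop gate, using hypothetical class, function, and field names; it is not drawn from any real military system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float
    rationale: str

def ai_recommend(track: dict) -> Recommendation:
    """Stand-in for a model that analyses a sensor track and advises."""
    return Recommendation("continue monitoring", confidence=0.72,
                          rationale="pattern consistent with routine patrol")

def execute(rec: Recommendation, human_approved: bool) -> str:
    # Decision authority stays with the operator: absent explicit human
    # approval, the system can never act on its own advice.
    if not human_approved:
        return "no action taken; recommendation logged for human review"
    return f"operator-authorised action: {rec.action}"

rec = ai_recommend({"speed_kts": 220, "heading_deg": 95})
print(execute(rec, human_approved=False))
```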

The rapid pace at which the Project Maven contract was picked up by another company gives credence to comments made by some tech giants that, rather than pulling out of such contracts altogether and potentially handing them to less ethical companies, it is better to help shape them from the inside.

Report: UK leads AI developments in Europe, Iran in Middle-East (11 November 2019)

The latest Scimago Institutions Rankings (SIR) indicates the UK is leading AI developments in Europe while Iran is leading in the Middle-East.

SIR has ranked global research and education institutions since 2009. The ranking is based on their performance and the number of articles they’ve published in highly-regarded publications.

In the field of AI, the UK ranks number one in Europe and fourth globally. Iran ranks number one in the Middle-East and is ninth overall among the 152 countries featured.

China is leading AI developments overall. The US is in second place and leads AI developments in the Western hemisphere.

Defense considerations

Despite widespread concern, it’s almost inevitable that AI will increasingly creep into military applications. With that in mind, it’s hard not to consider ongoing tensions and how AI might feature in future conflicts, such as those in the Middle-East.

Tensions with Iran, particularly with the US and UK, have been increasing in recent years – especially in the wake of American allegations that the Iranians haven’t been meeting their obligations under the nuclear deal reached in 2015. The US controversially pulled out of the treaty and imposed sanctions on the state.

Since then, a series of confrontations have occurred. One example of where AI may play a key role in the future was Iran’s downing of a US drone in June.

AI superiority from the American side may have enabled the drone to take evasive action to avoid being shot down. On the other hand, AI technologies on the Iranian side could have automated the downing of any unauthorised aircraft.

Some have likened the race for AI superiority to the nuclear arms race, so we can only hope it proves less devastating. However, increasing capabilities between new and age-old rivals won’t do anything to ease such concerns.

Pentagon is ‘falling behind’ in military AI, claims former NSWC chief (23 October 2019)

The former head of US Naval Special Warfare Command (NSWC) has warned that the Pentagon is falling behind adversaries in military AI developments.

Speaking on Tuesday, Rear Adm. Brian Losey said AI is able to provide tactical guidance as well as anticipate enemy actions and mitigate threats. Adversaries with such technology will have a significant advantage.

Losey has retired from the military and is now a partner at San Diego-based Shield AI.

Shield AI specialises in building artificial intelligence systems for the national security sector. The company’s flagship Hivemind AI enables autonomous robots to “see”, “reason”, and “search” the world. Nova is Shield AI’s first Hivemind-powered robot which autonomously searches buildings while streaming video and generating maps.
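
The article doesn’t explain how Nova’s search actually works, and Shield AI’s algorithms are proprietary. A common textbook approach to this kind of task, however, is frontier-based exploration: the robot repeatedly heads for known-free map cells that border unexplored space. The sketch below illustrates only that generic idea, assuming a simple occupancy grid.

```python
UNKNOWN, FREE, WALL = "?", ".", "#"

# A partially explored occupancy grid: '.' is known free space,
# '#' a known wall, and '?' unexplored territory.
grid = [list(row) for row in ("?????",
                              "?.#??",
                              "?...?",
                              "?????")]

def frontiers(g):
    """Known-free cells adjacent to unknown space: the next goals to visit."""
    out = []
    for r, row in enumerate(g):
        for c, cell in enumerate(row):
            if cell != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(g) and 0 <= nc < len(row) and g[nr][nc] == UNKNOWN:
                    out.append((r, c))
                    break
    return out

# A full exploration loop would navigate to a frontier, update the map
# from sensor data, and repeat until no frontiers remain.
print(frontiers(grid))  # -> [(1, 1), (2, 1), (2, 2), (2, 3)]
```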

During a panel discussion at The Promise and The Risk of the AI Revolution conference, Losey said:

“We’re losing a lot of folks because of encounters with the unknown. Not knowing when we enter a house whether hostiles will be there and not really being able to adequately discern whether there are threats before we encounter them. And that’s how we incurred most of our casualties.

The idea is: can we use autonomy, can we use edge AI, can we use AI for manoeuvre to mitigate risk to operators to reduce casualties?”

AI has clear benefits today for soldiers on the battlefield, national policing, and even areas such as firefighting. In the future, it may be vital for national defense against ever more sophisticated weapons.

Some of the US’ historic adversaries, such as Russia, have already shown off developments such as killer robots and hypersonic missiles. AI will be vital to equalising those capabilities and will hopefully act as a deterrent to the use of such weaponry.

“If you’re concerned about national security in the future, then it is imperative that the United States lead AI so that we can unfold the best practices so that we’re not driven by secure AI to assume additional levels of risk when it comes to lethal actions,” Losey said.

Meanwhile, Nobel Peace Prize winner Jody Williams has warned against robots making life-and-death decisions on the battlefield. Williams said it is ‘unethical and immoral’ and can never be undone.

Williams was speaking at the UN in New York following the US military’s announcement of Project Quarterback, which uses AI to make decisions on what human soldiers should target and destroy.

“We need to step back and think about how artificial intelligence robotic weapons systems would affect this planet and the people living on it,” said Williams during a panel discussion.

It’s almost inevitable AI will be used for military purposes. Arguably, the best we can hope for is to quickly establish international norms for their development and usage to minimise the unthinkable potential damage.

One such norm that many researchers have backed is that AI should only make recommendations on actions to take, but a human should take accountability for any decision made.

A 2017 report by Human Rights Watch chillingly concluded that no-one is currently accountable for a robot unlawfully killing someone in the heat of battle.

Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’ (23 September 2019)

Microsoft chief Brad Smith issued a warning over the weekend that killer robots are ‘unstoppable’ and a new digital Geneva Convention is required.

Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.

While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.

As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.

Speaking to The Telegraph, Smith seems to agree. He points towards developments in the US, China, UK, Russia, Israel, South Korea, and elsewhere, where autonomous weapon systems are being developed.

Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.

Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.

There’s still no clear responsible entity for death or injuries caused by an autonomous machine – the manufacturer, developer, or an overseer. This has also been a subject of much debate in regards to how insurance will work with driverless cars.

With military applications, many technologists have called for AI to never make a combat decision – especially one that would result in fatalities – on its own. While AI can make recommendations, a final decision must be made by a human.

Preventing unimaginable devastation

The story of Russian lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight may cause unimaginable devastation.

Petrov’s computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. Soviet strategy in such a scenario called for an immediate and compulsory nuclear counter-attack against the US. Petrov trusted his instinct that the computer was incorrect and decided against launching a nuclear missile, and he was right.

Had the 1983 decision on whether to deploy a nuclear missile been made solely by the computer, one would have been launched and met with retaliatory launches from the US and its allies.

Smith wants to see a new digital Geneva Convention in order to bring world powers together in agreement over acceptable norms when it comes to AI. “The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.” 

Many companies – and thousands of Google employees, following backlash over a Pentagon contract to develop AI tech for drones – have pledged not to develop AI technologies for harmful use.

Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. “There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse.”

Last month, a report from Dutch NGO PAX said leading tech firms are putting the world ‘at risk’ of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself warned investors back in February that its AI offerings could damage the company’s reputation. 

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of PAX’s report.

A global campaign simply titled Campaign To Stop Killer Robots now includes 113 NGOs across 57 countries and has doubled in size over the past year.

Report: Companies like Amazon and Microsoft are ‘putting world at risk’ of killer AI (22 August 2019)

A survey of major players within the industry concludes that leading tech companies like Amazon and Microsoft are putting the world ‘at risk’ of killer AI.

PAX, a Dutch NGO, ranked 50 firms based on three criteria (a toy sketch of how such criteria might combine follows the list):

  1. Whether the technology they’re developing could be used for killer AI.
  2. Their involvement with military projects.
  3. Whether they’ve committed to not being involved with military applications in the future.
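
As a toy illustration only (PAX’s actual methodology is set out in its report), the three criteria could be combined into a risk tier along these lines; the company names and tier labels below are invented.

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    relevant_tech: bool       # criterion 1: tech usable for killer AI
    military_projects: bool   # criterion 2: involvement with military projects
    pledged_abstention: bool  # criterion 3: commitment to future non-involvement

def risk_tier(c: Company) -> str:
    # Relevant technology plus military work and no pledge reads as highest
    # risk; a public pledge is what moves a company towards best practice.
    if c.relevant_tech and c.military_projects and not c.pledged_abstention:
        return "highest risk"
    if c.pledged_abstention:
        return "best practice"
    return "medium risk"

for c in (Company("ExampleCorp", True, True, False),
          Company("SafeSoft", True, False, True)):
    print(f"{c.name}: {risk_tier(c)}")
```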

Microsoft and Amazon are named among the world’s ‘highest risk’ tech companies putting the world at risk, while Google leads the way among large tech companies implementing proper safeguards.

Google’s ranking among the safest tech companies may surprise some, given the company’s reputation for mass data collection. Mountain View was also caught up in an outcry over its controversial ‘Project Maven’ contract with the Pentagon.

Project Maven was a contract Google had with the Pentagon to supply AI technology for military drones. Several high-profile employees resigned over the contract, while over 4,000 Google staff signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Pichai’s promise not to be involved with such contracts in the future appears to have satisfied PAX in its rankings. Google has since attempted to improve its public image around its AI developments, such as by creating a dedicated ethics panel, but that effort backfired and collapsed quickly after the panel was found to feature a member of a right-wing think tank and a defense drone mogul.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Microsoft, which ranks among the highest risk tech companies in PAX’s list, warned investors back in February that its AI offerings could damage the company’s reputation. 

In a quarterly report, Microsoft wrote:

“Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

Some of Microsoft’s forays into the technology have already proven troublesome, such as the chatbot ‘Tay’, which became a racist, sexist, and generally rather unsavoury character after internet users took advantage of its machine-learning capabilities.

Microsoft and Amazon are both currently bidding for a $10 billion Pentagon contract to provide cloud infrastructure for the US military.

“Tech companies need to be aware that unless they take measures, their technology could contribute to the development of lethal autonomous weapons,” comments Daan Kayser, PAX project leader on autonomous weapons. “Setting up clear, publicly-available policies is an essential strategy to prevent this from happening.”

PAX’s full risk assessment of the companies is available as a PDF report.

Putin outlines Russia’s national AI strategy priorities (31 May 2019)

Russian President Vladimir Putin has offered the best insight yet into what shape the country’s AI strategy will take.

Putin ordered his government apparatus on February 27th to formulate a national artificial intelligence strategy by June 25th. With that date quickly approaching, the world is waiting to see Russia’s AI plans.

Back in September 2017, Putin famously said the nation which leads in AI “will become the ruler of the world.” Understandably, Putin’s comments generated fear of a cold war-like rush to militarise AI technology.

The Russian leader’s most recent speech won’t help to ease those concerns after reiterating that AI offers unprecedented power, including military power, to any government that leads in the field.

“Mechanisms of artificial intelligence provide real-time fast decision-making based on the analysis of huge amounts of information, which gives tremendous advantages in quality and effectiveness,” he said. “If someone can provide a monopoly in the field of artificial intelligence, then the consequences are clear to all of us: he will rule the world.”

However, there appears to be some understanding from Russia’s president of the need for a level-headed approach to AI. Putin suggested his government needs to enshrine and protect citizens’ rights and new intellectual property.

Other key areas highlighted by Putin for Russia’s AI development include:

  • Training initiatives
  • Legislative support
  • Public-private cooperation
  • Efforts to advance Russia’s STEM strengths

Putin spoke of the need to ensure “the readiness of society, citizens for the widespread introduction of such technologies. It is, therefore, necessary to provide widespread digital education, to launch retraining programs.”

An increased need for AI funding was also touched upon. Putin said cash will need to be invested from both the state and investors through public-private cooperation.

“Russia should become one of the key platforms for solving complex scientific problems with the participation of scientists from around the world,” declared Putin.

“It is fundamentally important to tune our legislation to a new technological reality, quickly and efficiently form a flexible, adequate legal basis for the development and use of artificial intelligence-based application solutions, as well as special regimes for private investment in creating breakthrough solutions,” he said.

Putin provided this overview of what is forming the basis of Russia’s AI strategy during a speech on May 30th.

EU AI Expert Group: Ethical risks are ‘unimaginable’ (11 April 2019)

The EU Commission’s AI expert group has published its assessment of the rapidly-advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, the tracking of individuals, and the ‘scoring’ of people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes to tracking individuals, the experts foresee people’s biometric data being used involuntarily, for purposes such as “lie detection [or] personality assessment through micro expressions”.

Citizen scoring is on some people’s minds after being featured in an episode of the dystopian series Black Mirror. The experts note that scoring criteria must be transparent and fair, and that scores must be challengeable.
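
What ‘transparent and challengeable’ could mean in practice is sketched below: the criteria behind a score are stored alongside it, and every dispute is recorded in an auditable trail. All field names here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ScoreRecord:
    subject: str
    score: float
    criteria: dict  # the inputs behind the score, kept visible to the subject
    challenges: list = field(default_factory=list)

    def challenge(self, reason: str) -> None:
        """A challengeable score keeps an auditable trail of disputes."""
        self.challenges.append(reason)

record = ScoreRecord("citizen-42", 0.61, {"payment_history": "good",
                                          "residency_years": 7})
record.challenge("payment_history omits the 2018 correction")
print(record.criteria, record.challenges)
```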

The guidelines have been several years in the making and have launched alongside a pilot project for testing how they work in practice.

Experts from various fields across Europe sit in the group, including academic lawyers from Birmingham and Oxford universities.

They concluded: “it is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”

The EU as a whole is looking to invest €20bn (£17bn) every year for the next decade to close the current gap between European developments and those in Asia and North America.

Report: 94 percent of IT leaders want greater focus on AI ethics (26 March 2019)

A study from SnapLogic has found that 94 percent of IT decision makers across the UK and US want a greater focus on ethical AI development.

Bias in algorithms continues to be a problem and is among the biggest barriers to societal adoption. Facial recognition algorithms, for example, have been found to be far less accurate for some parts of society than others.

Without addressing these issues, we’re in danger of automating problems such as racial profiling. Public trust in AI is already low, so there’s a collective responsibility within the industry to ensure high ethical standards.

Gaurav Dhillon, CEO at SnapLogic, commented:

“AI is the future, and it’s already having a significant impact on business and society. However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes.

We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way.”

SnapLogic’s report found that over half (53%) of the IT leaders believe responsibility for ethical AI development lies with the organisation developing it, regardless of whether they’re a commercial business or academic institution.

Far fewer (17%) blame individual developers working on AI projects. Respondents in the US, however, are over twice as likely (21%) to blame individuals as those in the UK (9%).

Some global bodies are emerging which aim to establish AI standards and fair rules. Understandably, there’s great concern over AI’s role in military technology. A so-called ‘AI arms race’ between global powers like China, the US, and Russia could lead to irresponsible developments with devastating consequences.

However, just 16 percent of respondents see an independent global consortium – comprising representatives from government, academia, research institutions, and businesses – as the only way to establish much-needed standards, rules, and protocols.

IT leaders welcome expert groups on AI such as the European Commission’s High-Level Expert Group on Artificial Intelligence. Half of the respondents believe organisations will take guidance and recommendations from such groups. Brits (15%) are almost twice as likely as their American counterparts (9%) to believe organisations will disregard such groups.

Just five percent of UK IT leaders believe advice from AI expert groups will be useless if not enforced by law.

87 percent of all respondents want AI to be regulated, although there’s some debate over how. 32 percent believe it should come from a combination of government and industry, while 25 percent want an independent industry consortium.

There are discrepancies in the appetite for regulation by industry, too. Almost a fifth (18%) of IT decision makers in manufacturing are against regulation, followed by 13 percent in the ‘Technology’ sector and the same percentage in the ‘Retail, Distribution and Transport’ sector. The reasons given were split almost evenly between the belief that regulation would slow down innovation and the view that development should be left to the discretion of the developers.

“Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained,” continues Dhillon. “Absent that, AI systems will be working from incomplete or erroneous data, thwarting the advancement of future AI innovation.”

AI will be revolutionary – in fact, some call it the fourth industrial revolution. However, as a great fictional man once said: “With great power, comes great responsibility.”
