US Department of Defense adopts ethical principles for AI use

The US Department of Defense (DoD) has formally adopted a set of ethical principles governing the use of artificial intelligence (AI) for military purposes.

In October 2019, the Defense Innovation Board presented its recommendations on the use of the technology to Secretary of Defense Dr. Mark T. Esper. The recommendations followed 15 months of consultation with leading AI specialists across commercial industry, government, academia, and the public.

The move aligns with the DoD’s AI strategy objective that the US military lead in AI ethics and the lawful use of AI systems. The principles will build on the US military’s existing ethics framework, which is grounded in the US Constitution, Title 10 of the US Code, the Law of War, existing international treaties, and longstanding norms and values. While that framework provides a technology-neutral and enduring foundation for ethical behaviour, the use of AI raises new ethical ambiguities and risks, and it is these new challenges the principles are intended to address.

The European Union is next in line to put ethics and transparency at the heart of its approach, having launched strategies for AI and the “data economy”. In its statement, the European Commission (EC) set out its vision of a “European society powered by digital solutions that put people first, open up new opportunities for businesses, and boost the development of trustworthy technology to foster an open and democratic society and a vibrant and sustainable economy”. According to the EC, the focus would be on three key digital objectives: “technology that works for people, a fair and competitive economy, and an open, democratic and sustainable society”.

According to a UK government report issued earlier this month, the government was ‘failing on openness’ with regard to its AI usage, although a dedicated regulator was not proposed as the answer. The report added that fears over ‘black box AI’, whereby data produces results through unexplainable methods, were largely misplaced. It advocated applying the Nolan Principles when bringing AI into the UK public sector, arguing they did not need reformulating.
