Business leaders ensuring responsible use of AI within organisations, research finds


A joint study by SAS, Accenture Applied Intelligence, Intel, and Forbes Insights has found that business leaders are taking a range of measures to ensure the responsible use of AI within their organisations.

The report, titled “AI Momentum, Maturity and Models for Success”, notes that AI adopters – who account for 72% of organisations globally – are conducting ethics training for their technologists (70%) and putting ethics committees in place to review the use of AI (63%). A total of 305 international business leaders were surveyed for the study, more than half of whom were chief information officers, chief technology officers, and chief analytics officers. The study also revealed that 92% of AI leaders train their technologists in ethics, compared with 48% of other AI adopters.

Rumman Chowdhury, responsible AI lead at Accenture Applied Intelligence, said: “Organisations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people. These are positive steps; however, organisations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’.

“They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society,” added Chowdhury.

Companies at the heart of this change are also examining responsibility and ethics in AI. For instance, IBM is launching a tool called the Fairness 360 Kit, which will analyse how and why algorithms make decisions in real time. It is also said to scan for signs of bias and recommend adjustments. Through a visual dashboard, users will be able to see how their algorithms are making decisions and which factors are being used in making the final recommendations.
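To give a sense of the kind of bias check such tools automate, the sketch below computes a simple “disparate impact” ratio – comparing favourable-outcome rates between two groups. It is a generic, hypothetical illustration in Python, not IBM’s implementation; the column names, data, and 0.8 rule of thumb are assumptions made for the example.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged, unprivileged) -> float:
    """Ratio of favourable-outcome rates: unprivileged group vs privileged group.

    A value well below 1.0 (a common rule of thumb is below 0.8) suggests the
    decisions may be skewed against the unprivileged group.
    """
    priv_rate = df.loc[df[group_col] == privileged, outcome_col].mean()
    unpriv_rate = df.loc[df[group_col] == unprivileged, outcome_col].mean()
    return unpriv_rate / priv_rate

# Hypothetical loan-approval decisions (1 = approved, 0 = denied)
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [  1,   0,   0,   1,   1,   1,   1,   0],
})

ratio = disparate_impact(decisions, "gender", "approved",
                         privileged="M", unprivileged="F")
print(f"Disparate impact: {ratio:.2f}")  # 0.50 / 0.75 = 0.67 -> flag for review
```

A dashboard like the one described would surface this sort of metric continuously and point to the input factors driving the gap, rather than requiring analysts to run such checks by hand.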