UK govt ‘failing on openness’ around public sector AI – but specific regulator not the answer


The UK does not need a specific regulator for artificial intelligence (AI), according to a new government report – yet more clarity needs to be given around usage and ethics in the public sector.

The report, ‘Artificial Intelligence and Public Standards: A Review by the Committee on Standards in Public Life’ (pdf, no opt-in, 78 pages), said the government was ‘failing on openness’, while adding that fears over ‘black box AI’, whereby systems produce results through opaque and unexplainable processes, were largely misplaced.

“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government,” the report notes. “It is too early to judge if public sector bodies are successfully upholding accountability.”

The report advocated applying the Nolan Principles – seven ethical standards expected of public office holders – to the adoption of AI across the UK public sector, arguing the principles did not need reformulating. Yet in three areas – openness, accountability, and objectivity – the report said current standards fell short.

Of the 15 recommendations the report made, many focused on preparation, ethics and transparency:

  • The public needs to understand the high level ethical principles that govern the use of AI in the public sector (currently the FAST SUM Principles, the OECD AI Principles, and the Data Ethics Framework)
  • All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before those systems are deployed in public service delivery
  • A specific AI regulator is not needed; however, a regulatory assurance body should be formed to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI
  • The government should use its purchasing power in the market to set procurement requirements to ensure private companies developing AI solutions for the public sector meet the right standards
  • The government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards
  • The government should establish guidelines for public bodies about the declaration and disclosure of their AI systems

Commenting after the report’s release, Alex Guillen, technology strategist at IT services provider Insight, said the recommendations were feasible, but offered a word of caution.

“Introducing AI into government while still following the Nolan Principles should be perfectly possible,” Guillen told AI News. “First, the public sector needs to remember that currently, the most effective uses of technologies such as AI and machine learning act to enhance, rather than replace, human workers. Helping public sector workers make more informed decisions, or act faster, will not only improve public services; it will help satisfy the 69% of people polled who said they would be more comfortable with public bodies using AI if humans were making the final judgement on any decision.

“However, regardless of how AI is used in the public sector, it needs to be treated as an employee – and given the training and information it needs in order to do its job,” Guillen added. “As with any technology that relies on data, garbage in means garbage out. From facial recognition in policing to helping diagnose and treat patients on the NHS, AI needs the right data and the right governance to avoid causing more problems than it solves.

“Otherwise any implementation will be dogged by ethical concerns and accusations of data bias or discrimination.”
