The White House has urged its European allies to avoid overregulation of AI to prevent Western innovation from being hindered.
While the news has gone somewhat under the radar given recent events, US officials are concerned that overregulation could cause Western nations to fall behind the rest of the world.
In a statement released by the Office of Science and Technology Policy, the White House wrote:
“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.
The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”
The UK is expected to retain its lead as the European hub for AI innovation, with vast amounts of private and public sector investment, successful companies like DeepMind, and world-class universities helping to address the global talent shortage. In Oxford Insights’ 2017 Government AI Readiness Index, the UK ranked first thanks to strengths in areas such as digital skills training and data quality. The Index assesses public service reform, economy and skills, and digital infrastructure.
Despite its European AI leadership, the UK would struggle to match the levels of funding available to firms in superpowers like the US and China. Many experts have suggested the UK should instead focus on leading in the ethical integration of AI and on developing sensible regulations, an area in which it has considerable experience.
Here’s a timeline of some recent work from the UK government towards this goal:
- September 2016 – the House of Commons Science and Technology Committee published a 44-page report, “Robotics and Artificial Intelligence”, which investigates the economic and social implications of changes to employment; ethical and legal issues around safety, verification, bias, privacy, and accountability; and strategies to enhance research, funding, and innovation.
- January 2017 – an All-Party Parliamentary Group on Artificial Intelligence (APPG AI) was established to address ethical issues, social impact, industry norms, and regulatory options for AI in Parliament.
- June 2017 – Parliament established the Select Committee on AI to further consider the economic, ethical, and social implications of advances in artificial intelligence, and to make recommendations. All written and oral evidence received by the committee has been published.
- April 2018 – the aforementioned committee published a 183-page report, “AI in the UK: ready, willing and able?” which considers AI development and governance in the UK. It acknowledges that the UK cannot compete with the US or China in terms of funding or people but suggests the country may have a competitive advantage in considering the ethics of AI.
- September 2018 – the UK government launched an experiment with the World Economic Forum to develop procurement policies for AI. The partnership will bring together diverse stakeholders to collectively develop guidelines to capitalise on governments’ buying power to support the responsible deployment and design of AI technologies.
Western nations are seen as being at somewhat of a disadvantage due to sensitivities around privacy. EU nations, in particular, have strict data protection regulations such as GDPR, which limit the amount of data researchers can collect to train AIs.
“Very often we hear ‘Where are the British and European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop,” said Peter Wright, solicitor and managing director of Digital Law UK.
Depending on its future trade arrangement with the EU, the UK could, of course, decide to chart its own regulatory path following Brexit.
Speaking to reporters in a call, US CTO Michael Kratsios said: “Pre-emptive and burdensome regulation does not only stifle economic innovation and growth, but also global competitiveness amid the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people.”
In the same call, US deputy CTO Lynne Parker commented: “As countries around the world grapple with similar questions about the appropriate regulation of AI, the US AI regulatory principles demonstrate that America is leading the way to shape the evolution in a way that reflects our values of freedom, human rights, and civil liberties.
“The new European Commission has said they intend to release an AI regulatory document in the coming months. After a productive meeting with Commissioner Vestager in November, we encourage Europe to use the US AI principles as a framework. The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hub of innovation, advancing our common values.”
A GDPR-like regulation in California, the California Consumer Privacy Act (CCPA), was also signed into law in June 2018. “I think the examples in the US today at state and local level are examples of overregulation which you want to avoid on the national level,” said a government official.