Google CEO Sundar Pichai has called for sensible AI regulation that does not limit the huge potential benefits to society.
Writing in an FT editorial, Pichai said: “…there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.”
Few people dispute the need for AI regulation, but opinions differ on how much. Overregulation stifles innovation, while a lack of regulation can pose serious dangers – even existential ones, depending on who you listen to.
Pichai says AI is “one of the most promising new technologies” that has “the potential to improve billions of lives,” but warns of the possible risks if development is left unchecked.
“History is full of examples of how technology’s virtues aren’t guaranteed,” Pichai wrote. “The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”
Google is one of the companies about which people have voiced concerns, given its reach and its questionable record on user privacy. Pichai’s words today will offer some comfort that Google’s leadership wants sensible regulation to guide its efforts.
So far, Google has shown how AI can be used for good. A study by Google, published in the science journal Nature, showed that its AI model was able to spot breast cancer in mammograms with “greater accuracy, fewer false positives, and fewer false negatives than experts.”
Governments around the world are beginning to shape AI regulations. The UK, Europe’s leader in AI development and investment, aims to focus on promoting ethical AI rather than attempting to match superpowers like China and the US in other areas.
In a report last year, the Select Committee on Artificial Intelligence recommended the UK capitalises on its “particular blend of national assets” to “forge a distinctive role for itself as a pioneer in ethical AI”.
The EU, which the UK leaves at the end of this month, recently published its own comprehensive proposals on AI regulation, which many believe are too stringent. The US warned its European allies against overregulating AI earlier this month.
In a statement released by the Office of Science and Technology Policy, the White House wrote:
“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.
The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”
Pichai refrains from denouncing either the White House’s calls for light-touch AI regulation or the EU’s plans for stringent rules. Instead, he calls only for balancing “potential harms… with social opportunities.”
Google’s forays into AI have certainly not been devoid of criticism. In 2018, the company was forced to back out of a Pentagon contract called Project Maven following a backlash over Google building AI technology for deploying and monitoring unmanned aerial vehicles (UAVs).
Following the decision to back out of Project Maven, Pichai outlined Google’s ethical principles for AI:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Pichai promised the company “will work to limit potentially harmful or abusive applications” and will block the use of its technology if Google “become[s] aware of uses that are inconsistent” with those principles.
Time will tell whether Google abides by its AI principles, but it’s heartening to see Pichai call for sensible regulation to enforce similar standards across the industry.