AI News sat down with Faculty’s head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.
Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year’s AI Expo Global. We managed to grab him to get more of his thoughts on the issue.
Rightfully, people are becoming increasingly concerned about unfair and unsafe AI. Human biases are seeping into algorithms, posing a very real danger that prejudice and oppression could be automated by accident.
AI News reported last week on research from New York University which found that inequality in STEM-based careers is causing algorithms to work better for some parts of society than for others.
Similar findings, by Joy Buolamwini and her team from the Algorithmic Justice League, highlighted a disparity in the effectiveness of the world’s leading facial recognition systems between genders and skin tones.
In an ideal world, all parts of society would be equally represented tomorrow. In reality, that imbalance will take much longer to rectify, yet AI technologies are already seeing increasing use across society today.
AI News asked Feige for his perspective on how the impact of that problem can be reduced much sooner.
“I think the most important thing for organisations to do is to spend more time thinking about bias and ensuring that every model they build is unbiased, because a demographically disparate team can build non-disparate tech.”
Some companies are seeking to build AIs which can scan for bias in other algorithms. We asked Feige whether he believes this is an ideal solution.
“Definitely, I showed one in my talk. We have tests for this: you give me a black-box algorithm – I have no idea what your algorithm does – but I can give it an input, calculate the output, and tell you how biased it is according to various definitions of bias.”
“We can go even further and say: Let’s modify your algorithm and give it back so it’s unbiased according to one of those definitions.”
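To make that concrete, here is a minimal sketch of what such a black-box bias test can look like. This is our own illustration, not Faculty’s actual tooling: the function names, the choice of demographic parity as the bias definition, and the toy data are all assumptions. The sketch probes a classifier using only its inputs and outputs, reports the gap in positive-prediction rates between groups, and then ‘repairs’ the model with per-group thresholds so each group is accepted at the same rate.

```python
import numpy as np

def demographic_parity_gap(predict, X, groups):
    """Probe a black-box classifier through inputs and outputs only, and
    report each group's positive-prediction rate plus the largest gap."""
    preds = predict(X)
    rates = {int(g): preds[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

def repair_with_group_thresholds(score, X, groups, positive_rate=0.3):
    """Wrap a black-box scoring function with per-group cutoffs so every
    group is accepted at the same rate (the demographic-parity definition)."""
    s = score(X)
    cutoffs = {int(g): np.quantile(s[groups == g], 1 - positive_rate)
               for g in np.unique(groups)}
    def adjusted(X_new, groups_new):
        thresholds = np.array([cutoffs[int(g)] for g in groups_new])
        return (score(X_new) >= thresholds).astype(float)
    return adjusted

# Toy demonstration with a synthetic 'black box' we pretend not to see inside.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=2000)            # two demographic groups
X = rng.normal(loc=0.8 * groups)                  # group 1 scores higher on average
score = lambda x: x                               # opaque scoring model
classify = lambda x: (score(x) > 0.5).astype(float)

gap, rates = demographic_parity_gap(classify, X, groups)
print(f"before repair: rates={rates}, gap={gap:.2f}")

fair_classify = repair_with_group_thresholds(score, X, groups)
gap, rates = demographic_parity_gap(lambda x: fair_classify(x, groups), X, groups)
print(f"after repair:  rates={rates}, gap={gap:.2f}")
```

Equalising acceptance rates is only one of the ‘various definitions of bias’ Feige mentions; others, such as equalised odds, would require a different adjustment and can conflict with this one.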
In the Western world, we consider ourselves fairly liberal and protective of individual freedoms. China, potentially the world’s leader in AI, has a questionable human rights record and is known for invasive surveillance and mass data collection. Meanwhile, Russia has a reputation for military aggression which some are concerned will drive its AI developments. Much of the Middle East, while not considered a leader in AI, is behind most of the world in areas such as women’s and gay rights.
We asked Feige for his thoughts on whether these regional attitudes could find their way into AI developments.
“It’s an interesting question. It’s not that some regions will take the issue more or less seriously, they just have different … we’ll say preferences. I suspect China takes surveillance and facial recognition seriously – more seriously than the UK – but they do so in order to leverage it for mass surveillance, for population control.”
“The UK is trying to walk a fine line in efficiently using that very useful technology without undermining the personal privacy and freedom of individuals.”
During his talk, Feige made the point that he’s less concerned about AI biases because – unlike humans – algorithms can be controlled.
“This is a real source of optimism for me, just because human decision-making is incredibly biased and everyone knows that.”
Feige asked the audience to raise a hand if they were concerned about AI bias, which prompted around half to do so. When the same question was asked about human bias, most of the room had their hand up.
“You can be precise with machine learning algorithms. You can say: ‘This is the objective I’m trying to achieve; I’m trying to maximise the probability of a candidate being successful at their job according to historical people in their role.’ Or, you can be precise about the data the model is trained on and say: ‘I’m going to ignore data from before this time period because things were “different” back then.’”
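As a rough illustration of that kind of precision – the dataset, file name, and column names below are hypothetical, not anything Feige described – both choices amount to a couple of explicit lines of code rather than an unexaminable habit:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical hiring data; the file and column names are our own invention.
df = pd.read_csv("candidates.csv", parse_dates=["hired_on"])

# Be precise about the data: drop records from before a chosen cutoff,
# on the grounds that hiring practices were 'different' back then.
recent = df[df["hired_on"] >= "2015-01-01"]

# Be precise about the objective: predict on-the-job success and nothing else.
features = recent[["years_experience", "skills_test_score"]]
model = LogisticRegression().fit(features, recent["successful_in_role"])
```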
“Humans have fixed past experiences they can’t control. I can’t change the fact my mum did most of the cooking when I was growing up and I don’t know how it affects my decision-making.”
“I also can’t force myself to hire based on success in their jobs, which I try to do. It’s hard to know if, really, I just had a good conversation about the football with the candidate.”
Faculty, where Feige is head of research, is a European company based in London. With the EU Commission recently publishing its guidelines on AI development, we took the opportunity to get his views on them.
“At a high level, I think they’re great. They align quite a bit with how we think about these things. My biggest gripe, whenever a body like that puts together some principles, is that there’s a big gap between that level of guidelines and what is useful for practitioners. Making those more precise is really important, and those weren’t precise enough by my standards.”
“But that’s not to just advocate putting the responsibility on policymakers. There’s also an onus on practitioners to try and articulate what bias looks like statistically and how that may apply to different problems, and then say: ‘Ok policy body, which of these is most relevant, and can you now make those statements in this language?’ – and basically bridge the gap.”
Google recently created, then axed, a dedicated ‘ethics board’ for its AI developments. Such boards seem like a good idea, but representing society fairly can be a minefield. Google faced criticism for including on the board a conservative figure with strong anti-LGBTQ and anti-immigrant views.
Feige provided his take on whether companies should have an independent AI oversight board to ensure their developments are safe and ethical.
“To some degree, definitely. I suspect there are some cases where you want that oversight board to be very external – like a regulator with a lot of overhead and a lot of teeth.”
“At Faculty, each one of our product teams has a shadow team – with practically the same skill set – which monitors and oversees the work done by the product team to ensure it follows our internal set of values and guidelines.”
“I think the fundamental question here is how to do this in a productive way that ensures AI safety but doesn’t grind innovation to a halt. You can imagine a scenario where the UK has a really strong oversight stance, and then some other country with much less regulatory oversight has companies which become large multinationals and operate in the UK anyway.”
Getting the balance right around regulation is difficult. Our sister publication IoT News interviewed a digital lawyer who raised the concern that Europe’s strict GDPR regulations will cause AI companies on the continent to fall behind their counterparts in Asia and America, which have access to far more data.
Feige believes there is a danger of this happening, but that European countries like the UK – whether it ultimately remains part of the EU and subject to regulations like GDPR or not – can use it as an opportunity to lead in AI safety.
He offered three reasons why the UK could achieve this:
- The UK has significant AI talent and renowned universities.
- It has a fairly unobjectionable record and a respected government (in comparison, Feige clarified, to how some countries view the US and China).
- The UK has a fairly robust existing regulatory infrastructure – especially in areas such as financial services.
One of the biggest ongoing concerns about AI is its impact on the workforce, particularly whether it will replace low-skilled workers. We wanted to know whether using legislation to protect human workers is a good idea.
“You could ask the question a hundred years ago: ‘Should automation come into agriculture because 90 percent of the population works in it?’ and now it’s almost all automated. I suspect individuals may be hurt by automation but their children will be better off by it.”
“I think any heavy-handed regulation will have unintended consequences and should be thought through carefully.”
Our discussion with Feige was insightful and provided optimism that AI can be developed safely and fairly, as long as there’s a will to do so.
You can watch our full interview with Feige from AI Expo Global 2019 below: