CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 November 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate for people with darker skin tones and for women. Error rates are therefore higher when facial recognition is used on some parts of society than on others.
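
Such disparities are typically quantified by computing error rates separately for each demographic group and comparing them. A minimal sketch of that calculation in Python, using entirely illustrative group names and records:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, prediction_correct).
# All names and values below are illustrative only.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, correct in records:
    totals[group] += 1
    errors[group] += not correct  # count misidentifications

# A large gap between the per-group error rates is the disparity
# the research describes.
for group in sorted(totals):
    print(f"{group}: error rate = {errors[group] / totals[group]:.0%}")
```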

In June, Detroit Police Chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when it’s being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of how AI can unfairly impact some parts of society over others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Transparency is required for algorithms. In financial services, without transparency, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood. Job applications could be rejected not on the basis of a person’s actual skills but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes, but that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has arguably relied on data to make decisions – such as how likely it is that an individual can repay a debt – for longer than any other.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing, but found variance across forces with regard to both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors (a simple sketch of such outcome monitoring follows this list).
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.
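
The outcome monitoring mentioned above can be as simple as comparing selection rates across protected groups, for instance via the “disparate impact” ratio used in US employment law. A minimal sketch, where the group names, decision log, and 0.8 threshold convention are illustrative assumptions rather than anything prescribed by the CDEI:

```python
from collections import defaultdict

# Hypothetical decision log: (protected_group, was_approved).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
seen = defaultdict(int)
for group, outcome in decisions:
    seen[group] += 1
    approved[group] += outcome

selection_rates = {g: approved[g] / seen[g] for g in seen}
# Disparate impact ratio: lowest selection rate divided by the highest.
ratio = min(selection_rates.values()) / max(selection_rates.values())
print(selection_rates)
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8 is a common warning threshold
```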

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

Elon Musk wants more stringent AI regulation, including for Tesla (19 February 2020)

Elon Musk has once again called for more stringent regulations around the development of AI technologies.

The founder of Tesla and SpaceX has been one of the most vocal prominent figures in expressing concerns about AI – going as far as to call it humanity’s “biggest existential threat” if left unchecked.

Of course, given the nature of the companies Musk has founded, he is also well aware of AI’s potential.

Back in 2015, Musk co-founded OpenAI – an organisation established with the aim of pursuing and promoting ethical AI development. Musk ended up leaving OpenAI’s board in February 2018 over disagreements with the company’s work.

Earlier this week, Musk said that OpenAI should be more transparent and specifically said his confidence is “not high” in former Google engineer Dario Amodei when it comes to safety.

Responding to a piece by MIT Technology Review about OpenAI, Musk tweeted: “All orgs developing advanced AI should be regulated, including Tesla.”

In response to a further question of whether such regulations should be via individual governments or global institutions like the UN, Musk said he believes both.

Musk’s tweet generated some feedback from other prominent industry figures, including legendary id Software co-founder John Carmack, who recently stepped back from video game development to focus on independent AI research.

Carmack asked Musk: “How would you imagine that working for someone like me? Cloud vendors refuse to spawn larger clusters without a government approval? I would not be supportive.”

Coder Pranay Pathole shared Carmack’s scepticism about Musk’s call, saying: “Large companies ask for regulations acting all virtuous. What they are really doing is creating barriers to entry for new competition, because only they can afford to comply with the new regulations.”

The debate over the extent of AI regulations and how they should be implemented will likely go on for some time – we can only hope to get them right before a disaster occurs. If you want to help Musk in building AI, he’s hosting a “super fun” hackathon at his place.

The White House warns European allies not to overregulate AI (7 January 2020)

The White House has urged its European allies to avoid overregulation of AI to prevent Western innovation from being hindered.

While the news has gone somewhat under the radar given recent events, the Americans are concerned that overregulation may cause Western nations to fall behind the rest of the world.

In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

The UK is expected to retain its lead as the European hub for AI innovation, with vast amounts of private and public sector investment, successful companies like DeepMind, and world-class universities helping to address the global talent shortage. In Oxford Insights’ 2017 Government AI Readiness Index, the UK ranked number one due to strengths in areas such as digital skills training and data quality. The Index considers public service reform, economy and skills, and digital infrastructure.

Despite its European AI leadership, the UK would struggle to match the levels of funding afforded to firms residing in superpowers like the US and China. Many experts have suggested the UK should instead focus on leading in the ethical integration of AI and developing sensible regulations, an area it has much experience in.

Here’s a timeline of some recent work from the UK government towards this goal:

  • September 2016 – the House of Commons Science and Technology Committee published a 44-page report on “Robotics and Artificial Intelligence” which investigates the economic and social implications of employment changes; ethical and legal issues around safety, verification, bias, privacy, and accountability; and strategies to enhance research, funding, and innovation.
  • January 2017 – an All Party Parliamentary Group on Artificial Intelligence (APPG AI) was established to address ethical issues, social impact, industry norms, and regulatory options for AI in parliament.
  • June 2017 – parliament established the Select Committee on AI to further consider the economic, ethical and social implications of advances in artificial intelligence, and to make recommendations. All written and oral evidence received by the committee can be seen here.
  • April 2018 – the aforementioned committee published a 183-page report, “AI in the UK: ready, willing and able?” which considers AI development and governance in the UK. It acknowledges that the UK cannot compete with the US or China in terms of funding or people but suggests the country may have a competitive advantage in considering the ethics of AI.
  • September 2018 – the UK government launched an experiment with the World Economic Forum to develop procurement policies for AI. The partnership will bring together diverse stakeholders to collectively develop guidelines to capitalise on governments’ buying power to support the responsible deployment and design of AI technologies.

Western nations are seen as being at somewhat of a disadvantage due to sensitivities around privacy. EU nations, in particular, have strict data collection regulations such as GDPR which limits the amount of data researchers can collect to train AIs.

“Very often we hear ‘Where are the British and European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop,” said Peter Wright, solicitor and managing director of Digital Law UK.

Dependent on the UK’s future trade arrangement with the EU, it could, of course, decide to chart its own regulatory path following Brexit.

Speaking to reporters in a call, US CTO Michael Kratsios said: “Pre-emptive and burdensome regulation does not only stifle economic innovation and growth, but also global competitiveness amid the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people.”

In the same call, US deputy CTO Lynne Parker commented: “As countries around the world grapple with similar questions about the appropriate regulation of AI, the US AI regulatory principles demonstrate that America is leading the way to shape the evolution in a way that reflects our values of freedom, human rights, and civil liberties.

“The new European Commission has said they intend to release an AI regulatory document in the coming months. After a productive meeting with Commissioner Vestager in November, we encourage Europe to use the US AI principles as a framework. The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, advancing our common values.”

A regulation similar to GDPR, called CCPA, was signed into law in California in June 2018. “I think the examples in the US today at state and local level are examples of overregulation which you want to avoid on the national level,” said a government official.

DeepMind co-founder moves to Google to work on AI policy (6 December 2019)

DeepMind co-founder Mustafa Suleyman has announced he’s making the full jump to Google to work on AI policy at the company.

Google acquired DeepMind for $400 million in 2014 and the firm became a subsidiary of Google’s parent company Alphabet in 2015.

Suleyman co-founded DeepMind and originally became its chief product officer. After Google’s acquisition in 2014, Suleyman became DeepMind’s head of applied AI.

In a tweet, Suleyman announced he’s now moving to Google itself to work on AI policy alongside Kent Walker, Jeff Dean, and others.

Suleyman is a longstanding proponent of AI ethics, so his presence at Google may help to ensure the web giant keeps pushing technological boundaries while respecting things like human rights.

Of course, DeepMind’s work hasn’t been without its controversies.

An app called Streams, developed by DeepMind for doctors and nurses, was found by the UK’s data protection regulator in 2017 to have been given inappropriate access to medical data from 1.6 million patients.

At the time, Suleyman wrote in a blog post:

“DeepMind operates autonomously from Google, and we’ve been clear from the outset that at no stage will patient data ever be linked or associated with Google accounts, products or services.”

Critics believe this promise was broken when Streams was later absorbed by Google itself. One thing which can’t be criticised, however, is DeepMind’s ability to get working AI solutions into the market – an ability Google appears keen to tap with Suleyman’s move.

In a blog post, DeepMind co-founder Demis Hassabis spoke highly of the role Suleyman had played at the firm:

“As a serial entrepreneur, Mustafa played a key role over the past decade, helping to get DeepMind off the ground, and launched a series of innovative collaborations with Google to reduce energy consumption in data centres, improve Android battery performance, optimise Google Play, and find ways to improve the lives of patients, nurses and doctors alike,” he said.

“Mustafa leaves DeepMind having helped set us up for long-term success, and I’m looking forward to what he’ll achieve in the years ahead as he joins Google in a new role.”

Suleyman will be starting his new role at Google in January.

(Image Credit: Mustafa Suleyman by Joi Ito under CC BY 2.0 license)

CBI: UK tech dominance is ‘at risk’ due to public mistrust of AI (9 August 2019)

Business industry group the CBI has warned that UK tech dominance is ‘at risk’ due to public mistrust of AI.

In a report today, the CBI warns artificial intelligence companies of the need to ensure they’re approaching the technology in an ethical manner to help build trust.

The measures suggested to build trust include ensuring customers know how their data is being used by AI and what decisions are being taken, as well as challenging unfair biases.

Overall, the CBI remains bullish on AI’s potential to add billions to the UK economy.

Felicity Burch, CBI Director of Digital and Innovation, said:

“At a time of slowing global growth, AI could add billions of pounds to the UK economy and transform our stuttering productivity performance.

The Government has set the right tone by establishing the Centre for Data Ethics & Innovation, but it’s up to business to put ethics into practice.

Ethics can be an intimidating word. But at the end of the day, meaningful ethics is similar to issues organisations already think about: effective governance, employee empowerment, and customer engagement.

The same actions that embed an ethical approach to AI will also make businesses more competitive. We know that diverse businesses are more likely to outperform their rivals. When it comes to AI, businesses who prioritise fairness and inclusion are more likely to create algorithms that make better decisions, giving them the edge.

With the global tech race heating up, the UK is well placed to lead the world in developing ethical AI, which uses data safely and drives progress in business and society.”

A previous study, conducted by PwC, estimates that UK GDP could be around 10 percent higher in 2030 due to AI – the equivalent of an additional £232bn.

Earlier this week, AI News reported on findings by Fountech.ai which highlight the scale of public distrust of artificial intelligence among UK adults.

In a survey of more than 2,000 people, Fountech.ai found two-thirds (67%) are concerned about the impact of artificial intelligence on their careers.

Another finding in Fountech.ai’s study is that more than half (58%) find algorithms which make recommendations, such as what to buy or watch next, ‘creepy’.

The potential for AI is clear, but the studies conducted by the CBI and others show its full potential can only be unlocked by building public trust in the technology.

AI Expo Global: Fairness and safety in artificial intelligence (1 May 2019)

AI News sat down with Faculty’s head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year’s AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightfully, people are becoming increasingly concerned about unfair and unsafe AIs. Human biases are seeping into algorithms, posing a very real danger that prejudices and oppression could become automated by accident.

AI News reported last week on research from New York University which found that inequality in STEM-based careers is causing algorithms to work better for some parts of society than for others.

Similar findings, by Joy Buolamwini and her team from the Algorithmic Justice League, highlighted a disparity in the effectiveness of the world’s leading facial recognition systems between genders and skin tones.

In an ideal world, all parts of society would be equally represented in the field tomorrow. In reality, that problem will take much longer to rectify – yet AI technologies are already being used across society today.

AI News asked Feige for his perspective and how the impact of that problem can be reduced much sooner.

“I think the most important thing for organisations to do is to spend more time thinking about bias and on ensuring that every model they build is unbiased because a demographically disparate team can build non-disparate tech.”

Some companies are seeking to build AIs which can scan for bias in other algorithms. We asked Feige for his view on whether he believes this is an ideal solution.

“Definitely, I showed one in my talk. We have tests for: You give me a black box algorithm, I have no idea what your algorithm does – but I can give an input, calculate the output, and I can just tell you how biased it is according to various definitions of bias.”

“We can go even further and say: Let’s modify your algorithm and give it back so it’s unbiased according to one of those definitions.”
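
The test Feige describes treats the model as an opaque function: feed it inputs, tally the outputs per group, and score the result against a chosen definition of bias, such as demographic parity. A minimal sketch along those lines, where the model, data, and field names are stand-ins rather than Faculty’s actual tooling:

```python
def demographic_parity_gap(model, inputs, group_of):
    """Audit a black-box `model` (a callable returning True/False) by
    comparing positive-outcome rates between groups."""
    tallies = {}
    for x in inputs:
        g = group_of(x)
        hits, total = tallies.get(g, (0, 0))
        tallies[g] = (hits + bool(model(x)), total + 1)
    rates = {g: hits / total for g, (hits, total) in tallies.items()}
    return max(rates.values()) - min(rates.values()), rates

# Stand-in model and data, for illustration only.
toy_model = lambda applicant: applicant["score"] > 0.5
toy_inputs = [
    {"score": 0.7, "group": "a"}, {"score": 0.4, "group": "a"},
    {"score": 0.3, "group": "b"}, {"score": 0.2, "group": "b"},
]
gap, rates = demographic_parity_gap(toy_model, toy_inputs, lambda a: a["group"])
print(rates, gap)  # a gap near 0 means parity under this definition
```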

In the Western world, we consider ourselves fairly liberal and protective of individual freedoms. China, potentially the world’s leader in AI, has a questionable human rights record and is known for invasive surveillance and mass data collection. Meanwhile, Russia has a reputation for military aggression which some are concerned will drive its AI developments. Much of the Middle East, while not considered a leader in AI, is behind most of the world in areas such as women’s and gay rights.

We asked Feige for his thoughts on whether these regional attitudes could find their way into AI developments.

“It’s an interesting question. It’s not that some regions will take the issue more or less seriously, they just have different … we’ll say preferences. I suspect China takes surveillance and facial recognition seriously – more seriously than the UK – but they do so in order to leverage it for mass surveillance, for population control.”

“The UK is trying to walk a fine line in efficiently using that very useful technology but not undermine personal privacy and freedom of individuals.”

During his talk, Feige made the point that he’s less concerned about AI biases due to the fact that – unlike humans – algorithms can be controlled.

“This is a real source of optimism for me, just because human decision-making is incredibly biased and everyone knows that.”

Feige asked the audience to raise a hand if they were concerned about AI bias which prompted around half to do so. The same question was asked regarding human bias and most of the room had their hand up.

“You can be precise with machine learning algorithms. You can say: ‘This is the objective I’m trying to achieve, I’m trying to maximise the probability of a candidate being successful at their job according to historical people in their role’. Or, you can be precise about the data the model is trained on and say: ‘I’m going to ignore data from before this time period because things were different back then’.”

“Humans have fixed past experiences they can’t control. I can’t change the fact my mum did most of the cooking when I was growing up and I don’t know how it affects my decision-making.”

“I also can’t force myself to hire based on success in their jobs, which I try to do. It’s hard to know if really I just had a good conversation about the football with the candidate.”
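
A minimal sketch of the kind of precision Feige describes, with an explicit training-data cut-off and an explicit objective; the records, column names, and dates below are hypothetical:

```python
from datetime import date

# Hypothetical hiring records: (hire_date, candidate_features, succeeded_in_role).
records = [
    (date(2005, 3, 1), {"years_experience": 2}, False),
    (date(2016, 7, 1), {"years_experience": 5}, True),
    (date(2018, 1, 1), {"years_experience": 3}, True),
]

# Be precise about the data: ignore records from before a chosen cut-off,
# on the grounds that things were different back then.
CUTOFF = date(2015, 1, 1)
training = [(feats, outcome) for (when, feats, outcome) in records if when >= CUTOFF]

# Be precise about the objective: mean squared error against the historical
# record of whether a candidate succeeded in the role, and nothing else.
def objective(predict_proba):
    return sum((predict_proba(x) - y) ** 2 for x, y in training) / len(training)

print(objective(lambda feats: 0.5))  # score a trivial constant predictor
```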

Faculty, where Feige is head of research, is a European company based in London. With the EU Commission recently publishing its guidelines on AI development, we took the opportunity to get his views on them.

“At a high-level, I think they’re great. They align quite a bit with how we think about these things. My biggest wish, whenever a body like that puts together some principles, is that there’s a big gap between that level of guidelines and what is useful for practitioners. Making those more precise is really important and those weren’t precise enough by my standards.”

“But not to just advocate putting the responsibility on policymakers. There’s also an onus on practitioners to try and articulate what bias looks like statistically and how that may apply to different problems, and then say: ‘Ok policy body, which of these is most relevant and can you now make those statements in this language’ and basically bridge the gap.”

Google recently created, then axed, a dedicated ‘ethics board’ for its AI developments. Such boards seem a good idea, but representing society can be a minefield. Google faced criticism for including a conservative figure with strong anti-LGBTQ and anti-immigrant views on the board.

Feige provided his take on whether companies should have an independent AI oversight board to ensure their developments are safe and ethical.

“To some degree, definitely. I suspect there are some cases you want that oversight board to be very external and like a regulator with a lot of overhead and a lot of teeth.”

“At Faculty, each one of our product teams has a shadow team – which has practically the same skill set – who monitor and oversee the work done by the project team to ensure it follows our internal set of values and guidelines.”

“I think the fundamental question here is how to do this in a productive way and ensure AI safety but that it doesn’t grind innovation to a halt. You can imagine where the UK has a really strong oversight stance and then some other country with much less regulatory oversight has companies which become large multinationals and operate in the UK anyway.”

Getting the balance right around regulation is difficult. Our sister publication IoT News interviewed a digital lawyer who raised the concern that Europe’s strict GDPR regulations will cause AI companies on the continent to fall behind their counterparts in Asia and America, which have access to far more data.

Feige believes there is the danger of this happening, but European countries like the UK – whether it ultimately remains part of the EU and subject to regulations like GDPR or not – can use it as an opportunity to lead in AI safety.

Feige provides three reasons why the UK could achieve this:

  1. The UK has significant AI talent and renowned universities.
  2. It has a fairly unobjectionable record and respected government (Feige clarifies in comparison to how some countries view the US and China).
  3. The UK has a fairly robust existing regulatory infrastructure – especially in areas such as financial services.

Among the biggest concerns about AI continues to be around its impact on the workforce, particularly whether it will replace low-skilled workers. We wanted to know whether using legislation to protect human workers is a good idea.

“You could ask the question a hundred years ago: ‘Should automation come into agriculture because 90 percent of the population works in it?’ and now it’s almost all automated. I suspect individuals may be hurt by automation but their children will be better off by it.”

“I think any heavy-handed regulation will have unintended consequences and should be thought about well.”

Our discussion with Feige was insightful and provided optimism that AI can be developed safely and fairly, as long as there’s a will to do so.

You can watch our full interview with Feige from AI Expo Global 2019 below:

#MWC19: AI requires innovation, values, and trust (25 February 2019)

During an MWC keynote, a range of experts and policymakers explained the keywords they believe are behind ensuring responsible AI deployments.

The keynote featured IBM’s SVP of Global Business Services, Mark Foster; the EU’s Digital Economy and Society Commissioner, Mariya Gabriel; and the Secretary-General of the Organisation for Economic Co-operation and Development, Angel Gurria.

Foster opened the session with a foreboding tone: “What we want to talk about is something very serious and I think critical to acceptance of all the great technology we’re looking at outside this building.”

A glance around the exhibition floors shows how AI and the IoT are maturing. In fact, this event is less about mobile technology than it’s ever been.

This year, MWC has employed an opt-in facial recognition system from Breez to enable attendees to access the venue. It was incredibly quick, but there’s a definite unease about how that data is being stored and used.

Building trust will be essential to ensure the full potential of such technologies can be unlocked. Another technology emerging, the blockchain, will help with the security aspects.

Foster notes how we’re at an inflexion point in technology with AI, the IoT, 5G, and blockchain converging to unlock an immeasurable number of possibilities.

“With the combination of AI, automation, blockchain, 5G… we’re at a time when there’s a convergence coming together at scale for one of those moments which changes how business gets done,” he says.

By mitigating the risks, these technologies can benefit mankind. A failure to do so, however, could be devastating and will damage trust.

“We’re seeing a fantastic capacity to take advantage of so many amazing new technologies,” he comments. “We’re also facing a time when people are more concerned than ever about the implications of abusing those technologies.”

Some of the considerations Foster highlights include data privacy, inclusiveness, and ensuring we do not expand the digital divide.

IBM recently conducted some research in which it talked with some 1,200 CEOs around the world and found data responsibility was the number one AI issue on their minds. Some 91 percent said they expected new demands from their customers about the ethical way they’re introducing AI. 92 percent said they expected more regulation.

Foster was followed by a couple of individuals representing two institutions exploring such regulations. The first was Mariya Gabriel who is the EU’s digital economy and society commissioner.

The EU is investing heavily in digital technologies. In fact, Gabriel claims the bloc’s €4 billion 5G investment represents the largest in the world. She points towards 139 ongoing 5G trials as world-leading and notes that 10 cross-border 5G motorways are now open.

“IoT, AI, 5G, big data, are all part of what will be the future of our industries. All industries. This is a fact,” says Gabriel. “We need to reap its opportunities, mitigate its risks, and make sure it’s respectful of our values as much as driven by innovation.”

You won’t hear the EU speak much without talking about values. It’s often debated how much the EU practices what it preaches, but it’s making a clear effort with AI to develop some ground rules. What it won’t do is rush headfirst into deployments.

“When 5G becomes mission critical, it needs to be secure,” comments Gabriel. “Nobody is helped by premature decisions.”

The EU is enlisting 52 experts to develop guidelines on the ethical implementation of AI. An initial version was published last December and it was opened up for comments on 1st February 2019. Over 500 have been received so far.

Gabriel says the next step is a trial phase for the guidelines. The goal, she claims, is to make ‘trustworthy’ AI a reality. She hails the digital single market as tearing down things like roaming charges, while GDPR is becoming a ‘world reference’ (her words, not mine).

“Europe has to have a common approach or there’s a risk of fragmentation,” says Gabriel. “Diverging decisions taken by member states trying to protect themselves we know damages the digital single market.”

While the EU is concentrating on Europe, other institutions are looking to influence global policies in world-changing innovations such as AI.

Next on the agenda was a representative from one such institution. Angel Gurria is the Secretary-General of the OECD (Organisation for Economic Co-operation and Development).

Gurria also notes how the ‘rapid digitalisation’ from the aforementioned technologies brings opportunities and challenges. He starts with the benefits of technologies such as AI for things like making better decisions.

“AI is not only dynamising economies and facilitating lives,” says Gurria. “It’s also helping people make better predictions and better decisions; whether it’s the shop floor manager or a doctor in the operating room.”

Gurria transitions into how machines work better with human control – a suggestion that they should be designed to enhance productivity rather than replace people. “Less artificial, more intelligent,” he says.

AIs replacing jobs is one of the biggest societal fears, and low-skilled jobs are expected to be most at risk. Some research indicates automation will create as many jobs as it destroys, but many workers will find themselves without the skills necessary for the new roles.

Gurria highlights that the OECD estimates 14 percent of jobs in the countries where it operates are at ‘high risk’ of being replaced by automation. A further 32 percent are considered at risk of ‘significant disruption’ over the next 10-20 years. Added together, close to half the workforce is at risk of being displaced or disrupted.

“Disruption is a good word when you come to these exhibitions,” jokes Gurria. “In the traditional sense, however, it means people are going to feel underqualified after the effects of the technology.”

Another concern raised by Gurria is that of ‘automatised discrimination’ that affects life-changing things for an individual like hiring processes, loan approvals, or even the criminal justice system.

Seeing Gurria follow Gabriel, I couldn’t help but think the two institutions should be working together on this issue rather than producing separate sets of AI guidelines. As if he could read my thoughts, Gurria says the institutions – including others beyond the EU – speak to each other, and that this does not mean work is being duplicated.

“I just met with the head of UNESCO, and she has an advisory group too,” explains Gurria. “Now we have three large institutions – one specialised in European issues, UNESCO which is worldwide, and the OECD which is about policies for better lives.”

The OECD is launching the results of its two-year ‘Going Digital’ project at a dedicated summit on 11-12 March 2019, where the project’s main findings and policy messages will be presented.

We’ll have coverage of the OECD’s summit next month, but until then this MWC session gave us all plenty to think about. Primarily, that AI needs to be developed with innovation, values, and trust.

White House will take a ‘hands-off’ approach to AI regulation (11 May 2018)

The White House has decided it will take a ‘hands-off’ approach to AI regulation despite many experts calling for safe and ethical standards to be set.

Some of the world’s greatest minds have expressed concern about the development of AI without regulations — including the likes of Elon Musk and the late Stephen Hawking.

Musk famously said unregulated AI could pose “the biggest risk we face as a civilisation”, while Hawking similarly warned “the development of full artificial intelligence could spell the end of the human race.”

The announcement that developers will be free to experiment with AI as they see fit was made during a meeting with representatives of 40 companies including Google, Facebook, and Intel.

Strict regulations can stifle innovation, and the U.S. has made clear it wants to emerge as a world leader in the AI race.

Western nations are often seen as somewhat at a disadvantage to Eastern countries like China, not because they have less talent, but because their citizens are more wary about data collection and privacy in general. However, there’s a strong argument to be made for striking a balance.

Making the announcement, White House Science Advisor Michael Kratsios noted the government did not stand in the way of Alexander Graham Bell or the Wright brothers when they invented the telephone and aeroplane. Of course, telephones and aeroplanes weren’t designed with the ultimate goal of becoming self-aware and able to make automated decisions.

Both telephones and aeroplanes, like many technological advancements, have been used for military applications. However, human operators have ultimately always made the decisions. AI could be used to automatically launch a nuclear missile if left unchecked.

Recent AI stories have some people unnerved. A self-driving car from Uber malfunctioned and killed a pedestrian. At Google I/O, the company’s AI called a hair salon and the receptionist had no idea they were not speaking to a human.

People not feeling comfortable with AI developments is more likely to stifle innovation than balanced regulations.

What are your thoughts on the White House’s approach to AI regulation?

House of Lords: The UK can lead in AI by putting ethics first (16 April 2018)

A report published today by the House of Lords reveals the current outlook for AI in the United Kingdom and suggests practical measures to secure its place as a global leader.

While the world faces a shortage of AI talent, the UK’s leading universities produce candidates who are often snapped up quickly. Some of the biggest players in the space invest heavily in UK companies — most notable, perhaps, is Google’s £400 million acquisition of Cambridge-based DeepMind.

The Lords’ report highlights one key area where the UK cannot match the likes of the U.S. or China: government funding.

“Given the disparities in available resources, the UK is unlikely to be able to rival the scale of investments made in the United States and China,” wrote the authors.

Rather than attempt to compete with the investments made by larger nations, the Select Committee on Artificial Intelligence recommends the UK capitalises on its ‘particular blend of national assets’ to ‘forge a distinctive role for itself as a pioneer in ethical AI’.

Ethical practices are the most debated topic in artificial intelligence right now, and perhaps for the foreseeable future. Just last week, AI News reported Microsoft dropped some potential AI deals over concerns the company had about how the technology would be used.

Some of the world’s greatest minds have chimed in to offer their opinions and brought the debate into the minds of everyday consumers. When you’ve got the likes of Stephen Hawking and Elon Musk warning of unregulated AI posing an existential threat to mankind, it’s bound to become a major talking point.

The report cites figures provided by Goldman Sachs who claim that, between 2012 and 2016, the UK invested around $850 million in AI — the third largest investor of any country. While this sounds impressive, it gets put into perspective when you hear China invested around $2.6 billion over the same period, and approximately $18.2 billion by the U.S.

In fact, with France announcing plans to invest around $1.8 billion in AI by 2022, the UK is in danger of being overtaken by a competitor elsewhere in Europe for the first time.

Rather than compete against other nations in direct government funding, the committee suggests the UK leads in ethical development with its strengths in law, research, financial services, and civic institutions.

The first part of this approach would be to sponsor basic research into responsible AI development. Next would be to convene a global summit in London next year to create a “common framework for the ethical development and deployment of artificial intelligence systems.”

A draft ‘AI Code’ is provided as part of the report to lay a groundwork for what this framework should look like — to be adopted nationally, and potentially internationally.

Here are the five basic principles:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

The report makes clear the UK can take a leading role in AI by playing to its strengths, but it must take a strategic, ethics-first approach. If the nation attempts to compete on funding alone it will fall behind.

A copy of the full report can be found here (PDF).

Do you agree with the Lords’ report on AI in the UK?

Mayor of London’s AI study will ensure policies help the city thrive (22 March 2018)

London’s mayor has commissioned a study to engage with the AI community and ensure policies help the city to become a global hub for the emerging technology.

Sadiq Khan, Mayor of London, commissioned the specialist AI researchers at CognitionX to undertake the study. The results will be used to form London’s policy towards the AI industry.

CognitionX will be looking at four areas in particular:

  • Stimulating adoption and deployment
  • Attracting AI entrepreneurs and businesses to London
  • Removing barriers
  • Supporting growth

Lawmakers around the world are scrambling to form policies around AI. Some are embracing it with open arms, while others have concerns about the potential impact to jobs, privacy, and even safety.

Khan’s approach is the first — that I’m aware of, at least — to heavily involve the AI community itself in forming policies.

In a statement, he said:

“This report will help to uncover the opportunities to unlock innovation and investment in London, in order to maximise the economic impact of AI on the city.

It will also identify the challenges we face in positioning the capital as the best place to start, grow or relocate an AI business. Artificial Intelligence has the potential to transform almost every industry across the capital. London is in a strong position in the data economy and is already home to innovative, fast-growing companies like Deepmind, CityMapper and Satalia – not to mention the kind of work being done to improve public services, such as the data-driven approach to understanding rent arrears emerging through Hackney Council’s partnership with Pivigo.

London has a tremendous opportunity to build a world-class AI hub which serves a range of industries – from healthcare to finance to law – and which also helps build the AI-driven economy of the future in a way that works for all Londoners.”

London’s mayor is no stranger to supporting new technology ventures. His office backs the London Co-Investment Fund (LCIF) which in January announced its 100th investment, and revealed it’s created 1,000 jobs — while saving 300 — since its creation.

Just yesterday, AI News reported that — across the Channel in France — President Emmanuel Macron is advancing his own plans for AI. Macron is calling for Europeans to relax about the use of their data by AI companies, to prevent those operating in the region falling behind their international counterparts.

What are your thoughts on the study?