UK releases guidelines to help governments accelerate ‘trusted’ AI deployments

AI News (https://news.deepgeniusai.com) – Tue, 09 Jun 2020

The UK has released new guidelines, developed with the World Economic Forum (WEF), to help governments accelerate the deployment of trusted AI solutions.

AI is proving itself to be an important tool in tackling some of the biggest issues the world faces today, including coronavirus and climate change. However, some public distrust remains.

“The current pandemic has shown us more needs to be done to speed up the adoption of trusted AI around the world,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum.

“We moved from guidelines to practical tools, tested and iterated them – but this is still just a start. Now we will be working to scale them to countries around the world.”

The guidelines released today aim to “help society tackle big data problems faster” while also preparing governments for future risks. The UK government has adopted the guidelines across its various departments.

“The UK is a global leader in AI and I am pleased we are working with the World Economic Forum and international partners to develop guidelines to ensure its safe and ethical deployment,” said Caroline Dinenage, Digital Minister of the United Kingdom.

“By taking a dynamic approach we can boost innovation, create competitive markets and support public trust in artificial intelligence. I urge public sector organisations around the world to adopt these guidelines and consider carefully how they procure and deploy these technologies.”

For the past year, the WEF has worked alongside the UK’s Office for AI; companies such as Deloitte, Salesforce, and Splunk; 15 other countries; and more than 150 members of government, academia, civil society, and the private sector.

“As a trusted AI advisor to governments around the world, we were thrilled to collaborate with the World Economic Forum and the government of the UK in the development of procurement guidelines that help the public sector put AI at the service of its constituents in a manner that is both efficient and ethical,” said Shelby Austin, Managing Partner, Growth & Investments and Omnia AI, Deloitte, Canada.

“As our societies reorganize and make progress in our fight against COVID-19, the need for multi-stakeholder cooperation has never been more apparent. We believe in these joint efforts, and we believe in the power of data-driven decision-making to help our countries recover and thrive.”

The result of the joint effort was the “Procurement in a Box” toolkit, which provides guidance from drafting proposals and conducting risk assessments all the way to purchasing AI solutions and deploying them in a trusted manner.

A proposal for a chatbot allowing executives at the Dubai Electricity and Water Authority (DEWA) to obtain answers to data-related questions was used to test the guidelines. DEWA’s chatbot was successful and serves as an early example of how rapid but safe AI deployments can be achieved using the guidelines.

“In an era that will continue to be dominated by the transformative technologies emerging from the Fourth Industrial Revolution, integrating AI into the public sector for everyday use will significantly elevate the performance of government departments,” said Khalfan Belhoul, CEO of the Dubai Future Foundation, the host entity of the Centre for the Fourth Industrial Revolution UAE.

You can find a copy of the Procurement in a Box toolkit here (PDF).

(Photo by Franck V. on Unsplash)

Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’

AI News – Thu, 24 Jan 2019

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini walked through an analysis of currently popular facial recognition algorithms.

Here were the overall accuracy results when guessing the gender of a face:

  • Microsoft: 93.7 percent
  • Face++: 90 percent
  • IBM: 87.9 percent

Presented this way, there appears to be little problem. But society is far more diverse, and algorithms need to be accurate for everyone.

When separated between males and females, a greater disparity becomes apparent:

  • Microsoft: 89.3 percent (females), 97.4 percent (males)
  • Face++: 78.7 percent (females), 99.3 percent (males)
  • IBM: 79.7 percent (females), 94.4 percent (males)

Here the underrepresentation of females in STEM careers begins to come into effect. China-based Face++ fares worst, likely a result of the country’s more severe gender gap (PDF) compared to the US.

Splitting between skin type also increases the disparity:

  • Microsoft: 87.1 percent (darker), 99.3 percent (lighter)
  • Face++: 83.5 percent (darker), 95.3 percent (lighter)
  • IBM: 77.6 percent (darker), 96.8 percent (lighter)

The difference here is again likely due to a racial disparity in STEM careers. A gap of between 12 and 19 percentage points is observed between darker and lighter skin tones.
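As a quick arithmetic check on that range, the per-vendor figures quoted above can be plugged into a short snippet (the vendor names and values come from the article; the code itself is purely illustrative):

```python
# Illustrative check of the lighter-vs-darker accuracy gap,
# using the per-vendor figures quoted above (in percent).
accuracy = {
    "Microsoft": {"darker": 87.1, "lighter": 99.3},
    "Face++": {"darker": 83.5, "lighter": 95.3},
    "IBM": {"darker": 77.6, "lighter": 96.8},
}

for vendor, acc in accuracy.items():
    gap = acc["lighter"] - acc["darker"]
    print(f"{vendor}: {gap:.1f} percentage point gap")
```

Microsoft works out to 12.2 points, Face++ to 11.8 (just under the quoted range once rounded), and IBM to 19.2.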

So far, the results are in line with a 2010 study by researchers at NIST and the University of Texas at Dallas. The researchers found (PDF) algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

“We did something that hadn’t been done in the field before, which was doing intersectional analysis,” explains Buolamwini. “If we only do single axis analysis – we only look at skin type, only look at gender… – we’re going to miss important trends.”

Here is where the results get most concerning. Results are listed from most accurate to least:

Microsoft

  • Lighter males: 100 percent
  • Lighter females: 98.3 percent
  • Darker males: 94 percent
  • Darker females: 79.2 percent

Face++

  • Darker males: 99.3 percent
  • Lighter males: 99.2 percent
  • Lighter females: 94 percent
  • Darker females: 65.5 percent

IBM

  • Lighter males: 99.7 percent
  • Lighter females: 92.9 percent
  • Darker males: 88 percent
  • Darker females: 65.3 percent

The lack of accuracy for females with darker skin tones is of particular note. Two of the three algorithms would get it wrong in approximately one-third of cases.
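A minimal sketch of the arithmetic behind that “one-third” figure, using the darker-female accuracies quoted above (names and numbers are from the talk; the snippet is illustrative):

```python
# Error rates implied by the darker-female accuracies above (percent).
darker_female_accuracy = {"Microsoft": 79.2, "Face++": 65.5, "IBM": 65.3}

for vendor, acc in darker_female_accuracy.items():
    error = 100.0 - acc
    print(f"{vendor}: wrong {error:.1f}% of the time for darker-skinned females")
```

Face++ comes out at a 34.5 percent error rate and IBM at 34.7 percent, both roughly one error in every three attempts.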

Just imagine these algorithms being used for surveillance. Lighter-skinned males would be recognised correctly in most cases, but darker-skinned females would be stopped often. That could mean a lot of mistakes in areas with high footfall, such as airports.

Prior to making her results public, Buolamwini sent the results to each company. IBM responded the same day and said their developers would address the issue.

When she reassessed IBM’s algorithm, the accuracy when assessing darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, for lighter females from 92.9 percent to 97.6 percent, and for lighter males it was 97 percent.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

You can watch Buolamwini’s full presentation at the WEF here.


AI is at risk of bias due to serious gender gap problem

AI News – Tue, 18 Dec 2018

AI needs to be created by a diverse range of developers to prevent bias, but the World Economic Forum (WEF) has found a serious gender gap.

Gender gaps in STEM careers have been a problem for some time, but it is rare that the gender of a product’s developers shapes the end product itself. AI is about to be everywhere, and it matters that it is representative of those it serves.

In a report published this week, the WEF wrote:

“The equal contribution of women and men in this process of deep economic and societal transformation is critical.

More than ever, societies cannot afford to lose out on the skills, ideas and perspectives of half of humanity to realize the promise of a more prosperous and human-centric future that well-governed innovation and technology can bring.”

Shockingly, the WEF report found that less than a quarter of roles in the industry are filled by women. To put that in perspective, the AI gender gap is around three times larger than in other industry talent pools.

“It is absolutely crucial that those people who create AI are representative of the population as a whole,” said Kay Firth-Butterfield, WEF’s head of artificial intelligence and machine learning.

Bias in code can cause AI to perform better for certain groups of society than for others, potentially giving those groups an advantage. Such bias is rarely intentional, but it has already found its way into AI developments.

A recent test of Amazon’s facial recognition technology by the American Civil Liberties Union (ACLU) found it erroneously matched those with darker skin colours to criminal mugshots more often.

Similarly, a 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

More recently, Google released a predictive text feature within Gmail in which the algorithm made biased assumptions, such as referring to a nurse with female pronouns.

It’s clear that addressing the gender gap is more pressing than ever.

You can find the full report here.

