algorithms – AI News

Synthesized’s free tool aims to detect and remove algorithmic biases
12 November 2020

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. These biases, often unconscious, end up in algorithms which are designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content, through to facial recognition systems which flag some races and genders more often than others.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Dr Nicolai Baldin, CEO and Founder of Synthesized, said:

“The reputational risk of all organisations is under threat due to biased data and we’ve seen this will no longer be tolerated at any level. It’s a burning priority now and must be dealt with as a matter of urgency, both from a legal and ethical standpoint.”

Last year, Algorithmic Justice League founder Joy Buolamwini gave a presentation during the World Economic Forum on the need to fight AI bias. Buolamwini highlighted the massive disparities in effectiveness when popular facial recognition algorithms were applied to various parts of society.

Synthesized claims its platform is able to automatically identify bias across data attributes like gender, age, race, religion, sexual orientation, and more. 

The platform was designed to be simple to use, with no coding knowledge required. Users only have to upload a structured data file – something as simple as a spreadsheet – to begin analysing it for potential biases. A ‘Total Fairness Score’ is then provided, showing what percentage of the uploaded dataset contains biases.
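Synthesized has not published how the ‘Total Fairness Score’ is calculated, so the sketch below is purely illustrative: the column names, the demographic-parity check, and the percentage-of-attributes scoring rule are assumptions for the sake of example, not Synthesized’s actual method.

```python
# Purely illustrative sketch of a dataset-level "fairness score".
# This is NOT Synthesized's algorithm; the columns, the demographic-parity
# check, and the 0.10 threshold are assumptions for illustration only.
import pandas as pd

def parity_gap(df: pd.DataFrame, attribute: str, outcome: str) -> float:
    """Largest difference in positive-outcome rate between groups of `attribute`."""
    rates = df.groupby(attribute)[outcome].mean()
    return float(rates.max() - rates.min())

def naive_fairness_score(df: pd.DataFrame, attributes, outcome: str,
                         threshold: float = 0.10) -> float:
    """Percentage of sensitive attributes whose parity gap stays below `threshold`."""
    fair = [parity_gap(df, a, outcome) < threshold for a in attributes]
    return 100.0 * sum(fair) / len(fair)

# Tiny made-up hiring dataset; 'hired' is the outcome checked for bias
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M"],
    "age_band": ["<40", ">=40", "<40", ">=40", "<40", "<40"],
    "hired":    [0, 0, 1, 1, 0, 1],
})
print(naive_fairness_score(df, ["gender", "age_band"], outcome="hired"))  # 50.0
```

A production tool would use proper statistical tests and handle continuous attributes, but the principle of scoring a dataset attribute by attribute is the same.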

“Synthesized’s Community Edition for Bias Mitigation is one of the first offerings specifically created to understand, investigate, and root out bias in data,” explains Baldin. “We designed the platform to be very accessible, easy-to-use, and highly scalable, as organisations have data stored across a huge range of databases and data silos.”

Some examples of how Synthesized’s tool could be used across industries include:

  • In finance, to create fairer credit ratings
  • In insurance, to process claims more equitably
  • In HR, to eliminate biases in hiring processes
  • In universities, to ensure fairness in admission decisions

Synthesized’s platform uses a proprietary algorithm which is said to be quicker and more accurate than existing techniques for removing biases in datasets. A new synthetic dataset is created which, in theory, should be free of biases.

“With the generation of synthetic data, Synthesized’s platform gives its users the ability to equally distribute all attributes within a dataset to remove bias and rebalance the dataset completely,” the company says.

“Users can also manually change singular data attributes within a dataset, such as gender, providing granular control of the rebalancing process.”
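The synthetic-data generation itself is proprietary, so as a stand-in the sketch below only illustrates the simpler, related idea of rebalancing: oversampling under-represented groups of one attribute until every group appears equally often. The DataFrame and its columns are hypothetical.

```python
# Minimal sketch of rebalancing one attribute by oversampling existing rows.
# Synthesized generates entirely new synthetic records with a proprietary
# model; this simpler stand-in only equalises group counts by resampling.
import pandas as pd

def rebalance(df: pd.DataFrame, attribute: str, random_state: int = 0) -> pd.DataFrame:
    """Oversample each group of `attribute` until all groups are the same size."""
    target = df[attribute].value_counts().max()
    parts = [
        group.sample(n=target, replace=True, random_state=random_state)
        for _, group in df.groupby(attribute)
    ]
    return pd.concat(parts, ignore_index=True)

df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M"],
    "hired":  [0, 1, 1, 0, 1],
})
print(rebalance(df, "gender")["gender"].value_counts())  # F and M now appear equally
```

Naive oversampling merely duplicates existing rows, which is one reason generating genuinely new synthetic records is attractive: the dataset can be rebalanced without copying real individuals.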

If only MIT had used such a tool on the dataset it was forced to remove in July after it was found to be racist and misogynistic.

You can find out more about Synthesized’s tool and how to get started here.

(Photo by Agence Olloweb on Unsplash)

Mozilla shares YouTube horror tales in campaign for responsible algorithms
15 October 2019

Mozilla has launched a campaign for more responsible algorithms by sharing YouTube horror tales crowdsourced from social media.

We’ve all scratched our heads at some recommendations when using online platforms. Just yesterday, Verge reporter Megan Farokhmanesh shared how her Instagram recommendations have been plagued by some rather bizarre CGI images of teeth.

Farokhmanesh’s account is of a recommendation algorithm going rogue in a relatively harmless and amusing way, but that’s not always the case.

Algorithms need to be unbiased. It’s easy to imagine how, without due scrutiny, algorithms could recommend content which influences a person to think or vote a certain way. The bias may not even be intentional, but that doesn’t make it any less dangerous.

YouTube’s algorithms, in particular, have been called out for promoting some awful content – including material linked to paedophilia and radicalisation. To really put that danger in perspective, around 70 percent of YouTube’s viewing time comes from recommendations.

Mozilla’s newly launched site features 28 horror stories attributed to YouTube’s recommendation algorithms. The site was launched following a Mozilla-led social media campaign in which users shared their stories using the #YouTubeRegrets hashtag.

In one story, an individual provided an account of their preschool son who – like many his age – liked watching Thomas the Tank Engine videos. YouTube’s recommendations led him to graphic compilations of train wrecks.

At that early stage in a person’s life, what they see can be detrimental to their long-term development. However, that doesn’t mean adults aren’t also affected.

Another person said: “I started by watching a boxing match, then street boxing matches, and then I saw videos of street fights, then accidents and urban violence… I ended up with a horrible vision of the world and feeling bad, without really wanting to.”

Yet another person said they’d often watch a drag queen who did a lot of positive-affirmation and confidence-building videos. YouTube’s recommendations allegedly served up a ton of anti-LGBT content for ages after, which could have a devastating impact on already too-often marginalised communities.

In September, Mozilla advised YouTube it could improve its service – and trust in it – by taking three key steps:

  • Provide independent researchers with access to meaningful data.
  • Build simulation tools for researchers.
  • Empower researchers by avoiding restrictive API rate limits and by providing access to a historical archive of videos.

We’ll have to wait and see whether YouTube acts on Mozilla’s advice, but we hope changes are made sooner rather than later. Around 250 million hours of YouTube content are watched via recommendations every day, and we can never be certain how long those videos will linger in the minds of those who watch them.

Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’
24 January 2019

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini walked through an analysis of today’s popular facial recognition algorithms.

Here are the overall accuracy results when guessing the gender of a face:

  • Microsoft: 93.7 percent
  • Face++: 90 percent
  • IBM: 87.9 percent

Presented this way, there appears to be little problem. Of course, society is far more diverse, and algorithms need to be accurate for everyone.

When the results are separated by gender, a greater disparity becomes apparent:

  • Microsoft: 89.3 percent (females), 97.4 percent (males)
  • Face++: 78.7 percent (females), 99.3 percent (males)
  • IBM: 79.7 percent (females), 94.4 percent (males)

Here the underrepresentation of women in STEM careers begins to show its effect. China-based Face++ fares worst, likely a result of the country’s more severe gender gap compared with the US.

Splitting by skin type reveals a further disparity:

  • Microsoft: 87.1 percent (darker), 99.3 percent (lighter)
  • Face++: 83.5 percent (darker), 95.3 percent (lighter)
  • IBM: 77.6 percent (darker), 96.8 percent (lighter)

The difference here is again likely due to racial disparities in STEM careers. A gap of between 12 and 19 percentage points is observed between darker and lighter skin tones.

So far, the results are in line with a 2010 study by researchers at NIST and the University of Texas at Dallas, who found that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

“We did something that hadn’t been done in the field before, which was doing intersectional analysis,” explains Buolamwini. “If we only do single axis analysis – we only look at skin type, only look at gender… – we’re going to miss important trends.”
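To make that distinction concrete, here is a rough sketch of single-axis versus intersectional analysis in Python. The data and column names are made up rather than taken from Buolamwini’s audit; the point is that single-axis averages can hide which subgroup fares worst.

```python
# Rough sketch of single-axis vs intersectional accuracy on made-up data.
# 'correct' marks whether a classifier guessed the gender of a face correctly.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["male", "male", "male", "female", "female", "female"],
    "skin_type": ["lighter", "darker", "lighter", "lighter", "darker", "darker"],
    "correct":   [1, 1, 1, 1, 0, 0],
})

# Single-axis analysis: one attribute at a time
print(df.groupby("gender")["correct"].mean())      # male 1.00, female 0.33
print(df.groupby("skin_type")["correct"].mean())   # lighter 1.00, darker 0.33

# Intersectional analysis: grouping on both attributes exposes the subgroup
# the single-axis averages blur together (darker-skinned females, at 0.00 here)
print(df.groupby(["gender", "skin_type"])["correct"].mean())
```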

Here is where the results become most concerning. For each company, the results are listed from most accurate to least:

Microsoft:

  • Lighter males: 100 percent
  • Lighter females: 98.3 percent
  • Darker males: 94 percent
  • Darker females: 79.2 percent

Face++:

  • Darker males: 99.3 percent
  • Lighter males: 99.2 percent
  • Lighter females: 94 percent
  • Darker females: 65.5 percent

IBM:

  • Lighter males: 99.7 percent
  • Lighter females: 92.9 percent
  • Darker males: 88 percent
  • Darker females: 65.3 percent

The lack of accuracy for females with darker skin tones is of particular note: two of the three algorithms would get it wrong roughly one time in three.

Just imagine these algorithms being used for surveillance: lighter-skinned males would be recognised in most cases, while darker-skinned females would frequently be misidentified. That could mean a lot of mistakes in areas with high footfall, such as airports.

Prior to making her results public, Buolamwini sent them to each company. IBM responded the same day and said its developers would address the issue.

When she reassessed IBM’s algorithm, accuracy for darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, and for lighter females from 92.9 percent to 97.6 percent, while for lighter males it remained essentially unchanged.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

You can watch Buolamwini’s full presentation at the WEF here.
