Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’
24 January 2019 – https://news.deepgeniusai.com/2019/01/24/joy-buolamwini-algorithmic-bias-priority/

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini went over an analysis of the current popular facial recognition algorithms.

Here were the overall accuracy results when guessing the gender of a face:

  • Microsoft: 93.7 percent
  • Face++: 90 percent
  • IBM: 87.9 percent

Presented this way, there appears to be little problem. But society is far more diverse than a single headline figure suggests, and algorithms need to be accurate for everyone.

When separated between males and females, a greater disparity becomes apparent:

  • Microsoft: 89.3 percent (females), 97.4 percent (males)
  • Face++: 78.7 percent (females), 99.3 percent (males)
  • IBM: 79.7 percent (females), 94.4 percent (males)

Here the underrepresentation of women in STEM careers begins to show. China-based Face++ fares worst, likely a result of the country’s more severe gender gap (PDF) compared with the US.

Splitting between skin type also increases the disparity:

  • Microsoft: 87.1 percent (darker), 99.3 percent (lighter)
  • Face++: 83.5 percent (darker), 95.3 percent (lighter)
  • IBM: 77.6 percent (darker), 96.8 percent (lighter)

The difference here is again likely tied to racial disparity in STEM careers. A gap of roughly 12 to 19 percentage points separates darker and lighter skin tones.

So far, the results are in line with a 2010 study by researchers at NIST and the University of Texas at Dallas. The researchers found (PDF) that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

“We did something that hadn’t been done in the field before, which was doing intersectional analysis,” explains Buolamwini. “If we only do single axis analysis – we only look at skin type, only look at gender… – we’re going to miss important trends.”
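The distinction Buolamwini draws can be sketched in code: single-axis and intersectional analysis differ only in how predictions are grouped before computing accuracy. A minimal illustration, using invented toy records rather than the study’s actual data:

```python
# Accuracy per group, where a "group" is a single attribute (single-axis
# analysis) or a combination of attributes (intersectional analysis).
# The records below are invented for illustration, not the study's data.
from collections import defaultdict

def accuracy_by(records, keys):
    """Return accuracy per group, grouping records by the given attribute keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        g = tuple(r[k] for k in keys)
        total[g] += 1
        correct[g] += r["true"] == r["pred"]  # bool adds as 0 or 1
    return {g: correct[g] / total[g] for g in total}

records = [
    {"skin": "lighter", "gender": "male",   "true": "M", "pred": "M"},
    {"skin": "lighter", "gender": "female", "true": "F", "pred": "F"},
    {"skin": "darker",  "gender": "male",   "true": "M", "pred": "M"},
    {"skin": "darker",  "gender": "female", "true": "F", "pred": "M"},
]

print(accuracy_by(records, ["gender"]))           # single-axis
print(accuracy_by(records, ["skin", "gender"]))   # intersectional
```

With this toy data, the single-axis view reports female accuracy at 0.5, while the intersectional view reveals the errors are concentrated entirely in the (darker, female) group: exactly the kind of trend Buolamwini warns a single-axis analysis would miss.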

Here is where the results get most concerning. For each company, results are listed from most to least accurate:

Microsoft

  • Lighter Males: 100 percent
  • Lighter Females: 98.3 percent
  • Darker Males: 94 percent
  • Darker Females: 79.2 percent

Face++

  • Darker Males: 99.3 percent
  • Lighter Males: 99.2 percent
  • Lighter Females: 94 percent
  • Darker Females: 65.5 percent

IBM

  • Lighter Males: 99.7 percent
  • Lighter Females: 92.9 percent
  • Darker Males: 88 percent
  • Darker Females: 65.3 percent

The lack of accuracy for females with darker skin tones is of particular note: two of the three algorithms got it wrong in roughly one in three cases.

Now imagine surveillance built on these algorithms. Lighter-skinned males would be recognised correctly in most cases, while darker-skinned females would frequently be misidentified. In areas with high footfall, such as airports, that adds up to a lot of mistakes.

Prior to making her results public, Buolamwini sent the results to each company. IBM responded the same day and said their developers would address the issue.

When she reassessed IBM’s algorithm, accuracy for darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, and for lighter females from 92.9 percent to 97.6 percent, while for lighter males it slipped slightly from 99.7 percent to 97 percent.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

You can watch Buolamwini’s full presentation at the WEF here.


MIT created a psychopathic AI based on Norman Bates
6 June 2018 – https://news.deepgeniusai.com/2018/06/06/mit-psychopathic-ai/

While many researchers are calling for regulations to ensure safe and ethical AI development, MIT has created an AI with a psychopathic personality.

The scientists, from MIT’s unconventional Media Lab, based their AI’s personality on Norman Bates, the fictional serial killer from the Hitchcock thriller Psycho.

AI Norman is designed to caption images. Whereas most neural networks are trained on a range of images to reduce bias, poor Norman was exposed ‘to the darkest corners of Reddit.’

Anyone who has found themselves in these areas will feel for Norman. Whereas we can close our browsers, Norman had the equivalent of being tied up and forced to watch a barrage of the worst of humankind.

Needless to say, Norman gained a macabre view of the world.

MIT’s researchers ran Norman through a Rorschach test to compare what it sees against an AI trained in a more standard manner:

[Image: Norman’s Rorschach inkblot captions compared with those of a standard image-captioning AI]

Fortunately, MIT promises its AI is only a warning about bias and its potential dangers. As its creators say, “when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
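The creators’ point, that the same algorithm produces very different behaviour depending on the data it is fed, can be sketched with a toy example. The corpora and candidate words below are invented for illustration; Norman’s real model is a far larger image-captioning network:

```python
# One algorithm, two training corpora: the model's "worldview" follows the data.
# Both the corpora and the candidate words here are invented for illustration.
from collections import Counter

def train(corpus):
    """Build a unigram model: how often each word appears in the training text."""
    return Counter(w for line in corpus for w in line.lower().split())

def caption_word(model, candidates):
    """Pick the candidate word the model has seen most often."""
    return max(candidates, key=lambda w: model[w])

balanced = ["people fly kites", "people share umbrellas", "people fly kites"]
dark     = ["people fear attacks", "people fear attacks", "people share fear"]

cands = ["kites", "attacks"]
print(caption_word(train(balanced), cands))  # kites
print(caption_word(train(dark), cands))      # attacks
```

Identical code paths, identical candidate words: only the training text differs, and so does the output. That is the lesson MIT says Norman was built to teach.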

Norman won’t have access to the big red button. Unfortunately, that won’t keep less predictable human psychopaths from it.

(See also: Open source tool for detecting bias in algorithms)

What are your thoughts on MIT’s psychopathic AI?

 
