Google fires ethical AI researcher Timnit Gebru after critical email
Fri, 04 Dec 2020 – https://news.deepgeniusai.com/2020/12/04/google-fires-ethical-ai-researcher-timnit-gebru-email/

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities. Some recent cases validate her claims about large models and datasets in general.

For example, MIT was forced to remove a large dataset earlier this year called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that, because the dataset contains 80 million images at just 32×32 pixels each, manual inspection would be almost impossible and could not guarantee that every offensive image had been removed.

Gebru reportedly sent an email to the Google Brain Women and Allies listserv which Google deemed “inconsistent with the expectations of a Google manager.”

In the email, Gebru expressed her frustration with a perceived lack of progress at Google in hiring women. Gebru also claimed she was told not to publish a piece of research, and she advised employees to stop filling out diversity paperwork because it didn’t matter.

On top of the questionable reasons for her firing, Gebru says her former colleagues were sent an email claiming she had offered her resignation – which she says was not the case.

Platformer obtained an email from Jeff Dean, Head of Google Research, which was sent to employees and offers his take on Gebru’s claims:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.”

Dean goes on to claim Gebru made demands which included revealing the identities of the individuals he and Google Research VP of Engineering Megan Kacholia consulted with as part of the paper’s review. If the demands weren’t met, Gebru reportedly said she would leave the company.

It’s a case of one person’s word against another’s, but – for a company already under scrutiny from the public and regulators over questionable practices – being seen to fire an ethics researcher for calling out problems is not going to be good PR.

(Image Credit: Timnit Gebru by Kimberly White/Getty Images for TechCrunch under CC BY 2.0 license)

MIT has removed a dataset which leads to misogynistic, racist AI models
Thu, 02 Jul 2020 – https://news.deepgeniusai.com/2020/07/02/mit-removed-dataset-misogynistic-racist-ai-models/

MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained using these images and their labels. An image of a street – when fed into an AI trained on such a dataset – could tell you about things it contains such as cars, streetlights, pedestrians, and bikes.
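For a sense of how such a dataset is actually consumed, here is a minimal training sketch in PyTorch. It uses CIFAR-10 – a publicly available collection of 32×32 labelled images – as a stand-in, since 80 Million Tiny Images is no longer distributed; the architecture and settings are illustrative assumptions rather than the setup behind any model mentioned in this article.

```python
# Minimal sketch: train a small classifier on 32x32 labelled images.
# CIFAR-10 stands in for the withdrawn 80 Million Tiny Images dataset;
# the network and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

transform = T.Compose([T.ToTensor(), T.Normalize((0.5,) * 3, (0.5,) * 3)])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = nn.Sequential(                         # tiny CNN, enough for a demo
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 8 * 8, 10),   # 10 object classes
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):                         # a couple of epochs to show the loop
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

The point is that the model never sees anything but pixels and label indices – whatever terms those labels map to, offensive or not, are baked into what it learns.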

Two researchers – Vinay Prabhu, chief scientist at UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland – analysed the images and found thousands of concerning labels.

MIT’s training set was found to label women as “bitches” or “whores,” and people from BAME communities with the kind of derogatory terms I’m sure you don’t need me to write. The Register notes the dataset also contained close-up images of female genitalia labelled with the C-word.

The Register alerted MIT to the concerning issues Prabhu and Birhane found with the dataset and the institute promptly took it offline. MIT went a step further and urged anyone with a copy to stop using it and delete it.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that, because the dataset contains 80 million images at just 32×32 pixels each, manual inspection is almost impossible and cannot guarantee that every offensive image is removed.

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community – precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data,” wrote Antonio Torralba, Rob Fergus, and Bill Freeman from MIT.

“Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”
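Because the class labels themselves are ordinary nouns drawn from WordNet, auditing them is far more tractable than inspecting 80 million thumbnails by hand. The sketch below shows the general idea of such a label audit; the file names, format, and blocklist are hypothetical, not the real dataset’s metadata or the exact method Prabhu and Birhane used.

```python
# Hypothetical sketch: flag dataset class labels that match a blocklist of
# derogatory terms. File names/format and the blocklist are assumptions,
# not the actual 80 Million Tiny Images metadata.
from pathlib import Path


def load_terms(path: str) -> set[str]:
    """Read one term per line, normalised to lowercase."""
    return {line.strip().lower() for line in Path(path).read_text().splitlines() if line.strip()}


def flag_labels(labels: set[str], blocklist: set[str]) -> set[str]:
    """Return the class labels that exactly match a blocklisted term."""
    return {label for label in labels if label in blocklist}


if __name__ == "__main__":
    blocklist = load_terms("blocklist.txt")        # assumed: curated list of slurs
    labels = load_terms("class_labels.txt")        # assumed: one WordNet noun per line
    flagged = flag_labels(labels, blocklist)
    print(f"{len(flagged)} of {len(labels)} class labels flagged for human review")
```

A pass like this catches offensive labels; the harder problem MIT describes – offensive or prejudicial images hiding behind innocuous labels – still has no cheap automated answer.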

You can find a full pre-print copy of Prabhu and Birhane’s paper here (PDF).

(Photo by Clay Banks on Unsplash)

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’
Tue, 30 Jun 2020 – https://news.deepgeniusai.com/2020/06/30/detroit-police-chief-ai-face-recognition/

Detroit Police chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties Union) lodged a complaint against the Detroit police following the harrowing wrongful arrest of Robert Williams, a black man, due to a facial recognition error.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

Current AI algorithms are known to have a racism issue. Extensive studies have repeatedly shown that facial recognition algorithms are almost 100 percent accurate when used on white males, but have serious problems when it comes to darker skin tones and to women.
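The disparities those studies describe are measured by breaking evaluation results down by demographic group and comparing error rates. A minimal sketch of that kind of breakdown is below; the records are invented purely for illustration and do not reflect any specific vendor’s system.

```python
# Sketch: compare face-recognition misidentification rates across groups.
# The records are invented; a real audit would use a labelled evaluation set
# of genuine and impostor comparisons broken down by demographic group.
from collections import defaultdict

# (group, ground_truth_is_match, algorithm_said_match)
results = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, True), ("group_b", True, True),
]

counts = defaultdict(lambda: {"errors": 0, "total": 0})
for group, truth, predicted in results:
    counts[group]["total"] += 1
    if truth != predicted:
        counts[group]["errors"] += 1

for group, c in sorted(counts.items()):
    rate = c["errors"] / c["total"]
    print(f"{group}: {c['errors']}/{c['total']} misidentifications ({rate:.0%})")
```

If the error rate for one group is several times that of another, the system is – as the studies above found – effectively a different product depending on who is standing in front of the camera.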

This racism issue was shown again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, was applied to images of people from BAME communities.

Last week, Boston followed in the footsteps of a growing number of cities – including San Francisco and Oakland in California – in banning facial recognition technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

On the other side of the pond, facial recognition trials in the UK have so far been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that the technology was verifiably accurate in just 19 percent of cases.

The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ over 1000 experts signed an open letter last week opposing the use of AI for such purposes.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.

The acknowledgement from Detroit’s police chief that current facial recognition technology fails in around 96 percent of cases should be reason enough to halt its use – especially by law enforcement – at least until serious improvements are made.

(Photo by Joshua Hoehne on Unsplash)

San Francisco hopes AI will prevent bias in prosecutions
Thu, 13 Jun 2019 – https://news.deepgeniusai.com/2019/06/13/san-francisco-ai-prevent-bias-prosecutions/

San Francisco will soon implement AI in a bid to prevent bias when prosecuting a potential criminal.

Even subconscious human biases can impact courtroom decisions. Racial bias in the legal system is particularly well-documented (PDF) and often leads to individuals with darker skin being prosecuted more, or with tougher sentencing, than people with lighter skin tones accused of similar crimes.

Speaking during a press briefing today, SF District Attorney George Gascón said: “When you look at the people incarcerated in this country, they’re going to be disproportionately men and women of colour.”

To combat this, San Francisco will use a ‘bias mitigation tool’ which automatically redacts any information from a police report that could identify a suspect’s race.

Information stripped from reports will include not only explicit descriptions of race but also details such as hair and eye colour. The bias mitigation tool will even remove neighbourhood names and people’s names, both of which can hint at an individual’s racial background.

San Francisco’s bias-reducing AI also strips out information which identifies specific police officers, such as badge numbers. Removing this data helps to ensure the prosecutor isn’t biased by prior familiarity with an officer.

The AI tool is being developed by Alex Chohlas-Wood of the Stanford Computational Policy Lab. Several computer vision algorithms are used to recognise words and replace them with more generic equivalents like Officer #2 or Associate #1.
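The article above describes the tool’s behaviour rather than its implementation, but the core idea – swapping identifying strings for generic placeholders – can be sketched with a simple substitution pass. The term lists and placeholder scheme below are assumptions for illustration only, not the Stanford lab’s actual system.

```python
# Hypothetical sketch of report redaction: replace race descriptors,
# neighbourhood names, and officer names with generic placeholders.
# All term lists and placeholders here are illustrative assumptions.
import re

RACE_TERMS = ["white", "black", "hispanic", "asian"]       # assumed list
NEIGHBOURHOODS = ["Bayview", "Mission District"]           # assumed list
OFFICERS = ["Officer Smith", "Officer Garcia"]             # assumed list


def redact(report: str) -> str:
    for term in RACE_TERMS + NEIGHBOURHOODS:
        report = re.sub(rf"\b{re.escape(term)}\b", "[REDACTED]", report, flags=re.IGNORECASE)
    for number, name in enumerate(OFFICERS, start=1):
        report = report.replace(name, f"Officer #{number}")
    return report


print(redact("Officer Smith stopped a Black male near the Mission District."))
# -> "Officer #1 stopped a [REDACTED] male near the [REDACTED]."
```

A production tool would rely on named-entity recognition rather than fixed lists, since it cannot know every name or neighbourhood in advance, but the redact-and-placeholder principle is the same.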

San Francisco hopes to start using the bias mitigation tool in early July. Hopefully, it will help to address the problem of bias in the legal system while also reducing the perception that AI only introduces bias.

UN: AI voice assistants fuel stereotype women are ‘subservient’
Wed, 22 May 2019 – https://news.deepgeniusai.com/2019/05/22/un-ai-voice-assistants-stereotype-women/

A report from the UN claims AI voice assistants like Alexa and Siri are fuelling the stereotype that women are ‘subservient’.

Published by UNESCO (United Nations Educational, Scientific and Cultural Organization), the 146-page report titled “I’d blush if I could” highlights the market is dominated by female voice assistants.

According to the researchers, the almost exclusive use of female voice assistants fuels stereotypes that women are “obliging, docile and eager-to-please helpers”.

The researchers also believe the lack of manners required when speaking to current virtual assistants is problematic. They claim that, because an assistant will respond to a request no matter how rudely it is phrased, this reinforces the idea in some communities that women are “subservient and tolerant of poor treatment”.

Similarly, the fact virtual assistants can be summoned with just a “touch of a button or with a blunt voice command like ‘hey’ or ‘OK’,” makes it appear like women are available on demand.

Most virtual assistants use female voices by default but offer a male option. Technology giants such as Amazon and Apple have said in the past that consumers prefer female voices for their assistants, with an Amazon spokesperson recently describing these voices as having more “sympathetic and pleasant” traits.

The report highlights that virtual assistants are created by predominantly male engineering teams. In some cases, assistants were even found to be “thanking users for sexual harassment”, and sexual advances from male users were tolerated more than those from female users.

Siri was found to respond provocatively to requests for sexual favours from male users, with phrases such as “I’d blush if I could” (hence the report’s title) and “Oooh!”, but would do so less towards female users.

The inability of female voice assistants to defend themselves against sexist and hostile insults “may highlight her powerlessness,” claims the report. Such coding “projects a digitally encrypted ‘boys will be boys’ attitude” that “may help biases to take hold and spread”.

In a bid to help tackle the issue, the UN believes gender-neutral and non-human voices should be used. The researchers point towards Stephen Hawking’s famous robotic voice as one such example.

Alexa, Google Assistant, and Cortana all use female voices by default. Siri uses a male voice in Arabic, British English, Dutch, and French.

AI-conducted study highlights ‘massive gender bias’ in the UK
Wed, 20 Feb 2019 – https://news.deepgeniusai.com/2019/02/20/ai-study-gender-bias-uk/

A first-of-its-kind study conducted by an AI highlights the ‘massive gender bias’ which continues to plague the UK workforce.

The research was published by the Royal Statistical Society but conducted by Glass AI, a startup which uses artificial intelligence to analyse every UK website.

In a blog post, the company explained its unique approach:

“Previous related studies created for economists, policy-makers, or business analysts have tended to underuse or even ignore the web as a data source, typically only looking in any detail at a limited number of sectors of the economy, examining a small slice of geography or conducting manual (and expensive) surveys.

Worse, given a small data set, data scientists have no choice but to extrapolate and rely on small sample statistics.”

Across the entire ‘.uk’ domain, Glass AI read the genders of 2.3 million people and the positions they held in 150,000 organisations spanning 108 industry sectors.

Gender gaps were found across industries with men far more likely to be in leadership positions.

82 percent of all CEOs, 92 percent of chairpersons, and 73 percent of directors are male.

Meanwhile, support roles are dominated by women. 95 percent of receptionists, legal secretaries, and care assistants are female.
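Once names and job titles have been extracted from the web, headline figures like those above come down to simple aggregation: count role holders by inferred gender and divide. A toy version of that final step, on invented records, might look like this.

```python
# Toy sketch of the aggregation step: given (role, gender) records extracted
# from company websites, compute the share of each role held by each gender.
# The records below are invented for illustration.
from collections import Counter

records = [
    ("ceo", "male"), ("ceo", "male"), ("ceo", "female"),
    ("receptionist", "female"), ("receptionist", "female"), ("receptionist", "male"),
]

counts = Counter(records)
for role in sorted({role for role, _ in records}):
    total = sum(n for (r, _), n in counts.items() if r == role)
    for gender in ("male", "female"):
        print(f"{role}: {counts[(role, gender)] / total:.0%} {gender}")
```

The hard part of the study is everything upstream of this – crawling the entire ‘.uk’ domain and reliably extracting names, titles, and genders – which is precisely where a web-reading AI earns its keep.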

Men and women participate in the workforce in roughly equal numbers, but the gap shows itself in the roles they attain.

Glass AI’s findings matched those of the Office for National Statistics, which shows promise for using artificial intelligence to conduct research that matches or exceeds traditional methods while being far quicker and requiring fewer resources.

Amnesty International warns of AI ‘nightmare scenarios’
Thu, 14 Jun 2018 – https://news.deepgeniusai.com/2018/06/14/amnesty-international-ai-nightmare/

Human rights campaigners Amnesty International have warned of the potential ‘nightmare scenarios’ arising from AI if left unchecked.

In a blog post, Amnesty outlines one scenario in which autonomous systems choose military targets with little-to-no human oversight.

Military AI Fears

The development of AI has been likened to another arms race. Much like nuclear weapons, there is the argument that if a nation doesn’t develop its capabilities then others will. Furthermore, there is a greater incentive to use those capabilities when a nation knows it has the upper hand.

Much progress has been made on nuclear disarmament, although the US and Russia still hold — and modernise — huge arsenals (approximately 6,800 and 7,000 warheads, respectively).

This rivalry shows no signs of letting up and Russia continues to be linked with rogue state-like actions including hacking, interference with Western diplomatic processes, misinformation campaigns, and even assassinations.

Last week, it was revealed that New York-based artificial intelligence startup Clarifai had a server compromised while it was conducting secretive work on the U.S. Defense Department’s Project Maven.

Project Maven – which Google has decided not to renew its contract for, following a backlash – aims to automate the processing of drone images. While it’s unclear whether the hack was state-sponsored, it allegedly originated from Russia.

AI Discrimination

The next concern on Amnesty’s list is discrimination by biased algorithms — whether intentional, or not.

Unfortunately, the current under-representation problem in STEM fields is causing unintentional bias.

Here in the West, technologies are still mostly developed by white males and can often unintentionally perform better for this group.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

Digital rights campaigners Access Now recently wrote in a post:

“From policing, to welfare systems, online discourse, and healthcare – to name a few examples – systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights.”

One company, Pymetrics, recently released an open source tool for detecting unintentional bias in algorithms. If such tools are widely adopted, they could prove important for ensuring digital equality.
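One common check in tools of this kind is the ‘four-fifths rule’ used in US employment law: the selection rate for any group should be at least 80 percent of the rate for the most-favoured group. The sketch below implements that check from scratch for illustration – it is not Pymetrics’ API, and the example figures are invented.

```python
# Sketch of a common bias check (the "four-fifths rule"): each group's
# selection rate should be at least 80% of the highest group's rate.
# Written from scratch for illustration; this is not Pymetrics' API.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}


def four_fifths_check(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, bool]:
    """Return, per group, whether its rate clears the threshold relative to the best group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) >= threshold for group, rate in rates.items()}


# Invented example: hiring outcomes broken down by group.
outcomes = {"group_a": (45, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))   # group_b's ratio is 0.30/0.45 ≈ 0.67, so it fails
```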

Meanwhile, some companies are deliberately implementing bias in their algorithms.

Russian startup NtechLab has come under fire after building an ‘ethnicity detection’ feature into its facial recognition system. Considering the existing problem of racial profiling, the idea of it becoming automated naturally raises some concern.

In a bid to quell fears about the use of its own technology for nefarious purposes, Google published its ethical principles for AI development.

Google says it will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

Last month, Amnesty International and Access Now circulated the Toronto Declaration (PDF) which proposed a set of principles to prevent discrimination from AI and help to ensure its responsible development.

Then you’ve got MIT, which deliberately built a psychopathic AI based on a serial killer.

What are your thoughts on Amnesty International’s AI concerns?

 
