bias – AI News

EU human rights agency issues report on AI ethical considerations (14 December 2020)

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that biases in algorithms could automate societal problems such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

AWS announces nine major updates for its ML platform SageMaker (9 December 2020)

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store makes it much easier for teams of developers and data scientists to name, organise, find, and share sets of features. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”
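To make the idea concrete, the snippet below is a minimal, framework-agnostic sketch of the core job a feature store does (named features keyed by a record identifier, shared between training and inference). It is an illustration of the concept only, not the SageMaker Feature Store API.

```python
# Conceptual sketch only: not the SageMaker Feature Store API.
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class FeatureStore:
    # feature group -> record id -> {feature name: value}
    _groups: Dict[str, Dict[str, Dict[str, Any]]] = field(default_factory=dict)

    def put_record(self, group: str, record_id: str, features: Dict[str, Any]) -> None:
        self._groups.setdefault(group, {})[record_id] = features

    def get_record(self, group: str, record_id: str) -> Dict[str, Any]:
        # The same lookup serves both training-set construction and online
        # inference, avoiding skew from divergent feature logic.
        return self._groups[group][record_id]


store = FeatureStore()
store.put_record("customers", "c-42", {"tenure_months": 18, "avg_spend": 52.30})
print(store.get_record("customers", "c-42"))
```

The point of the managed service is that this lookup is backed by an online store for low-latency inference and an offline store for building training sets, rather than an in-memory dictionary.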

Next up, we have SageMaker Pipelines—which claims to be the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm set up, debugging steps, and optimisation steps.

SageMaker Clarify may be one of the most important features being debuted by AWS this week considering ongoing events.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turn to often time-consuming open-source tools, developers can use the integrated solution to quickly try and counter any bias in models.

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”
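AWS hasn’t detailed Clarify’s API in the announcement, but the kind of pre-training check such tools automate is easy to illustrate. The sketch below computes two common bias measures, class imbalance between groups and the difference in positive label proportions, on a small invented dataset; the column names are hypothetical and this is not Clarify’s own code.

```python
import pandas as pd

# Invented toy dataset: loan applications with a binary "approved" label.
df = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b", "b", "b"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
})

facet_a = df[df["group"] == "a"]
facet_b = df[df["group"] == "b"]

# Class imbalance: how unevenly the two groups are represented in the data.
class_imbalance = (len(facet_a) - len(facet_b)) / len(df)

# Difference in positive proportions: gap in approval rates between groups.
dpl = facet_a["approved"].mean() - facet_b["approved"].mean()

print(f"Class imbalance: {class_imbalance:+.2f}")
print(f"Difference in positive proportions: {dpl:+.2f}")
```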

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and raises alerts when training bottlenecks are detected. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.

Next up, we have Distributed Training on SageMaker which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
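SageMaker’s library has its own API, but the underlying idea (shard each batch across GPUs, then average gradients after every backward pass) is the same one behind generic data parallelism. As a rough illustration, here is a plain PyTorch DistributedDataParallel sketch rather than AWS’s implementation.

```python
# Generic data parallelism with PyTorch DDP; launch with:
#   torchrun --nproc_per_node=<num_gpus> train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def train() -> None:
    dist.init_process_group(backend="nccl")          # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])      # gradients are all-reduced

    dataset = TensorDataset(torch.randn(10_000, 128), torch.randint(0, 2, (10_000,)))
    sampler = DistributedSampler(dataset)            # each worker gets a distinct shard
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                     # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimiser.zero_grad()
            loss_fn(model(x), y).backward()          # backward() triggers gradient sync
            optimiser.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    train()
```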

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager also provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard within the SageMaker console which tracks and provides a visual report on the operation of the deployed models.

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers which have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 November 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate for people with darker skin tones and for women. The error rate is, therefore, higher when facial recognition algorithms are used on some parts of society than on others.

In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when it’s being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society over another.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Transparency is required for algorithms. In financial services, without it, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.
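One practical check that follows from this is to test whether a supposedly neutral feature acts as a proxy for a protected attribute. The sketch below, which is illustrative only and uses invented column names and data, trains a small classifier to predict a protected group from a postcode; if it beats the majority-class baseline by a wide margin, any decision driven by postcode risks encoding the same bias.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Invented data: postcodes correlate strongly with a protected group.
df = pd.DataFrame({
    "postcode": ["DH1", "DH1", "DH1", "NE4", "NE4", "NE4", "SR2", "SR2"] * 50,
    "group":    ["a",   "a",   "b",   "b",   "b",   "b",   "a",   "a"] * 50,
})

X = pd.get_dummies(df["postcode"])       # one-hot encode the "neutral" feature
y = (df["group"] == "a").astype(int)     # protected attribute as the target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

proxy_accuracy = accuracy_score(y_test, clf.predict(X_test))
baseline = max(y_test.mean(), 1 - y_test.mean())
print(f"Proxy accuracy: {proxy_accuracy:.2f} vs majority-class baseline: {baseline:.2f}")
```

The report itself is careful to weigh these risks against the potential for better use of data to make decisions fairer: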

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has relied on data to make decisions for longer than arguably any other to determine things like how likely it is an individual can repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing but found variance across forces with regards to both usage and managing ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

Synthesized’s free tool aims to detect and remove algorithmic biases (12 November 2020)

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. These biases, often unconsciously, end up in algorithms which are designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content—through to facial recognition systems which flag some races and genders more than others.

A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Dr Nicolai Baldin, CEO and Founder of Synthesized, said:

“The reputational risk of all organisations is under threat due to biased data and we’ve seen this will no longer be tolerated at any level. It’s a burning priority now and must be dealt with as a matter of urgency, both from a legal and ethical standpoint.”

Last year, Algorithmic Justice League founder Joy Buolamwini gave a presentation during the World Economic Forum on the need to fight AI bias. Buolamwini highlighted the massive disparities in effectiveness when popular facial recognition algorithms were applied to various parts of society.

Synthesized claims its platform is able to automatically identify bias across data attributes like gender, age, race, religion, sexual orientation, and more. 

The platform was designed to be simple-to-use with no coding knowledge required. Users only have to upload a structured data file – as simple as a spreadsheet – to begin analysing for potential biases. A ‘Total Fairness Score’ will be provided to show what percentage of the provided dataset contained biases.
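Synthesized hasn’t published the formula behind its ‘Total Fairness Score’, but a simplified stand-in shows the general shape of such a metric: measure the outcome disparity across each sensitive attribute and roll the results up into a single number. The column names and scoring rule below are invented for illustration, not Synthesized’s method.

```python
import pandas as pd

# Invented example: a hiring dataset with a binary "hired" outcome.
df = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "m", "f", "m", "m"],
    "age_band": ["<40", "<40", "40+", "40+", "<40", "40+", "<40", "40+"],
    "hired":    [0, 1, 0, 1, 1, 0, 1, 0],
})

sensitive_attributes = ["gender", "age_band"]


def attribute_disparity(data: pd.DataFrame, attr: str, outcome: str) -> float:
    """Largest gap in outcome rate between any two groups of the attribute."""
    rates = data.groupby(attr)[outcome].mean()
    return float(rates.max() - rates.min())


disparities = {a: attribute_disparity(df, a, "hired") for a in sensitive_attributes}
overall = 1.0 - max(disparities.values())   # 1.0 would mean no measured disparity

print(disparities)
print(f"Simplified overall fairness score: {overall:.2f}")
```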

“Synthesized’s Community Edition for Bias Mitigation is one of the first offerings specifically created to understand, investigate, and root out bias in data,” explains Baldin. “We designed the platform to be very accessible, easy-to-use, and highly scalable, as organisations have data stored across a huge range of databases and data silos.”

Some examples of how Synthesized’s tool could be used across industries include:

  • In finance, to create fairer credit ratings
  • In insurance, for more equitable claims
  • In HR, to eliminate biases in hiring processes
  • In universities, for ensuring fairness in admission decisions

Synthesized’s platform uses a proprietary algorithm which is said to be quicker and more accurate than existing techniques for removing biases in datasets. A new synthetic dataset is created which, in theory, should be free of biases.

“With the generation of synthetic data, Synthesized’s platform gives its users the ability to equally distribute all attributes within a dataset to remove bias and rebalance the dataset completely,” the company says.

“Users can also manually change singular data attributes within a dataset, such as gender, providing granular control of the rebalancing process.”
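The synthetic-data generation itself is proprietary, but the effect being described (equalising how attribute values are represented) can be approximated with plain resampling. A rough sketch, not Synthesized’s method:

```python
import pandas as pd

# Invented, imbalanced example: far more "m" rows than "f" rows.
df = pd.DataFrame({
    "gender": ["m"] * 80 + ["f"] * 20,
    "score":  list(range(80)) + list(range(20)),
})

target_size = df["gender"].value_counts().max()

# Oversample each under-represented group (with replacement) to the same size.
balanced = (
    df.groupby("gender", group_keys=False)
      .apply(lambda g: g.sample(n=target_size, replace=True, random_state=0))
      .reset_index(drop=True)
)

print(df["gender"].value_counts().to_dict())        # original counts are skewed
print(balanced["gender"].value_counts().to_dict())  # both groups now have 80 rows
```

Resampling rebalances representation but simply repeats existing rows; the argument for synthetic data is that it can fill out under-represented groups with new, plausible records instead.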

If only MIT had used such a tool on the dataset it was forced to remove in July after it was found to be racist and misogynistic.

You can find out more about Synthesized’s tool and how to get started here.

(Photo by Agence Olloweb on Unsplash)

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI (24 June 2020)

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include employees working on AI from tech giants including Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Every current facial recognition system is more accurate when detecting white males, and when used in a law enforcement setting such systems incorrectly flag members of the BAME community as criminals more often.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

San Francisco hopes AI will prevent bias in prosecutions (13 June 2019)

San Francisco will soon implement AI in a bid to prevent bias when prosecuting a potential criminal.

Even subconscious human biases can impact courtroom decisions. Racial bias in the legal system is particularly well-documented (PDF) and often leads to individuals with darker skin being prosecuted more, or with tougher sentencing, than people with lighter skin tones accused of similar crimes.

Speaking during a press briefing today, SF District Attorney George Gascón said: “When you look at the people incarcerated in this country, they’re going to be disproportionately men and women of colour.”

To combat this, San Francisco will use a ‘bias mitigation tool’ which automatically redacts any information from a police report that could identify a suspect’s race.

Information stripped from reports will not only include descriptions of race but also things such as hair and eye colour. The bias mitigation tool will even remove things such as neighbourhoods and the names of people which may indicate an individual’s racial background.

San Francisco’s bias-reducing AI even strips out information which identifies specific police officers, like their badge number. Removing this data helps to ensure the prosecutor isn’t biased through knowing an officer.

The AI tool is being developed by Alex Chohlas-Wood of the Stanford Computational Policy Lab. Several computer vision algorithms are used to recognise words and replace them with more generic equivalents like Officer #2 or Associate #1.
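The Stanford tool isn’t public, but the redaction step it describes can be sketched with simple pattern matching. The example below is purely illustrative; a production system would rely on trained named-entity recognition rather than hand-written term lists.

```python
import re

# Toy stand-ins for the categories the tool redacts.
RACE_TERMS = ["white", "black", "hispanic", "asian"]
NEIGHBOURHOODS = ["Bayview", "Mission District"]


def redact(report: str) -> str:
    text = report
    for term in RACE_TERMS:
        text = re.sub(rf"\b{term}\b", "[RACE]", text, flags=re.IGNORECASE)
    for hood in NEIGHBOURHOODS:
        text = re.sub(re.escape(hood), "[LOCATION]", text)

    # Replace officer names with numbered placeholders like "Officer #2".
    officers: dict = {}

    def officer_placeholder(match: re.Match) -> str:
        name = match.group(1)
        officers.setdefault(name, f"Officer #{len(officers) + 1}")
        return officers[name]

    return re.sub(r"Officer ([A-Z][a-z]+)", officer_placeholder, text)


print(redact("Officer Smith stopped a black male near Bayview. Officer Jones assisted."))
# -> Officer #1 stopped a [RACE] male near [LOCATION]. Officer #2 assisted.
```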

San Francisco hopes to start using the bias mitigation tool in early July. Hopefully, it will help to address the problem of bias in the legal system while also reducing the perception that AI only introduces bias.

UK government investigates AI bias in decision-making (20 March 2019)

The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people’s lives.

A browse through our ‘ethics’ category here on AI News will highlight the serious problem of bias in today’s algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but would have a serious negative impact on lives if not implemented correctly.

Digital Secretary Jeremy Wright said:

“Technology is a force for good which has improved people’s lives but we must make sure it is developed in a safe and secure way.

Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development.

I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the Centre’s recommendations to Government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society.”

Durham Police is currently using AI in a tool it calls the Harm Assessment Risk Tool (HART). As you might guess, the AI determines whether an individual is likely to cause further harm. The tool helps with decisions on whether an individual is eligible for deferred prosecution.

If an algorithm is more effective for individuals with some characteristics than for others, serious problems would arise.

Roger Taylor, Chair of the CDEI, is expected to say during a Downing Street event:

“The Centre is focused on addressing the greatest challenges and opportunities posed by data driven technology. These are complex issues and we will need to take advantage of the expertise that exists across the UK and beyond. If we get this right, the UK can be the global leader in responsible innovation.

We want to work with organisations so they can maximise the benefits of data driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.

I am delighted that the Centre is today publishing its strategy setting out our priorities.”

In a 2010 study, researchers at NIST and the University of Texas in Dallas found (PDF) algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Similar worrying discrepancies were highlighted by Algorithmic Justice League founder Joy Buolamwini during a presentation at the World Economic Forum back in January. For her research, she analysed popular facial recognition algorithms.

These issues with bias in algorithms need to be addressed now before they are used for critical decision-making. The public is currently unconvinced AI will benefit humanity, and AI companies themselves are bracing for ‘reputational harm’ along the way.

Interim reports from the CDEI will be released in the summer with final reports set to be published early next year.

Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’ (24 January 2019)

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini went over an analysis of the current popular facial recognition algorithms.

Here were the overall accuracy results when guessing the gender of a face:

  • Microsoft: 93.7 percent
  • Face++: 90 percent
  • IBM: 87.9 percent

Shown in this way, there appears to be little problem. Of course, society is a lot more diverse and algorithms need to be accurate for all.

When separated between males and females, a greater disparity becomes apparent:

  • Microsoft: 89.3 percent (females), 97.4 percent (males)
  • Face++: 78.7 percent (females), 99.3 percent (males)
  • IBM: 79.7 percent (females), 94.4 percent (males)

Here the underrepresentation of women in STEM careers begins to show. China-based Face++ suffers the worst, likely a result of the country’s more severe gender gap (PDF) compared to the US.

Splitting between skin type also increases the disparity:

  • Microsoft: 87.1 percent (darker), 99.3 percent (lighter)
  • Face++: 83.5 percent (darker), 95.3 percent (lighter)
  • IBM: 77.6 percent (darker), 96.8 percent (lighter)

The difference here is again likely to do with a racial disparity in STEM careers. A gap of between 12 and 19 percentage points is observed between darker and lighter skin tones.

So far, the results are in line with a 2010 study by researchers at NIST and the University of Texas in Dallas. The researchers found (PDF) algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

“We did something that hadn’t been done in the field before, which was doing intersectional analysis,” explains Buolamwini. “If we only do single axis analysis – we only look at skin type, only look at gender… – we’re going to miss important trends.”
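Reproducing that kind of intersectional breakdown is straightforward once per-image predictions and labels are available: rather than reporting a single overall accuracy, group the evaluation set by every combination of attributes. A minimal sketch with invented data:

```python
import pandas as pd

# Invented evaluation results: one row per test image.
results = pd.DataFrame({
    "skin":    ["lighter", "lighter", "darker", "darker", "darker", "lighter", "darker", "lighter"],
    "gender":  ["male", "female", "male", "female", "female", "male", "male", "female"],
    "correct": [1, 1, 1, 0, 1, 1, 1, 1],
})

# Single-axis analysis can hide the worst-performing subgroup...
print(results.groupby("gender")["correct"].mean())
print(results.groupby("skin")["correct"].mean())

# ...while intersectional analysis reports accuracy per combination.
print(results.groupby(["skin", "gender"])["correct"].mean().sort_values())
```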

Here is where the results get most concerning. Results are in descending order from most accurate to least:

Microsoft

  • Lighter Males: 100 percent
  • Lighter Females: 98.3 percent
  • Darker Males: 94 percent
  • Darker Females: 79.2 percent

Face++

  • Darker Males: 99.3 percent
  • Lighter Males: 99.2 percent
  • Lighter Females: 94 percent
  • Darker Females: 65.5 percent

IBM

  • Lighter Males: 99.7 percent
  • Lighter Females: 92.9 percent
  • Darker Males: 88 percent
  • Darker Females: 65.3 percent

The lack of accuracy with regards to females with darker skin tones is of particular note. Two of the three algorithms would get it wrong in approximately one-third of occasions.

Just imagine surveillance being used with these algorithms. Lighter skinned males would be recognised in most cases, but darker skinned females would be stopped often. That could be a lot of mistakes in areas with high footfall such as airports.

Prior to making her results public, Buolamwini sent the results to each company. IBM responded the same day and said their developers would address the issue.

When she reassessed IBM’s algorithm, the accuracy when assessing darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, for lighter females from 92.9 percent to 97.6 percent, and for lighter males it stayed the same at 99.7 percent.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

You can watch Buolamwini’s full presentation at the WEF here.

AI is at risk of bias due to serious gender gap problem (18 December 2018)

AI needs to be created by a diverse range of developers to prevent bias, but the World Economic Forum (WEF) has found a serious gender gap.

Gender gaps in STEM careers have been a problem for some time, but it’s rare for the gender of a product’s developers to matter to the end product itself. AI is about to be everywhere, and it matters that it’s representative of those it serves.

In a report published this week, the WEF wrote:

“The equal contribution of women and men in this process of deep economic and societal transformation is critical.

More than ever, societies cannot afford to lose out on the skills, ideas and perspectives of half of humanity to realize the promise of a more prosperous and humancentric future that well-governed innovation and technology can bring.”

Shockingly, the WEF report found less than one-fourth of roles in the industry are being filled by women. To put that in perspective, the AI gender gap is around three times larger than other industry talent pools.

“It is absolutely crucial that those people who create AI are representative of the population as a whole,” said Kay Firth-Butterfield, WEF’s head of artificial intelligence and machine learning.

Bias in code can cause AI to perform better for certain groups of society than others, potentially giving them an advantage. This bias is rarely intentional but has already found its way into AI developments.

A recent test of Amazon’s facial recognition technology by the ACLU (American Civil Liberties Union) found it erroneously labelled those with darker skin colours as criminals more often.

Similarly, a 2010 study by researchers at NIST and the University of Texas in Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

More recently, Google released a predictive text feature within Gmail where the algorithm made biased assumptions, such as referring to a nurse with female pronouns.

It’s clear that addressing the gender gap is more pressing than ever.

You can find the full report here.

ACLU finds Amazon’s facial recognition AI is racially biased (27 July 2018)

A test of Amazon’s facial recognition technology by the ACLU has found it erroneously labelled those with darker skin colours as criminals more often.

Bias in AI technology, when used by law enforcement, has raised concerns of infringing on civil rights by automated racial profiling.

A 2010 study by researchers at NIST and the University of Texas in Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

The ACLU (American Civil Liberties Union) ran a test of Amazon’s facial recognition technology on members of Congress to see if they match with a database of criminal mugshots.

Amazon’s Rekognition tool was used to compare pictures of all members of the House and Senate against 25,000 arrest photos; the false matches disproportionately affected members of the Congressional Black Caucus.

In a blog post, the ACLU said:

“The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country.

These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance.”

AWS (Amazon Web Services) disputed the methodology used by the ACLU.

The company says the default setting of 80 percent confidence was left on. For law enforcement, Amazon says it suggests the option to only register matches of 95 percent confidence or above.
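The threshold is a parameter the caller passes on each request. As a rough sketch of how that looks with the AWS SDK for Python (boto3), with placeholder bucket and key names, and response fields that should be checked against the Rekognition documentation:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")


def face_similarities(source_key: str, target_key: str, threshold: float) -> list:
    """Compare two images in S3 and return similarities at or above the threshold."""
    response = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": source_key}},
        TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": target_key}},
        SimilarityThreshold=threshold,  # the setting at the centre of the dispute
    )
    return [match["Similarity"] for match in response.get("FaceMatches", [])]


# The ACLU test used the default of 80; AWS recommends 95 or above for law enforcement.
lenient_matches = face_similarities("congress/member.jpg", "mugshots/photo.jpg", threshold=80)
strict_matches = face_similarities("congress/member.jpg", "mugshots/photo.jpg", threshold=95)
```

At a threshold of 80, borderline similarities count as matches; raising it to 95 discards them, which is why the choice of default matters so much in a law-enforcement context.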

Only 28 members of Congress were incorrectly flagged as being among the criminal mugshots, but nearly 40 percent of the wrong matches were people of darker skin colours. What really puts it all into perspective, however, is that only 20 percent of Congress are people of colour.

“Our test reinforces that face surveillance is not safe for government use,” said Jacob Snow, Technology and Civil Liberties Attorney at the ACLU Foundation of Northern California. “Face surveillance will be used to power discriminatory surveillance and policing that targets communities of color, immigrants, and activists. Once unleashed, that damage can’t be undone.”

Amazon is actively marketing its facial recognition technology to law enforcement agencies such as police in Washington County, Oregon, and Orlando, Florida. The company promotes it as a way to identify people in real-time from both surveillance footage and officers’ body cameras.

What are your thoughts on the ACLU’s findings?

 
