San Francisco hopes AI will prevent bias in prosecutions

San Francisco will soon implement AI in a bid to prevent bias when deciding whether to prosecute suspects.

Even subconscious human biases can impact courtroom decisions. Racial bias in the legal system is particularly well-documented and often leads to individuals with darker skin being prosecuted more often, or sentenced more harshly, than people with lighter skin accused of similar crimes.

Speaking during a press briefing today, SF District Attorney George Gascón...

UN: AI voice assistants fuel stereotype women are ‘subservient’

A report from the UN claims AI voice assistants like Alexa and Siri are fueling the stereotype women are ‘subservient’.

Published by UNESCO (United Nations Educational, Scientific and Cultural Organization), the 146-page report titled “I’d blush if I could” highlights that the market is dominated by female voice assistants.

According to the researchers, the almost exclusive use of female voice assistants fuels stereotypes that women are "obliging, docile and...

UK government announces board members of AI Council

The UK government has announced the names of board members appointed to its dedicated AI Council.

With the UK being one of the global leaders in AI, the international community will be looking to see who it has appointed to its council and awaiting its guidance.

Digital Secretary, Jeremy Wright, said:

“Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting...

AI Expo Global: Fairness and safety in artificial intelligence

AI News sat down with Faculty's head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year's AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightfully, people are becoming increasingly concerned about unfair and unsafe AIs. Human biases are seeping into algorithms which poses a...

Google axes its AI ethics board after less than a week

Google has axed its troubled AI ethics board less than a week after it had been created.

The board appeared fairly representative of society from the outset, although perhaps too much so; critics argued some views should not be given a platform.

In a statement, Google wrote:

"It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board.

We'll continue to be responsible in our work on the...

Google’s AI ethics panel is already receiving backlash

Google is already facing criticism over its AI ethics panel featuring a conservative figure with some controversial views.

Kay Coles James is the president of conservative think tank the Heritage Foundation. The foundation has a history of opposing LGBTQ and immigrant rights, positions which many Googlers feel should not be promoted.

A group of Google employees which calls itself ‘Googlers Against Transphobia’ wrote a post criticising Mountain View’s appointment of the Heritage...

Don’t be evil: Google is creating a dedicated AI ethics panel

Google is aiming to prevent societal disasters caused by its AI technology with the creation of a dedicated ethics panel.

The panel is called the Advanced Technology External Advisory Council (ATEAC) and features a range of academics and experts from around the world.

Eight people are currently on the panel, with some from as far afield as Hong Kong and South Africa. Among the roster is former US deputy secretary of state William Joseph Burns and University of Bath associate...

Report: 94 percent of IT leaders want greater focus on AI ethics

A study from SnapLogic has found that 94 percent of IT decision makers across the UK and US want a greater focus on ethical AI development.

Bias in algorithms continues to be a problem and is among the biggest barriers to societal adoption. Facial recognition algorithms, for example, have been found to be far less accurate for some parts of society than others.
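Disparities like the facial recognition accuracy gap described above can be surfaced by disaggregating a model's accuracy by demographic group. The sketch below is purely illustrative (the group labels and data are hypothetical, not from any study cited here), but it shows the basic idea of the kind of check auditors run.

```python
# Hypothetical sketch: disaggregating a classifier's accuracy by
# demographic group to surface the kind of disparity described above.
# The group labels and data are illustrative, not from the article.

def per_group_accuracy(y_true, y_pred, groups):
    """Return a {group: accuracy} mapping for each demographic group."""
    stats = {}
    for g in set(groups):
        # Collect the (true, predicted) label pairs for this group only
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups) if gr == g]
        correct = sum(1 for t, p in pairs if t == p)
        stats[g] = correct / len(pairs)
    return stats

# Illustrative labels: 1 = the target face, 0 = someone else
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(per_group_accuracy(y_true, y_pred, groups))
# Group "a" is classified correctly 3 times out of 4; group "b" only 2 of 4
```

A model whose overall accuracy looks acceptable can still hide exactly this kind of gap, which is why fairness audits report metrics per group rather than in aggregate.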

Without addressing these issues, we’re in danger of automating problems such as racial profiling. Public...

Stanford’s institute ensuring AI ‘represents humanity’ lacks diversity

An institute established by Stanford University to address concerns that AI may not represent the whole of humanity is lacking in diversity.

The goal of the Institute for Human-Centered Artificial Intelligence is admirable, but the fact it consists primarily of white males brings into doubt its ability to ensure adequate representation.

Cybersecurity expert Chad Loder noticed that not a single member of Stanford’s new AI faculty was black. Tech site Gizmodo reached out...

UK government investigates AI bias in decision-making

The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people's lives.

A browse through our ‘ethics’ category here on AI News will highlight the serious problem of bias in today's algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind.
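One concrete check an investigation like the CDEI's could run on a decision-making algorithm is comparing the rate of favourable outcomes each group receives. The sketch below is an assumption-laden illustration (the names and data are hypothetical) of the "four-fifths rule" heuristic used in US employment law, where a ratio below 0.8 between the lowest and highest selection rates flags potential adverse impact.

```python
# Hypothetical sketch of a simple disparity audit on algorithmic
# decisions. All names and data are illustrative.

def selection_rates(decisions, groups):
    """Return the fraction of favourable decisions (1s) per group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def disparity_ratio(rates):
    """Lowest selection rate divided by the highest; < 0.8 flags concern."""
    return min(rates.values()) / max(rates.values())

decisions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]  # 1 = favourable decision
groups = ["x"] * 5 + ["y"] * 5

rates = selection_rates(decisions, groups)
print(rates)                   # group "x" favoured 80% of the time, "y" only 20%
print(disparity_ratio(rates))  # 0.25, well below the 0.8 threshold
```

Real audits go further (confounders, statistical significance, error-rate parity), but even this simple ratio makes the "parts of society left behind" concern measurable rather than anecdotal.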

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous...