AI Expo Global: Fairness and safety in artificial intelligence

AI News sat down with Faculty's head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year's AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightly, people are becoming increasingly concerned about unfair and unsafe AIs. Human biases are seeping into algorithms, which poses a...

Lack of STEM diversity is causing AI to have a ‘white male’ bias

A report from New York University's AI Now Institute has found a predominantly white male coding workforce is causing bias in algorithms.

The report highlights that – while gradually narrowing – the lack of diverse representation at major technology companies such as Microsoft, Google, and Facebook is causing AIs to cater more towards white males.

For example, at Facebook just 15 percent of the company's AI staff are women. The problem is even more substantial at...

EU AI Expert Group: Ethical risks are ‘unimaginable’

The EU Commission’s AI expert group has published its assessment of the rapidly-advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, tracking individuals, and ‘scoring’ people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes...

EU Commission advances work on AI ethical guidelines

The EU Commission is advancing work on the establishment of AI ethical guidelines to ensure they can be put into practice.

A group of industry experts were appointed in 2016 to establish guidelines which ensure that AI is developed sensibly.

There are seven key pillars to the EU's ethical AI strategy:

Human agency and oversight
Robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination, and fairness
Societal and environmental...

Google axes its AI ethics board after less than a week

Google has axed its troubled AI ethics board less than a week after it had been created.

The board appeared fairly representative of society from the outset, although perhaps too much so; arguably, some views shouldn't be represented at all.

In a statement, Google wrote:

"It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board.

We'll continue to be responsible in our work on the...

AI Experts: Dear Amazon, stop selling facial recognition to law enforcement

A group of AI experts has signed an open letter to Amazon demanding the company stop selling facial recognition to law enforcement following findings of bias.

Back in January, AI News reported on findings by Algorithmic Justice League founder Joy Buolamwini, who researched some of the world's most popular facial recognition algorithms.

Buolamwini found most of the algorithms were biased and misidentified subjects with darker skin colours and/or females more...

Google’s AI ethics panel is already receiving backlash

Google is already facing criticism of its AI ethics panel over the inclusion of a conservative figure with some controversial views.

Kay Coles James is the president of conservative think tank the Heritage Foundation. The foundation has a history of opposing LGBTQ and immigrant rights, which many Googlers feel should not be promoted.

A group of Google employees calling itself ‘Googlers Against Transphobia’ wrote a post criticising Mountain View’s appointment of the Heritage...

Don’t be evil: Google is creating a dedicated AI ethics panel

Google is aiming to prevent societal disasters caused by its AI technology with the creation of a dedicated ethics panel.

The panel is called the Advanced Technology External Advisory Council (ATEAC) and features a range of academics and experts from around the world.

Eight people are currently on the panel, some from as far afield as Hong Kong and South Africa. Among the roster are former US deputy secretary of state William Joseph Burns and University of Bath associate...

Report: 94 percent of IT leaders want greater focus on AI ethics

A study from SnapLogic has found that 94 percent of IT decision makers across the UK and US want a greater focus on ethical AI development.

Bias in algorithms continues to be a problem and is among the biggest barriers to societal adoption. Facial recognition algorithms, for example, have been found to be far less accurate for women and people with darker skin tones than for other groups.

Without addressing these issues, we’re in danger of automating problems such as racial profiling. Public...

Stanford’s institute ensuring AI ‘represents humanity’ lacks diversity

An institute established by Stanford University to address concerns that AI may not represent the whole of humanity is lacking in diversity.

The goal of the Institute for Human-Centered Artificial Intelligence is admirable, but the fact that it consists primarily of white males casts doubt on its ability to ensure adequate representation.

Cybersecurity expert Chad Loder noticed that not a single member of Stanford’s new AI faculty was black. Tech site Gizmodo reached out...