US adds Chinese AI firms to ban list citing abuses against Muslims in Xinjiang

A group of Chinese AI and facial recognition firms has been added to a US blacklist citing rights abuses against Muslims in Xinjiang.

Twenty-eight Chinese firms have been added (PDF) to the US government’s “entity list”, which prohibits American companies from maintaining any links with them.

The US government said the firms were blacklisted for playing a role in the "implementation of China's campaign of repression, mass arbitrary detention, and high-technology...

UK police are concerned AI will lead to bias and over-reliance on automation

British police have expressed concern that using AI in their operations may lead to increased bias and an over-reliance on automation.

A study commissioned by UK government advisory body the Centre for Data Ethics and Innovation warned that police felt AI may "amplify" prejudices.

Fifty experts were interviewed by the Royal United Services Institute (RUSI) for the research, including senior police officers.

Racial profiling continues to be a huge problem.

Police in China will use AI face recognition to identify ‘lost’ elderly

Chinese police hope to use AI-powered facial recognition, in combination with the nation's mass surveillance network, to identify lost elderly people.

The country's surveillance network is often scrutinised for being invasive, but the ability to detect potentially vulnerable people helps to shift the perception that it primarily benefits the government.

Public data suggests around 500,000 elderly people get lost each year, the equivalent of around 1,370 per day. About 72...

EmoNet: Emotional neural network automatically categorises feelings

A neural network called EmoNet has been designed to automatically categorise the feelings of an individual.

EmoNet was created by researchers from the University of Colorado and Duke University and could one day help AIs to understand and react to human emotions.

The neural network is capable of accurately classifying images into 11 emotions, although some with higher confidence than others.

‘Craving,’ ‘sexual desire,’ and ‘horror’ were able to be...
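Classifying an image into one of 11 emotion categories, each with its own confidence score, typically means taking a softmax over the network’s 11 final-layer outputs and reading off the most probable label. The sketch below illustrates that idea in plain Python; the label list and the example output vector are illustrative assumptions, not EmoNet’s actual architecture or categories (only ‘craving’, ‘sexual desire’, and ‘horror’ are named in the article).

```python
import math

# Hypothetical 11-category label set: the article names 'craving',
# 'sexual desire', and 'horror'; the remaining eight are placeholders.
EMOTIONS = [
    "craving", "sexual desire", "horror", "amusement", "awe",
    "boredom", "confusion", "disgust", "excitement", "sadness", "surprise",
]

def softmax(logits):
    """Turn raw final-layer outputs into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the most probable emotion label and its confidence."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]

# Example with a made-up 11-dimensional output vector:
label, confidence = classify(
    [0.1, 0.3, 2.5, 0.0, -1.0, 0.2, 0.1, 0.0, 0.4, 0.3, 0.2]
)
```

Because softmax yields a full distribution, “some with higher confidence than others” falls out naturally: a dominant logit produces a high confidence, while near-equal logits spread the probability mass thinly.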

EU AI Expert Group: Ethical risks are ‘unimaginable’

The EU Commission’s AI expert group has published its assessment of the rapidly-advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, tracking individuals, and ‘scoring’ people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes...

Amazon joins calls to establish facial recognition standards

Amazon has put its weight behind the growing number of calls from companies, individuals, and rights groups to establish facial recognition standards.

Michael Punke, VP of Global Public Policy at Amazon Web Services, said:

"Over the past several months, we've talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks.

It's critical that any legislation protect...

Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini went over an analysis of currently popular facial recognition algorithms.

Here were the overall accuracy results when...

Chinese AI darling SenseTime wants facial recognition standards

The CEO of Chinese AI darling SenseTime wants to see facial recognition standards established for a ‘healthier’ industry.

SenseTime is among China’s most renowned AI companies. Back in April, we reported it had become the world’s most funded AI startup.

Part of the company’s monumental success is the popularity of facial recognition in China, where it’s used in many aspects of citizens’ lives.

Just yesterday, game developer Tencent announced it’s testing facial recognition to...

Amazon expert suggests AI regulation after ACLU’s bias findings

An expert from Amazon has suggested the government should set a minimum confidence level for the use of facial recognition in law enforcement.

Dr. Matt Wood, GM of Deep Learning and AI at Amazon Web Services, made the suggestion in a blog post responding to the ACLU’s (American Civil Liberties Union) findings of racial bias in Amazon’s ‘Rekognition’ facial recognition algorithm.

The ACLU found Rekognition erroneously labelled those with darker skin...
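A minimum confidence level works as a policy floor: any candidate match scoring below the threshold is simply discarded rather than acted on. The sketch below shows that filtering step under stated assumptions; the `Match` structure and the 99% floor are illustrative only, not Amazon’s actual API or a figure given in this article.

```python
from dataclasses import dataclass

@dataclass
class Match:
    """One candidate match returned by a face-search query (hypothetical)."""
    person_id: str
    confidence: float  # similarity score as a percentage, 0-100

# Hypothetical policy floor for law-enforcement use of face search.
MIN_CONFIDENCE = 99.0

def filter_matches(matches, threshold=MIN_CONFIDENCE):
    """Discard any candidate match scoring below the policy's confidence floor."""
    return [m for m in matches if m.confidence >= threshold]

# Example: two of three candidates clear the threshold.
candidates = [
    Match("suspect-a", 99.4),
    Match("suspect-b", 87.2),  # below threshold: discarded
    Match("suspect-c", 99.1),
]
kept = filter_matches(candidates)
```

The design point is that the threshold is enforced by the consuming system, not the model: the recognition service still returns low-confidence candidates, and it is the policy layer that decides none of them may be treated as an identification.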

ACLU finds Amazon’s facial recognition AI is racially biased

A test of Amazon’s facial recognition technology by the ACLU found it erroneously labelled those with darker skin colours as criminals more often.

Bias in AI technology, when used by law enforcement, has raised concerns of infringing on civil rights through automated racial profiling.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western...