AI Expo Global: Fairness and safety in artificial intelligence

AI News sat down with Faculty's head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year's AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightly, people are becoming increasingly concerned about unfair and unsafe AI. Human biases are seeping into algorithms, which poses a...

EU AI Expert Group: Ethical risks are ‘unimaginable’

The EU Commission’s AI expert group has published its assessment of the rapidly-advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, the tracking of individuals, and ‘scoring’ people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes...

AI tags potential criminals before they’ve done anything

British police want to use AI to highlight who is at risk of becoming a criminal before they’ve actually committed any crime. Although it sounds like a dystopian nightmare, there are clear benefits: resources and outreach programs can be allocated to attempt to prevent a crime, stop anyone becoming a victim, and avoid the costs associated with prosecuting and jailing someone. With prisons overburdened and space limited, reducing the need to lock someone up is a win for everyone....

Amazon expert suggests AI regulation after ACLU’s bias findings

An expert from Amazon has suggested the government should implement a minimum confidence level for the use of facial recognition in law enforcement. Dr. Matt Wood, GM of Deep Learning and AI at Amazon Web Services, made the suggestion in a blog post responding to the ACLU’s (American Civil Liberties Union) findings of racial bias in Amazon’s ‘Rekognition’ facial recognition algorithm. In its test, the ACLU found Rekognition erroneously labelled those with darker skin...

ACLU finds Amazon’s facial recognition AI is racially biased

A test of Amazon’s facial recognition technology by the ACLU has found it erroneously labelled those with darker skin colours as criminals more often. Bias in AI technology, when used by law enforcement, has raised concerns of infringing on civil rights through automated racial profiling. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western...

INTERPOL investigates how AI will impact crime and policing

INTERPOL hosted an event in Singapore bringing leading experts together with the aim of examining how AI will affect crime and prevention. The event, organised by INTERPOL and the UNICRI Centre for AI and Robotics, was held at the former’s Global Complex for Innovation. Experts from across industries gathered to discuss issues and several private sector companies gave live demonstrations of related projects. Some technological advances in AI pose a threat. In a recent interview with Irakli...

Amazon is next to face employee protest over government contracts


Mere days after Google and Microsoft staff protested their employers’ controversial government contracts, Amazon is facing its own internal revolt. Amazon employees are not all too pleased with their company’s sale of facial recognition software and other services to US government bodies. Much like Google and Microsoft’s employees, who demanded their respective companies never undertake work that may cause social or physical harm, a similar letter was posted on Amazon’s internal...

Don’t Be Evil: Google publishes its AI ethical principles following backlash


Following the backlash over its Project Maven plans to develop AI for the US military, Google has since withdrawn from the project and published its ethical principles. Project Maven was Google’s collaboration with the US Department of Defense. In March, leaks indicated that Google supplied AI technology to the Pentagon to help analyse drone footage. The following month, over 4,000 employees signed a petition demanding that Google's management cease work on Project Maven and promise to never again...

Russian startup is building a controversial ‘ethnicity-detecting’ AI

A startup from Russia is building an AI which uses facial recognition to determine ethnicity, prompting fears it could be used for automated racial profiling. NtechLab lists ‘ethnicity detection’ as an upcoming feature of its solution. The algorithm promises the ability to examine people and determine their ethnicity. An image, which has since been pulled as the result of backlash, showed classifications of people including ‘European’, ‘African’, and ‘Arabic’. The company is...

Information Commissioner targets intrusive facial recognition

Facial recognition offers huge opportunities, but the Information Commissioner is more concerned about how it could impact privacy. In a post on the ICO blog, Information Commissioner Elizabeth Denham highlights the advantages and disadvantages of facial recognition. “I have identified FRT by law enforcement as a priority area for my office and I recently wrote to the Home Office and the NPCC setting out my concerns,” Denham wrote. “Should my concerns not be addressed, I will...