Society – AI News
Artificial Intelligence News – https://news.deepgeniusai.com

Police use of Clearview AI’s facial recognition increased 26% after Capitol raid
11 January 2021 – https://news.deepgeniusai.com/2021/01/11/police-use-clearview-ai-facial-recognition-increased-26-capitol-raid/

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping the data of people from across the web without their explicit consent, a practice which has naturally raised some eyebrows—including the ACLU’s which called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for Clearview AI’s system.
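
Clearview AI has never published how its system works. Purely as an illustration of how face matching against a pre-built image index works in general, here is a minimal sketch using the open-source face_recognition library; the file names and the 0.6 distance threshold are assumptions for the example, not details of Clearview's product.

```python
# Illustrative only: a generic face search against a pre-built index,
# NOT Clearview AI's actual system. File names and the 0.6 threshold
# are assumptions for the example.
import numpy as np
import face_recognition

# Encodings previously computed from a folder of indexed photos.
index_encodings = np.load("index_encodings.npy")               # shape: (N, 128)
index_sources = open("index_sources.txt").read().splitlines()  # N source URLs

def search(probe_path, threshold=0.6):
    """Return indexed sources whose face encoding is close to the probe face."""
    image = face_recognition.load_image_file(probe_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return []  # no face found in the probe image
    distances = face_recognition.face_distance(index_encodings, encodings[0])
    return [(index_sources[i], float(d))
            for i, d in enumerate(distances) if d < threshold]

print(search("probe.jpg"))
```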

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Whether you call them protesters or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew bipartisan condemnation.

In comments to The New York Times, Clearview AI CEO Hoan Ton-That said the company saw “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.

Given the number of individuals involved, law enforcement has a gargantuan task to identify and locate the people that went far beyond exercising their right to peaceful protest and invaded a federal building, caused huge amounts of damage, and threatened elected representatives and staff.

The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

While Clearview AI’s facial recognition tech continues to have widespread use in the US, some police departments have taken the independent decision to ban officers from using such systems due to the well-documented inaccuracies which particularly affect minority communities.

OpenAI’s latest neural network creates images from written descriptions
6 January 2021 – https://news.deepgeniusai.com/2021/01/06/openai-latest-neural-network-creates-images-written-descriptions/

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains.

Generated images can range from drawings and objects to manipulated real-world photos; OpenAI provides examples of each in its announcement post.
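
OpenAI has not released DALL·E's code or weights, but it describes the model as an autoregressive transformer that receives the caption and the image as a single stream of tokens. The sketch below is a toy illustration of that idea only: the vocabulary sizes, dimensions, and 16-token "image" are invented, the caption is random integers standing in for tokenised text, the model is untrained, and the final step (decoding image tokens back to pixels with a discrete VAE) is omitted.

```python
# Toy illustration of the idea OpenAI describes: one autoregressive
# transformer over a single stream of text tokens followed by image tokens.
# Vocabulary sizes, dimensions, and the 16-token "image" are invented, and
# the model is untrained, so this shows only the sampling loop.
import torch
import torch.nn as nn

TEXT_VOCAB, IMAGE_VOCAB = 1000, 512   # assumed toy vocabulary sizes
IMAGE_TOKENS = 16                     # the real model generates a 32x32 grid of tokens

class TinyTextToImage(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.to_logits = nn.Linear(dim, IMAGE_VOCAB)

    def forward(self, tokens):
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.transformer(x, mask=mask)   # causal self-attention over the stream
        return self.to_logits(x[:, -1])      # logits for the next image token

model = TinyTextToImage()
caption = torch.randint(0, TEXT_VOCAB, (1, 8))   # stand-in for a tokenised caption
sequence = caption.clone()
for _ in range(IMAGE_TOKENS):                    # sample image tokens one at a time
    probs = model(sequence).softmax(dim=-1)
    next_token = torch.multinomial(probs, 1) + TEXT_VOCAB
    sequence = torch.cat([sequence, next_token], dim=1)

image_tokens = sequence[:, 8:] - TEXT_VOCAB      # a discrete VAE would decode these to pixels
print(image_tokens)
```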

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for the kinds of disinformation campaigns recently seen around COVID-19, 5G, and various democratic processes, similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without good track records. However, as humans, we’re still used to believing what we can see with our eyes. Fake news with fake supporting imagery is a rather convincing combination.

Much like it argued with GPT-3, OpenAI essentially says that – by putting the technology out there as responsibly as possible – it helps to raise awareness and drives research into how the implications can be tackled before such neural networks are inevitably created and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

Google is telling its scientists to give AI a ‘positive’ spin
24 December 2020 – https://news.deepgeniusai.com/2020/12/24/google-telling-scientists-give-ai-positive-spin/

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with legal, policy, and public relations teams prior to publishing anything on topics which could be deemed sensitive, such as sentiment analysis or the categorisation of people by race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one word against another, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom—but that increasingly appears not to be the case.

(Photo by Mitchell Luo on Unsplash)

Facebook is developing a news-summarising AI called TL;DR
16 December 2020 – https://news.deepgeniusai.com/2020/12/16/facebook-developing-news-summarising-ai-tldr/

Facebook is developing an AI called TL;DR which summarises news into shorter snippets.

Anyone who’s spent much time on the web will know what TL;DR stands for⁠—but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”.

It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now even specialise in short, at-a-glance news.
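
Facebook has not said how TL;DR works under the hood. As an illustration of the general technique – abstractive summarisation with a sequence-to-sequence model – the sketch below condenses a paragraph using an off-the-shelf public model; the choice of facebook/bart-large-cnn and the length limits are example assumptions and are unrelated to the TL;DR product.

```python
# Illustrative only: Facebook has not published TL;DR's implementation.
# This shows generic abstractive summarisation with an off-the-shelf
# public model; the model choice and length limits are example assumptions.
from transformers import pipeline

summariser = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Facebook is developing an AI called TL;DR which summarises news into "
    "shorter snippets. Critics worry that stripping context from stories "
    "could make misinformation easier to spread, while supporters argue "
    "that busy readers skim long articles anyway and retain even less."
)

snippet = summariser(article, max_length=40, min_length=10, do_sample=False)
print(snippet[0]["summary_text"])
```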

The problem is, it’s hard to get the full picture of a story in just a brief snippet.

In a world where fake news can be posted and spread like wildfire across social networks – almost completely unchecked – it feels even more dangerous to normalise “news” being delivered in short-form without full context.

There are two sides to most stories, and it’s hard to see how both can be summarised properly.

However, the argument also goes the other way. When articles are too long, people have a natural habit of skim-reading them. Skimming in this way often means people then believe they’re fully informed on a topic… when we know that’s often not the case.

TL;DR needs to strike a healthy balance: summarising the news, but not so heavily that people don’t get enough of the story. Otherwise, it could worsen existing societal problems with misinformation, fake news, and lack of media trust.

According to BuzzFeed, Facebook showed off TL;DR during an internal meeting this week. 

Facebook appears to be planning to add an AI-powered assistant to TL;DR which can answer questions about the article. The assistant could help to clear up anything the reader is uncertain about, but it will also have to prove it is free of the biases which arguably affect all current algorithms to some extent.

The AI will also have to be very careful not to take things like quotes out of context and end up further automating the spread of misinformation.

There’s also going to be a debate over what sources Facebook should use. Should Facebook stick only to the “mainstream media” which many believe follow the agendas of certain powerful moguls? Or serve news from smaller outlets without much historic credibility? The answer probably lies somewhere in the middle, but it’s going to be difficult to get right.

Facebook continues to be a major source of misinformation – in large part driven by algorithms promoting such content – and it’s had little success so far in any news-related efforts. I think most people will be expecting this to be another disaster waiting to happen.

(Image Credit: Mark Zuckerberg by Alessio Jacona under CC BY-SA 2.0 license)

EU human rights agency issues report on AI ethical considerations
14 December 2020 – https://news.deepgeniusai.com/2020/12/14/eu-human-rights-agency-issues-report-ai-ethical-considerations/

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithmic biases could end up automating societal problems such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another is refused.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

Former NHS surgeon creates AI ‘virtual patient’ for remote training
11 December 2020 – https://news.deepgeniusai.com/2020/12/11/former-nhs-surgeon-ai-virtual-patient-remote-training/

A former NHS surgeon has created an AI-powered “virtual patient” which helps to keep skills sharp during a time when most in-person training is on hold.

Dr Alex Young is a trained orthopaedic and trauma surgeon who founded Virti and set out to use emerging technologies to provide immersive training for both new healthcare professionals and experienced ones looking to hone their skills.

COVID-19 has put most in-person training on hold to minimise transmission risks. Hospitals and universities across the UK and US are now using the virtual patient as a replacement—including our fantastic local medics and surgeons at the Bristol NHS Foundation Trust.

The virtual patient uses Natural Language Processing (NLP) and ‘narrative branching’ to allow medics to roleplay lifelike clinical scenarios. Medics and trainees can interact with the virtual patient using their tablet, desktop, or even VR/AR headsets for a more immersive experience.

Dr Alex Young comments:

“We’ve been working with healthcare organisations for several years, but the pandemic has created really specific challenges that technology is helping to solve. It’s no longer safe or practicable to have 30 medics in a room with an actor, honing their clinical soft-skills. With our virtual patient technology, we’ve created an extremely realistic and repeatable experience that can provide feedback in real time. This means clinicians and students can continue to learn valuable skills.

Right now, communication with patients can be very difficult. There’s a lot of PPE involved and patients are often on their own. Having healthcare staff who are skilled in handling these situations can therefore make a huge difference to that patient’s experience.”

Some of the supported scenarios include: breaking bad news, comforting a patient in distress, and communicating effectively whilst their faces are obscured by PPE. Virti’s technology was also used at the peak of the pandemic to train NHS staff in key skills required on the front line, such as how to safely use PPE, how to navigate an unfamiliar intensive care ward, how to engage with patients and their families, and how to use a ventilator.

Tom Woollard, West Suffolk Hospital Clinical Skills and Simulation Tutor, who used the Virti platform at the peak of the COVID pandemic, comments:

“We’ve been using Virti’s technology in our intensive care unit to help train staff who have been drafted in to deal with COVID-19 demand.

The videos which we have created and uploaded are being accessed on the Virti platform by nursing staff, physiotherapists and Operational Department Practitioners (ODPs) to orient them in the new environment and reduce their anxiety.

The tech has helped us to reach a large audience and deliver formerly labour-intensive training and teaching which is now impossible with social distancing.

In the future, West Suffolk will consider applying Virti tech to other areas of hospital practice.”

The use of speech recognition, NLP, and ‘narrative branching’ provides a realistic simulation of how a patient would likely respond—providing lifelike responses in speech, body language, and mannerisms.
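
Virti has not published its implementation, so the sketch below only illustrates the general shape of "narrative branching": scenario states in a graph, with the transition picked from the trainee's utterance. A real system would use speech recognition and a trained NLP intent model; the keyword matcher here is a deliberately crude stand-in, and the scenario text is invented.

```python
# Minimal sketch of "narrative branching": scenario states in a graph, with
# the next state chosen from the trainee's utterance. The keyword matcher is
# a deliberately crude stand-in for a real NLP intent model.
BRANCHES = {
    "start": {
        "prompt": "The patient looks anxious and asks: 'Is it bad news, doctor?'",
        "next": {"empathy": "reassured", "blunt": "distressed"},
    },
    "reassured": {"prompt": "The patient calms down and asks about treatment options.", "next": {}},
    "distressed": {"prompt": "The patient becomes tearful and stops listening.", "next": {}},
}

def classify_intent(utterance: str) -> str:
    """Toy stand-in for an NLP intent classifier."""
    empathic = {"sorry", "understand", "together", "support"}
    return "empathy" if empathic & set(utterance.lower().split()) else "blunt"

def step(state: str, utterance: str) -> str:
    """Move to the next scenario state based on what the trainee said."""
    intent = classify_intent(utterance)
    return BRANCHES[state]["next"].get(intent, state)

state = "start"
print(BRANCHES[state]["prompt"])
state = step(state, "I am sorry, we will get through this together")
print(BRANCHES[state]["prompt"])   # the empathetic reply leads to the 'reassured' branch
```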

The AI delivers real-time feedback to the user so they can learn and improve. With upwards of 70 percent of complaints against health professionals and care providers attributable to poor communication, the virtual patient could help to deliver better care while reducing time spent handling complaints.

Virti’s groundbreaking technology has – quite rightly – been named one of TIME’s best inventions of 2020.

Google fires ethical AI researcher Timnit Gebru after critical email
4 December 2020 – https://news.deepgeniusai.com/2020/12/04/google-fires-ethical-ai-researcher-timnit-gebru-email/

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities. Some recent cases validate her claims about large models and datasets in general.

For example, MIT was forced to remove a large dataset earlier this year called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that, because the dataset’s 80 million images are just 32×32 pixels, manual inspection would be almost impossible and couldn’t guarantee that all offensive images would be removed.

Gebru reportedly sent an email to the Google Brain Women and Allies listserv that is “inconsistent with the expectations of a Google manager.”

In the email, Gebru expressed her frustration with a perceived lack of progress in hiring women at Google. Gebru claimed she was also told not to publish a piece of research and advised employees to stop filling out diversity paperwork because it didn’t matter.

On top of the questionable reasons for her firing, Gebru says her former colleagues were emailed saying she offered her resignation—which she claims was not the case.

Platformer obtained an email from Jeff Dean, Head of Google Research, which was sent to employees and offers his take on Gebru’s claims:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.”

Dean goes on to claim Gebru made demands which included revealing the identities of the individuals he and Google Research VP of Engineering Megan Kacholia consulted with as part of the paper’s review. If the demands weren’t met, Gebru reportedly said she would leave the company.

It’s a case of one word against another, but – for a company already in the spotlight from both the public and regulators over questionable practices – being seen to fire an ethics researcher for calling out problems is not going to be good PR.

(Image Credit: Timnit Gebru by Kimberly White/Getty Images for TechCrunch under CC BY 2.0 license)

CDEI launches a ‘roadmap’ for tackling algorithmic bias
27 November 2020 – https://news.deepgeniusai.com/2020/11/27/cdei-launches-roadmap-tackling-algorithmic-bias/

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown how much less accurate they are for women and for people with darker skin. The error rate is, therefore, higher when facial recognition algorithms are used on some parts of society than on others.

In June, Detroit Police Chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when it’s being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society over another.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Transparency is required for algorithms. In financial services, a business loan or mortgage could be rejected without transparency simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but dependent on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.
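
The CDEI report does not prescribe a single fairness metric, but a common first step when monitoring outcomes is simply to compare decision rates across groups. The sketch below does that on invented sample data; the groups, decisions, and the use of the minimum/maximum rate ratio (often called a disparate-impact ratio) are illustrative assumptions, not recommendations taken from the report.

```python
# Minimal sketch of outcome monitoring: approval rates per group and their
# ratio. The data is invented; real monitoring would use recorded decisions
# alongside carefully governed protected-characteristic data.
from collections import defaultdict

decisions = [  # (group, approved) -- invented sample data
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
print(rates)                                                   # {'A': 0.75, 'B': 0.25}
print("disparate impact ratio:", min(rates.values()) / max(rates.values()))
```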

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has relied on data to make decisions for longer than arguably any other to determine things like how likely it is an individual can repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing but found variance across forces with regards to both usage and managing ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report but among them is:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

Salesforce-backed AI project SharkEye aims to protect beachgoers
24 November 2020 – https://news.deepgeniusai.com/2020/11/24/salesforce-ai-project-sharkeye-protect-beachgoers/

Salesforce is backing an AI project called SharkEye which aims to save the lives of beachgoers from one of the sea’s deadliest predators.

Shark attacks are, fortunately, quite rare. However, they do happen and most cases are either fatal or cause life-changing injuries.

Just last week, a fatal shark attack in Australia marked the eighth of the year—an almost 100-year record for the highest annual death toll. Once-rare sightings at Southern California beaches are becoming increasingly common as sharks favour the warmer waters close to shore.

Academics from the University of California and San Diego State University have teamed up with AI researchers from Salesforce to create software which can spot when sharks are swimming around popular beach destinations.

Sharks are currently tracked – when they’re tracked at all – either by keeping tabs on tagged animals online or by someone on a paddleboard keeping an eye out. It’s an inefficient system ripe for some AI innovation.

SharkEye uses drones to spot sharks from above. The drones fly preprogrammed paths at a height of around 120 feet to cover large areas of the ocean while preventing marine life from being disturbed.

If a shark is spotted, a message can be sent instantly to people including lifeguards, surf instructors, and beachside homeowners to take necessary action. Future alerts could also be sent directly to beachgoers who’ve signed up for them or pushed via social channels.
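
SharkEye's actual detection pipeline has not been published. The sketch below only illustrates the overall loop the article describes: sample frames from drone footage, run a detector, and push an alert when a confident detection appears. detect_sharks() and send_alert() are hypothetical stand-ins, and the file name, sampling rate, and 0.8 confidence threshold are assumptions.

```python
# Illustrative sketch only: SharkEye's actual pipeline is not public.
# detect_sharks() and send_alert() are hypothetical stand-ins, and the file
# name, sampling rate, and 0.8 confidence threshold are assumptions.
import cv2

def detect_sharks(frame):
    """Hypothetical stand-in for a trained object-detection model."""
    return []  # would return a list of (confidence, bounding_box) tuples

def send_alert(message):
    """Hypothetical stand-in for notifications to lifeguards and surf schools."""
    print("ALERT:", message)

capture = cv2.VideoCapture("drone_flight.mp4")   # assumed footage file
frame_index = 0
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    if frame_index % 30 == 0:                    # roughly once per second at 30 fps
        for confidence, box in detect_sharks(frame):
            if confidence > 0.8:
                send_alert(f"Possible shark at frame {frame_index}, box {box}")
    frame_index += 1
capture.release()
```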

The drone footage is helping to feed further research into movement patterns. The researchers hope that by combining with data like ocean temperature, and the movement of other marine life, an AI will be able to predict when and where sharks are most likely to be in areas which may pose a danger to people.

SharkEye is still considered to be in its pilot stage but has been tested for the past two summers at Padaro Beach in Santa Barbara County.

A shark is suspected to have bitten a woman at Padaro Beach over the summer, when the team wasn’t flying a drone due to the coronavirus shutdown. Fortunately, her injuries were minor. However, a 26-year-old man was killed in a shark attack a few hours north in Santa Cruz just eight days later.

Attacks can lead to sharks also being killed or injured in a bid to save human life. Using AI to help find safer ways for sharks and humans to share the water can only be a good thing.

(Photo by Laura College on Unsplash)

AI helps patients to get more rest while reducing staff workload
17 November 2020 – https://news.deepgeniusai.com/2020/11/17/ai-patients-more-rest-reducing-staff-workload/

A team from Feinstein Institutes for Research thinks AI could be key to helping patients get more rest while reducing the burden on healthcare staff.

Everyone knows how important adequate sleep is for recovery. However, patients in pain – or just insomniacs like me – can struggle to get the sleep they need.

“Rest is a critical element to a patient’s care, and it has been well-documented that disrupted sleep is a common complaint that could delay discharge and recovery,” said Theodoros Zanos, Assistant Professor at Feinstein Institutes’ Institute of Bioelectronic Medicine.

When a patient finally gets some shut-eye, the last thing they want is to be woken up to have their vitals checked—but such measurements are, well, vital.

In a paper published in Nature Partner Journals, the researchers detailed how they developed a deep-learning predictive tool which predicts a patient’s stability overnight. This prevents multiple unnecessary checks being carried out.

Vital sign measurements from 2.13 million patient visits at Northwell Health hospitals in New York between 2012 and 2019 were used to train the AI. Data included heart rate, systolic blood pressure, body temperature, respiratory rate, and age. A total of 24.3 million vital signs were used.
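
The team's exact deep-learning architecture isn't reproduced in the article, so the sketch below only shows the shape of the problem: a binary "stable overnight?" prediction from the five inputs listed above. The handful of rows and the logistic-regression stand-in are invented for illustration; the real tool was trained on millions of records.

```python
# Sketch of the problem's shape only: a binary "stable overnight?" prediction
# from the vital signs listed above. The rows are invented and logistic
# regression stands in for the team's deep-learning model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [heart rate, systolic BP, temperature (C), respiratory rate, age]
X = np.array([
    [72, 118, 36.8, 14, 45],
    [110, 92, 38.9, 24, 78],
    [68, 124, 36.6, 12, 33],
    [125, 85, 39.4, 28, 82],
])
y = np.array([1, 0, 1, 0])   # 1 = stable overnight, 0 = needs overnight checks

model = LogisticRegression().fit(X, y)
new_patient = np.array([[80, 115, 37.0, 16, 60]])
print("probability stable:", model.predict_proba(new_patient)[0, 1])
```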

When tested, the AI misdiagnosed just two of 10,000 patients in overnight stays. The researchers noted how nurses on their usual rounds would be able to account for the two misdiagnosed cases.

According to the paper, around 20-35 percent of a nurse’s time is spent keeping records of patients’ vitals. Around 10 percent of their time is spent collecting vitals. On average, a nurse currently has to collect a patient’s vitals every four to five hours.

With that in mind, it’s little wonder medical staff feel so overburdened and stressed. These people want to provide the best care they can but only have two hands. Using AI to free up more time for their heroic duties while simultaneously improving patient care can only be a good thing.

The AI tool is being rolled out across several of Northwell Health’s hospitals.
