Ethics – AI News
https://news.deepgeniusai.com

Google is telling its scientists to give AI a ‘positive’ spin
https://news.deepgeniusai.com/2020/12/24/google-telling-scientists-give-ai-positive-spin/
Thu, 24 Dec 2020

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with its legal, policy, and public relations teams before publishing anything on topics that could be deemed sensitive, such as sentiment analysis and the categorisation of people by race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one person’s word against another’s, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom—but that claim increasingly appears not to be the case.

(Photo by Mitchell Luo on Unsplash)

The post Google is telling its scientists to give AI a ‘positive’ spin appeared first on AI News.

EU human rights agency issues report on AI ethical considerations
https://news.deepgeniusai.com/2020/12/14/eu-human-rights-agency-issues-report-ai-ethical-considerations/
Mon, 14 Dec 2020

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithmic biases can end up automating societal problems like racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

The post EU human rights agency issues report on AI ethical considerations appeared first on AI News.

Google fires ethical AI researcher Timnit Gebru after critical email
https://news.deepgeniusai.com/2020/12/04/google-fires-ethical-ai-researcher-timnit-gebru-email/
Fri, 04 Dec 2020

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities. Some recent cases validate her claims about large models and datasets in general.

For example, MIT was forced to remove a large dataset earlier this year called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that, at just 32×32 pixels each, the 80 million images in the dataset made manual inspection almost impossible, so it could not be guaranteed that all offensive images would be removed.

Gebru reportedly sent an email to the Google Brain Women and Allies listserv which Google deemed “inconsistent with the expectations of a Google manager.”

In the email, Gebru expressed her frustration with a perceived lack of progress in hiring women at Google. She also claimed she had been told not to publish a piece of research, and she advised employees to stop filling out diversity paperwork because it didn’t matter.

On top of the questionable reasons for her firing, Gebru says her former colleagues were emailed a claim that she had offered her resignation—which she says was not the case.

Platformer obtained an email from Jeff Dean, Head of Google Research, which was sent to employees and offers his take on Gebru’s claims:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.”

Dean goes on to claim Gebru made demands which included revealing the identities of the individuals he and Google Research VP of Engineering Megan Kacholia consulted with as part of the paper’s review. If the demands weren’t met, Gebru reportedly said she would leave the company.

It’s a case of one person’s word against another’s, but – for a company already in the spotlight from both the public and regulators over questionable practices – being seen to fire an ethics researcher for calling out problems is not going to be good PR.

(Image Credit: Timnit Gebru by Kimberly White/Getty Images for TechCrunch under CC BY 2.0 license)

The post Google fires ethical AI researcher Timnit Gebru after critical email appeared first on AI News.

CDEI launches a ‘roadmap’ for tackling algorithmic bias
https://news.deepgeniusai.com/2020/11/27/cdei-launches-roadmap-tackling-algorithmic-bias/
Fri, 27 Nov 2020

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate on people with darker skin and on women. The error rate is therefore higher when facial recognition algorithms are used on some parts of society than on others.
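As an illustration of the disparity being described, the error rate can be computed separately per demographic group from labelled evaluation results. A minimal sketch, using entirely made-up data rather than any real benchmark:

```python
# Illustrative sketch: measuring an error-rate disparity across groups.
# Given (group, correct?) records from evaluating a face recognition
# system, compute the error rate per demographic group.
def error_rates(results):
    """Map each group to its share of incorrect identifications."""
    stats = {}
    for group, correct in results:
        total, errors = stats.get(group, (0, 0))
        stats[group] = (total + 1, errors + (0 if correct else 1))
    return {g: errors / total for g, (total, errors) in stats.items()}

# Hypothetical evaluation records, not drawn from any real study.
evaluations = [
    ("white male", True), ("white male", True), ("white male", True),
    ("white male", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
]

rates = error_rates(evaluations)
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} error rate")
```

A large gap between the per-group rates is exactly the kind of unevenness the studies cited above report.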

In June, Detroit Police Chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when the technology is being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society more than others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Algorithms require transparency. Without it, a business loan or mortgage could be rejected in financial services simply because a person was born in a poor neighbourhood, or a job application rejected not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has relied on data to make decisions for longer than arguably any other to determine things like how likely it is an individual can repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing but found variance across forces with regards to both usage and managing ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

The post CDEI launches a ‘roadmap’ for tackling algorithmic bias appeared first on AI News.

Synthesized’s free tool aims to detect and remove algorithmic biases
https://news.deepgeniusai.com/2020/11/12/synthesized-free-tool-detect-remove-algorithmic-biases/
Thu, 12 Nov 2020

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. These biases, often unconsciously, end up in algorithms which are designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content—through to facial recognition systems which flag some races and genders more than others.

A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Dr Nicolai Baldin, CEO and Founder of Synthesized, said:

“The reputational risk of all organisations is under threat due to biased data and we’ve seen this will no longer be tolerated at any level. It’s a burning priority now and must be dealt with as a matter of urgency, both from a legal and ethical standpoint.”

Last year, Algorithmic Justice League founder Joy Buolamwini gave a presentation during the World Economic Forum on the need to fight AI bias. Buolamwini highlighted the massive disparities in effectiveness when popular facial recognition algorithms were applied to various parts of society.

Synthesized claims its platform is able to automatically identify bias across data attributes like gender, age, race, religion, sexual orientation, and more. 

The platform was designed to be simple-to-use with no coding knowledge required. Users only have to upload a structured data file – as simple as a spreadsheet – to begin analysing for potential biases. A ‘Total Fairness Score’ will be provided to show what percentage of the provided dataset contained biases.
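Synthesized’s scoring method is proprietary, but the general idea of scanning a tabular dataset for bias can be sketched with a standard stand-in measure: the demographic parity gap, i.e. the difference in positive-outcome rates between groups of a sensitive attribute. The attribute names and records below are hypothetical:

```python
# Illustrative only: a toy per-attribute bias check in the spirit of the
# tool described above, using the demographic parity difference (gap in
# positive-outcome rates between groups) as the bias measure.
from collections import defaultdict

def positive_rates(rows, attribute, outcome="approved"):
    """Positive-outcome rate for each group of a sensitive attribute."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        group = row[attribute]
        totals[group] += 1
        positives[group] += 1 if row[outcome] else 0
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rows, attribute):
    """Demographic parity difference: max minus min group positive rate."""
    rates = positive_rates(rows, attribute)
    return max(rates.values()) - min(rates.values())

# Toy loan-decision records (made-up data, not from any real dataset).
records = [
    {"gender": "F", "age_band": "18-30", "approved": True},
    {"gender": "F", "age_band": "31-50", "approved": False},
    {"gender": "F", "age_band": "31-50", "approved": False},
    {"gender": "M", "age_band": "18-30", "approved": True},
    {"gender": "M", "age_band": "31-50", "approved": True},
    {"gender": "M", "age_band": "18-30", "approved": False},
]

for attr in ("gender", "age_band"):
    print(f"{attr}: parity gap = {parity_gap(records, attr):.2f}")
```

A gap of zero would mean every group receives positive outcomes at the same rate; the further from zero, the stronger the evidence of bias on that attribute.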

“Synthesized’s Community Edition for Bias Mitigation is one of the first offerings specifically created to understand, investigate, and root out bias in data,” explains Baldin. “We designed the platform to be very accessible, easy-to-use, and highly scalable, as organisations have data stored across a huge range of databases and data silos.”

Some examples of how Synthesized’s tool could be used across industries include:

  • In finance, to create fairer credit ratings
  • In insurance, for more equitable claims
  • In HR, to eliminate biases in hiring processes
  • In universities, for ensuring fairness in admission decisions

Synthesized’s platform uses a proprietary algorithm which is said to be quicker and more accurate than existing techniques for removing biases in datasets. A new synthetic dataset is created which, in theory, should be free of biases.

“With the generation of synthetic data, Synthesized’s platform gives its users the ability to equally distribute all attributes within a dataset to remove bias and rebalance the dataset completely,” the company says.

“Users can also manually change singular data attributes within a dataset, such as gender, providing granular control of the rebalancing process.”
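The rebalancing idea quoted above can be illustrated with the simplest possible approach: oversampling under-represented groups until every value of an attribute is equally common. (Synthesized generates new synthetic rows rather than duplicating existing ones; plain resampling here just demonstrates the concept, with hypothetical data.)

```python
# Illustrative only: equalise the distribution of a sensitive attribute
# by resampling rows from under-represented groups. Real synthetic-data
# tools generate new rows instead of duplicating existing ones.
import random
from collections import defaultdict

def rebalance(rows, attribute, seed=0):
    """Return a dataset where every value of `attribute` is equally common."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for row in rows:
        groups[row[attribute]].append(row)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up smaller groups by resampling their own rows.
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    return balanced

rows = [{"gender": "F"}] * 2 + [{"gender": "M"}] * 6
balanced = rebalance(rows, "gender")
counts = {g: sum(1 for r in balanced if r["gender"] == g) for g in ("F", "M")}
print(counts)  # both groups now the same size
```

Naive duplication can cause a model to overfit the repeated rows, which is one motivation for generating fresh synthetic records instead.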

If only MIT had had such a tool for the dataset it was forced to remove in July after it was found to contain racist and misogynistic labels.

You can find out more about Synthesized’s tool and how to get started here.

(Photo by Agence Olloweb on Unsplash)

The post Synthesized’s free tool aims to detect and remove algorithmic biases appeared first on AI News.

Information Commissioner clears Cambridge Analytica of influencing Brexit
https://news.deepgeniusai.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/
Thu, 08 Oct 2020

A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken until now to be confirmed.

“From my review of the materials recovered by the investigation I have found no further evidence to change my earlier view that CA [Cambridge Analytica] was not involved in the EU referendum campaign in the UK,” wrote Information Commissioner Elizabeth Denham.

Cambridge Analytica did obtain a ton of user data—but through predominantly commercial means, and of mostly US voters. Such data is available to, and has also been purchased by, other electoral campaigns for targeted advertising purposes (the Remain campaigns in the UK actually outspent their Leave counterparts by £6 million.)

“CA were purchasing significant volumes of commercially available personal data (at one estimate over 130 billion data points), in the main about millions of US voters, to combine it with the Facebook derived insight information they had obtained from an academic at Cambridge University, Dr Aleksandr Kogan, and elsewhere,” wrote Denham.

The only real scandal was Facebook’s poor protection of users which allowed third-party apps to scrape their data—for which it was fined £500,000 by the UK’s data protection watchdog.

It seems the claims Cambridge Analytica used powerful AI tools were also rather overblown, with the information commissioner saying all they found were models “built from ‘off the shelf’ analytical tools”.

The information commissioner even found evidence that Cambridge Analytica’s own staff “were concerned about some of the public statements the leadership of the company were making about their impact and influence.”

Cambridge Analytica appears to have been a victim of those unable to accept democratic results combined with its own boasting of capabilities that weren’t actually that impressive.

You can read the full report here (PDF).

(Photo by Christian Lue on Unsplash)

The post Information Commissioner clears Cambridge Analytica of influencing Brexit appeared first on AI News.

Google returns to using human YouTube moderators after AI errors
https://news.deepgeniusai.com/2020/09/21/google-human-youtube-moderators-ai-errors/
Mon, 21 Sep 2020

Google is returning to using humans for YouTube moderation after repeated errors with its AI system.

Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They’re the unsung heroes.

AI has been hailed as helping to deal with some of the aforementioned issues. Either by automating the moderation process entirely or by offering a helping hand to humans.

Google was left with little choice but to give more power to its AI moderators as the COVID-19 pandemic took hold… but it hasn’t been smooth sailing.

In late August, YouTube said that it had removed 11.4 million videos over the previous three months—the most since the site launched in 2005.

That figure alone should raise a few eyebrows. If a team of humans were removing that many videos, they probably deserve quite the pay rise.

Of course, most of the video removals weren’t done by humans. Many of the videos didn’t even violate the guidelines.

Neal Mohan, chief product officer at YouTube, told the Financial Times:

“One of the decisions we made [at the beginning of the COVID-19 pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”

Some of the removals left content creators bewildered, angry, and out of pocket in some cases.

Around 320,000 of the videos taken down were appealed, and half of the appealed videos were reinstated.

Deciding what content to ultimately remove feels like one of the many tasks which need human involvement. Humans are much better at detecting nuances and things like sarcasm.

However, the sheer scale of content needing to be moderated also requires an AI to help automate some of that process.

“Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” Mohan said. “That’s the power of machines.”

AIs can also help to protect humans from the worst of the content. Content detection systems are being built to automatically blur material such as child abuse imagery just enough that human moderators can identify what needs removing while limiting the psychological impact on them.
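The blurring step can be pictured as a simple box filter. Below is a toy, stdlib-only sketch of the idea – real moderation systems use learned detectors rather than a fixed kernel:

```python
# Toy illustration only: averages each pixel with its neighbours so an
# image stays recognisable in outline but loses fine detail.

def box_blur(image, radius=1):
    """Blur a 2D list of grayscale values (0-255) with a box kernel."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out

sharp = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(box_blur(sharp))  # the single bright pixel bleeds into its neighbours
```

Each pixel is replaced by the average of its neighbourhood, so the overall shape survives while fine detail is lost – which is the property moderators need.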

Some believe AIs are better at determining what content should be removed because they apply logic free of natural human biases such as political leaning – but we know human biases seep into algorithms.

In May, YouTube admitted to deleting messages critical of the Chinese Communist Party (CCP). YouTube later blamed an “error with our enforcement systems” for the mistakes. Senator Josh Hawley even wrote (PDF) to Google CEO Sundar Pichai seeking answers to “troubling reports that your company has resumed its long pattern of censorship at the behest of the Chinese Communist Party.”

Google appears to have quickly realised that replacing humans entirely with AI is rarely a good idea. The company says many of the human moderators who were “put offline” during the pandemic are now coming back.

(Photo by Rachit Tank on Unsplash)


]]>
https://news.deepgeniusai.com/2020/09/21/google-human-youtube-moderators-ai-errors/feed/ 0
How can AI-powered humanitarian engineering tackle the biggest threats facing our planet? https://news.deepgeniusai.com/2020/08/28/how-can-ai-powered-humanitarian-engineering-tackle-the-biggest-threats-facing-our-planet/ https://news.deepgeniusai.com/2020/08/28/how-can-ai-powered-humanitarian-engineering-tackle-the-biggest-threats-facing-our-planet/#respond Fri, 28 Aug 2020 20:40:53 +0000 https://news.deepgeniusai.com/?p=9834 Humanitarian engineering programs bring together engineers, policy makers, non-profit organisations, and local communities to leverage technology for the greater good of humanity. The intersection of technology, community, and sustainability offers a plethora of opportunities to innovate. We still live in an era where millions of people are under extreme poverty, lacking access to clean water,... Read more »

The post How can AI-powered humanitarian engineering tackle the biggest threats facing our planet? appeared first on AI News.

]]>
Humanitarian engineering programs bring together engineers, policy makers, non-profit organisations, and local communities to leverage technology for the greater good of humanity.

The intersection of technology, community, and sustainability offers a plethora of opportunities to innovate. We still live in an era where millions of people live in extreme poverty, lacking access to clean water, basic sanitation, electricity, the internet, quality education, and healthcare.

Clearly, we need global solutions to tackle the grandest challenges facing our planet. So how can artificial intelligence (AI) assist in addressing key humanitarian and sustainable development challenges?

To begin with, the United Nations Sustainable Development Goals (SDGs) represent a collection of 17 global goals that aim to address pressing global challenges, achieve inclusive development, and foster peace and prosperity in a sustainable manner by 2030. AI enables the building of smart systems that imitate human intelligence to solve real-world problems.

Recent advancements in AI have radically changed the way we think, live, and collaborate. Our daily lives are centred around AI-powered solutions with smart speakers playing wakeup alarms, smart watches tracking steps in our morning walk, smart refrigerators recommending breakfast recipes, smart TVs providing personalised content recommendations, and navigation mobile apps recommending the best route based on real-time traffic. Clearly, the age of AI is here. How can we leverage this transformative technology to amplify the impact for social good?

Accelerating AI-powered social innovations

AI core capabilities like machine learning (ML), computer vision, natural language understanding, and speech recognition offer new approaches to address humanitarian challenges and amplify the positive impact on underserved communities. ML enables machines to process massive amounts of data, interconnect underlying patterns, and derive meaningful insights for decision making. ML techniques like deep learning offer the powerful capability to create sophisticated AI models based on artificial neural networks.

Such models can be used for numerous real-world situations, like pandemic forecasting. AI tools can model and predict the spread of outbreaks like Covid-19 in low-resource settings using recent outbreak trends, treatment data, and travel history. This will help governmental and healthcare agencies to identify high-risk areas, manage demand and supply of essential medical supplies, and formulate localised remedial measures to control an outbreak.
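As a toy illustration of that kind of projection (not any agency's actual model), the sketch below assumes short-term exponential growth and extrapolates from recent daily case counts; the numbers are invented:

```python
import math

# Naive toy forecaster: estimate the average daily growth rate over a
# recent window and extrapolate it forward. Real models incorporate
# treatment data, travel history, and uncertainty estimates.

def project_cases(daily_cases, days_ahead, window=7):
    """Extrapolate daily case counts assuming recent exponential growth."""
    recent = daily_cases[-window:]
    # Day-over-day growth factors within the window.
    factors = [b / a for a, b in zip(recent, recent[1:]) if a > 0]
    # Geometric mean of the growth factors.
    growth = math.exp(sum(math.log(f) for f in factors) / len(factors))
    return [round(daily_cases[-1] * growth ** d) for d in range(1, days_ahead + 1)]

history = [100, 110, 121, 133, 146, 161, 177]
print(project_cases(history, 3))
```

Even a crude projection like this hints at how high-risk areas could be flagged early, though real deployments would need far richer data and validation.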

Computer vision techniques process visual information in digital images and videos to generate valuable inferences. Trained AI models assist medical practitioners in examining clinical images and identifying hidden patterns of malignant tumours, supporting expedited decision-making and treatment planning for patients. Most recently, smart speakers have extended their conversational AI capabilities to healthcare use cases like chronic illness management, prescription ordering, and urgent-care appointments.

This advancement opens up the possibility of healthcare innovations that break down access barriers and deliver quality healthcare to marginalised populations. Similarly, global educational programs aiming to connect the digitally unconnected can leverage satellite images and ML algorithms to map school locations. AI-powered learning products are increasingly being launched to provide personalised experiences that train young children in math and science.

The convergence of AI with the Internet of Things (IoT) facilitates the rapid development of meaningful solutions for agriculture to monitor soil health, assess crop damage, and optimise the use of pesticides. This empowers local farmers to model different scenarios and choose the crop most likely to maximise quality and yield, contributing toward the zero hunger and economic empowerment SDGs.

Decoding best program practices

To deliver high social impact, AI-driven humanitarian programs should follow a “bottom-up” approach. Always work backwards from the needs of the end user, driving clarity on the targeted community or user, their major pain points, the opportunity to innovate, and the expected user experience.

Most importantly, always check whether AI is relevant to the problem at hand, or investigate whether a meaningful alternative approach exists. Understand how an AI-powered solution will deliver value to the various stakeholders involved and positively contribute toward achieving the SDGs for local communities. Define a suite of metrics to measure the various dimensions of program success. Data acquisition is central to building robust AI models, which require access to meaningful, quality data.

Delivering effective AI solutions in the humanitarian landscape requires a clear understanding of the data needed and of the relevant sources from which to acquire it. For instance, satellite images, electronic health records, census data, educational records, and public datasets are used to solve problems in education, healthcare, and climate change. Partnerships with key field players are important for addressing data gaps in domains where data is sparse.

Responsible use of AI in humanitarian programs can be achieved by enforcing standards and best practices that implement fairness, inclusiveness, security, and privacy controls. Always check models and datasets for bias and negative experiences. Techniques like data visualisation and clustering can evaluate a dataset’s distribution for fair representation across stakeholder dimensions. Routine updates to training and testing datasets are essential to fairly account for diversity in users’ growing needs and usage patterns. Safeguard sensitive user information by implementing privacy controls: encrypt user data at rest and in transit, limit access to user data and critical production systems based on least-privilege access control, and enforce data retention and deletion policies on user datasets. Finally, implement a robust threat model to handle possible system attacks, and run routine checks for infrastructure security vulnerabilities.
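The dataset-distribution check mentioned above can be sketched in a few lines; the attribute name and the 10% representation floor below are illustrative assumptions, not a recommended standard:

```python
from collections import Counter

def underrepresented_groups(records, attribute, min_share=0.10):
    """Return groups whose share of `records` falls below `min_share`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items() if n / total < min_share}

# Illustrative records only; a real programme would load field data.
dataset = ([{"region": "urban"}] * 90
           + [{"region": "rural"}] * 10
           + [{"region": "remote"}] * 2)
print(underrepresented_groups(dataset, "region"))
```

Groups flagged this way would prompt further data collection or reweighting before training, so the resulting model represents everyone it is meant to serve.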

To conclude, AI-powered humanitarian programs offer a transformative opportunity to advance social innovations and build a better tomorrow for the benefit of humanity.

Photo by Elena Mozhvilo on Unsplash


]]>
https://news.deepgeniusai.com/2020/08/28/how-can-ai-powered-humanitarian-engineering-tackle-the-biggest-threats-facing-our-planet/feed/ 0
University College London: Deepfakes are the ‘most serious’ AI crime threat https://news.deepgeniusai.com/2020/08/06/university-college-london-experts-deepfakes-ai-crime-threat/ https://news.deepgeniusai.com/2020/08/06/university-college-london-experts-deepfakes-ai-crime-threat/#respond Thu, 06 Aug 2020 12:41:52 +0000 https://news.deepgeniusai.com/?p=9794 Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats. The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. 31 experts were then asked to rank them by potential severity. Deepfakes –... Read more »

The post University College London: Deepfakes are the ‘most serious’ AI crime threat appeared first on AI News.

]]>
Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 ways they expect AI to be used by criminals within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.
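A simplified way to aggregate such rankings is to order threats by their mean rank across experts – a stand-in sketch with hypothetical mini-data, not the study's actual scoring method:

```python
from statistics import mean

def aggregate_rankings(rankings):
    """Order threats by mean rank (1 = most severe) across experts."""
    threats = rankings[0].keys()
    return sorted(threats, key=lambda t: mean(r[t] for r in rankings))

# Hypothetical mini-panel; the study polled 31 experts on 20 threats.
experts = [
    {"deepfakes": 1, "driverless car attacks": 2, "burglar bots": 3},
    {"deepfakes": 1, "driverless car attacks": 3, "burglar bots": 2},
    {"deepfakes": 2, "driverless car attacks": 1, "burglar bots": 3},
]
print(aggregate_rankings(experts))
```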

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today must at least be created by humans, such as those working in the likes of Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often contain patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The deepfake of Pelosi was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences a fake video of the president announcing an imminent strike on somewhere like North Korea could have.

Deepfakes also have obvious potential to be used for fraud purposes, to pretend to be someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat of releasing such videos – and the embarrassment caused – could lead some to pay a ransom to keep them from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services falsely marketed as AI-powered, such as security screening and targeted advertising solutions. The researchers believe leading people to think such products use AI could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots” which could enter through a property’s access points to unlock them or search for data. The researchers believe these pose less of a threat because they can be easily prevented through methods such as letterbox cages.

Similarly, the researchers note that the potential for AI-based stalking is damaging for individuals, but it isn’t considered a major threat as it could not operate at scale.

You can find the researchers’ full paper in the Crime Science Journal here.

(Photo by Bill Oxford on Unsplash)


]]>
https://news.deepgeniusai.com/2020/08/06/university-college-london-experts-deepfakes-ai-crime-threat/feed/ 0
Google’s Model Card Toolkit aims to bring transparency to AI https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/ https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/#respond Thu, 30 Jul 2020 16:02:21 +0000 https://news.deepgeniusai.com/?p=9782 Google has released a toolkit which it hopes will bring some transparency to AI models. People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you’ve got a general distrust which can hinder important advancements. Model Card Toolkit aims to step in and facilitate AI model... Read more »

The post Google’s Model Card Toolkit aims to bring transparency to AI appeared first on AI News.

]]>
Google has released a toolkit which it hopes will bring some transparency to AI models.

People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you’ve got a general distrust which can hinder important advancements.

Model Card Toolkit aims to step in and facilitate AI model transparency reporting for developers, regulators, and downstream users.

Google itself has been launching Model Cards over the past year, a concept the company first proposed in an October 2018 whitepaper.

Model Cards provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation, and give a detailed overview of a model’s suggested uses and limitations.
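The kind of structured record a Model Card captures can be sketched as a small data class – a hypothetical outline for illustration, not the toolkit's actual schema:

```python
from dataclasses import dataclass, field, asdict

# Hypothetical outline of the fields a Model Card reports; the real
# toolkit defines a richer, nested schema than this.

@dataclass
class ModelCard:
    name: str
    version: str
    intended_uses: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    ethical_considerations: list = field(default_factory=list)

card = ModelCard(
    name="face-detector",
    version="1.0",
    intended_uses=["Detect face bounding boxes in photos"],
    limitations=["Reduced recall in low-light images"],
    ethical_considerations=["Report error rates across demographic groups"],
)
print(asdict(card)["name"])
```

Serialising such a record to a shared format is what lets downstream users and regulators compare models on a like-for-like basis.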

So far, Google has released Model Cards for open source models built on its MediaPipe platform as well as its commercial Cloud Vision API Face Detection and Object Detection services.

Google’s new toolkit for Model Cards will simplify the process of creating them for third parties by compiling the data and helping build interfaces orientated for specific audiences.

Here’s an example of a Model Card:

MediaPipe has published Model Cards for each of its open-source models in its GitHub repository.

To demonstrate how the Model Card Toolkit can be used in practice, Google has released a Colab tutorial that builds a Model Card for a simple classification model trained on the UCI Census Income dataset.

If you just want to dive right in, you can access the Model Card Toolkit here.

(Photo by Marc Schulte on Unsplash)


]]>
https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/feed/ 0