Adoption – AI News
Artificial Intelligence News

From experimentation to implementation: How AI is proving its worth in financial services
Tue, 15 Dec 2020
For financial institutions, recovering from the pandemic will put an end to tentative experiments with artificial intelligence (AI) and machine learning (ML), and demand their large-scale adoption. The crisis has required financial organisations to respond to customer needs around the clock. Many are therefore transforming with ever-increasing pace, but they must ensure that their core critical operations continue to run smoothly. This has sparked an interest in AI and ML solutions, which reduce the need for manual intervention in operations, significantly improve security and free up time for innovation. Reducing the time between the generation of an idea and it delivering value for the business, AI and ML promise long-term, strategic advantages for organisations.

We’re now seeing banks transforming into digitally driven enterprises akin to big tech firms, building capabilities that enable a relentless focus on customers. So how can banks and finance institutions make the most of AI, and what are the key use cases in practice?

Benefits across the business

Many financial services firms had already adopted AI and ML prior to the pandemic. However, many had difficulties identifying which key functions benefit most from AI, and so the technology did not always deliver the returns expected. This is set to change in the coming months: increased AI and ML deployment will be at the heart of the economic recovery from COVID-19, and the pandemic has highlighted particular areas where AI should be applied. These range from informing credit decisions and preventing fraud, to improving the customer experience through frictionless, 24/7 interactions.

Some specific financial services processes that can be improved by AI include:

Document processing with intelligent automation

Intelligent and robotic process automation optimise various functions, enhance efficiency, and improve the overall speed and accuracy of core financial processes, leading to substantial cost-savings. One area that has risen in prominence is e-KYC, or ‘electronic know-your-customer’. This is a remote, paperless process that reduces the bureaucratic costs of crucial ‘know-your-customer’ protocols, such as verification of client identities and signatures.

This task once involved repetitive, mundane actions with considerable effort required just to keep track of document handling, loan disbursement and repayment, as well as regulatory reporting of the entire process. However, this year, organisations are embracing intelligent automation platforms that manage, interpret and extract unstructured data, including text, images, scanned documents (handwritten and electronic), faxes, and web content. Running on an NLP (natural language processing) engine, which identifies any missing, unseen, and ill-formed data, these platforms offer near-perfect accuracy and higher reliability. Average handling time is reduced, and firms gain a significant competitive advantage through an improved customer experience.
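Commercial platforms do this with trained NLP models, but the "missing, unseen, and ill-formed data" check can be illustrated with a minimal sketch. Everything below is hypothetical – the `KYC_RULES` fields, their formats, and the `validate_kyc` helper are invented for illustration, with regular expressions standing in for a real NLP engine:

```python
import re

# Hypothetical example: required KYC fields and simple format rules.
# A production platform would run an NLP engine over unstructured documents;
# the regular expressions here just illustrate the missing/ill-formed checks.
KYC_RULES = {
    "name": re.compile(r"^[A-Za-z][A-Za-z .'-]+$"),
    "date_of_birth": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "account_number": re.compile(r"^\d{8}$"),
}

def validate_kyc(record: dict) -> dict:
    """Label each required field 'ok', 'missing', or 'ill-formed'."""
    report = {}
    for field, pattern in KYC_RULES.items():
        value = record.get(field)
        if value is None or not str(value).strip():
            report[field] = "missing"
        elif not pattern.match(str(value).strip()):
            report[field] = "ill-formed"
        else:
            report[field] = "ok"
    return report

extracted = {"name": "Jane Doe", "date_of_birth": "1985-07-14", "account_number": "12A45678"}
print(validate_kyc(extracted))
# {'name': 'ok', 'date_of_birth': 'ok', 'account_number': 'ill-formed'}
```

Flagged fields would be routed to a human reviewer rather than blocking the whole document, which is where the reduction in average handling time comes from.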

Efficient and thorough customer support

Virtual assistants can respond to customer needs with minimal employee input. As a straightforward means of increasing productivity, they reduce the time and effort spent on generic customer queries, freeing up teams to focus on longer-term projects that drive innovation across the business.

We’re all familiar with chatbots on e-commerce sites, and such solutions will become increasingly common in the financial services industry, with organisations such as JP Morgan now making use of these bots to streamline their back-office operations and strengthen customer support. One example is COIN, short for ‘contract intelligence’, which runs on an ML system powered by the bank’s private cloud network. As well as generating appropriate responses to general queries, COIN automates legal filing tasks, reviews documents, handles basic IT requests such as password resets, and creates new tools for both bankers and clients with greater proficiency and less human error.
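JP Morgan has not published COIN's internals, so the following is only a toy illustration of the routing idea behind such assistants: classify an incoming query into an intent and escalate anything unrecognised. The `INTENTS` table and `route` function are invented, with keyword rules standing in for a trained classifier:

```python
# Hypothetical intent router for a support assistant. Keyword rules stand in
# for the ML classifier a real system would use; unmatched queries fall back
# to a human agent rather than guessing.
INTENTS = {
    "password_reset": ["password", "reset", "locked out"],
    "document_review": ["contract", "agreement", "clause"],
    "balance_enquiry": ["balance", "statement"],
}

def route(query: str) -> str:
    q = query.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in q for keyword in keywords):
            return intent
    return "human_agent"  # anything unrecognised goes to a person

print(route("I am locked out of my account"))       # password_reset
print(route("Please review this loan agreement"))   # document_review
print(route("Why was my card declined abroad?"))    # human_agent
```

The design point is the fallback: automating the high-volume, low-ambiguity queries while keeping humans in the loop for everything else is what frees up teams without degrading support quality.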

Risk management analytics

Creditworthiness is largely estimated from the likelihood of an individual or business repaying a loan. Determining the chances of default underpins the risk management processes at all lending organisations. Even with impeccable data, assessing this has its difficulties, as some individuals and organisations can be untruthful about their ability to pay their loans back.

To combat this, companies such as Lenddo and ZestFinance are using AI for risk assessment, and to determine an individual’s creditworthiness. Credit bureaus such as Equifax also use AI, ML and advanced data and analytical tools to analyse alternate sources in the evaluation of risk, and gain customer insight in the process.

Lenders once used a limited set of data, such as annual salaries and credit scores, for this process. However, thanks to AI, organisations are now able to consider an individual’s entire digital financial footprint to determine the likelihood of default. In addition to traditional data sets, the analysis of this alternative data is particularly useful in determining the creditworthiness of individuals without conventional records of loan or credit history.
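As a sketch of how traditional and alternative signals can combine into a default estimate – the feature names, weights, and `default_probability` helper below are invented for illustration, not a fitted model or any vendor's method – a logistic score might look like:

```python
import math

# Illustrative only: hand-picked weights, not a trained model. A traditional
# signal (credit score) is combined with alternative ones (bill-payment
# history, account tenure) into a single default probability.
WEIGHTS = {
    "credit_score": -0.01,       # higher score -> lower default risk
    "missed_bill_payments": 0.8, # alternative data signal
    "account_tenure_years": -0.2,
}
BIAS = 4.0

def default_probability(applicant: dict) -> float:
    z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps score to [0, 1]

# A 'thin-file' applicant with little credit history but clean bill payments.
thin_file = {"credit_score": 600, "missed_bill_payments": 0, "account_tenure_years": 1}
print(round(default_probability(thin_file), 3))
```

In practice the weights would be learned from repayment outcomes, but the structure shows why alternative data helps thin-file applicants: signals other than a conventional credit record can still move the estimate.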

The time to adopt is now

The way that businesses and clients interact with each other has changed irreversibly this year, and the finance industry is no different. Before the urgency demanded by the pandemic, financial institutions had been experimenting with AI and ML on a limited scale – mainly as a tick-box exercise in an effort to ‘keep up with the Joneses’. The widespread adoption that has been taking place this year stems from the need to truly innovate and increase resilience across the sector.

Banks and financial institutions are now aware of the key areas that benefit from AI, such as greater efficiency in back office operations, and significant improvements in customer engagement. A transformation process that was in its infancy prior to Covid-19 has accelerated and is fast becoming the standard approach. What’s more, financial organisations that are embracing AI now and prioritising its full implementation will be best placed to reap its rewards in the future.

(Photo by Jeffrey Blum on Unsplash)

EU human rights agency issues report on AI ethical considerations
Mon, 14 Dec 2020
The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithmic biases could end up automating societal problems such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.
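The report's demand that "organisations using AI need to be able to explain how their systems take decisions" is easiest to see with a concrete sketch. For a simple linear scoring model, each feature's contribution is just weight × value, which can be logged and shown to the person affected. The weights, threshold, and `explain` helper below are all hypothetical:

```python
# Hypothetical linear scoring model with per-decision explanations.
# Real systems may need model-agnostic explanation methods, but the
# principle is the same: record *why* a decision came out as it did.
WEIGHTS = {"income": 0.5, "existing_debt": -0.9, "years_employed": 0.3}

def explain(applicant: dict, threshold: float = 10.0) -> dict:
    contributions = {k: WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= threshold else "refer_to_human",
        "score": score,
        "contributions": contributions,  # the 'why' behind the decision
    }

print(explain({"income": 30, "existing_debt": 10, "years_employed": 4}))
```

Keeping the contribution breakdown alongside the decision gives people a concrete basis on which to challenge it, which is exactly the safeguard the FRA is calling for.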

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

Algorithmia: AI budgets are increasing but deployment challenges remain
Thu, 10 Dec 2020
A new report from Algorithmia has found that enterprise budgets for AI are rapidly increasing but significant deployment challenges remain.

Algorithmia’s 2021 Enterprise Trends in Machine Learning report features the views of 403 business leaders involved with machine learning initiatives.

Diego Oppenheimer, CEO of Algorithmia, says:

“COVID-19 has caused rapid change which has challenged our assumptions in many areas. In this rapidly changing environment, organisations are rethinking their investments and seeing the importance of AI/ML to drive revenue and efficiency during uncertain times.

Before the pandemic, the top concern for organisations pursuing AI/ML initiatives was a lack of skilled in-house talent. Today, organisations are worrying more about how to get ML models into production faster and how to ensure their performance over time.

While we don’t want to marginalise these issues, I am encouraged by the fact that the type of challenges have more to do with how to maximise the value of AI/ML investments as opposed to whether or not a company can pursue them at all.”

The main takeaway is that AI budgets are significantly increasing. 83 percent of respondents said they’ve increased their budgets compared to last year.

Despite a difficult year for many companies, business leaders are not being put off AI investments – in fact, they’re doubling down.

In Algorithmia’s summer survey, 50 percent of respondents said they plan to spend more on AI this year. Around one in five even said they “plan to spend a lot more.”

76 percent of businesses report they are now prioritising AI/ML over other IT initiatives. 64 percent say the priority of AI/ML has increased relative to other IT initiatives over the last 12 months.

With unemployment figures around the world at their highest for several years – even decades in some cases – it’s at least heartening to hear that 76 percent of respondents said they’ve not reduced the size of their AI/ML teams. 27 percent even report an increase.

43 percent say their AI/ML initiatives “matter way more than we thought” and close to one in four believe their AI/ML initiatives should have been their top priority sooner. Process automation and improving customer experiences are the two main areas for AI investments.

While it’s been all good news so far, there are AI deployment issues being faced by many companies which are yet to be addressed.

Governance is, by far, the biggest AI challenge being faced by companies. 56 percent of the businesses ranked governance, security, and auditability issues as a concern.

Regulatory compliance is vital but can be confusing, especially with different regulations between not just countries but even states. 67 percent of the organisations report having to comply with multiple regulations for their AI/ML deployments.

After governance, the next major hurdles are basic deployment and organisational challenges.

Basic integration issues were ranked by 49 percent of businesses as a problem. Furthermore, more job roles are being involved with AI deployment strategies than ever before—it’s no longer seen as just the domain of data scientists.

However, there’s perhaps some light at the end of the tunnel. Organisations are reporting improved outcomes when using dedicated, third-party MLOps solutions.

Bearing in mind that Algorithmia is itself a third-party MLOps vendor, the report claims organisations using such a platform spend an average of around 21 percent less on infrastructure costs. It also helps to free up their data scientists, who spend less time on model deployment.

You can find a full copy of Algorithmia’s report here (requires signup)

From fantasy to reality: Misunderstanding the impact of AI
Wed, 09 Dec 2020
The prominence of artificial intelligence (AI) has grown significantly in pop culture and science fiction over the years, with stories speculating on how AI could change people’s lives, the places we live, and our day-to-day activities. However, despite AI’s growing presence in popular films such as I, Robot, Star Trek and WALL-E, its persistently futuristic depiction has skewed perceptions of what AI truly is and obscured the vital part it already plays in our everyday lives.

A recent survey conducted by O’Reilly paints this exact picture. It gives AI-creators an in-depth look at how consumers identify and use AI technology, showcasing the heightened misunderstanding that consumers have of AI and its use.

AI takes over popular culture

Television and the big screen have played a large role in introducing AI into our homes, but how does this depiction impact how we develop and implement the technology?

For those working to incorporate AI technology into products and develop new ways to use it, robots and cars are not an everyday focus. The areas of advancement instead look at AI that learns from our actions to more efficiently help us in our day-to-day lives, answering questions for us and completing tasks through speech recognition and language processing at work and at home.

But how do we harness the excitement around the fantasy of AI to increase everyday adoption?

The true potential of AI

One of the best ways to merge the fantasy and reality of AI is to truly understand what consumers think and what they believe is the potential of the technology.

In our survey, when asked what the most useful form of AI is, more than half (58%) of consumers regarded smart home technology as the most vital. This was closely followed by home security systems (54%), travel recommendations (52%), and virtual assistants (50%). This provides insight into how AI creators can expand their ideas of where AI can be useful to encourage consumers to adopt it in their personal lives.

While AI is already present in our homes—thanks to smart speakers from Amazon, Apple and Google—more and more consumer groups appreciate the success of smart home technology and are willing to adopt it in the future.

Answering the questions: What is AI? And why should I care?

Survey respondents were also asked what application of AI excited them the most in the future. Fraud detection (28%) topped the list as the most exciting area for AI development.

It was the most commonly cited use by men. This is despite only 11% of consumers closely associating fraud detection with AI.

Self-driving cars also generated great excitement among 24% of respondents; interestingly, they were the most popular choice among women, younger consumers, and – by a significant margin (50%) – those working in the AI industry. With fraud detection coming out on top overall, we can start to see the shift from fantasy to practicality, a trend that AI-creators should leverage to reinforce the pragmatic use of AI within the workforce.

It is up to a wide range of individuals – developers, marketers, product managers and salespeople – to ensure that AI is used and understood correctly. For successful consumer AI adoption, developers should focus their efforts on leveraging AI to make consumers’ everyday lives easier, augmenting existing experiences to make them more seamless and exciting. While the line between fantasy and reality may still blur, more and more consumer groups appreciate the success of smart home technology and are watching the development of autonomous vehicles very closely.

It’s up to these sectors to capitalise on the hype, but the results are also a call for the creators of work-focused AI to build solutions that capture the imagination and generate excitement. Developers also need to keep consumer needs clearly in mind from the very start of the process, even when an idea is still amorphous.

What’s next?

What does the future hold for AI? While the notion of AI has been perpetuated by popular culture and science fiction throughout our lives, many individuals have yet to grasp that it isn’t an ‘out-there’ concept. In fact, it is already all around us, playing a role in our homes and at work in ways we wouldn’t expect.

Ultimately, AI creators need to bear this in mind and continuously learn from consumer attitudes towards AI to ensure individuals continue to stay engaged with technology, making the most out of the fantasy.

CDEI launches a ‘roadmap’ for tackling algorithmic bias
Fri, 27 Nov 2020
A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown how much less accurate they are for people with darker skin tones and for women. The error rate is therefore higher when facial recognition algorithms are used on some parts of society than on others.
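Surfacing that kind of disparity starts with disaggregated evaluation: computing error rates per demographic group rather than one aggregate figure. A small sketch, with made-up group labels and results:

```python
from collections import defaultdict

# Disaggregated evaluation: an aggregate error rate can hide large gaps
# between groups, so compute the rate per group and compare.
def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented results: identical overall volume, very different accuracy.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(error_rates_by_group(results))  # group_a: 0.25, group_b: 0.75
```

A system reporting a single "50 percent error rate" on this data would mask the fact that one group is misidentified three times as often as the other, which is precisely the failure mode the facial recognition research highlights.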

In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when the technology is being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society over another.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Transparency is required for algorithms. In financial services, a business loan or mortgage could be rejected without transparency simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but dependent on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”
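Whether an algorithmic tool is fairer than the human process it replaces is, as the quote suggests, an empirical question, and answering it starts with measurement. One simple check among many – not a method prescribed by the CDEI – is the demographic parity difference, the gap in positive-outcome rates between groups:

```python
# Demographic parity difference: the gap in positive-outcome rates between
# two groups. A large gap is a signal to investigate, not proof of
# discrimination on its own; invented approval data below.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a_outcomes, group_b_outcomes):
    return abs(selection_rate(group_a_outcomes) - selection_rate(group_b_outcomes))

# 1 = loan approved, 0 = declined
approvals_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
approvals_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 37.5% approved
print(demographic_parity_difference(approvals_a, approvals_b))  # 0.375
```

Tracking a metric like this over time is one way the "better use of data" the report mentions can make decisions measurably fairer rather than just differently biased.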

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has arguably relied on data-driven decisions for longer than any other, using it to determine things like how likely an individual is to repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing, but observed variance across forces with regard to both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

The post CDEI launches a ‘roadmap’ for tackling algorithmic bias appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/11/27/cdei-launches-roadmap-tackling-algorithmic-bias/feed/ 0
AI is helping mobile operators to cope with pandemic demand https://news.deepgeniusai.com/2020/09/30/ai-helping-telcos-cope-pandemic-demand-surge/ https://news.deepgeniusai.com/2020/09/30/ai-helping-telcos-cope-pandemic-demand-surge/#respond Wed, 30 Sep 2020 08:13:06 +0000 https://news.deepgeniusai.com/?p=9888 Artificial intelligence is helping telecoms operators to boost the RAN capacity of their 4G networks by 15 percent. More people than ever are relying on telecoms networks to work, play, and stay connected during the pandemic. Operators are doing all they can to ensure their existing networks have enough capacity to cope with demand. Gorkem... Read more »

The post AI is helping mobile operators to cope with pandemic demand appeared first on AI News.

]]>
Artificial intelligence is helping telecoms operators to boost the RAN capacity of their 4G networks by 15 percent.

More people than ever are relying on telecoms networks to work, play, and stay connected during the pandemic. Operators are doing all they can to ensure their existing networks have enough capacity to cope with demand.

Gorkem Yigit, Principal Analyst at Analysys Mason, said:

“Video streaming continues to experience high year-on-year growth, and that has been exacerbated by the pandemic and resulting lockdowns.

Yes, 5G grabs the spotlight, but 4G is carrying the brunt of this traffic. So, while investment in 5G infrastructure continues, operators need intelligent ways to maximize and extend existing 4G network capabilities in the short to medium term – keeping their CAPEX to a minimum.”

Eight of the world’s ten largest operator groups have deployed traffic management technology from the Openwave subsidiary of Swedish firm Enea. Many of these have since upgraded to include machine learning capabilities.

Openwave claims that, based on its figures, some operators faced a 90 percent surge in peak throughput during lockdowns.

Machine learning is helping to predict and identify congestion in the RAN (Radio Access Network) which resides between user equipment such as wireless devices and an operator’s core network.

John Giere, President of Enea Openwave, commented:

“Conventional mobile data management requires manual configuration and network investment – it is no longer fit for purpose.

Machine Learning has given existing 4G networks the shot in the arm they needed. It can work dynamically without external probes or changes to the RAN, delivering additional capacity at a time that operators most need it.” 

The use of machine learning has increased operators’ 4G RAN capacity by 15 percent in congested locations—providing further evidence of how AI technology can be used to quickly tackle real-world problems.

(Photo by Adrian Schwarz on Unsplash)

The post AI is helping mobile operators to cope with pandemic demand appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/09/30/ai-helping-telcos-cope-pandemic-demand-surge/feed/ 0
Google returns to using human YouTube moderators after AI errors https://news.deepgeniusai.com/2020/09/21/google-human-youtube-moderators-ai-errors/ https://news.deepgeniusai.com/2020/09/21/google-human-youtube-moderators-ai-errors/#respond Mon, 21 Sep 2020 17:05:18 +0000 https://news.deepgeniusai.com/?p=9865 Google is returning to using humans for YouTube moderation after repeated errors with its AI system. Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They’re the unsung heroes. AI... Read more »

The post Google returns to using human YouTube moderators after AI errors appeared first on AI News.

]]>
Google is returning to using humans for YouTube moderation after repeated errors with its AI system.

Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They’re the unsung heroes.

AI has been hailed as helping to deal with some of the aforementioned issues. Either by automating the moderation process entirely or by offering a helping hand to humans.

Google was left with little choice but to give more power to its AI moderators as the COVID-19 pandemic took hold… but it hasn’t been smooth sailing.

In late August, YouTube said that it had removed 11.4 million videos over the three months prior–the most since the site launched in 2005.

That figure alone should raise a few eyebrows. If a team of humans were removing that many videos, they probably deserve quite the pay rise.

Of course, most of the video removals weren’t done by humans. Many of the videos didn’t even violate the guidelines.

Neal Mohan, chief product officer at YouTube, told the Financial Times:

“One of the decisions we made [at the beginning of the COVID-19 pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”

Some of the removals left content creators bewildered, angry, and out of pocket in some cases.

Around 320,000 of the videos taken down were appealed, and half of those were reinstated.
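Taken together with the 11.4 million removals reported above, those appeal figures give a rough sense of the scale of the error rate. A quick back-of-the-envelope calculation (using only the numbers reported in the article; the true error rate is likely higher, since not every wrongful removal is appealed):

```python
removed = 11_400_000        # videos removed over three months, per YouTube
appealed = 320_000          # removals that were appealed
reinstated = appealed // 2  # roughly half of appeals succeeded

appeal_rate = appealed / removed      # share of removals appealed: ~2.8%
overturn_rate = reinstated / removed  # removals confirmed wrong via appeal: ~1.4%
```

So at least around 160,000 videos, roughly 1.4 percent of all removals, were demonstrably taken down in error.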

Deciding what content to ultimately remove feels like one of the many tasks that need human involvement. Humans are much better at detecting nuance and things like sarcasm.

However, the sheer scale of content needing to be moderated also requires an AI to help automate some of that process.

“Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” Mohan said. “That’s the power of machines.”
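Mohan’s percentages translate into concrete volumes when applied to the 11.4 million removals. A minimal sketch of the arithmetic (lower bounds only, since he said “over” 50 and 80 percent):

```python
removed = 11_400_000

zero_views = int(removed * 0.50)  # "over 50 percent" removed before a single view: at least ~5.7M
under_ten = int(removed * 0.80)   # "over 80 percent" removed with fewer than 10 views: at least ~9.1M
```

In other words, the machines caught most violating content before it reached any meaningful audience, which is the trade-off YouTube weighed against the wrongful removals.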

AI can also help to protect humans from the worst of the content. Content detection systems are being built to automatically blur material such as child abuse just enough that human moderators can identify and remove it, while limiting the psychological impact on them.

Some believe AI is better suited to deciding what content should be removed, applying logic free of a human’s natural biases such as political leaning, but human biases are known to seep into algorithms regardless.

In May, YouTube admitted to deleting messages critical of the Chinese Communist Party (CCP). YouTube later blamed an “error with our enforcement systems” for the mistakes. Senator Josh Hawley even wrote (PDF) to Google CEO Sundar Pichai seeking answers to “troubling reports that your company has resumed its long pattern of censorship at the behest of the Chinese Communist Party.”

Google appears to have quickly realised that replacing humans entirely with AI is rarely a good idea. The company says many of the human moderators who were “put offline” during the pandemic are now coming back.

(Photo by Rachit Tank on Unsplash)

The post Google returns to using human YouTube moderators after AI errors appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/09/21/google-human-youtube-moderators-ai-errors/feed/ 0
Nvidia and ARM will open ‘world-class’ AI centre in Cambridge https://news.deepgeniusai.com/2020/09/14/nvidia-arm-world-class-ai-centre-cambridge/ https://news.deepgeniusai.com/2020/09/14/nvidia-arm-world-class-ai-centre-cambridge/#respond Mon, 14 Sep 2020 12:52:49 +0000 https://news.deepgeniusai.com/?p=9848 Nvidia is already putting its $40 billion ARM acquisition to good use by opening a “world-class” AI centre in Cambridge. British chip designer ARM’s technology is at the heart of most mobile devices. Meanwhile, Nvidia’s GPUs are increasingly being used for AI computation in servers, desktops, and even things like self-driving vehicles. However, Nvidia was... Read more »

The post Nvidia and ARM will open ‘world-class’ AI centre in Cambridge appeared first on AI News.

]]>
Nvidia is already putting its $40 billion ARM acquisition to good use by opening a “world-class” AI centre in Cambridge.

British chip designer ARM’s technology is at the heart of most mobile devices. Meanwhile, Nvidia’s GPUs are increasingly being used for AI computation in servers, desktops, and even things like self-driving vehicles.

However, Nvidia was most interested in ARM’s presence in edge devices—which it estimates to be in the region of 180 billion.

Jensen Huang, CEO of Nvidia, said:

“ARM is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make ARM even more incredible and take it to even higher levels.

We want to propel it — and the UK — to global AI leadership.”

There were concerns Nvidia’s acquisition would lead to job losses, but the company has promised to keep the business in the UK. The company says it’s planning to hire more staff and retain ARM’s iconic brand.

Nvidia is going further in its commitment to the UK by opening a new AI centre in Cambridge, which is home to an increasing number of exciting startups in the field such as FiveAI, Prowler.io, Fetch.ai, and Darktrace.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named.

Here, leading scientists, engineers and researchers from the UK and around the world will come to develop their ideas, collaborate and conduct their ground-breaking work in areas like healthcare, life sciences, self-driving cars, and other fields.”

The new centre will have five key features when it opens:

  • ARM/Nvidia-based supercomputer – set to be one of the most powerful AI supercomputers in the world.
  • Research Fellowships and Partnerships – Nvidia will use the centre to establish new UK-based research partnerships, expanding on successful relationships already established with King’s College and Oxford.
  • AI Training – Nvidia will make its AI curriculum available across the UK to help create job opportunities and prepare “the next generation of UK developers for AI leadership”.
  • Startup Accelerator – With so many of the world’s most exciting AI companies launching in the UK, the Nvidia Inception accelerator will help startups succeed by providing access to the aforementioned supercomputer, connections to researchers from NVIDIA and partners, technical training, and marketing promotion.
  • Industry Collaboration – AI is still in its infancy but will impact every industry to some extent. Nvidia says its new research facility will be an open hub for industry collaboration, building on the company’s existing relationships with the likes of GSK, Oxford Nanopore, and other leaders in their fields.

The UK is Europe’s leader in AI and the British government is investing heavily in ensuring it maintains its pole position. Beyond funding, the UK is also aiming to ensure it’s among the best places to run an AI company.

Current EU rules, especially around data, are often seen as limiting the development of European AI companies when compared to elsewhere in the world. While the UK will have to avoid accusations of a so-called “bonfire of regulations” post-Brexit, data collection regulation is likely one area that will be relaxed.

In the UK’s historic trade deal signed with Japan last week, several enhancements were made over the blanket EU-Japan deal signed earlier this year. Among the perceived improvements is the “free flow of data” by not enforcing localisation requirements, and that algorithms can remain private.

UK trade secretary Liz Truss said: “The agreement we have negotiated – in record time and in challenging circumstances – goes far beyond the existing EU deal, as it secures new wins for British businesses in our great manufacturing, food and drink, and tech industries.”

Japan and the UK, as two global tech giants, are expected to deepen their collaboration in the coming years—building on the trade deal signed last week.

Shigeki Ishizuka, Chairman of the Japan Electronics and Information Technology Industries Association, said: “We are confident that this mutual relationship will be further strengthened as an ambitious agreement that will contribute to the promotion of cooperation in research and development, the promotion of innovation, and the further expansion of inter-company collaboration.”

Nvidia’s investment shows that it has confidence in the UK’s strong AI foundations continuing to gain momentum in the coming years.

(Photo by A Perry on Unsplash)

The post Nvidia and ARM will open ‘world-class’ AI centre in Cambridge appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/09/14/nvidia-arm-world-class-ai-centre-cambridge/feed/ 0
Global spending on AI ‘expected to double in four years’, says IDC https://news.deepgeniusai.com/2020/08/27/global-spending-ai-110-billion-2024/ https://news.deepgeniusai.com/2020/08/27/global-spending-ai-110-billion-2024/#comments Thu, 27 Aug 2020 00:16:50 +0000 https://news.deepgeniusai.com/?p=9830 Worldwide spending on artificial intelligence (AI) is forecast to double over the coming for years to hit $110 billion by 2024, according to new data from IDC. The figure, which comes from the analyst firm’s latest Worldwide Artificial Intelligence Spending Guide, calculates a CAGR of 20.1% as adopting AI becomes a ‘must’ in the enterprise.... Read more »

The post Global spending on AI ‘expected to double in four years’, says IDC appeared first on AI News.

]]>
Worldwide spending on artificial intelligence (AI) is forecast to double over the coming four years to hit $110 billion by 2024, according to new data from IDC.

The figure, which comes from the analyst firm’s latest Worldwide Artificial Intelligence Spending Guide, calculates a CAGR of 20.1% as adopting AI becomes a ‘must’ in the enterprise.
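The headline “doubling” and the 20.1% CAGR are consistent with each other, as a quick compound-growth check shows. The sketch below uses only the figures IDC reported; the implied 2020 baseline (~$53 billion) is derived from them rather than stated in the source:

```python
def compound_growth(start, cagr, years):
    """Project a value forward under a constant compound annual growth rate."""
    return start * (1 + cagr) ** years

# A 20.1% CAGR sustained over four years multiplies spending by ~2.08,
# i.e. it roughly doubles. Working backwards from the $110bn 2024 forecast:
base_2020 = 110 / (1 + 0.201) ** 4  # implied 2020 spending, ~$52.9bn
```

Projecting `base_2020` forward four years at 20.1% recovers the $110 billion forecast, which is where the “expected to double” framing comes from.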

In particular, companies will utilise AI to deliver a better customer experience, as well as help employees to become better at their jobs. Automated customer service agents, sales process recommendation and automation, as well as automated threat intelligence and prevention, are the primary use cases outlined by IDC.

Retail and banking are the two industries most likely to splurge in the coming years. The former, unsurprisingly, will focus more on customer experience, while the latter will invest in fraud analysis and investigation, as well as program advisors and recommendation systems.

Other industries have hit something of a proverbial wall, primarily as a result of Covid-19. Transportation, as well as the services industry – including leisure and hospitality – has already struggled with the pandemic. Naturally, IDC argued, AI investments will be on the back burner here in 2020. Yet the pandemic has spurred some innovation; the research specifically noted hospitals that were using AI to speed up Covid-19 diagnosis and testing.

“Companies will adopt AI – not just because they can, but because they must,” said Ritu Jyoti, program vice president for artificial intelligence at IDC. “AI is the technology that will help businesses to be agile, innovate, and scale. The companies that become ‘AI powered’ will have the ability to synthesise information, the capacity to learn, and the capability to deliver insights at scale.”

In other words, leading organisations will be able to use AI to convert data into information and insights, understand those relationships and apply those insights to business problems, and then support decisions and bring through automation.

Sounds simple when it’s put like that.

(Photo by Fabian Blank on Unsplash)


The post Global spending on AI ‘expected to double in four years’, says IDC appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/08/27/global-spending-ai-110-billion-2024/feed/ 1
The White House is set to boost AI funding by 30 percent https://news.deepgeniusai.com/2020/08/19/white-house-boost-ai-funding-30-percent/ https://news.deepgeniusai.com/2020/08/19/white-house-boost-ai-funding-30-percent/#comments Wed, 19 Aug 2020 16:11:48 +0000 https://news.deepgeniusai.com/?p=9824 A budget proposal from the White House would boost funding for AI by around 30 percent as the US aims to retain its technological supremacy. Countries around the world are vastly increasing their budgets for AI, and with good reason. Just look at Gartner’s Hype Cycle released yesterday to see how important the technology is... Read more »

The post The White House is set to boost AI funding by 30 percent appeared first on AI News.

]]>
A budget proposal from the White House would boost funding for AI by around 30 percent as the US aims to retain its technological supremacy.

Countries around the world are vastly increasing their budgets for AI, and with good reason. Just look at Gartner’s Hype Cycle released yesterday to see how important the technology is expected to be over the next decade.

Russian president Vladimir Putin famously said back in 2017 that the nation which leads in AI “will become the ruler of the world”. Putin said that AI offers unprecedented power, including military power, to any government that leads in the field.

China, the third global superpower, has also embarked on a major national AI strategy. In July 2017, The State Council of China released the “New Generation Artificial Intelligence Development Plan” to build a domestic AI industry worth around $150 billion over the next few years and to become the leading AI power by 2030.

Naturally, the US isn’t going to give that top podium spot to China without a fight.

The White House has proposed (PDF) a 30 percent hike in spending on AI and quantum computing. Around $1.5 billion would be allocated to AI funding and $699 million to quantum technology.

According to a report published by US national security think tank Center for a New American Security (CNAS), Chinese officials see an AI ‘arms race’ as a threat to global peace.

The CNAS fears that integrating AI into military resources and communications may breach current international norms and lead to accidental conflict.

China and the US have been vying to become the top destination for AI investments. Figures published by ABI Research at the end of last year suggested that the US reclaimed the top spot for AI investments back from China, which overtook the Americans the year prior. ABI expects the US to reach a 70 percent share of global AI investments.

Lian Jye Su, Principal Analyst at ABI Research, said: 

“The United States is reaping the rewards from its diversified AI investment strategy. 

Top AI startups in the United States come from various sectors, including self-driving cars, industrial manufacturing, robotics process automation, data analytics, and cybersecurity.”

The UK, unable to match the levels of AI research funding deployed by the likes of the US and China, is taking a different approach.

An index compiled by Oxford Insights last year ranked the UK number one for AI readiness in Europe and only second on the world stage behind Singapore. The US is in fourth place, while China only just makes the top 20.

The UK has focused on AI policy and harnessing the talent from its world-leading universities to ensure the country is ready to embrace the technology’s opportunities.

A dedicated AI council in the UK features:

  • Ocado’s Chief Technology Officer, Paul Clarke
  • Dame Patricia Hodgson, Board Member of the Centre for Data Ethics and Innovation 
  • The Alan Turing Institute Chief Executive, Professor Adrian Smith
  • AI for good founder Kriti Sharma
  • UKRI chief executive Mark Walport
  • Founding Director of the Edinburgh Centre for Robotics, Professor David Lane

British Digital Secretary Jeremy Wright stated: “Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting the best global tech talent. But we must not be complacent.”

Growing cooperation between the UK and US on a number of technological endeavours could, if similarly applied to AI, harness the strengths of both nations and help maintain their leadership in the field.

(Photo by Louis Velazquez on Unsplash)

The post The White House is set to boost AI funding by 30 percent appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/08/19/white-house-boost-ai-funding-30-percent/feed/ 1