Machine Learning – AI News (https://news.deepgeniusai.com)

From experimentation to implementation: How AI is proving its worth in financial services (15 December 2020)

For financial institutions, recovering from the pandemic will put an end to tentative experiments with artificial intelligence (AI) and machine learning (ML), and demand their large-scale adoption. The crisis has required financial organisations to respond to customer needs around the clock. Many are therefore transforming with ever-increasing pace, but they must ensure that their core critical operations continue to run smoothly. This has sparked an interest in AI and ML solutions, which reduce the need for manual intervention in operations, significantly improve security and free up time for innovation. Reducing the time between the generation of an idea and it delivering value for the business, AI and ML promise long-term, strategic advantages for organisations.

We’re now seeing banks transforming into digitally driven enterprises akin to big tech firms, building capabilities that enable a relentless focus on customers. So how can banks and finance institutions make the most of AI, and what are the key use cases in practice?

Benefits across the business

Many financial services firms had already adopted AI and ML prior to the pandemic. However, many had difficulties identifying which key functions benefit most from AI, and so the technology did not always deliver the returns expected. This is set to change in the coming months: increased AI and ML deployment will be at the heart of the economic recovery from COVID-19, and the pandemic has highlighted particular areas where AI should be applied. These range from informing credit decisions and preventing fraud, to improving the customer experience through frictionless, 24/7 interactions.

Some specific financial services processes that can be improved by AI include:

Document processing with intelligent automation

Intelligent and robotic process automation optimise various functions, enhance efficiency, and improve the overall speed and accuracy of core financial processes, leading to substantial cost-savings. One area that has risen in prominence is e-KYC, or ‘electronic know-your-customer’. This is a remote, paperless process that reduces the bureaucratic costs of crucial ‘know-your-customer’ protocols, such as verification of client identities and signatures.

This task once involved repetitive, mundane actions with considerable effort required just to keep track of document handling, loan disbursement and repayment, as well as regulatory reporting of the entire process. However, this year, organisations are embracing intelligent automation platforms that manage, interpret and extract unstructured data, including text, images, scanned documents (handwritten and electronic), faxes, and web content. Running on an NLP (natural language processing) engine, which identifies any missing, unseen, and ill-formed data, these platforms offer near-perfect accuracy and higher reliability. Average handling time is reduced, and firms gain a significant competitive advantage through an improved customer experience.
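To make the idea concrete, below is a minimal, hypothetical sketch of the extraction-and-validation step such platforms automate. The field names and patterns are invented for illustration; production systems rely on trained NLP models rather than hand-written rules.

```python
import re

# Illustrative only: a toy extractor that pulls common KYC fields out of
# OCR'd document text and flags anything missing or ill-formed. Real
# intelligent-automation platforms use trained NLP models, not regexes.
FIELD_PATTERNS = {
    "name": re.compile(r"Name[:\s]+([A-Za-z ,.'-]+)"),
    "date_of_birth": re.compile(r"Date of Birth[:\s]+(\d{2}/\d{2}/\d{4})"),
    "account_number": re.compile(r"Account (?:No|Number)[:\s]+(\d{8,12})"),
}

def extract_kyc_fields(ocr_text: str) -> dict:
    """Return extracted fields plus a list of problems for human review."""
    extracted, issues = {}, []
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            extracted[field] = match.group(1).strip()
        else:
            issues.append(f"missing or unreadable: {field}")
    return {"fields": extracted, "issues": issues}

if __name__ == "__main__":
    sample = "Name: Jane Doe\nDate of Birth: 02/04/1985\nAccount Number: 12345678"
    print(extract_kyc_fields(sample))
```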

Efficient and thorough customer support

Virtual assistants can respond to customer needs with minimal employee input. They are a straightforward means of increasing productivity: the time and effort spent on generic customer queries is reduced, freeing up teams to focus on longer-term projects that drive innovation across the business.

We’re all familiar with chatbots on e-commerce sites, and such solutions will become increasingly common in the financial services industry, with organisations such as JP Morgan now making use of these bots to streamline their back-office operations and strengthen customer support. JP Morgan’s platform includes COIN, short for ‘contract intelligence’, which runs on an ML system powered by the bank’s private cloud network. As well as creating appropriate responses to general queries, COIN automates legal filing tasks, reviews documents, handles basic IT requests such as password resets, and creates new tools for both bankers and clients with greater proficiency and less human error.

Risk management analytics

Estimating creditworthiness is largely based on the likelihood of an individual or business repaying a loan. Determining the chances of default underpins the risk management processes at all lending organisations. Even with impeccable data, assessing this has its difficulties, as some individuals and organisations can be untruthful about their ability to pay their loans back.

To combat this, companies such as Lenddo and ZestFinance are using AI for risk assessment, and to determine an individual’s creditworthiness. Credit bureaus such as Equifax also use AI, ML and advanced data and analytical tools to analyse alternate sources in the evaluation of risk, and gain customer insight in the process.

Lenders once used a limited set of data, such as annual salaries and credit scores, for this process. However, thanks to AI, organisations are now able to consider an individual’s entire digital financial footprint to determine the likelihood of default. In addition to traditional data sets, the analysis of this alternative data is particularly useful in determining the creditworthiness of individuals without conventional records of loan or credit history.
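As a rough illustration of the underlying modelling task (and not the proprietary approach of any vendor named above), the following sketch trains a default-risk classifier on a mix of traditional and synthetic "alternative" features using scikit-learn.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical sketch: score default risk from traditional features
# (salary, credit score) plus "alternative" digital-footprint features.
# The data here is synthetic; real lenders use far richer signals.
rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.normal(45_000, 15_000, n),   # annual salary
    rng.normal(650, 80, n),          # bureau credit score
    rng.poisson(12, n),              # monthly bills paid on time
    rng.uniform(0, 1, n),            # utility/telco payment regularity
])
# Synthetic label: lower salary/score/regularity -> higher default risk
logit = -2 + 0.00002 * (60_000 - X[:, 0]) + 0.01 * (600 - X[:, 1]) - 1.5 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```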

The time to adopt is now

The way that businesses and clients interact with each other has changed irreversibly this year, and the finance industry is no different. Before the urgency demanded by the pandemic, financial institutions had been experimenting with AI and ML on a limited scale – mainly as a tick-box exercise in an effort to ‘keep up with the Joneses’. The widespread adoption that has been taking place this year stems from the need to truly innovate and increase resilience across the sector.

Banks and financial institutions are now aware of the key areas that benefit from AI, such as greater efficiency in back office operations, and significant improvements in customer engagement. A transformation process that was in its infancy prior to Covid-19 has accelerated and is fast becoming the standard approach. What’s more, financial organisations that are embracing AI now and prioritising its full implementation will be best placed to reap its rewards in the future.

(Photo by Jeffrey Blum on Unsplash)

Algorithmia: AI budgets are increasing but deployment challenges remain (10 December 2020)

A new report from Algorithmia has found that enterprise budgets for AI are rapidly increasing but significant deployment challenges remain.

Algorithmia’s 2021 Enterprise Trends in Machine Learning report features the views of 403 business leaders involved with machine learning initiatives.

Diego Oppenheimer, CEO of Algorithmia, says:

“COVID-19 has caused rapid change which has challenged our assumptions in many areas. In this rapidly changing environment, organisations are rethinking their investments and seeing the importance of AI/ML to drive revenue and efficiency during uncertain times.

Before the pandemic, the top concern for organisations pursuing AI/ML initiatives was a lack of skilled in-house talent. Today, organisations are worrying more about how to get ML models into production faster and how to ensure their performance over time.

While we don’t want to marginalise these issues, I am encouraged by the fact that the type of challenges have more to do with how to maximise the value of AI/ML investments as opposed to whether or not a company can pursue them at all.”

The main takeaway is that AI budgets are significantly increasing. 83 percent of respondents said they’ve increased their budgets compared to last year.

Despite a difficult year for many companies, business leaders are not being put off AI investments—in fact, they’re doubling down.

In Algorithmia’s summer survey, 50 percent of respondents said they plan to spend more on AI this year. Around one in five even said they “plan to spend a lot more.”

76 percent of businesses report they are now prioritising AI/ML over other IT initiatives. 64 percent say the priority of AI/ML has increased relative to other IT initiatives over the last 12 months.

With unemployment figures around the world at their highest for several years – even decades in some cases – it’s at least heartening to hear that 76 percent of respondents said they’ve not reduced the size of their AI/ML teams. 27 percent even report an increase.

43 percent say their AI/ML initiatives “matter way more than we thought” and close to one in four believe their AI/ML initiatives should have been their top priority sooner. Process automation and improving customer experiences are the two main areas for AI investments.

While it’s been all good news so far, there are AI deployment issues being faced by many companies which are yet to be addressed.

Governance is, by far, the biggest AI challenge being faced by companies. 56 percent of the businesses ranked governance, security, and auditability issues as a concern.

Regulatory compliance is vital but can be confusing, especially with different regulations between not just countries but even states. 67 percent of the organisations report having to comply with multiple regulations for their AI/ML deployments.

After governance, the next major hurdles are basic deployment and organisational challenges.

Basic integration issues were ranked by 49 percent of businesses as a problem. Furthermore, more job roles are being involved with AI deployment strategies than ever before—it’s no longer seen as just the domain of data scientists.

However, there’s perhaps some light at the end of the tunnel. Organisations are reporting improved outcomes when using dedicated, third-party MLOps solutions.

Bearing in mind that Algorithmia is itself a third-party MLOps vendor, the report claims organisations using such a platform spend an average of around 21 percent less on infrastructure costs. It also helps to free up their data scientists, who spend less time on model deployment.

You can find a full copy of Algorithmia’s report here (requires signup).

AWS announces nine major updates for its ML platform SageMaker (9 December 2020)

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store helps developers to access and share features that make it much easier to name, organise, find, and share sets of features among teams of developers and data scientists. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.
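For developers curious what this looks like in code, the SageMaker Python SDK exposes a FeatureGroup abstraction. The following is a hedged sketch: the group name, bucket, and IAM role are placeholders, and the exact calls may vary between SDK versions.

```python
# Hedged sketch using the SageMaker Python SDK (v2); names, bucket, and
# role are placeholders, and the API surface may differ by version.
import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

df = pd.DataFrame({
    "customer_id": ["c-001", "c-002"],
    "avg_basket_value": [42.5, 17.9],
    "event_time": [time.time()] * 2,
})
df["customer_id"] = df["customer_id"].astype("string")  # SDK expects string dtype

feature_group = FeatureGroup(name="customers", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)    # infer schema from the frame
feature_group.create(
    s3_uri="s3://my-example-bucket/feature-store",        # offline store location
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,                             # low-latency online reads
)

# Creation is asynchronous: wait until the group is ready before ingesting.
while feature_group.describe()["FeatureGroupStatus"] == "Creating":
    time.sleep(5)

feature_group.ingest(data_frame=df, max_workers=1, wait=True)
```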

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up is SageMaker Pipelines, which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm set up, debugging steps, and optimisation steps.

SageMaker Clarify may be one of the most important features debuted by AWS this week, given the ongoing scrutiny of bias in AI systems.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turn to often time-consuming open-source tools, developers can use the integrated solution to quickly try and counter any bias in models.
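As a rough idea of the developer workflow, the snippet below sketches a pre-training bias report using the clarify module of the SageMaker Python SDK. The paths, role, and column names are placeholders, and argument names may differ between SDK versions.

```python
# Hedged sketch of a pre-training bias report with the SageMaker Python SDK's
# clarify module; paths, role, and column names are placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-example-bucket/train.csv",
    s3_output_path="s3://my-example-bucket/clarify-report",
    label="approved",                      # target column
    headers=["approved", "income", "age", "gender"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],         # the favourable outcome
    facet_name="gender",                   # attribute to check for bias
)

# Produces metrics such as class imbalance before any model is trained.
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)
```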

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.

Next up, we have Distributed Training on SageMaker which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager also provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard which tracks and provides a visual report on the operation of the deployed models within the SageMaker console.

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers who have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

TensorFlow is now available for those shiny new ARM-based Macs (19 November 2020)

A new version of machine learning library TensorFlow has been released with optimisations for Apple’s new ARM-based Macs.

While still technically in pre-release, the Mac-optimised TensorFlow fork supports native hardware acceleration on Mac devices with M1 or Intel chips through Apple’s ML Compute framework.
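At the time of writing, the pre-release fork exposes device selection through an ML Compute shim. The snippet below reflects that interim API as documented in the fork; it is likely to change as the work is upstreamed into mainline TensorFlow.

```python
# Pre-release apple/tensorflow_macos fork only: select the ML Compute device.
# This interim API is expected to change as Apple's work is upstreamed.
import tensorflow as tf
from tensorflow.python.compiler.mlcompute import mlcompute

mlcompute.set_mlc_device(device_name="gpu")  # "cpu", "gpu", or "any"

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```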

The new TensorFlow release boasts an over-10x speed improvement for common training tasks. While impressive, this has to be taken in context: the GPU was not previously used for training tasks at all.

A look at the published benchmarks still indicates a substantial gap between the Intel and M1-based Macs across various machine learning models.

In a blog post, Pankaj Kanwar, Tensor Processing Units Technical Program Manager at Google, and Fred Alcober, TensorFlow Product Marketing Lead at Google, wrote:

“These improvements, combined with the ability of Apple developers being able to execute TensorFlow on iOS through TensorFlow Lite, continue to showcase TensorFlow’s breadth and depth in supporting high-performance ML execution on Apple hardware.”

We can only hope that running these workloads doesn’t turn MacBooks into expensive frying pans—but the remarkable efficiency they’ve displayed so far gives little cause for concern.

NVIDIA DGX Station A100 is an ‘AI data-centre-in-a-box’ (16 November 2020)

NVIDIA has unveiled its DGX Station A100, an “AI data-centre-in-a-box” powered by up to four 80GB versions of the company’s record-setting GPU.

The A100 Tensor Core GPU set new MLPerf benchmark records last month—outperforming CPUs by up to 237x in data centre inference. In November, Amazon Web Services made eight A100 GPUs available in each of its P4d instances.

For those who prefer their hardware local, the DGX Station A100 is available in configurations with either four 80GB or four 40GB A100 GPUs. The monstrous 80GB version of the A100 has twice the memory it had when the GPU was originally unveiled just six months ago.

“We doubled everything in this system to make it more effective for customers,” said Paresh Kharya, senior director of product management for accelerated computing at NVIDIA.

NVIDIA says the two configurations provide options for data science and AI research teams to select a system according to their unique workloads and budgets.

Charlie Boyle, VP and GM of DGX systems at NVIDIA, commented:

“DGX Station A100 brings AI out of the data centre with a server-class system that can plug in anywhere.

Teams of data science and AI researchers can accelerate their work using the same software stack as NVIDIA DGX A100 systems, enabling them to easily scale from development to deployment.”

The memory capacity of the DGX Station A100 powered by the 80GB GPUs is now 640GB, enabling much larger datasets and models.

“To power complex conversational AI models like BERT Large inference, DGX Station A100 is more than 4x faster than the previous generation DGX Station. It delivers nearly a 3x performance boost for BERT Large AI training,” NVIDIA wrote in a release.

DGX A100 640GB configurations can be integrated into the DGX SuperPOD Solution for Enterprise for unparalleled performance. Such “turnkey AI supercomputers” are available in units consisting of 20 DGX A100 systems.

Since announcing its acquisition of ARM, NVIDIA has continued to double down on its investment in the UK and its local talent.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named,” Huang said in September. “We want to propel ARM – and the UK – to global AI leadership.”

NVIDIA’s latest supercomputer, the Cambridge-1, is being installed in the UK and will be one of the first SuperPODs with DGX A100 640GB systems. Cambridge-1 will initially be used by local pioneering companies to supercharge healthcare research.

Dr Kim Branson, SVP and Global Head of AI and ML at GSK, commented:

“Because of the massive size of the datasets we use for drug discovery, we need to push the boundaries of hardware and develop new machine learning software.

We’re building new algorithms and approaches in addition to bringing together the best minds at the intersection of medicine, genetics, and artificial intelligence in the UK’s rich ecosystem.

This new partnership with NVIDIA will also contribute additional computational power and state-of-the-art AI technology.”

The use of AI for healthcare research has received extra attention due to the coronavirus pandemic. A recent simulation of the coronavirus, the largest molecular simulation ever, simulated 305 million atoms and was powered by 27,000 NVIDIA GPUs.

Several promising COVID-19 vaccines in late-stage trials have emerged in recent days, raising hopes that life could be mostly back to normal by summer. However, we never know when the next pandemic may strike, and there are still many challenges we all face both in and out of healthcare.

Systems like the DGX Station A100 help to ensure that – whatever challenges we face now and in the future – researchers have the power they need for their vital work.

Both configurations of the DGX Station A100 are expected to begin shipping this quarter.

Synthesized’s free tool aims to detect and remove algorithmic biases (12 November 2020)

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. These biases, often unconsciously, end up in algorithms which are designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content—through to facial recognition systems which flag some races and genders more than others.

A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Dr Nicolai Baldin, CEO and Founder of Synthesized, said:

“The reputational risk of all organisations is under threat due to biased data and we’ve seen this will no longer be tolerated at any level. It’s a burning priority now and must be dealt with as a matter of urgency, both from a legal and ethical standpoint.”

Last year, Algorithmic Justice League founder Joy Buolamwini gave a presentation during the World Economic Forum on the need to fight AI bias. Buolamwini highlighted the massive disparities in effectiveness when popular facial recognition algorithms were applied to various parts of society.

Synthesized claims its platform is able to automatically identify bias across data attributes like gender, age, race, religion, sexual orientation, and more. 

The platform was designed to be simple to use, with no coding knowledge required. Users only have to upload a structured data file – as simple as a spreadsheet – to begin analysing it for potential biases. A ‘Total Fairness Score’ is then provided to show what percentage of the dataset contains biases.
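As a crude illustration of the kind of skew such tools surface (this is not Synthesized's algorithm or scoring method), a spreadsheet-style dataset can be checked for group-level outcome disparities in a few lines of pandas.

```python
import pandas as pd

# Illustrative only: a crude demographic-parity check on a spreadsheet-style
# dataset. This is NOT Synthesized's algorithm or scoring method, just a way
# to see the kind of skew such tools are designed to surface.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
    "approved": [0,   1,   1,   1,   0,   0,   1,   1],
})

rates = df.groupby("gender")["approved"].mean()
print(rates)                                  # approval rate per group
disparity = rates.max() - rates.min()
print(f"approval-rate gap between groups: {disparity:.2f}")
```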

“Synthesized’s Community Edition for Bias Mitigation is one of the first offerings specifically created to understand, investigate, and root out bias in data,” explains Baldin. “We designed the platform to be very accessible, easy-to-use, and highly scalable, as organisations have data stored across a huge range of databases and data silos.”

Some examples of how Synthesized’s tool could be used across industries include:

  • In finance, to create fairer credit ratings
  • In insurance, for more equitable claims
  • In HR, to eliminate biases in hiring processes
  • In universities, for ensuring fairness in admission decisions

Synthesized’s platform uses a proprietary algorithm which is said to be quicker and more accurate than existing techniques for removing biases in datasets. A new synthetic dataset is created which, in theory, should be free of biases.

“With the generation of synthetic data, Synthesized’s platform gives its users the ability to equally distribute all attributes within a dataset to remove bias and rebalance the dataset completely,” the company says.

“Users can also manually change singular data attributes within a dataset, such as gender, providing granular control of the rebalancing process.”

If only MIT had used such a tool on the dataset it was forced to remove in July after it was found to be racist and misogynistic.

You can find out more about Synthesized’s tool and how to get started here.

(Photo by Agence Olloweb on Unsplash)

Algorithmia announces Insights for ML model performance monitoring (5 November 2020)

Seattle-based Algorithmia has announced Insights, a solution for monitoring the performance of machine learning models.

Algorithmia specialises in artificial intelligence operations and management. The company is backed by Google LLC and focuses on simplifying AI projects for enterprises.

Diego Oppenheimer, CEO of Algorithmia, says:

“Organisations have specific needs when it comes to ML model monitoring and reporting.

For example, they are concerned with compliance as it pertains to external and internal regulations, model performance for improvement of business outcomes, and reducing the risk of model failure.

Algorithmia Insights helps users overcome these issues while making it easier to monitor model performance in the context of other operational metrics and variables.” 

Insights aims to help enterprises to monitor the performance of their machine learning models. Many organisations currently don’t have that ability, or use a complex variety of tools and/or manual processes.

Operational metrics like execution time and request identification are combined with user-defined metrics such as confidence and accuracy to identify data skews, negative feedback loops, and model drift.

Model drift, in layman’s terms, is the degradation of a model’s prediction power due to changes in the environment—which subsequently impacts the relationship between variables. A far more detailed explanation can be found here for those interested.
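One simple, widely used way to flag this kind of drift (not specific to Algorithmia's product) is to compare the distribution of live inputs against the distribution seen at training time with a two-sample test, for example:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical sketch: flag data drift by comparing a feature's live
# distribution against the distribution seen at training time.
rng = np.random.default_rng(1)
training_values = rng.normal(loc=0.0, scale=1.0, size=10_000)   # baseline
live_values = rng.normal(loc=0.4, scale=1.0, size=2_000)        # shifted inputs

statistic, p_value = ks_2samp(training_values, live_values)
if p_value < 0.01:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.1e})")
```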

Algorithmia has teamed up with monitoring service Datadog to allow customers to stream operational and user-defined inference metrics from Algorithmia to Kafka, and then into Datadog.

Ilan Rabinovitch, Vice President of Product and Community at Datadog, comments:

“ML models are at the heart of today’s business. Understanding how they perform both statistically and operationally is key to success.

By combining the findings of Algorithmia Insights and Datadog’s deep visibility into code and integration, our mutual customers can drive more accurate and performant outcomes from their ML models.”

Through integration with Datadog and its Metrics API, customers can measure and monitor their ML models to immediately detect data drift, model drift, and model bias.
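For reference, submitting a custom model metric through Datadog's Metrics API looks roughly like the sketch below; the keys, metric name, and tags are placeholders, and the Kafka streaming path described above is handled separately.

```python
# Hedged sketch: pushing a user-defined model metric to Datadog with the
# official `datadog` Python package. Keys, metric names, and tags are
# placeholders.
import time
from datadog import initialize, api

initialize(api_key="YOUR_API_KEY", app_key="YOUR_APP_KEY")  # placeholders

api.Metric.send(
    metric="ml.model.prediction_confidence",
    points=[(time.time(), 0.87)],
    tags=["model:churn-classifier", "version:3"],
)
```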

(Photo by Chris Liverani on Unsplash)

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud (3 November 2020)

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.
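For context, the snippet below is a generic PyTorch illustration (not AWS-specific) of what opting into TF32 and FP16 mixed precision looks like in framework code on Ampere-class GPUs such as the A100.

```python
# Generic PyTorch sketch of the two precision modes mentioned above:
# TF32 for matrix maths and FP16 automatic mixed precision (AMP).
# Requires a CUDA-capable GPU to run.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 matmuls on Ampere GPUs
torch.backends.cudnn.allow_tf32 = True         # TF32 convolutions

model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()           # FP16 mixed-precision training

data = torch.randn(64, 512, device="cuda")
target = torch.randn(64, 512, device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():                # forward pass in reduced precision
    loss = torch.nn.functional.mse_loss(model(data), target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```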

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers are able to access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adaptor (EFA).

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, the research arm of Toyota is exploring how P4d can improve their existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started here.

NVIDIA sets another AI inference record in MLPerf (22 October 2020)

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks which cover the three main AI applications today: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last year, NVIDIA led all five benchmarks for both server and offline data centre scenarios with its Turing GPUs. A dozen companies participated.

23 companies participated in this year’s MLPerf but NVIDIA maintained its lead with the A100 outperforming CPUs by up to 237x in data centre inference.

For perspective, NVIDIA notes that a single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA.

“The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”

The widespread availability of NVIDIA’s AI platform through every major cloud and data centre infrastructure provider is unlocking huge potential for companies across various industries to improve their operations.

Microsoft’s new AI auto-captions images for the visually impaired (19 October 2020)

A new AI from Microsoft aims to automatically caption images in documents and emails so that software for visual impairments can read it out.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO) which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to help teach the AI how to best describe the pictures.

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

In order to benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. As of writing, Microsoft’s AI now ranks first on its leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers wanting to get started with building apps using Microsoft’s auto-captioning AI can already do so as it’s available in Azure Cognitive Services’ Computer Vision package.
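A hedged sketch of calling the captioning capability through the Azure Computer Vision SDK is shown below; the endpoint, key, and image URL are placeholders, and results depend on which captioning model the service is running.

```python
# Hedged sketch using the Azure Computer Vision SDK; endpoint, key, and the
# sample image URL are placeholders.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-subscription-key>"                                    # placeholder

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Ask the service to describe an image and print its best caption candidates.
analysis = client.describe_image("https://example.com/photo.jpg", max_candidates=1)
for caption in analysis.captions:
    print(f"{caption.text} (confidence {caption.confidence:.2f})")
```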

Microsoft’s impressive SeeingAI application – which uses computer vision to describe an individual’s surroundings for people suffering from vision loss – will be updated with features using the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)
