From experimentation to implementation: How AI is proving its worth in financial services

For financial institutions, recovering from the pandemic will put an end to tentative experiments with artificial intelligence (AI) and machine learning (ML), and demand their large-scale adoption. The crisis has required financial organisations to respond to customer needs around the clock. Many are therefore transforming at an ever-increasing pace, but they must ensure that their core critical operations continue to run smoothly. This has sparked an interest in AI and ML solutions, which...

Algorithmia: AI budgets are increasing but deployment challenges remain

A new report from Algorithmia has found that enterprise budgets for AI are rapidly increasing but significant deployment challenges remain.

Algorithmia’s 2021 Enterprise Trends in Machine Learning report features the views of 403 business leaders involved with machine learning initiatives.

Diego Oppenheimer, CEO of Algorithmia, says:

“COVID-19 has caused rapid change which has challenged our assumptions in many areas. In this rapidly changing environment,...

AWS announces nine major updates for its ML platform SageMaker

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.
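For a sense of what that build, train, and deploy workflow looks like in practice, here is a minimal sketch using the sagemaker Python SDK; the container image, IAM role, and S3 bucket are placeholders rather than anything AWS announced.

```python
# A hedged sketch of SageMaker's build/train/deploy flow with the sagemaker
# Python SDK. The image URI, IAM role, and S3 path below are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<your-training-image-uri>",   # placeholder container image
    role="<your-sagemaker-execution-role>",  # placeholder IAM role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Train against data staged in S3, then stand up a real-time endpoint.
estimator.fit({"train": "s3://<your-bucket>/train/"})
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.xlarge")
```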

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS,...

TensorFlow is now available for those shiny new ARM-based Macs

A new version of machine learning library TensorFlow has been released with optimisations for Apple’s new ARM-based Macs.

While still technically in pre-release, the Mac-optimised TensorFlow fork supports native hardware acceleration on Mac devices with M1 or Intel chips through Apple’s ML Compute framework.
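Assuming the fork works as a drop-in replacement for stock TensorFlow, ordinary Keras training code along the lines of the sketch below is the sort of workload that picks up ML Compute acceleration.

```python
# A minimal Keras training script of the kind the Mac-optimised fork is meant
# to accelerate. Assumes the fork is installed in place of stock TensorFlow;
# the training code itself is unchanged.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# On an M1 or Intel Mac with the fork installed, this training loop is where
# hardware acceleration via ML Compute is applied under the hood.
model.fit(x_train, y_train, epochs=2, batch_size=128)
```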

The new TensorFlow release boasts an over 10x speed improvement for common training tasks. While impressive, this has to be taken in the context that the GPU...

NVIDIA DGX Station A100 is an ‘AI data-centre-in-a-box’

NVIDIA has unveiled its DGX Station A100, an “AI data-centre-in-a-box” powered by up to four 80GB versions of the company’s record-setting GPU.

The A100 Tensor Core GPU set new MLPerf benchmark records last month—outperforming CPUs by up to 237x in data centre inference. In November, Amazon Web Services made eight A100 GPUs available in each of its P4d instances.

For those who prefer their hardware local, the DGX Station A100 is available in either four 80GB A100...

Synthesized’s free tool aims to detect and remove algorithmic biases

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. Often unconsciously, these biases end up in the algorithms designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content—through to facial recognition systems which flag some races and genders more than others.
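Detecting such bias often starts with simple checks on how a model's decisions break down across groups. The sketch below is purely illustrative, not Synthesized's tool: it measures a demographic parity gap on made-up approval decisions.

```python
# A generic illustration of measuring one kind of algorithmic bias
# (the demographic parity gap) with pandas; the data is hypothetical.
import pandas as pd

# Hypothetical model decisions alongside a protected attribute.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
})

approval_rates = df.groupby("group")["approved"].mean()
parity_gap = approval_rates.max() - approval_rates.min()
print(approval_rates)
print(f"Demographic parity gap: {parity_gap:.2f}")  # 0 would mean equal rates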

A 2010 study (PDF) by...

Algorithmia announces Insights for ML model performance monitoring

Seattle-based Algorithmia has announced Insights, a solution for monitoring the performance of machine learning models.
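As an illustration of the kind of signals a monitoring layer captures, and not Algorithmia's actual API, the sketch below wraps a prediction call and records its latency, output, and model version.

```python
# A generic sketch of per-prediction monitoring: latency, the prediction
# itself, and a model tag. This is illustrative, not the Insights API.
import json
import time


def predict_with_monitoring(model_version, predict_fn, features):
    start = time.perf_counter()
    prediction = predict_fn(features)
    latency_ms = (time.perf_counter() - start) * 1000

    # In a real system this record would be shipped to a metrics pipeline
    # or dashboard rather than printed.
    record = {
        "model_version": model_version,
        "prediction": prediction,
        "latency_ms": round(latency_ms, 2),
        "timestamp": time.time(),
    }
    print(json.dumps(record))
    return prediction


# Example usage with a stand-in model.
predict_with_monitoring("credit-risk-v3", lambda x: sum(x) > 1.0, [0.4, 0.9])
```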

Algorithmia specialises in artificial intelligence operations and management. The company is backed by Google LLC and focuses on simplifying AI projects for enterprises that are just getting started.

Diego Oppenheimer, CEO of Algorithmia, says:

“Organisations have specific needs when it comes to ML model monitoring and reporting.

For example,...

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100...

NVIDIA sets another AI inference record in MLPerf

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks which cover three of the main AI applications in use today: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last...

Microsoft’s new AI auto-captions images for the visually impaired

A new AI from Microsoft aims to automatically caption images in documents and emails so that software for people with visual impairments can read them out.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO) which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to help teach the...
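As a heavily simplified, hypothetical illustration of that two-stage idea, rather than Microsoft's actual model, the toy sketch below pre-trains on image-tag pairs and then fine-tunes the same network on caption tokens.

```python
# Toy two-stage training sketch: learn a "visual vocabulary" from image-tag
# pairs, then fine-tune on captioned data. Purely illustrative; the data is
# random and the model is a stand-in, not the VIVO architecture.
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    """Toy model: image features -> token logits over a shared vocabulary."""
    def __init__(self, feat_dim=256, vocab_size=1000, hidden=256):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)
        self.decoder = nn.Linear(hidden, vocab_size)

    def forward(self, image_feats):
        return self.decoder(torch.relu(self.encoder(image_feats)))

model = TinyCaptioner()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stage 1: pre-train on (image, tag) pairs so the model learns which words
# correspond to which visual features.
image_feats = torch.randn(32, 256)          # dummy image features
tag_ids = torch.randint(0, 1000, (32,))     # dummy tag token ids
for _ in range(10):
    optimiser.zero_grad()
    loss_fn(model(image_feats), tag_ids).backward()
    optimiser.step()

# Stage 2: fine-tune on a smaller captioned dataset so the model learns to
# produce fluent descriptions on top of the vocabulary it already knows.
caption_token_ids = torch.randint(0, 1000, (32,))
for _ in range(10):
    optimiser.zero_grad()
    loss_fn(model(image_feats), caption_token_ids).backward()
    optimiser.step()
```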