Benchmark – AI News
https://news.deepgeniusai.com

NVIDIA sets another AI inference record in MLPerf
Thu, 22 Oct 2020 09:16:41 +0000 – https://news.deepgeniusai.com/2020/10/22/nvidia-sets-another-ai-inference-record-mlperf/

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks that cover three of today's main AI applications: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last year, NVIDIA led all five benchmarks for both server and offline data centre scenarios with its Turing GPUs. A dozen companies participated.

Twenty-three companies participated in this year's MLPerf, but NVIDIA maintained its lead, with the A100 outperforming CPUs by up to 237x in data centre inference.

For perspective, NVIDIA notes that a single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.
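As a rough plausibility check, the two quoted figures can be related with some back-of-the-envelope arithmetic. This is illustrative only: the server count and socket count below are taken from the article's phrasing, not from NVIDIA's published methodology.

```python
# "Nearly 1,000 dual-socket CPU servers" vs one DGX A100 with eight GPUs:
# comparing per-GPU throughput to per-CPU-socket throughput gives a ratio
# in the same ballpark as the quoted "up to 237x" figure.
dgx_gpus = 8
cpu_servers = 1000
sockets_per_server = 2  # assumption: "dual-socket" per the article

per_gpu_vs_per_cpu = (cpu_servers * sockets_per_server) / dgx_gpus
print(per_gpu_vs_per_cpu)  # 250.0
```

The result lands within a few percent of the headline 237x number, which suggests the two claims describe the same underlying measurement.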

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA.

“The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”

The widespread availability of NVIDIA’s AI platform through every major cloud and data centre infrastructure provider is unlocking huge potential for companies across various industries to improve their operations.

Nvidia comes out on top in first MLPerf inference benchmarks
Thu, 07 Nov 2019 11:19:57 +0000 – https://news.deepgeniusai.com/2019/11/07/nvidia-comes-out-on-top-in-first-mlperf-inference-benchmarks/

The first benchmark results from the MLPerf consortium have been released and Nvidia is a clear winner for inference performance.

For those unaware, inference is the stage where a trained deep learning model is run on new incoming data to produce predictions, applying whatever it learned during training.
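To make the distinction concrete, here is a minimal NumPy sketch of inference: the "model" is just a frozen weight matrix, and inference applies it to unseen inputs without any further learning. The shapes, weights, and class count here are arbitrary illustrations, not any benchmark's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))  # stand-in for parameters learned in training

def infer(batch):
    """Apply the frozen model to incoming data and return class predictions."""
    logits = batch @ weights
    return logits.argmax(axis=1)

new_data = rng.standard_normal((5, 4))  # previously unseen inputs
predictions = infer(new_data)
print(predictions.shape)  # (5,)
```

Benchmarks like MLPerf measure how many of these forward passes a system can complete per second, under various latency and batching constraints.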

MLPerf is a consortium that aims to provide “fair and useful” standardised benchmarks for inference performance. MLPerf can be thought of as doing for inference what SPEC does for benchmarking CPUs and general system performance.

The consortium has released its first benchmarking results, a painstaking effort involving over 30 companies and over 200 engineers and practitioners. MLPerf’s first call for submissions led to over 600 measurements spanning 14 companies and 44 systems. 

However, for datacentre inference, only four of the processors are commercially available:

  • Intel Xeon P9282
  • Habana Goya
  • Google TPUv3
  • Nvidia Turing

Nvidia wasted no time in boasting that its performance beat the three other processors across various neural networks in both server and offline scenarios.

The easiest direct comparisons are possible in the ImageNet ResNet-50 v1.6 offline scenario where the greatest number of major players and startups submitted results.

In that scenario, Nvidia once again boasted the best performance on a per-processor basis with its Titan RTX GPU. Even though the 2x Google Cloud TPU v3-8 submission was backed by eight Intel Skylake processors, it only achieved similar performance to the SCAN 3XS DBP T496X2 Fluid, which used four Titan RTX cards (65,431.40 vs 66,250.40 inputs/second).

Ian Buck, GM and VP of Accelerated Computing at NVIDIA, said:

“AI is at a tipping point as it moves swiftly from research to large-scale deployment for real applications.

AI inference is a tremendous computational challenge. Combining the industry’s most advanced programmable accelerator, the CUDA-X suite of AI algorithms and our deep expertise in AI computing, NVIDIA can help datacentres deploy their large and growing body of complex AI models.”

However, it’s worth noting that the Titan RTX doesn’t support ECC memory so – despite its sterling performance – this omission may prevent its use in some datacentres.

Another interesting takeaway when comparing the Cloud TPU results against Nvidia is the performance difference when moving from offline to server scenarios.

  • Google Cloud TPU v3 offline: 32,716.00 inputs/second
  • Google Cloud TPU v3 server: 16,014.29 inputs/second
  • Nvidia SCAN 3XS DBP T496X2 Fluid offline: 66,250.40 inputs/second
  • Nvidia SCAN 3XS DBP T496X2 Fluid server: 60,030.57 inputs/second

As you can see, the Cloud TPU system's performance is cut by more than half when moving to the server scenario, while the SCAN 3XS DBP T496X2 Fluid system only drops around 10 percent.
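Those percentage drops can be verified directly from the published throughput numbers:

```python
# Offline vs server throughput (inputs/second) from the MLPerf submissions above.
results = {
    "Google Cloud TPU v3": {"offline": 32716.00, "server": 16014.29},
    "Nvidia SCAN 3XS DBP T496X2 Fluid": {"offline": 66250.40, "server": 60030.57},
}

# Fractional throughput lost when switching from offline to server mode.
drops = {
    system: 1 - r["server"] / r["offline"]
    for system, r in results.items()
}

for system, drop in drops.items():
    print(f"{system}: {drop:.1%} drop from offline to server")
# Google Cloud TPU v3: 51.1% drop
# Nvidia SCAN 3XS DBP T496X2 Fluid: 9.4% drop
```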

You can peruse MLPerf’s full benchmark results here.


AnTuTu’s latest benchmark tests AI chip performance
Wed, 30 Jan 2019 12:28:08 +0000 – https://news.deepgeniusai.com/2019/01/30/antutu-benchmark-ai-chip-performance/

We can now better scrutinise manufacturers’ claims about AI chip performance improvements thanks to AnTuTu’s latest benchmark.

If you’ve ever read a comprehensive smartphone review, you’ve likely heard of AnTuTu. The company’s smartphone benchmarking tool is often used for testing and comparing the CPU and 3D performance of devices.

With dedicated AI chips now appearing in devices from the mid-range to flagships, AnTuTu has decided it’s time for a benchmark to determine their performance.

In a blog post, AnTuTu says its benchmark uses two categories: ‘Image Classification’ and ‘Object Detection’.

AI News tested AnTuTu’s benchmark on a Huawei Mate 20 Pro which currently ranks second on AnTuTu’s general performance leaderboard for Android devices. Huawei often brags about the AI performance of its flagship devices.

The first test classifies 200 images as fast as possible using the Inception v3 neural network.

In the second test, a 600-frame video is analysed for objects using the MobileNet SSD neural network.

AnTuTu then delivers an overall benchmark score, along with the scores for each category.
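The overall structure of such a throughput test can be sketched as follows. The `classify` function here is a trivial stand-in, not AnTuTu's actual Inception v3 pipeline; the point is that the speed component comes from timing a fixed workload end to end.

```python
import time

def classify(image):
    # stand-in for a real neural-network forward pass
    return sum(image) % 10

images = [[i, i + 1, i + 2] for i in range(200)]  # the fixed 200-image workload

start = time.perf_counter()
labels = [classify(img) for img in images]
elapsed = time.perf_counter() - start

throughput = len(images) / elapsed  # images/second feeds the speed score
print(len(labels))  # 200
```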

Here is how our Mate 20 Pro fared:

  • Overall – 65,222
  • Image Classification – 41,717
  • Object Detection – 23,505

Each of the categories is further broken down into scores for ‘speed’ and ‘accuracy’. If accuracy is traded for speed, then a lower score will be given.

AnTuTu says this helps to prevent cheating by devices that process the data quickly but without producing the right answers. Smartphone manufacturers have been caught artificially inflating their benchmark results in the past, so this approach provides added confidence in the results.
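A hypothetical illustration of that idea (AnTuTu has not published its actual formula; the baseline accuracy and proportional scaling below are invented for this sketch): scale the speed score by how close the measured accuracy comes to a full-precision baseline, so a fast-but-wrong submission still scores poorly.

```python
def ai_score(images_per_second, accuracy, baseline_accuracy=0.78):
    # accuracy at or above the baseline earns the full speed score;
    # anything below scales the score down proportionally
    accuracy_factor = min(1.0, accuracy / baseline_accuracy)
    return images_per_second * accuracy_factor

honest = ai_score(images_per_second=120, accuracy=0.78)   # full credit
cheater = ai_score(images_per_second=300, accuracy=0.20)  # fast, mostly wrong
print(honest, cheater)  # 120.0 vs ~76.9
```

Under any scheme of this shape, trading accuracy for raw speed stops being a winning strategy, which is the behaviour AnTuTu describes.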

For a general look at the AI features in the Mate 20 Pro, see our accompanying video.

You can download the AI benchmark from AnTuTu here.
