NVIDIA breakthrough emulates images from small datasets for groundbreaking AI training – AI News, 7 December 2020

NVIDIA’s latest breakthrough emulates new images from existing small datasets with truly groundbreaking potential for AI training.

The company demonstrated its latest AI model using a small dataset – just a fraction of the size typically used for a Generative Adversarial Network (GAN) – of artwork from the Metropolitan Museum of Art.

From the dataset, NVIDIA’s AI was able to create new images which replicate the style of the original artist’s work. These images can then be used to help train further AI models.

The AI achieved this impressive feat by applying a new neural network training technique to the popular NVIDIA StyleGAN2 model.

The technique is called Adaptive Discriminator Augmentation (ADA) and NVIDIA claims that it reduces the number of training images required by 10-20x while still getting great results.

David Luebke, VP of Graphics Research at NVIDIA, said:

“These results mean people can use GANs to tackle problems where vast quantities of data are too time-consuming or difficult to obtain.

I can’t wait to see what artists, medical experts and researchers use it for.”

Healthcare is a particularly exciting field where NVIDIA’s research could be applied. For example, it could help to create cancer histology images to train other AI models.

The breakthrough addresses a problem common to many current datasets.

Large datasets are often required for AI training but aren't always available. And even when a large dataset exists, it is difficult to verify that its content is suitable and won't unintentionally introduce algorithmic bias.

Earlier this year, MIT was forced to remove a large dataset called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that, with 80 million images of just 32×32 pixels each, manual inspection of the dataset would be almost impossible and couldn't guarantee that all offensive images would be removed.
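
A rough back-of-the-envelope calculation illustrates the scale of the problem (the per-image review speed below is an assumption purely for illustration):

```python
# Back-of-the-envelope arithmetic: why manually auditing a dataset of
# 80 million 32x32 images is impractical. The review speed is an
# illustrative assumption, not a measured figure.

NUM_IMAGES = 80_000_000
BYTES_PER_IMAGE = 32 * 32 * 3      # 32x32 RGB, one byte per channel

raw_bytes = NUM_IMAGES * BYTES_PER_IMAGE
print(f"Raw pixel data: {raw_bytes / 1e9:.0f} GB")  # ~246 GB

# One reviewer spending one second per image, eight hours a day:
working_days = NUM_IMAGES * 1 / (8 * 3600)
print(f"Review time: {working_days:,.0f} working days")  # ~2,778 days
```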

By starting with a small dataset that can be feasibly checked manually, a technique like NVIDIA’s ADA could be used to create new images which emulate the originals and can scale up to the required size for training AI models.

In a blog post, NVIDIA wrote:

“It typically takes 50,000 to 100,000 training images to train a high-quality GAN. But in many cases, researchers simply don’t have tens or hundreds of thousands of sample images at their disposal.

With just a couple thousand images for training, many GANs would falter at producing realistic results. This problem, called overfitting, occurs when the discriminator simply memorizes the training images and fails to provide useful feedback to the generator.”
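
The adaptive scheme at the heart of ADA can be sketched in a few lines. This is a simplified illustration of the heuristic described in NVIDIA's paper – the discriminator's outputs on real images are monitored, and the augmentation probability is nudged up when the discriminator looks to be overfitting – with the function name and constants chosen for illustration, not taken from NVIDIA's code:

```python
# Sketch of Adaptive Discriminator Augmentation's control loop, based on
# the heuristic described in NVIDIA's ADA paper (Karras et al., 2020).
# Names and constants here are illustrative, not NVIDIA's implementation.

def update_augment_p(p, d_real_signs, target=0.6, step=0.01):
    """Adjust the probability `p` of augmenting images shown to the
    discriminator.

    d_real_signs: recent signs (+1/-1) of the discriminator's raw outputs
    on real training images. When most are positive, the discriminator is
    becoming too confident on the training set (a symptom of memorising
    it), so augmentation is increased; otherwise it is decreased.
    """
    r_t = sum(d_real_signs) / len(d_real_signs)  # overfitting heuristic
    p += step if r_t > target else -step
    return min(max(p, 0.0), 1.0)  # p must stay a valid probability

# The discriminator is confidently positive on every recent real image,
# so the augmentation probability is nudged upward:
p = update_augment_p(0.5, [+1, +1, +1, +1])  # p rises to ~0.51
```

In StyleGAN2-ADA this adjustment runs periodically throughout training, so the augmentation strength converges to whatever the small dataset actually needs.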

You can find NVIDIA’s full research paper here (PDF). The paper is being presented at this year’s NeurIPS conference as one of a record 28 NVIDIA Research papers accepted to the prestigious conference.

NVIDIA DGX Station A100 is an 'AI data-centre-in-a-box' – AI News, 16 November 2020

NVIDIA has unveiled its DGX Station A100, an “AI data-centre-in-a-box” powered by up to four 80GB versions of the company’s record-setting GPU.

The A100 Tensor Core GPU set new MLPerf benchmark records last month—outperforming CPUs by up to 237x in data centre inference. In November, Amazon Web Services made eight A100 GPUs available in each of its P4d instances.

For those who prefer their hardware local, the DGX Station A100 is available with either four 80GB or four 40GB A100 GPUs. The monstrous 80GB version of the A100 has twice the memory the GPU had when it was originally unveiled just six months ago.

“We doubled everything in this system to make it more effective for customers,” said Paresh Kharya, senior director of product management for accelerated computing at NVIDIA.

NVIDIA says the two configurations provide options for data science and AI research teams to select a system according to their unique workloads and budgets.

Charlie Boyle, VP and GM of DGX systems at NVIDIA, commented:

“DGX Station A100 brings AI out of the data centre with a server-class system that can plug in anywhere.

Teams of data science and AI researchers can accelerate their work using the same software stack as NVIDIA DGX A100 systems, enabling them to easily scale from development to deployment.”

With four 80GB GPUs, the DGX Station A100 offers 320GB of total GPU memory, enabling much larger datasets and models.

“To power complex conversational AI models like BERT Large inference, DGX Station A100 is more than 4x faster than the previous generation DGX Station. It delivers nearly a 3x performance boost for BERT Large AI training,” NVIDIA wrote in a release.

DGX A100 640GB configurations can be integrated into the DGX SuperPOD Solution for Enterprise for unparalleled performance. Such “turnkey AI supercomputers” are available in units consisting of 20 DGX A100 systems.

Since announcing its acquisition of ARM, NVIDIA has continued to double down on its investment in the UK and its local talent.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named,” Huang said in September. “We want to propel ARM – and the UK – to global AI leadership.”

NVIDIA’s latest supercomputer, the Cambridge-1, is being installed in the UK and will be one of the first SuperPODs with DGX A100 640GB systems. Cambridge-1 will initially be used by local pioneering companies to supercharge healthcare research.

Dr Kim Branson, SVP and Global Head of AI and ML at GSK, commented:

“Because of the massive size of the datasets we use for drug discovery, we need to push the boundaries of hardware and develop new machine learning software.

We’re building new algorithms and approaches in addition to bringing together the best minds at the intersection of medicine, genetics, and artificial intelligence in the UK’s rich ecosystem.

This new partnership with NVIDIA will also contribute additional computational power and state-of-the-art AI technology.”

The use of AI for healthcare research has received extra attention due to the coronavirus pandemic. A recent simulation of the coronavirus, the largest molecular simulation ever, simulated 305 million atoms and was powered by 27,000 NVIDIA GPUs.

Several promising COVID-19 vaccines in late-stage trials have emerged in recent days, raising hopes that life could be mostly back to normal by summer. But we never know when the next pandemic may strike, and there are still many challenges to face both in and out of healthcare.

Systems like the DGX Station A100 help to ensure that – whatever challenges we face now and in the future – researchers have the power they need for their vital work.

Both configurations of the DGX Station A100 are expected to begin shipping this quarter.

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon's cloud – AI News, 3 November 2020

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers are able to access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adaptor (EFA).

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, Toyota Research Institute – the research arm of Toyota – is exploring how P4d can improve its existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started here.

NVIDIA sets another AI inference record in MLPerf – AI News, 22 October 2020

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks which cover the three main AI applications today: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last year, NVIDIA led all five benchmarks for both server and offline data centre scenarios with its Turing GPUs. A dozen companies participated.

23 companies participated in this year’s MLPerf but NVIDIA maintained its lead with the A100 outperforming CPUs by up to 237x in data centre inference.

For perspective, NVIDIA notes that a single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA.

“The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”

The widespread availability of NVIDIA’s AI platform through every major cloud and data centre infrastructure provider is unlocking huge potential for companies across various industries to improve their operations.

GTC 2020: Using AI to help put COVID-19 in the rear-view mirror – AI News, 5 October 2020

This year's GTC is Nvidia's biggest event yet, but – like the rest of the world – it's had to adapt to the unusual circumstances we all find ourselves in. CEO Jensen Huang swapped his usual big stage for nine clips with such exotic backdrops as his kitchen.

AI is helping with COVID-19 research around the world, and much of it is being powered by NVIDIA GPUs. Drug discovery is a daunting task: new drugs often cost over $2.5 billion in research and development – a figure that doubles every nine years – and 90 percent of efforts fail.

Nvidia wants to help speed up the discovery of vital medicines while reducing costs.

“COVID-19 hits home this urgency [for new tools],” Huang says.

Huang announced NVIDIA Clara Discovery—a suite of tools for assisting scientists in discovering lifesaving new drugs.

NVIDIA Clara combines imaging, radiology, and genomics to help develop healthcare AI applications. Pre-trained AI models and application-specific frameworks help researchers to find targets, build compounds, and develop responses.

Dr Hal Barron, Chief Scientific Officer and President of R&D at GSK, commented:

“AI and machine learning are like a new microscope that will help scientists to see things that they couldn’t see otherwise.

NVIDIA’s investment in computing, combined with the power of deep learning, will enable solutions to some of the life sciences industry’s greatest challenges and help us continue to deliver transformational medicines and vaccines to patients.

Together with GSK’s new AI lab in London, I am delighted that these advanced technologies will now be available to help the UK’s outstanding scientists.”

Researchers can now use biomedical-specific language models for their work, thanks to a breakthrough in natural language processing. This means researchers can organise and activate large datasets, research literature, and sort through papers or patents on existing treatments and other vital real-world data.

“Where there are popular industry tools, our computer scientists accelerate them,” Huang said. “Where no tools exist, we develop them—like NVIDIA Parabricks, Clara Imaging, BioMegatron, BioBERT, NVIDIA RAPIDS.”

We're all hoping COVID-19 research – using such powerful new tools – can lead to a vaccine within a year or two, when vaccines have often taken a decade or longer to create.

"The use of big data, supercomputing, and artificial intelligence has the potential to transform research and development; from target identification through clinical research and all the way to the launch of new medicines," commented Jim Weatherall, PhD, Head of Data Science and AI at AstraZeneca.

During his keynote, Huang provided more details about NVIDIA’s effort to build the UK’s fastest supercomputer – which will be used to further healthcare research – the Cambridge-1.

NVIDIA has established partnerships with companies leading the fight against COVID-19 and other viruses including AstraZeneca, GSK, King’s College London, the Guy’s and St Thomas’ NHS Foundation Trust, and startup Oxford Nanopore. These partners can harness Cambridge-1 for their vital research.

“Tackling the world’s most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI,” said Huang. “The Cambridge-1 supercomputer will serve as a hub of innovation for the UK and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery.”

And, for organisations wanting to set up their own AI supercomputers, NVIDIA has announced DGX SuperPODs as the world's first turnkey AI infrastructure. The solution was developed from years of research for NVIDIA's own work in automotive, healthcare, conversational AI, recommender systems, data science, and computer graphics.

While Huang has a nice kitchen, I’m sure he’d like to be back on the big stage for his GTC 2021 keynote. We’d certainly all love COVID-19 to be well and truly in the rear-view mirror.

(Photo by Elwin de Witte on Unsplash)

GTC 2020: Nvidia doubles-down on its UK AI investments – AI News, 5 October 2020

Jensen Huang, CEO of NVIDIA, has kicked off the company’s annual GTC conference with a series of AI announcements—including a doubling-down of its UK investments.

NVIDIA is investing heavily in the UK’s accelerating AI sector. The company announced its acquisition of legendary semiconductor giant Arm for $40 billion back in September along with the promise to open a new AI centre in Cambridge.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named,” Huang said at the time. “We want to propel Arm – and the UK – to global AI leadership.”

NVIDIA promises to advance Arm’s platform in three major ways:

  • NVIDIA will complement Arm partners with GPU, networking, storage and security technologies to create complete accelerated platforms.
  • NVIDIA will work with Arm partners to create platforms for HPC, cloud, edge and PC — this requires chips, systems, and system software.
  • NVIDIA will port the NVIDIA AI and NVIDIA RTX engines to Arm.

“Today, these capabilities are available only on x86,” Huang said, “With this initiative, Arm platforms will also be leading-edge at accelerated and AI computing.”

Huang also provided more details about NVIDIA’s effort to build the UK’s fastest supercomputer, the Cambridge-1.

Cambridge-1 will boast 400 petaflops of AI performance and will be used by NVIDIA for its vast AI and healthcare collaborations in the UK across academia, industry, and startups.

“Tackling the world’s most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI,” said Huang. “The Cambridge-1 supercomputer will serve as a hub of innovation for the UK and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery.”

The company’s first partners are AstraZeneca, GSK, King’s College London, the Guy’s and St Thomas’ NHS Foundation Trust, and startup Oxford Nanopore. A partnership with GSK will also see the world’s first AI drug discovery lab built in London.

“Because of the massive size of the datasets we use for drug discovery, we need to push the boundaries of hardware and develop new machine learning software,” commented Dr Kim Branson, senior vice president and global head of AI and ML at GSK.

“We’re building new algorithms and approaches in addition to bringing together the best minds at the intersection of medicine, genetics and artificial intelligence in the UK’s rich ecosystem. This new partnership with NVIDIA will also contribute additional computational power and state-of-the-art AI technology.”

While there were some natural concerns that Arm’s acquisition would see operations move from the UK to the US, NVIDIA clearly wants to build up its operations in what’s quickly becoming Europe’s AI epicentre.

(Photo by A Perry on Unsplash)

Nvidia and ARM will open 'world-class' AI centre in Cambridge – AI News, 14 September 2020

Nvidia is already putting its $40 billion ARM acquisition to good use by opening a “world-class” AI centre in Cambridge.

British chip designer ARM’s technology is at the heart of most mobile devices. Meanwhile, Nvidia’s GPUs are increasingly being used for AI computation in servers, desktops, and even things like self-driving vehicles.

However, Nvidia was most interested in ARM's presence in edge devices, which it estimates at around 180 billion.

Jensen Huang, CEO of Nvidia, said:

“ARM is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make ARM even more incredible and take it to even higher levels.

We want to propel it — and the UK — to global AI leadership.”

There were concerns Nvidia’s acquisition would lead to job losses, but the company has promised to keep the business in the UK. The company says it’s planning to hire more staff and retain ARM’s iconic brand.

Nvidia is going further in its commitment to the UK by opening a new AI centre in Cambridge, which is home to an increasing number of exciting startups in the field such as FiveAI, Prowler.io, Fetch.ai, and Darktrace.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named.

Here, leading scientists, engineers and researchers from the UK and around the world will come to develop their ideas, collaborate and conduct their ground-breaking work in areas like healthcare, life sciences, self-driving cars, and other fields.”

The new centre will have five key features when it opens:

  • ARM/Nvidia-based supercomputer – set to be one of the most powerful AI supercomputers in the world.
  • Research Fellowships and Partnerships – Nvidia will use the centre to establish new UK-based research partnerships, expanding on successful relationships already established with King’s College and Oxford.
  • AI Training – Nvidia will make its AI curriculum available across the UK to help create job opportunities and prepare “the next generation of UK developers for AI leadership”
  • Startup Accelerator – With so many of the world’s most exciting AI companies launching in the UK, the Nvidia Inception accelerator will help startups succeed by providing access to the aforementioned supercomputer, connections to researchers from NVIDIA and partners, technical training, and marketing promotion.
  • Industry Collaboration – AI is still in its infancy but will impact every industry to some extent. Nvidia says its new research facility will be an open hub for industry collaboration, building on the company’s existing relationships with the likes of GSK, Oxford Nanopore, and other leaders in their fields.

The UK is Europe’s leader in AI and the British government is investing heavily in ensuring it maintains its pole position. Beyond funding, the UK is also aiming to ensure it’s among the best places to run an AI company.

Current EU rules, especially around data, are often seen as limiting the development of European AI companies compared to elsewhere in the world. While the UK will want to avoid accusations of a post-Brexit "bonfire of regulations", data collection rules are likely one area that will be relaxed.

In the UK’s historic trade deal signed with Japan last week, several enhancements were made over the blanket EU-Japan deal signed earlier this year. Among the perceived improvements is the “free flow of data” by not enforcing localisation requirements, and that algorithms can remain private.

UK trade secretary Liz Truss said: “The agreement we have negotiated – in record time and in challenging circumstances – goes far beyond the existing EU deal, as it secures new wins for British businesses in our great manufacturing, food and drink, and tech industries.”

Japan and the UK, as two global tech giants, are expected to deepen their collaboration in the coming years—building on the trade deal signed last week.

Shigeki Ishizuka, Chairman of the Japan Electronics and Information Technology Industries Association, said: “We are confident that this mutual relationship will be further strengthened as an ambitious agreement that will contribute to the promotion of cooperation in research and development, the promotion of innovation, and the further expansion of inter-company collaboration.”

Nvidia’s investment shows that it has confidence in the UK’s strong AI foundations continuing to gain momentum in the coming years.

(Photo by A Perry on Unsplash)

NVIDIA's AI-focused Ampere GPUs are now available in Google Cloud – AI News, 8 July 2020

Google Cloud users can now harness the power of NVIDIA’s Ampere GPUs for their AI workloads.

The specific GPU added to Google Cloud is the NVIDIA A100 Tensor Core which was announced just last month. NVIDIA says the A100 “has come to the cloud faster than any NVIDIA GPU in history.”

NVIDIA claims the A100 boosts training and inference performance by up to 20x over its predecessors. Large AI models like BERT can be trained in just 37 minutes on a cluster of 1,024 A100s.

For those who enjoy their measurements in teraflops (TFLOPS), the A100 delivers around 19.5 TFLOPS in single-precision performance and 156 TFLOPS for Tensor Float 32 workloads.
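
TF32 achieves that throughput by being a reduced-precision format: it keeps FP32's 8-bit exponent range but only 10 explicit mantissa bits. The trade-off can be illustrated by emulating the mantissa reduction in software (a sketch only – real Tensor Cores round to nearest rather than simply truncating, and this is not NVIDIA's implementation):

```python
# Emulating TF32's reduced mantissa in software: TF32 keeps FP32's 8-bit
# exponent but only 10 explicit mantissa bits, so zeroing the 13 lowest
# mantissa bits of a float32 approximates the precision loss. Sketch
# only: real Tensor Cores round rather than truncate.
import struct

def to_tf32(x: float) -> float:
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # float32 bit pattern
    bits &= ~0x1FFF                    # drop the 13 lowest mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_tf32(1.0))         # 1.0 -> exactly representable, unchanged
print(to_tf32(3.14159265))  # 3.140625 -> ~3 decimal digits survive
```

Dropping those 13 mantissa bits leaves roughly three decimal digits of precision – generally enough for deep learning workloads, which is how TF32 trades a little accuracy for large throughput gains.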

Manish Sainani, Director of Product Management at Google Cloud, said:

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads.

With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

The announcement couldn’t have arrived at a better time – with many looking to harness AI for solutions to the COVID-19 pandemic, in addition to other global challenges such as climate change.

Aside from AI training and inference, other things customers will be able to achieve with the new capabilities include data analytics, scientific computing, genomics, edge video analytics, and 5G services.

The new Ampere-based data center GPUs are now available in Alpha on Google Cloud. Users can access instances of up to 16 A100 GPUs, which provides a total of 640GB of GPU memory and 1.3TB of system memory.
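Those totals line up with the per-GPU figures NVIDIA quotes for the A100 (a quick sanity check on the numbers above, not from the announcement itself):

```python
# Quoted maximum A2 instance configuration
num_gpus = 16
total_gpu_memory_gb = 640

per_gpu_gb = total_gpu_memory_gb / num_gpus
print(f"{per_gpu_gb:.0f} GB of HBM2 per A100")  # 40 GB, the A100's memory capacity
```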

You can register your interest for access here.

The post NVIDIA’s AI-focused Ampere GPUs are now available in Google Cloud appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/07/08/nvidia-ai-ampere-gpus-available-google-cloud/feed/ 0
Nvidia and IBM provide further AI solutions to fight COVID-19 https://news.deepgeniusai.com/2020/04/07/nvidia-ibm-ai-solutions-fight-covid-19/ https://news.deepgeniusai.com/2020/04/07/nvidia-ibm-ai-solutions-fight-covid-19/#respond Tue, 07 Apr 2020 12:08:30 +0000 https://news.deepgeniusai.com/?p=9521 Nvidia and IBM have released further AI solutions to help in the global fight against COVID-19. COVID-19 knows no borders and has resulted in around 75,896 deaths as of writing. People around the world are hoping AI can deliver solutions for tackling the coronavirus which has wrecked social and economic havoc. AI News has already... Read more »

The post Nvidia and IBM provide further AI solutions to fight COVID-19 appeared first on AI News.

]]>
Nvidia and IBM have released further AI solutions to help in the global fight against COVID-19.

COVID-19 knows no borders and has resulted in around 75,896 deaths as of writing. People around the world are hoping AI can deliver solutions for tackling the coronavirus, which has wreaked social and economic havoc.

AI News has already covered work from predominantly Asian tech firms including Alibaba, Baidu, Tencent, Deargen, and Insilico Medicine, but more and more Western companies are stepping up with solutions.

Nvidia

Nvidia was among the first Western tech giants to offer up some of its substantial resources towards fighting COVID-19. Last month, it gave COVID-19 researchers free access to its Parabricks genome-sequencing software.

27,000 of Nvidia’s GPUs also power IBM’s Summit supercomputer. Researchers used Summit to simulate 8,000 compounds in a matter of days and identify 77 small-molecule compounds – such as medications and natural compounds – that have shown the potential to impair COVID-19’s ability to dock with and infect host cells.

This week, Nvidia announced that it’s joined the COVID-19 High-Performance Computing Consortium. The consortium brings together leaders from the US government, industry, and academia, and supports researchers by providing access to 30 supercomputers with over 400 petaflops of compute performance.

Nvidia, for its part, will contribute its expertise to the consortium in areas such as AI, supercomputing, drug discovery, molecular dynamics, genomics, medical imaging, and data analytics.

“The COVID-19 HPC Consortium is the Apollo Program of our time,” said Ian Buck, VP and GM of Accelerated Computing at Nvidia. “Not a race to the moon, this is a race for humanity. The rocket ships are GPU supercomputers, and their fuel is scientific knowledge. NVIDIA is going to help by making these rockets travel as fast as they can.”

Nvidia has also packaged COVID-19 tools on NGC, the company’s hub for GPU-accelerated software. All of the COVID-19 tools on NGC are publicly available.

IBM

IBM has also been active in providing tools and expertise in the fight against COVID-19 and helped to launch the aforementioned COVID-19 HPC Consortium.

Last month, IBM launched COVID-19 data in its Weather Channel app to provide access to detailed virus tracking. The company also merged its Micromedex online medication reference database with EBSCO’s DynaMed peer-reviewed clinical content to form a single comprehensive resource with the goal of improving clinical decision-making.

Our sister publication Developer reported on IBM’s Call for Code initiative expanding to include COVID-19 solutions last week. By tapping into IBM’s vast developer community and offering “starter kits” consisting of relevant tools, innovators from around the world can quickly get to work building potentially lifesaving solutions.

IBM has made two new COVID-19 announcements over the past week.

The first is an AI deep search tool which takes reputable data from the White House, a coalition of research groups, and licensed databases from DrugBank, ClinicalTrials.gov, and GenBank. Researchers can use IBM’s tool to quickly extract critical knowledge regarding COVID-19 from the large collection of papers.

IBM will also make its Functional Genomics Platform available for free during the COVID-19 pandemic. The platform is a cloud-based repository and research tool which includes genes, proteins, and other molecular targets, and is built to discover the molecular features of viral and bacterial genomes. Using the platform, researchers can accelerate the discovery of molecular targets required for drug design, test development, and treatment.

So there we have some of the latest AI solutions from Nvidia and IBM designed to help tackle the COVID-19 pandemic we’re all facing. We’ll keep you posted on further tools and developments that may be of assistance.

Stay safe, stay home where possible, and thank you to all key workers and researchers helping to get us out the other side of this faster.

(Photo by CDC on Unsplash)

The post Nvidia and IBM provide further AI solutions to fight COVID-19 appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/04/07/nvidia-ibm-ai-solutions-fight-covid-19/feed/ 0
Nvidia comes out on top in first MLPerf inference benchmarks https://news.deepgeniusai.com/2019/11/07/nvidia-comes-out-on-top-in-first-mlperf-inference-benchmarks/ https://news.deepgeniusai.com/2019/11/07/nvidia-comes-out-on-top-in-first-mlperf-inference-benchmarks/#respond Thu, 07 Nov 2019 11:19:57 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=6172 The first benchmark results from the MLPerf consortium have been released and Nvidia is a clear winner for inference performance. For those unaware, inference takes a deep learning model and processes incoming data however it’s been trained to. MLPerf is a consortium which aims to provide “fair and useful” standardised benchmarks for inference performance. MLPerf... Read more »

The post Nvidia comes out on top in first MLPerf inference benchmarks appeared first on AI News.

]]>
The first benchmark results from the MLPerf consortium have been released and Nvidia is a clear winner for inference performance.

For those unaware, inference is the stage at which a trained deep learning model processes incoming data and produces results.

MLPerf is a consortium which aims to provide “fair and useful” standardised benchmarks for inference performance. MLPerf can be thought of as doing for inference what SPEC does for benchmarking CPUs and general system performance.

The consortium has released its first benchmarking results, a painstaking effort involving over 30 companies and over 200 engineers and practitioners. MLPerf’s first call for submissions led to over 600 measurements spanning 14 companies and 44 systems. 

However, for datacentre inference, only four of the processors are commercially available:

  • Intel Xeon Platinum 9282
  • Habana Goya
  • Google TPUv3
  • Nvidia Turing

Nvidia wasted no time in boasting that its performance beat the three other processors across various neural networks in both server and offline scenarios.

The easiest direct comparisons are possible in the ImageNet ResNet-50 v1.5 offline scenario, where the greatest number of major players and startups submitted results.

In that scenario, Nvidia once again boasted the best performance on a per-processor basis with its Titan RTX GPU. The 2x Google Cloud TPU v3-8 submission – despite also using eight Intel Skylake processors – only achieved a similar performance to the SCAN 3XS DBP T496X2 Fluid, which used four Titan RTX cards (65,431.40 vs 66,250.40 inputs/second).

Ian Buck, GM and VP of Accelerated Computing at NVIDIA, said:

“AI is at a tipping point as it moves swiftly from research to large-scale deployment for real applications.

AI inference is a tremendous computational challenge. Combining the industry’s most advanced programmable accelerator, the CUDA-X suite of AI algorithms and our deep expertise in AI computing, NVIDIA can help datacentres deploy their large and growing body of complex AI models.”

However, it’s worth noting that the Titan RTX doesn’t support ECC memory so – despite its sterling performance – this omission may prevent its use in some datacentres.

Another interesting takeaway when comparing the Cloud TPU results against Nvidia is the performance difference when moving from offline to server scenarios.

  • Google Cloud TPU v3 offline: 32,716.00
  • Google Cloud TPU v3 server: 16,014.29
  • Nvidia SCAN 3XS DBP T496X2 Fluid offline: 66,250.40
  • Nvidia SCAN 3XS DBP T496X2 Fluid server: 60,030.57

As you can see, the Cloud TPU system’s performance is cut by more than half in the server scenario. The SCAN 3XS DBP T496X2 Fluid system, in comparison, only drops by around 10 percent.
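Those percentages can be verified directly from the figures above (a quick illustrative calculation using the quoted inputs/second results):

```python
# MLPerf Inference v0.5 offline vs server results quoted above (inputs/second)
results = {
    "Google Cloud TPU v3": (32_716.00, 16_014.29),
    "SCAN 3XS DBP T496X2 Fluid": (66_250.40, 60_030.57),
}

for system, (offline, server) in results.items():
    drop = 1 - server / offline
    # ~51.1% for the TPU system, ~9.4% for the Titan RTX system
    print(f"{system}: {drop:.1%} drop from offline to server")
```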

You can peruse MLPerf’s full benchmark results here.


The post Nvidia comes out on top in first MLPerf inference benchmarks appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/11/07/nvidia-comes-out-on-top-in-first-mlperf-inference-benchmarks/feed/ 0