Hardware – AI News

Chinese AI chipmaker Horizon endeavours to raise $700M to rival NVIDIA
Tue, 22 Dec 2020

AI chipmaker Horizon Robotics is seeking to raise $700 million in a new funding round.

Horizon is often seen as potentially becoming China’s equivalent of NVIDIA. The company was founded by Dr Kai Yu, a prominent industry figure with quite the credentials.

Yu led Baidu’s AI Research lab for three years, founded the Baidu Institute of Deep Learning, and launched the company’s autonomous driving business unit.

Furthermore, Yu has taught at Stanford University, published over 60 papers, and even won first place in the ImageNet challenge, which evaluates algorithms for object detection and image classification.

China has yet to produce a chipset firm which can match the capabilities of Western equivalents.

With increasing US sanctions making it more difficult for Chinese firms to access American semiconductors, a number of homegrown companies are emerging and gaining attention from investors.

Horizon is just five years old and specialises in making AI chips for robots and autonomous vehicles. The company has already attracted significant funding.

Around two years ago, Horizon completed a $600 million funding round with a $3 billion valuation. The company has secured $150 million so far as part of this latest round.

While it’s likely the incoming Biden administration in the US will take a less strict approach to trade with China, it seems Beijing wants to build more homegrown alternatives which can match or surpass Western counterparts.

Chinese tech giants like Huawei are investing significant resources in their chip manufacturing capabilities to ensure the country has the tech it needs to power groundbreaking advancements like self-driving cars.

TensorFlow is now available for those shiny new ARM-based Macs
Thu, 19 Nov 2020

A new version of machine learning library TensorFlow has been released with optimisations for Apple’s new ARM-based Macs.

While still technically in pre-release, the Mac-optimised TensorFlow fork supports native hardware acceleration on Mac devices with M1 or Intel chips through Apple’s ML Compute framework.

The new TensorFlow release boasts an over 10x speed improvement for common training tasks. While impressive, the figure has to be taken in context: the GPU was not previously used for training tasks at all.

A look at the benchmarks still indicates a substantial gap between the Intel and M1-based Macs across various machine learning models.

In a blog post, Pankaj Kanwar, Tensor Processing Units Technical Program Manager at Google, and Fred Alcober, TensorFlow Product Marketing Lead at Google, wrote:

“These improvements, combined with the ability of Apple developers being able to execute TensorFlow on iOS through TensorFlow Lite, continue to showcase TensorFlow’s breadth and depth in supporting high-performance ML execution on Apple hardware.”

We can only hope that running these workloads doesn’t turn MacBooks into expensive frying pans—but the remarkable efficiency they’ve displayed so far gives little cause for concern.

NVIDIA DGX Station A100 is an ‘AI data-centre-in-a-box’
Mon, 16 Nov 2020

NVIDIA has unveiled its DGX Station A100, an “AI data-centre-in-a-box” powered by up to four 80GB versions of the company’s record-setting GPU.

The A100 Tensor Core GPU set new MLPerf benchmark records last month—outperforming CPUs by up to 237x in data centre inference. In November, Amazon Web Services made eight A100 GPUs available in each of its P4d instances.

For those who prefer their hardware local, the DGX Station A100 is available in two configurations: four 80GB A100 GPUs or four 40GB A100 GPUs. The monstrous 80GB version of the A100 has twice the memory of the GPU as originally unveiled just six months ago.

“We doubled everything in this system to make it more effective for customers,” said Paresh Kharya, senior director of product management for accelerated computing at NVIDIA.

NVIDIA says the two configurations provide options for data science and AI research teams to select a system according to their unique workloads and budgets.

Charlie Boyle, VP and GM of DGX systems at NVIDIA, commented:

“DGX Station A100 brings AI out of the data centre with a server-class system that can plug in anywhere.

Teams of data science and AI researchers can accelerate their work using the same software stack as NVIDIA DGX A100 systems, enabling them to easily scale from development to deployment.”

With its four 80GB GPUs, the DGX Station A100 now packs 320GB of GPU memory – and the eight-GPU DGX A100 server scales to 640GB – enabling much larger datasets and models.

“To power complex conversational AI models like BERT Large inference, DGX Station A100 is more than 4x faster than the previous generation DGX Station. It delivers nearly a 3x performance boost for BERT Large AI training,” NVIDIA wrote in a release.

DGX A100 640GB configurations can be integrated into the DGX SuperPOD Solution for Enterprise for unparalleled performance. Such “turnkey AI supercomputers” are available in units consisting of 20 DGX A100 systems.
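The scale of those “turnkey AI supercomputer” units is easy to tally from the figures above; the sketch below is our own back-of-the-envelope arithmetic, not an NVIDIA specification:

```python
# Aggregate GPU memory in one DGX SuperPOD unit, using the figures above.
GPUS_PER_SYSTEM = 8     # DGX A100 server: eight A100 GPUs
GB_PER_GPU = 80         # the 80GB A100 variant
SYSTEMS_PER_UNIT = 20   # one SuperPOD unit = 20 DGX A100 systems

memory_per_system_gb = GPUS_PER_SYSTEM * GB_PER_GPU           # 640 GB
memory_per_unit_tb = SYSTEMS_PER_UNIT * memory_per_system_gb / 1000

print(memory_per_system_gb)  # 640
print(memory_per_unit_tb)    # 12.8
```

In other words, a single 20-system unit pools 160 A100 GPUs and roughly 12.8TB of GPU memory.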

Since agreeing to acquire ARM, NVIDIA has continued to double down on its investment in the UK and its local talent.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named,” NVIDIA CEO Jensen Huang said in September. “We want to propel ARM – and the UK – to global AI leadership.”

NVIDIA’s latest supercomputer, the Cambridge-1, is being installed in the UK and will be one of the first SuperPODs with DGX A100 640GB systems. Cambridge-1 will initially be used by local pioneering companies to supercharge healthcare research.

Dr Kim Branson, SVP and Global Head of AI and ML at GSK, commented:

“Because of the massive size of the datasets we use for drug discovery, we need to push the boundaries of hardware and develop new machine learning software.

We’re building new algorithms and approaches in addition to bringing together the best minds at the intersection of medicine, genetics, and artificial intelligence in the UK’s rich ecosystem.

This new partnership with NVIDIA will also contribute additional computational power and state-of-the-art AI technology.”

The use of AI for healthcare research has received extra attention due to the coronavirus pandemic. A recent simulation of the coronavirus, the largest molecular simulation ever, simulated 305 million atoms and was powered by 27,000 NVIDIA GPUs.

Several promising COVID-19 vaccines in late-stage trials have emerged in recent days, raising hopes that life could be mostly back to normal by summer. But we never know when the next pandemic may strike, and many challenges remain both in and out of healthcare.

Systems like the DGX Station A100 help to ensure that – whatever challenges we face now and in the future – researchers have the power they need for their vital work.

Both configurations of the DGX Station A100 are expected to begin shipping this quarter.

Sony has a new ‘AI robotics’ drone division called Airpeak
Tue, 10 Nov 2020

Sony’s latest division, Airpeak, is described as being “in the field of AI robotics” and will focus on next-generation drones.

Despite incidents of reckless flying, drones unlock huge opportunities. We regularly see beautiful photography and videography shot using drones—but, of course, they can do so much more.

Sony has built a stellar reputation in media capture. The company builds great cameras – both for itself and the sensors it supplies to other manufacturers (like its new IMX686) – and software it originated, such as Vegas Pro (now owned by MAGIX), remains a de facto choice for many creative professionals.

In a press release, Sony wrote:

“Airpeak will support the creativity of video creators to the fullest extent possible, aiming to contribute to the further development of the entertainment industry as well as to improve efficiency and savings in various industries.

Airpeak will also promote this project to enable drone-use with the highest level of safety and reliability in the environments where this has been difficult in the past.”

The focus on supporting video creators is to be expected from Sony, but the mention of various industries suggests the company has bigger plans.

In the photography/videography space alone, Sony will face stiff competition from established players like DJI.

Despite being the current industry leader, DJI has begun diversifying its products in recent years due to a decline in drone popularity for consumer purposes. This is mostly down to increasing restrictions in many countries on where drones can fly, and even permit requirements (the FAA, for example, requires users to register all drones weighing over 0.55 lb/250 g).

A patent granted to Sony back in January suggests the company may start relatively simple.

However, Sony could use its AI and robotics expertise to stand out in other exciting areas where drones have a lot of potential such as emergency response, delivering supplies, assisting in warehouses/factories, and even tackling small fires before they spread.

The language Sony uses suggests the company will target a wide range of customers from everyday consumers to large enterprise deployments.

Sony plans to reveal further details about Airpeak in the Spring of 2021.

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud
Tue, 03 Nov 2020

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.
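Those speedups come from trading precision for throughput: FP16 uses half the bits of FP32, while TF32 keeps FP32’s exponent range but truncates the mantissa. As an illustrative sketch – not part of NVIDIA’s or AWS’s tooling – Python’s standard library can round a value through IEEE half and single precision to show the error each narrower format introduces:

```python
import struct

def round_trip(fmt: str, x: float) -> float:
    """Round a Python float (FP64) through an IEEE format:
    'e' = half precision (FP16), 'f' = single precision (FP32)."""
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

x = 0.1
err_fp32 = abs(round_trip('f', x) - x)   # ~1.5e-9
err_fp16 = abs(round_trip('e', x) - x)   # ~2.4e-5

print(f"FP32 rounding error: {err_fp32:.2e}")
print(f"FP16 rounding error: {err_fp16:.2e}")
```

The roughly four-orders-of-magnitude difference in rounding error is why mixed-precision training typically keeps a master copy of the weights in FP32 while running the bulk matrix maths in the faster, narrower formats.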

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers can access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adapter (EFA).

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, the research arm of Toyota is exploring how P4d can improve their existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started on the AWS website.

NVIDIA sets another AI inference record in MLPerf
Thu, 22 Oct 2020

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks covering three of today’s main AI applications: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last year, NVIDIA led all five benchmarks for both server and offline data centre scenarios with its Turing GPUs. A dozen companies participated.

23 companies participated in this year’s MLPerf, but NVIDIA maintained its lead with the A100 outperforming CPUs by up to 237x in data centre inference.

For perspective, NVIDIA notes that a single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA.

“The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”

The widespread availability of NVIDIA’s AI platform through every major cloud and data centre infrastructure provider is unlocking huge potential for companies across various industries to improve their operations.

Intel, Ubotica, and the ESA launch the first AI satellite
Tue, 20 Oct 2020

Intel, Ubotica, and the European Space Agency (ESA) have launched the first AI satellite into Earth’s orbit.

The PhiSat-1 satellite is about the size of a cereal box and was ejected from a rocket’s dispenser alongside 45 other satellites. The rocket launched from Guiana Space Centre on September 2nd.

Intel has integrated its Movidius Myriad 2 Vision Processing Unit (VPU) into PhiSat-1 – enabling large amounts of data to be processed on the device. This helps to prevent useless data being sent back to Earth and consuming precious bandwidth.

“The capability that sensors have to produce data increases by a factor of 100 every generation, while our capabilities to download data are increasing, but only by a factor of three, four, five per generation,” says Gianluca Furano, data systems and onboard computing lead at the ESA.

Around 30 percent data savings are expected by using AI at the edge on the PhiSat-1.
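To see why onboard inference matters, here is an illustrative calculation built on the figures quoted above (the growth rates and the 30 percent saving come from the article; the compounding extrapolation is our own):

```python
# Per Furano's quote: sensor output grows ~100x per generation, while
# downlink capacity grows only ~4x (mid-range of "three, four, five").
sensor_growth = 100
downlink_growth = 4

gap_after_one_gen = sensor_growth / downlink_growth   # 25x shortfall
gap_after_two_gens = gap_after_one_gen ** 2           # 625x shortfall

# Discarding ~30% of useless data onboard cuts the bandwidth needed
# to 70% of what it would otherwise be.
bandwidth_after_filtering = 1 - 0.30

print(gap_after_one_gen)        # 25.0
print(gap_after_two_gens)       # 625.0
print(bandwidth_after_filtering)
```

A flat 30 percent saving is welcome, but set against a gap that multiplies 25-fold per generation it is clear why the long-term answer is to process ever more data in orbit rather than downlink it.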

“Space is the ultimate edge,” says Aubrey Dunne, chief technology officer of Ubotica. “The Myriad was absolutely designed from the ground up to have an impressive compute capability but in a very low power envelope, and that really suits space applications.”

PhiSat-1 is currently in a sun-synchronous orbit around 329 miles (530 km) above Earth, travelling at over 17,000 mph (27,500 km/h).
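Those two figures are mutually consistent for a circular orbit, where speed follows v = √(GM/r). A quick sanity check – the gravitational parameter and Earth radius below are textbook constants, not values from the article:

```python
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_KM = 6371      # mean Earth radius
ALTITUDE_KM = 530           # PhiSat-1's approximate altitude

r = (EARTH_RADIUS_KM + ALTITUDE_KM) * 1000   # orbital radius in metres
v = math.sqrt(GM_EARTH / r)                  # circular orbital speed, m/s

print(f"{v * 3.6:.0f} km/h")     # close to the article's 27,500 km/h
print(f"{v / 0.44704:.0f} mph")  # close to the article's 17,000 mph
```

The result lands within about one percent of the quoted speeds, which is as close as a circular-orbit approximation can be expected to get.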

The satellite’s mission is to assess things like polar ice for monitoring climate change, and soil moisture for the growth of crops. One day it could help to spot wildfires in minutes rather than hours or detect environmental accidents at sea.

A successor, PhiSat-2, is currently planned to test more of these possibilities. PhiSat-2 will also carry another Myriad 2.

Myriad 2 was not originally designed for use in orbit. Specialist chips which are protected against radiation are typically used for space missions and can be “up to two decades behind state-of-the-art commercial technology,” explains Dunne.

Incredibly, the Myriad 2 survived 36 straight hours of being blasted with radiation at CERN in late-2018 without any modifications.

ESA announced the joint team was “happy to reveal the first-ever hardware-accelerated AI inference of Earth observation images on an in-orbit satellite.”

PhiSat-1 and PhiSat-2 will be part of a future network with intersatellite communication systems.

(Image Credit: CERN/M. Brice)

South Korea wants to develop 50 types of AI chips by 2030
Tue, 13 Oct 2020

South Korea has set itself the ambitious national target of developing 50 types of AI chips within the next decade.

The country’s ICT ministry made the announcement this week as South Korea positions itself to move beyond its historic foothold in memory chips into artificial intelligence semiconductors.

South Korea is investing heavily in AI – especially in the hardware that makes it possible.

Around one trillion won ($871 million) will be spent on developing next-generation AI chips before 2029. The current plan is to be in a position to produce AI chips nationally by 2022 and build a 3,000-strong army of experts within the decade.

Last year, President Moon Jae-in announced a ‘National Strategy for Artificial Intelligence’ (PDF) and set out his desire for South Korea to lead in the technology.

In a foreword, President Moon Jae-in wrote:

“The era of the Fourth Industrial Revolution is indeed an age in which imagination can change the world. Korea is neither the first country to have ushered in the era of artificial intelligence nor the country with the best AI technology at present. However, the country has people capable of turning their imagination into reality and taking on challenges to pursue novelty.

Even in the throes of the 1997 Asian financial crisis, the country led the Internet Revolution and now boasts world-class manufacturing competitiveness, globally unmatched ICT infrastructure and abundant data concerning e-government.

If we link artificial intelligence primarily with the sectors in which we’ve accumulated extensive experience and competitiveness, such as manufacturing and semiconductors, we will be able to give birth to the smartest yet most human-like artificial intelligence. The Government will join forces with developers to help them fully utilize their imaginations and turn their ideas into reality.”

South Korea is home to tech giants such as Samsung and SK hynix, which continue to deliver global innovations. However, it’s understandable that South Korea wants to secure a slice of what will be a lucrative market.

Analysts from McKinsey predict AI chips will generate around $67 billion in revenue by 2025 and capture around 20 percent of all semiconductor demand.
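Taken together, McKinsey’s two figures imply a total market size; the division below is our own arithmetic, not a McKinsey number:

```python
ai_chip_revenue_bn = 67   # McKinsey's 2025 AI chip revenue forecast, $bn
ai_share = 0.20           # ~20% of all semiconductor demand

# Implied total semiconductor market: 67 / 0.20 = ~$335bn
implied_total_market_bn = ai_chip_revenue_bn / ai_share

print(round(implied_total_market_bn))  # 335
```

That implied ~$335 billion overall market puts Seoul’s target of a 20 percent share of AI chips alone into perspective.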

South Korea, for its part, wants to own 20 percent of the global AI chip market by the end of this decade.

British AI chipmaker Graphcore claims Nvidia’s crown with GC200 processor
Wed, 15 Jul 2020

Graphcore, a British AI chipmaker, has unveiled a powerful new processor which takes Nvidia’s crown.

Bristol-based Graphcore ranked number one on Fast Company’s top 10 most innovative AI companies of 2020 list. Nvidia, for comparison, ranked fifth.

Fast Company’s confidence in Graphcore clearly isn’t misplaced. Announcing its GC200 processor, Graphcore says its new chip is the world’s most complex.

The GC200 processor boasts 59.4 billion transistors and takes the crown from Nvidia’s A100 as the world’s largest. The A100 was announced by Nvidia earlier this year and features 54 billion transistors.

Each GC200 chip has 1,472 independent processor cores and 8,832 separate parallel threads, all supported by 900MB of in-processor RAM.

Graphcore says that up to 64,000 of the 7nm GC200 chips can be linked to create a massive parallel processor with around 16 exaflops of computational power and petabytes of memory. Such a system would be able to support AI models with trillions of parameters.
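A back-of-the-envelope check shows the headline figures hang together (the per-chip throughput below is inferred from them, not a number Graphcore quotes here):

```python
# Figures from the article.
total_chips = 64_000
total_exaflops = 16
cores_per_chip = 1_472
threads_per_chip = 8_832

# Implied per-chip throughput: 16 EFLOPS / 64,000 chips = 250 TFLOPS.
per_chip_tflops = total_exaflops * 1e18 / total_chips / 1e12

# Each core runs 8,832 / 1,472 = 6 hardware threads.
threads_per_core = threads_per_chip / cores_per_chip

print(per_chip_tflops)   # 250.0
print(threads_per_core)  # 6.0
```

So the 16-exaflop claim amounts to roughly 250 teraflops per chip, with each of the 1,472 cores juggling six parallel threads.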

“We are impressed with Graphcore’s technology for energy-efficient construction and execution of large, next-generation ML models, and we expect significant performance gains for several of our AI-oriented research projects in medical imaging and cardiac simulations,” comments Are Magnus Bruaset, Research Director at Simula Research Laboratory.

“We are also pursuing other avenues of research that can push the envelope for Graphcore’s multi-IPU systems, such as how to efficiently conduct large-scale, sparse linear algebra operations commonly found in physics-based HPC workloads.”

The GC200 is only the second chip Graphcore has launched. Compared to the first generation, it delivers up to a 9.3x performance increase.

Graphcore’s founders believe the company’s IPU approach is more efficient than Nvidia’s GPU route. Because thousands of IPUs can be scaled up within existing compute infrastructure, the cost could be 10-20x lower than using GPUs.

Back in February, Graphcore announced that it had raised $150 million in funding for its R&D. The company’s total valuation is $1.95 billion.

Graphcore was fortunate to have secured its cash before the COVID-19 pandemic really hit – with many startups reporting difficulties obtaining vital funding where there was previous interest. Undoubtedly, the GC200 will help to power research to get us through this pandemic and all the other challenges the world faces now and in the future.

Neuralink will share progress on linking human brains with AI next month
Thu, 09 Jul 2020

Elon Musk’s startup Neuralink says it will share progress next month on the company’s mission to link human brains with AI.

Musk made the announcement of the announcement on Twitter.

When Musk appeared on Joe Rogan’s podcast in September 2018, the CEO told Rogan that Neuralink’s long-term goal is to enable human brains to be “symbiotic with AI”, adding that the company would have “something interesting to announce in a few months, that’s at least an order of magnitude better than anything else; probably better than anyone thinks is possible”.

Neuralink held an event in San Francisco in July last year, during simpler times, where the company said it aims to insert electrodes into the brains of monkeys and humans to enable them to control computers.

“Threads” which are covered in the electrodes are implanted in the brain near the neurons and synapses by a robot surgeon. These threads record the information being transmitted onto a sensor called the N1.

That, of course, is a very simplified version of a rather complex task. During the event, Neuralink demonstrated that the company’s tech had already been successfully inserted into the brain of a rat and was able to record the information being transmitted by its neurons.

At the time, Musk said he wanted Neuralink to start human trials this year.

Neuralink has been quiet since last year’s event, with its Twitter account not posting a single tweet in the interim. Now, it seems, the company is ready to share some notable progress.

(Photo by Robina Weermeijer on Unsplash)
