British AI chipmaker Graphcore claims Nvidia’s crown with GC200 processor
AI News, 15 July 2020

Graphcore, a British AI chipmaker, has unveiled a powerful new processor which takes Nvidia’s crown.

Bristol-based Graphcore ranked number one on Fast Company’s top 10 most innovative AI companies of 2020 list. Nvidia, for comparison, ranked fifth.

Fast Company’s confidence in Graphcore clearly isn’t misplaced. Announcing its GC200 processor, Graphcore says its new chip is the world’s most complex.

The GC200 processor boasts 59.4 billion transistors and takes the crown from Nvidia’s A100 as the world’s largest. The A100 was announced by Nvidia earlier this year and features 54 billion transistors.

Each GC200 chip has 1,472 independent processor cores and 8,832 separate parallel threads, all supported by 900MB of in-processor RAM.

Graphcore says that up to 64,000 of the 7nm GC200 chips can be linked to create a massively parallel processor with around 16 exaflops of computational power and petabytes of memory. Such a system would be able to support AI models with trillions of parameters.
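As a rough sanity check on those scaling claims, here is a quick sketch using only the figures quoted above; the per-chip number is derived from them, not stated by Graphcore:

```python
# Back-of-envelope check of the article's GC200 scaling figures.
CHIPS_MAX = 64_000        # maximum GC200 chips linked in one system
SYSTEM_FLOPS = 16e18      # ~16 exaflops claimed for the full system
CORES_PER_CHIP = 1_472
THREADS_PER_CHIP = 8_832

# Implied per-chip performance: ~250 teraflops
per_chip_tflops = SYSTEM_FLOPS / CHIPS_MAX / 1e12
print(f"Implied per-chip performance: ~{per_chip_tflops:.0f} TFLOPS")

# Each core supports 6 hardware threads (8,832 / 1,472)
threads_per_core = THREADS_PER_CHIP // CORES_PER_CHIP
print(f"Threads per core: {threads_per_core}")
```

The derived figure of roughly 250 teraflops per chip is consistent with the 9.3x generational uplift the article mentions below.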

“We are impressed with Graphcore’s technology for energy-efficient construction and execution of large, next-generation ML models, and we expect significant performance gains for several of our AI-oriented research projects in medical imaging and cardiac simulations,” comments Are Magnus Bruaset, Research Director at Simula Research Laboratory.

“We are also pursuing other avenues of research that can push the envelope for Graphcore’s multi-IPU systems, such as how to efficiently conduct large-scale, sparse linear algebra operations commonly found in physics-based HPC workloads.”

The GC200 is only the second chip to be launched by Graphcore. Compared to the first generation, it delivers up to a 9.3x performance increase.

Graphcore’s founders believe the company’s IPU approach is more efficient than Nvidia’s GPU route. The ability to scale up to thousands of IPU processors within existing compute infrastructure could make the cost 10-20x lower than using GPUs.

Back in February, Graphcore announced that it had raised $150 million in funding for its R&D. The company’s total valuation is $1.95 billion.

Graphcore was fortunate to secure its cash before the COVID-19 pandemic really hit; many startups have reported difficulties obtaining vital funding where there was previous interest. Undoubtedly, the GC200 will help to power research to get us through this pandemic and the other challenges the world faces now and in the future.

Intel unwraps its first chip for AI and calls it Spring Hill
AI News, 21 August 2019

Intel has unwrapped its first processor that is designed for artificial intelligence and is planned for use in data centres.

The new Nervana Neural Network Processor for Inference (NNP-I) processor has a more approachable codename of Spring Hill.

Spring Hill is a modified 10nm Ice Lake processor which sits on a PCB and slots into an M.2 port typically used for storage.

According to Intel, the use of a modified Ice Lake processor allows Spring Hill to handle large workloads and consume minimal power. Two compute cores and the graphics engine have been removed from the standard Ice Lake design to accommodate 12 Inference Compute Engines (ICE).

In a summary, Intel detailed the six main benefits it expects from Spring Hill:

  1. Best in class perf/power efficiency for major data inference workloads.
  2. Scalable performance at wide power range.
  3. High degree of programmability w/o compromising perf/power efficiency.
  4. Data centre at scale.
  5. Spring Hill solution – Silicon and SW stack – sampling with definitional partners/customers on multiple real-life topologies.
  6. Next two generations in planning/design.

Intel’s first chip for AI comes after the company invested in several Israeli artificial intelligence startups, including Habana Labs and NeuroBlade. The investments form part of Intel’s ‘AI Everywhere’ strategy, which aims to increase the firm’s presence in the market.

Naveen Rao, Intel vice president and general manager, Artificial Intelligence Products Group, said:

“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources.

“Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”

Facebook has said it will be using Intel’s new Spring Hill processor. Intel already has two more generations of the NNP-I in development.

Smells Like AI Spirit: Baidu will help develop Intel’s Nervana neural processor
AI News, 3 July 2019

Intel announced during Baidu’s Create conference this week that Baidu will help to develop the former’s Nervana Neural Network Processor.

Speaking on stage at the conference in Beijing, Intel corporate vice president Naveen Rao made the announcement.

“The next few years will see an explosion in the complexity of AI models and the need for massive deep learning compute at scale. Intel and Baidu are focusing their decade-long collaboration on building radical new hardware, codesigned with enabling software, that will evolve with this new reality – something we call ‘AI 2.0’.”

Intel’s Neural Network Processor for Training, codenamed NNP-T 1000, is designed for training deep learning models at lightning speed. 32GB of high-bandwidth memory (HBM) and local SRAM sit close to where computation happens, enabling more model parameters to be stored on-die and saving significant power while increasing performance.

The NNP-T 1000 is set to ship alongside the Neural Network Processor for Inference (NNP-I 1000) chip later this year. As the name suggests, the NNP-I 1000 is designed for AI inferencing and features general-purpose processor cores based on Intel’s Ice Lake architecture.

Baidu and Intel have a history of collaborating in AI. Intel has helped to optimise Baidu’s PaddlePaddle deep learning framework for its Xeon Scalable processors since 2016. More recently, Baidu and Intel developed the BIE-AI-Box – a hardware kit for analysing the frames of footage captured by cockpit cameras.

Intel sees a great deal of its future growth in AI. The company’s AI chips generated $1 billion in revenue last year and Intel expects a growth rate of 30 percent annually up to $10 billion by 2022.

Nvidia CEO is ‘happy to help’ if Tesla’s AI chip ambitions fail
AI News, 17 August 2018

Nvidia CEO Jensen Huang has teased that his company is ‘happy to help’ if Tesla fails in its goal of launching a rival AI chip.

Tesla currently uses Nvidia’s silicon for its vehicles. The company’s CEO, Elon Musk, said earlier this month that he’s a “big fan” of Nvidia but that an in-house AI chip would be able to outperform those of the leading processor manufacturer.

During a conference call on Thursday, Huang said customers are “super excited” about Nvidia’s Xavier technology for autonomous machines. He also noted that Xavier is already in production, whereas Tesla’s rival chip is yet to be seen.

Here’s what Huang had to say during the call:

“With respect to the next generation, it is the case that when we first started working on autonomous vehicles, they needed our help. We used the 3-year-old Pascal GPU for the current generation of Autopilot computers.

It’s very clear now that in order to have a safe Autopilot system, we need a lot more computing horsepower. In order to have safe computing, in order to have safe driving, the algorithms have to be rich. It has to be able to handle corner conditions in a lot of diverse situations.

Every time there are more and more corner conditions or more subtle things that you have to do, or you have to drive more smoothly or be able to take turns more quickly, all of those requirements require greater computing capability. And that’s exactly the reason why we built Xavier. Xavier is in production now. We’re seeing great success and customers are super excited about Xavier.

That’s exactly the reason why we built it. It’s super hard to build Xavier and all the software stack on top of it. If it doesn’t turn out for whatever reason for them [Tesla] you can give me a call and I’d be more than happy to help.”

The conference call followed the release of Nvidia’s fiscal earnings report, in which the company reported better-than-expected earnings.

“Growth across every platform – AI, Gaming, Professional Visualization, self-driving cars – drove another great quarter,” said Huang. “Fueling our growth is the widening gap between demand for computing across every industry and the limits reached by traditional computing.”

However, due to lower-than-expected revenue guidance, Nvidia stock fell by six percent on Thursday following the earnings report.

What are your thoughts on Huang’s comments? Let us know below.

Intel’s AI chip business is now worth $1bn per year, $10bn by 2022
AI News, 9 August 2018

The size of Intel’s AI chip business today is huge, but it’s nothing compared to where it expects to be in just four years’ time.

Speaking during the company’s Innovation Summit in Santa Clara, Intel Executive VP Navin Shenoy revealed a new focus on AI development.

The company’s AI-focused Xeon processors generated $1 billion in revenues during 2017. By 2022, it expects to be generating around $10 billion per year.
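Those two figures imply a steep compound annual growth rate; a quick sketch, assuming simple annual compounding over the five years from 2017 to 2022:

```python
# Implied compound annual growth rate (CAGR) from the article's figures:
# $1bn of AI revenue in 2017, targeting $10bn by 2022 (five years).
start_revenue = 1e9
target_revenue = 10e9
years = 5

cagr = (target_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 58% per year
```

In other words, a tenfold increase over five years requires revenue to grow by well over half each year.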

AI is set to be implemented in many areas of our lives in the coming years, across a variety of devices.

Shenoy claims recent breakthroughs have increased the company’s AI performance by 200x since 2014. He teases further improvements are on their way in upcoming releases.

The company will be launching its ‘Cascade Lake’ Xeon processor later this year with 11 times better performance for AI image recognition.

Arriving in 2019 will be ‘Cooper Lake’, which uses 14-nanometer manufacturing and will offer even better performance. For 2020, the company is targeting ‘Ice Lake’, built on 10-nanometer manufacturing technology.

“After 50 years, this is the biggest opportunity for the company,” says Shenoy. “We have 20 percent of this market today.”

The admission that it holds just 20 percent of the market today is a bold one, and shows the company is confident of significantly increasing that share in the coming years. It faces significant competition from Nvidia in particular.

Five years ago, around a third of Intel’s revenue was data-centric; today, it accounts for around half of the company’s business.

Shenoy’s comments today show how seriously Intel is taking its AI business and the firm’s confidence it will be a major player.

What are your thoughts on Intel’s AI business?
