Developers – AI News
https://news.deepgeniusai.com
Artificial Intelligence News

AWS announces nine major updates for its ML platform SageMaker
Wed, 09 Dec 2020
https://news.deepgeniusai.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/

The post AWS announces nine major updates for its ML platform SageMaker appeared first on AI News.

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.
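The idea of named, reusable transformers can be sketched in a few lines of plain Python. This is a toy illustration of the pattern, not Data Wrangler's actual API; the transformer names and `apply` helper are hypothetical:

```python
# Toy registry of named, reusable data transformers, applied in
# sequence to a column of values without bespoke code per dataset.

TRANSFORMERS = {
    "fill_missing": lambda xs: [x if x is not None else 0.0 for x in xs],
    "min_max_scale": lambda xs: [(x - min(xs)) / (max(xs) - min(xs)) for x in xs],
}

def apply(column, steps):
    # Run each named transformer over the column, in order.
    for name in steps:
        column = TRANSFORMERS[name](column)
    return column

raw = [10.0, None, 30.0, 50.0]
print(apply(raw, ["fill_missing", "min_max_scale"]))
# [0.2, 0.0, 0.6, 1.0]
```

Because the transformations are named and composable, the same preparation recipe can be reused across datasets, which is the time-saving Data Wrangler is aiming at.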

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to solve the problem of storing features that are mapped to multiple models. A purpose-built feature store makes it much easier for teams of developers and data scientists to name, organise, find, and share sets of features. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.
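The pattern is easy to illustrate. The snippet below is not the SageMaker Feature Store API, just a minimal in-memory sketch of what a shared feature store does: features are ingested once under a record identifier, then retrieved either one record at a time (inference) or in bulk (training). All names here are hypothetical:

```python
# Toy feature store: ingest features once, keyed by record ID, then
# fetch them for training or low-latency inference from any model.

class FeatureStore:
    def __init__(self):
        self._groups = {}  # group name -> {record_id -> feature dict}

    def ingest(self, group, record_id, features):
        self._groups.setdefault(group, {})[record_id] = dict(features)

    def get_record(self, group, record_id):
        # Single-record lookup, as used at inference time.
        return self._groups[group][record_id]

    def get_training_set(self, group, feature_names):
        # Bulk retrieval, as used when building a training dataset.
        return [
            [rec[name] for name in feature_names]
            for rec in self._groups[group].values()
        ]

store = FeatureStore()
store.ingest("customers", "c1", {"age": 34, "ltv": 120.0})
store.ingest("customers", "c2", {"age": 41, "ltv": 310.5})

print(store.get_record("customers", "c2")["ltv"])    # 310.5
print(store.get_training_set("customers", ["age"]))  # [[34], [41]]
```

The point of centralising this is that a feature engineered by one team is defined exactly once and consumed identically at training and inference time, avoiding the skew that per-model feature copies introduce.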

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up is SageMaker Pipelines, which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow, including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm setup, debugging steps, and optimisation steps.
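As a rough sketch of the idea (plain Python, not the SageMaker Pipelines SDK), a pipeline is just an ordered sequence of named steps, where the artifact produced by one step feeds the next; the step names and functions below are invented for illustration:

```python
# Minimal sketch of a step-based ML pipeline: each step is a named
# function; the pipeline runs them in order, threading the artifact
# produced by one step into the next.

def load_data(_):
    return [1.0, 2.0, 3.0, 4.0]

def transform(data):
    # Stand-in for a data-preparation step (scale to [0, 1]).
    top = max(data)
    return [x / top for x in data]

def train(features):
    # Stand-in for a real training job: the "model" is just the mean.
    return sum(features) / len(features)

pipeline = [("load", load_data), ("transform", transform), ("train", train)]

artifact = None
for name, step in pipeline:
    artifact = step(artifact)
    print(f"step {name!r} done")

print(artifact)  # 0.625
```

A real CI/CD service adds what this sketch lacks: versioning of each step, caching, automatic re-runs when inputs change, and approval gates before deployment.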

SageMaker Clarify may be one of the most important features debuted by AWS this week, given the growing scrutiny of bias in machine learning systems.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turning to often time-consuming open-source tools, developers can use the integrated solution to quickly detect and counter any bias in their models.
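One common bias check, shown here as a generic hedged example rather than Clarify's actual implementation, is the disparate-impact ratio: the rate of positive model outcomes for one group divided by the rate for another, where values well below 1.0 suggest the model disadvantages the first group:

```python
# Disparate-impact ratio: positive-outcome rate of one group divided
# by that of another. A common rule of thumb flags values below 0.8.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    return positive_rate(group_a) / positive_rate(group_b)

# 1 = model predicted a positive outcome (e.g. a loan approved)
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 20% positive
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]  # 60% positive

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 0.33
print(ratio < 0.8)      # True -> flag for review
```

Tools in this space typically compute many such metrics across both the training data and the live predictions, which is why having them integrated into the workflow saves time over assembling open-source pieces by hand.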

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.
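The underlying idea can be sketched simply: sample resource utilisation during training and raise an alert when a resource sits below a threshold for a sustained stretch (for example, a GPU starved by a slow data loader). This is a hypothetical toy illustration, not the profiler's real logic:

```python
# Toy bottleneck detector: flag a training job whose GPU utilisation
# stays low for several consecutive samples, which often indicates an
# input-pipeline bottleneck rather than a compute-bound workload.

def detect_bottleneck(gpu_util_samples, threshold=30.0, window=3):
    run = 0
    for i, util in enumerate(gpu_util_samples):
        run = run + 1 if util < threshold else 0
        if run >= window:
            return i  # index where the sustained stall was confirmed
    return None  # no sustained low-utilisation period found

samples = [85.0, 90.0, 25.0, 20.0, 15.0, 88.0]
print(detect_bottleneck(samples))  # 4
```

The value of doing this inside the training service is that the metrics are collected framework-wide without touching the training script, as the article notes.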

Next up, we have Distributed Training on SageMaker, which AWS claims makes it possible to train large, complex deep learning models up to twice as fast as current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
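Conceptually, data parallelism splits each batch into shards, one per device; each device computes gradients on its shard and the results are averaged. A minimal sketch of the sharding step (illustrative only; SageMaker's engine handles this automatically):

```python
# Split a dataset into near-equal shards, one per device, as a data
# parallelism engine would before dispatching work to each GPU.

def shard(data, num_devices):
    base, extra = divmod(len(data), num_devices)
    shards, start = [], 0
    for d in range(num_devices):
        # The first `extra` devices take one additional item each.
        size = base + (1 if d < extra else 0)
        shards.append(data[start:start + size])
        start += size
    return shards

batch = list(range(10))
print(shard(batch, 4))  # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

Keeping the shards balanced matters: with synchronous training, every device waits for the slowest one, so an uneven split wastes exactly the capacity the extra GPUs were meant to add.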

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager also provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard within the SageMaker console which tracks and provides a visual report on the operation of the deployed models.

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers who have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

TensorFlow is now available for those shiny new ARM-based Macs
Thu, 19 Nov 2020
https://news.deepgeniusai.com/2020/11/19/tensorflow-now-available-new-arm-based-macs/

The post TensorFlow is now available for those shiny new ARM-based Macs appeared first on AI News.

A new version of machine learning library TensorFlow has been released with optimisations for Apple’s new ARM-based Macs.

While still technically in pre-release, the Mac-optimised TensorFlow fork supports native hardware acceleration on Mac devices with M1 or Intel chips through Apple’s ML Compute framework.

The new TensorFlow release boasts an over 10x speed improvement for common training tasks. While impressive, this has to be taken in context: the GPU was not previously used for training tasks on these machines.

A look at the benchmarks still indicates a substantial gap between Intel- and M1-based Macs across various machine learning models.

In a blog post, Pankaj Kanwar, Tensor Processing Units Technical Program Manager at Google, and Fred Alcober, TensorFlow Product Marketing Lead at Google, wrote:

“These improvements, combined with the ability of Apple developers being able to execute TensorFlow on iOS through TensorFlow Lite, continue to showcase TensorFlow’s breadth and depth in supporting high-performance ML execution on Apple hardware.”

We can only hope that running these workloads doesn’t turn MacBooks into expensive frying pans—but the remarkable efficiency they’ve displayed so far gives little cause for concern.

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud
Tue, 03 Nov 2020
https://news.deepgeniusai.com/2020/11/03/nvidia-mlperf-a100-gpu-amazon-cloud/

The post NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud appeared first on AI News.

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers are able to access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adaptor (EFA).

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, the research arm of Toyota is exploring how P4d can improve their existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started here.

Eggplant launches AI-powered software testing in the cloud
Tue, 06 Oct 2020
https://news.deepgeniusai.com/2020/10/06/eggplant-ai-powered-software-testing-cloud/

The post Eggplant launches AI-powered software testing in the cloud appeared first on AI News.

Automation specialists Eggplant have launched a new AI-powered software testing platform.

The cloud-based solution aims to help accelerate the delivery of software in a rapidly-changing world while maintaining a high bar of quality.

Gareth Smith, CTO of Eggplant, said:

“The launch of our cloud platform is a significant milestone in our mission to rid the world of bad software. In our new normal, delivering speed and agility at scale has never been more critical.

Every business can easily tap into Eggplant’s AI-powered automation platform to accelerate the pace of delivery while ensuring a high-quality digital experience.”

Enterprises have accelerated their shift to the cloud due to the pandemic and the resulting rise in practices such as home working.

Recent research from Centrify found that 51 percent of businesses which embraced a cloud-first model were able to handle the challenges presented by COVID-19 far more effectively.

Eggplant’s Digital Automation Intelligence (DAI) Platform features:

  • Cloud-based end-to-end automation: The scalable fusion engine provides frictionless, efficient, continuous, and parallel end-to-end testing in the cloud, for any apps and websites, on any target platforms.
  • Monitoring insights: The addition of advanced user experience (UX) data points and metrics enables customers to benchmark their applications’ UX performance. These insights, combined with UX behaviour data, help improve SEO.
  • Fully automated self-healing test assets: AI identifies the tests needed, then builds and runs them automatically, under full user control. These tests are self-healing and automatically adapt as the system under test evolves.
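The "self-healing" idea can be illustrated with a toy example (not Eggplant's implementation; all selectors here are hypothetical): a test keeps several ways of locating an element and, when the preferred one stops matching after a UI change, falls back to an alternative and promotes it for future runs:

```python
# Toy self-healing element locator: try each known selector in order;
# when one matches, promote it so the test adapts as the UI changes.

class SelfHealingLocator:
    def __init__(self, selectors):
        self.selectors = list(selectors)  # ordered by preference

    def find(self, page_elements):
        for i, sel in enumerate(self.selectors):
            if sel in page_elements:
                # "Heal": move the working selector to the front.
                self.selectors.insert(0, self.selectors.pop(i))
                return sel
        raise LookupError("no selector matched; test needs human review")

locator = SelfHealingLocator(["#buy-button", "button.buy", "//button[1]"])

old_ui = {"#buy-button", "nav"}
new_ui = {"button.buy", "nav"}  # the element's id was removed in a redesign

print(locator.find(old_ui))  # #buy-button
print(locator.find(new_ui))  # button.buy (healed automatically)
print(locator.selectors[0])  # button.buy is now preferred
```

The test only escalates to a human when every known locator fails, which is what lets teams re-run suites against a changing application without constant maintenance.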

The solution helps to support the “citizen developer” movement—using AI to enable no-code/low-code development for people with minimal programming knowledge.

Both cloud and AI ranked highly in a recent study (PDF) by Deloitte of the most relevant technologies “to operate in the new normal”. Cloud and cybersecurity were joint first with 80 percent of respondents, followed by cognitive and AI tools (73%) and the IoT (65%).

Eggplant’s combination of AI and cloud technologies should help businesses to deal with COVID-19’s unique challenges and beyond.

(Photo by CHUTTERSNAP on Unsplash)

Full-stack AI solution SingularityNET switches Ethereum for Cardano
Thu, 01 Oct 2020
https://news.deepgeniusai.com/2020/10/01/full-stack-ai-solution-singularitynet-ethereum-cardano/

The post Full-stack AI solution SingularityNET switches Ethereum for Cardano appeared first on AI News.

Full-stack AI solution SingularityNET is switching the Ethereum blockchain for peer-reviewed rival Cardano.

SingularityNET is a decentralised AI marketplace which has the ultimate goal of forming the basis for the emergence of the world’s first true Artificial General Intelligence (AGI).

One of the brightest and most respected minds in AI leads the SingularityNET project, Dr Ben Goertzel.

“Current speed and cost issues with the Ethereum blockchain have increased the urgency of exploring alternatives for SingularityNET’s blockchain underpinning,” says Goertzel.

“The ambitious Ethereum 2.0 design holds promise but the timing of rollout of different aspects of this next-generation Ethereum remains unclear, along with many of the practical particulars.”

SingularityNET claims that Cardano has now reached a level of maturity which makes it possible to port such a complex project to the new blockchain.

Back in August, Goertzel gave a talk at the Cardano Summit on why functional programming – enabled by Cardano’s Haskell and new Plutus languages – is invaluable for blockchain-based AI.

Current AIs in the SingularityNET marketplace are designed for specific, relatively straightforward tasks like image/language processing, time series analysis, and genomics data analysis. The project’s Android SDK has been used for tasks like separating vocals from music in the SongSplitter app.

While useful, these AIs show the current limitations of the technology today. SingularityNET, true to its name, has far more ambitious plans.

The broader vision of SingularityNET is to decentralise AI away from “big tech” and prevent AIs from being siloed so that one AI can outsource work to others and use their specific expertise to solve problems. This will ultimately bring us a step closer to AGI which acts more like a human by seeking help where needed.

In a blog post, SingularityNET provides further technical details about its decision to switch to Cardano:

“There may also be synergies between SingularityNET-Cardano integration and the OpenCog Hyperon initiative, which is focused on creating a more scalable, flexible and usable successor to the current OpenCog AGI R&D platform (which underlies a handful of specialized AI agents currently running on the SingularityNET network).

The OpenCog AGI design involves a metagraph knowledge store called the Atomspace, concurrently and cooperatively acted on by a number of different cognitive processes representing different learning and reasoning methods such as probabilistic logic, evolutionary learning, pattern mining and neural pattern recognition. Currently, to integrate OpenCog into SingularityNET, one creates a SingularityNET agent wrapping a whole OpenCog system with its own internal Atomspace and AI-process population.

However, in a SingularityNET-on-Cardano approach, it may eventually be possible to take a more decentralized approach in which the Hyperon Atomspace is provided as a service to any SingularityNET agent who needs it, and many of the cognitive processes involved in the Hyperon design are represented as SingularityNET agents that interact with Atomspace via channels set up via SingularityNET protocols. Such an approach would exploit the deep commonalities between the new version of OpenCog’s Atomese language being created for Hyperon and the dependent type based API of APIs under current exploration. The result would be a more fundamentally decentralized approach to AGI design.”

A fascinating interview between Cardano founder Charles Hoskinson and Dr Goertzel can be viewed here.

Cardano vs Ethereum

Ethereum is currently the world’s largest decentralised platform but suffers from slow speeds and increasing transaction costs. A switch to a more efficient and environmentally-friendly Proof-of-Stake consensus is underway which – along with new scaling innovations – should address Ethereum’s issues. However, it’s expected to be several years before Ethereum 2.0 is fully rolled out.

Cardano has observed Ethereum’s problems and is taking its time to address them with a scientific and peer-reviewed approach, which brings legitimacy to the project that will be needed for enterprise adoption.

While it could appear from the outside like Cardano has been lazy and is far behind other projects – after all, it’s yet to even support smart contracts – this is far from the case. Cardano is often ranked top of all blockchain projects for development activity and has continued signing large partnerships.

“Cardano has gone from strength to strength this year, and having the backing of such a prominent organisation only reaffirms this,” comments Hoskinson. 

“SingularityNET is a project we’ve followed for a long time, and we’re excited to see how the Cardano blockchain can help SingularityNET realise its ambitious goals.”   

Smart contracts are due to launch on Cardano in the coming months as part of what it calls its ‘Goguen’ phase. Unlike Ethereum, Cardano is using Proof-of-Stake from the start and won’t have the speed, cost, and scalability problems of the current decentralised platform leader.

Cardano will even become the most decentralised network in the space following the recent successful launch of its ‘Shelley’ upgrade.

On its website, Cardano explains:

“We expect Cardano to be 50-100 times more decentralized than other large blockchain networks, with the incentives scheme designed to reach equilibrium around 1,000 stake pools.

Current prominent blockchain networks are often controlled by less than 10 mining pools, exposing them to serious risk of compromise by malicious behavior – something which Cardano avoids with a system inherently designed to encourage greater decentralization.”

We’re already seeing groundbreaking projects like SingularityNET beginning to shift over to Cardano. While it may appear that Cardano has a long way to catch up with Ethereum, it’s worth remembering that – of Ethereum’s close to 3,000 apps – only a minority carry significant value or have many active users.

There is currently around $11 billion “locked up” in Ethereum’s much-vaunted DeFi projects. That’s nothing to be sniffed at, but it only takes some big projects to move to Cardano to put a big dent in Ethereum’s primary use case. It’s also worth remembering the whole space is very young with plenty of growth potential—the global legacy financial system is worth hundreds of trillions of dollars.

While multiple blockchain projects will likely co-exist, Cardano’s ability to “flip” Ethereum is looking more possible than ever.

Nvidia and ARM will open ‘world-class’ AI centre in Cambridge
Mon, 14 Sep 2020
https://news.deepgeniusai.com/2020/09/14/nvidia-arm-world-class-ai-centre-cambridge/

The post Nvidia and ARM will open ‘world-class’ AI centre in Cambridge appeared first on AI News.

Nvidia is already putting its $40 billion ARM acquisition to good use by opening a “world-class” AI centre in Cambridge.

British chip designer ARM’s technology is at the heart of most mobile devices. Meanwhile, Nvidia’s GPUs are increasingly being used for AI computation in servers, desktops, and even things like self-driving vehicles.

However, Nvidia was most interested in ARM’s presence in edge devices—which it estimates to be in the region of 180 billion.

Jensen Huang, CEO of Nvidia, said:

“ARM is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make ARM even more incredible and take it to even higher levels.

We want to propel it — and the UK — to global AI leadership.”

There were concerns Nvidia’s acquisition would lead to job losses, but Nvidia has promised to keep the business in the UK, and says it plans to hire more staff and retain ARM’s iconic brand.

Nvidia is going further in its commitment to the UK by opening a new AI centre in Cambridge, which is home to an increasing number of exciting startups in the field such as FiveAI, Prowler.io, Fetch.ai, and Darktrace.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named.

Here, leading scientists, engineers and researchers from the UK and around the world will come to develop their ideas, collaborate and conduct their ground-breaking work in areas like healthcare, life sciences, self-driving cars, and other fields.”

The new centre will have five key features when it opens:

  • ARM/Nvidia-based supercomputer – set to be one of the most powerful AI supercomputers in the world.
  • Research Fellowships and Partnerships – Nvidia will use the centre to establish new UK-based research partnerships, expanding on successful relationships already established with King’s College and Oxford.
  • AI Training – Nvidia will make its AI curriculum available across the UK to help create job opportunities and prepare “the next generation of UK developers for AI leadership”
  • Startup Accelerator – With so many of the world’s most exciting AI companies launching in the UK, the Nvidia Inception accelerator will help startups succeed by providing access to the aforementioned supercomputer, connections to researchers from NVIDIA and partners, technical training, and marketing promotion.
  • Industry Collaboration – AI is still in its infancy but will impact every industry to some extent. Nvidia says its new research facility will be an open hub for industry collaboration, building on the company’s existing relationships with the likes of GSK, Oxford Nanopore, and other leaders in their fields.

The UK is Europe’s leader in AI and the British government is investing heavily in ensuring it maintains its pole position. Beyond funding, the UK is also aiming to ensure it’s among the best places to run an AI company.

Current EU rules, especially around data, are often seen as limiting the development of European AI companies when compared to elsewhere in the world. While the UK will have to avoid accusations of a so-called “bonfire of regulations” post-Brexit, data collection regulation is likely an area which will be relaxed.

In the UK’s historic trade deal signed with Japan last week, several enhancements were made over the blanket EU-Japan deal signed earlier this year. Among the perceived improvements is the “free flow of data” by not enforcing localisation requirements, and that algorithms can remain private.

UK trade secretary Liz Truss said: “The agreement we have negotiated – in record time and in challenging circumstances – goes far beyond the existing EU deal, as it secures new wins for British businesses in our great manufacturing, food and drink, and tech industries.”

Japan and the UK, as two global tech giants, are expected to deepen their collaboration in the coming years—building on the trade deal signed last week.

Shigeki Ishizuka, Chairman of the Japan Electronics and Information Technology Industries Association, said: “We are confident that this mutual relationship will be further strengthened as an ambitious agreement that will contribute to the promotion of cooperation in research and development, the promotion of innovation, and the further expansion of inter-company collaboration.”

Nvidia’s investment shows that it has confidence in the UK’s strong AI foundations continuing to gain momentum in the coming years.

(Photo by A Perry on Unsplash)

The post Nvidia and ARM will open ‘world-class’ AI centre in Cambridge appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/09/14/nvidia-arm-world-class-ai-centre-cambridge/feed/ 0
Microsoft: The UK must increase its AI skills, or risk falling behind https://news.deepgeniusai.com/2020/08/12/microsoft-uk-ai-skills-risk-falling-behind/ https://news.deepgeniusai.com/2020/08/12/microsoft-uk-ai-skills-risk-falling-behind/#comments Wed, 12 Aug 2020 13:46:27 +0000 https://news.deepgeniusai.com/?p=9809 A report from Microsoft warns that the UK faces an AI skills gap which may harm its global competitiveness. The research, titled AI Skills in the UK, shines a spotlight on some concerning issues. For its UK report, Microsoft used data from a global AI skills study featuring more than 12,000 people in 20 countries... Read more »

The post Microsoft: The UK must increase its AI skills, or risk falling behind appeared first on AI News.

]]>
A report from Microsoft warns that the UK faces an AI skills gap which may harm its global competitiveness.

The research, titled AI Skills in the UK, shines a spotlight on some concerning issues.

For its UK report, Microsoft used data from a global AI skills study featuring more than 12,000 people in 20 countries to see how the UK is doing in comparison to the rest of the world.

Most notably, the UK is seeing a higher failure rate for AI projects than the rest of the world: 29 percent of AI ventures launched by UK businesses have generated no commercial value, compared to a 19 percent average elsewhere.

35 percent of British business leaders foresee an AI skills gap within two years, while 28 percent believe one already exists (above the global average of 24 percent).

However, it seems UK businesses aren’t helping to prepare employees with the skills they need. Just 17 percent of British employees have been part of AI reskilling efforts (compared to the global figure of 38 percent).

Agata Nowakowska, AVP EMEA at Skillsoft, said:

“UK employers will have to address the growing digital skills gap within the workforce to ensure their business is able to fully leverage every digital transformation investment that’s made. With technologies like AI and cloud becoming as commonplace as word processing or email in the workplace, firms will need to ensure employees can use such tools and aren’t apprehensive about using them.

Organisations will need to think holistically about managing reskilling, upskilling and job transitioning. As the war for talent intensifies, employee development and talent pooling will become increasingly vital to building a modern workforce that’s adaptable and flexible. Addressing and easing workplace role transitions will require new training models and approaches that include on-the-job training and opportunities that support and signpost workers to opportunities to upgrade their skills.” 

Currently, a mere 32 percent of British employees feel their workplace is doing enough to prepare them for an AI-enabled future (compared to the global average of 42 percent).

“The most successful organisations will be the ones that transform both technically and culturally, equipping their people with the skills and knowledge to become the best competitive asset they have,” comments Simon Lambert, Chief Learning Officer for Microsoft UK.

“Human ingenuity is what will make the difference – AI technology alone will not be enough.”

AI brain drain

It’s well-documented that the UK suffers from a “brain drain” problem. The country’s renowned universities – like Oxford and Cambridge – produce globally desirable AI talent, but graduates are often snapped up by Silicon Valley giants willing to pay far higher salaries than most British firms.

In one example, a senior professor at Imperial College London couldn’t understand why one of her students wasn’t turning up to any classes; few people pay £9,250 per year in tuition fees only to skip them. When she called to ask why he’d completed three years but wasn’t attending his final year, she found he had been offered a six-figure salary at Apple.

This problem also applies to the teachers who are needed to pass their knowledge on to future generations. Many are lured away from academia to work on groundbreaking projects with almost endless resources, fewer administrative duties, and handsome pay.

Some companies, Microsoft included, have taken measures to address the brain drain problem. After all, a lack of AI talent harms the entire industry.

Dr Chris Bishop, Director of Microsoft’s Research Lab in Cambridge, said:

“One thing we’ve seen over the past few years is: because there are so many opportunities for people with skills in machine learning, particularly in industry, we’ve seen a lot of outflux of top academic talent to industry.

This concerns us because it’s those top academic professors and researchers who are responsible not just for doing research, but also for nurturing the next generation of talent in this field.”

Since 2018, Microsoft has funded a program for training the next generation of data scientists and machine-learning engineers called the Microsoft Research-Cambridge University Machine Learning Initiative.

Under the initiative, Microsoft partners with universities to ensure it doesn’t simply poach talent: it allows employees to continue teaching roles, funds related PhD scholarships, sends researchers to co-supervise students in universities, and offers paid internships working alongside Microsoft teams on projects.

You can find the full AI Skills in the UK report here.

(Photo by William Warby on Unsplash)

The post Microsoft: The UK must increase its AI skills, or risk falling behind appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/08/12/microsoft-uk-ai-skills-risk-falling-behind/feed/ 1
Google’s Model Card Toolkit aims to bring transparency to AI https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/ https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/#respond Thu, 30 Jul 2020 16:02:21 +0000 https://news.deepgeniusai.com/?p=9782 Google has released a toolkit which it hopes will bring some transparency to AI models. People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you’ve got a general distrust which can hinder important advancements. Model Card Toolkit aims to step in and facilitate AI model... Read more »

The post Google’s Model Card Toolkit aims to bring transparency to AI appeared first on AI News.

]]>
Google has released a toolkit which it hopes will bring some transparency to AI models.

People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you’ve got a general distrust which can hinder important advancements.

Model Card Toolkit aims to step in and facilitate AI model transparency reporting for developers, regulators, and downstream users.

Google has been rolling out Model Cards itself over the past year, a concept the company first proposed in an October 2018 whitepaper.

Model Cards provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation and give a detailed overview of a model’s suggested uses and limitations. 
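The toolkit itself is a Python library, but the underlying idea is simple enough to sketch without it. The snippet below is a minimal, purely illustrative rendering of the kind of structured record a Model Card captures – the field names here are our own shorthand, not the toolkit’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model-card record (field names are ours, not the toolkit's API)."""
    name: str
    version: str
    intended_uses: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    evaluation_metrics: dict = field(default_factory=dict)

    def to_markdown(self) -> str:
        # Render the structured record as a human-readable report.
        lines = [f"# Model Card: {self.name} (v{self.version})", "## Intended uses"]
        lines += [f"- {use}" for use in self.intended_uses]
        lines.append("## Limitations")
        lines += [f"- {lim}" for lim in self.limitations]
        lines.append("## Evaluation")
        lines += [f"- {metric}: {value}" for metric, value in self.evaluation_metrics.items()]
        return "\n".join(lines)

card = ModelCard(
    name="face-detector",
    version="1.0",
    intended_uses=["Detect face bounding boxes in photos"],
    limitations=["Not evaluated on low-light imagery"],
    evaluation_metrics={"precision": 0.94},
)
print(card.to_markdown())
```

The real toolkit does essentially this at a larger scale: it scaffolds the structured fields, pulls in evaluation data where available, and exports an audience-appropriate report.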

So far, Google has released Model Cards for open source models built on its MediaPipe platform as well as its commercial Cloud Vision API Face Detection and Object Detection services.

Google’s new toolkit for Model Cards will simplify the process of creating them for third parties by compiling the necessary data and helping to build interfaces orientated towards specific audiences.

MediaPipe has published their Model Cards for each of their open-source models in their GitHub repository.

To demonstrate how the Model Cards Toolkit can be used in practice, Google has released a Colab tutorial that builds a Model Card for a simple classification model trained on the UCI Census Income dataset.

If you just want to dive right in, you can access the Model Cards Toolkit here.

(Photo by Marc Schulte on Unsplash)

The post Google’s Model Card Toolkit aims to bring transparency to AI appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/feed/ 0
DeepCode provides AI code reviews for over four million developers https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/ https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/#respond Tue, 21 Jul 2020 15:40:27 +0000 https://news.deepgeniusai.com/?p=9759 AI-powered code reviewer DeepCode has announced it’s checked the code of over four million developers. DeepCode’s machine learning-based bot is fluent in JavaScript, TypeScript, Java, C/C++, and Python. “Our data shows that over 50% of repositories have critical issues and every second pull-request has warnings about issues that need to be fixed,” said Boris Paskalev,... Read more »

The post DeepCode provides AI code reviews for over four million developers appeared first on AI News.

]]>
AI-powered code reviewer DeepCode has announced that it has now checked the code of over four million developers.

DeepCode’s machine learning-based bot is fluent in JavaScript, TypeScript, Java, C/C++, and Python.

“Our data shows that over 50% of repositories have critical issues and every second pull-request has warnings about issues that need to be fixed,” said Boris Paskalev, CEO and co-founder of DeepCode.

“By using DeepCode, these issues are automatically identified and logically explained as suggestions are made about how to fix them before code is deployed.”

Over the past few months, DeepCode has focused on improving the JavaScript skills of the bot. JavaScript frameworks and libraries such as Vue.js and React are supported. A demo of DeepCode’s analysis of React can be found here.

DeepCode claims its bot is now “up to 50x faster and finding more than double the number of serious bugs over all other tools combined while maintaining over 80% accuracy.”

The bot has been trained using machine learning on hundreds of millions of commits from freely available open source projects. DeepCode says this allows it to identify bugs before they surface in production.
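DeepCode’s models are proprietary, but the general idea of flagging bug-prone patterns before deployment can be illustrated with a deliberately simple static check. The sketch below (ours, not DeepCode’s) uses Python’s `ast` module to flag mutable default arguments, a classic issue a code-review bot would report:

```python
import ast

def find_mutable_defaults(source: str):
    """Flag function parameters whose default value is a mutable literal (list/dict/set)."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    # Record the function name and the line of the risky default.
                    findings.append((node.name, default.lineno))
    return findings

sample = """
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket
"""
print(find_mutable_defaults(sample))  # → [('append_item', 2)]
```

A learned system generalises far beyond hand-written rules like this one, but the workflow – parse the code, match risky patterns, report them with an explanation before merge – is the same.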

A recent survey by DeepCode found that 85 percent of people want software companies to focus less on new features and more on fixing bugs and security issues.

“Too many software companies still believe that new features are what users want the most,” commented Paskalev. “As this survey shows, what people really want is quality software that is safe to use.”

DeepCode is free for open source software and commercial teams of up to 30 developers. You can start analysing your code by connecting your GitHub, BitBucket, or GitLab account here.

The post DeepCode provides AI code reviews for over four million developers appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/feed/ 0
NVIDIA’s AI-focused Ampere GPUs are now available in Google Cloud https://news.deepgeniusai.com/2020/07/08/nvidia-ai-ampere-gpus-available-google-cloud/ https://news.deepgeniusai.com/2020/07/08/nvidia-ai-ampere-gpus-available-google-cloud/#respond Wed, 08 Jul 2020 10:56:12 +0000 https://news.deepgeniusai.com/?p=9734 Google Cloud users can now harness the power of NVIDIA’s Ampere GPUs for their AI workloads. The specific GPU added to Google Cloud is the NVIDIA A100 Tensor Core which was announced just last month. NVIDIA says the A100 “has come to the cloud faster than any NVIDIA GPU in history.” NVIDIA claims the A100... Read more »

The post NVIDIA’s AI-focused Ampere GPUs are now available in Google Cloud appeared first on AI News.

]]>
Google Cloud users can now harness the power of NVIDIA’s Ampere GPUs for their AI workloads.

The specific GPU added to Google Cloud is the NVIDIA A100 Tensor Core which was announced just last month. NVIDIA says the A100 “has come to the cloud faster than any NVIDIA GPU in history.”

NVIDIA claims the A100 boosts training and inference performance by up to 20x over its predecessors. Large AI models like BERT can be trained in just 37 minutes on a cluster of 1,024 A100s.

For those who enjoy their measurements in teraflops (TFLOPS), the A100 delivers around 19.5 TFLOPS of single-precision (FP32) performance and 156 TFLOPS for Tensor Float 32 (TF32) workloads.

Manish Sainani, Director of Product Management at Google Cloud, said:

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads.

With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

The announcement couldn’t have arrived at a better time, with many looking to harness AI for solutions to the COVID-19 pandemic and other global challenges such as climate change.

Aside from AI training and inference, customers will be able to use the new capabilities for data analytics, scientific computing, genomics, edge video analytics, and 5G services.

The new Ampere-based data centre GPUs are now available in alpha on Google Cloud. Users can access instances of up to 16 A100 GPUs, providing a total of 640GB of GPU memory and 1.3TB of system memory.
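As a quick sanity check, the figures quoted above are internally consistent: 640GB across 16 GPUs works out to 40GB per A100, and the quoted TFLOPS numbers imply an 8x TF32-over-FP32 throughput ratio. A trivial back-of-the-envelope sketch (our arithmetic on the article’s numbers, not an official spec sheet):

```python
# Back-of-the-envelope arithmetic on the figures quoted in the article.
num_gpus = 16
total_gpu_memory_gb = 640
print(total_gpu_memory_gb / num_gpus)  # → 40.0 (GB of memory per A100)

fp32_tflops = 19.5
tf32_tflops = 156
print(tf32_tflops / fp32_tflops)       # → 8.0 (TF32 speedup over FP32)
```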

You can register your interest for access here.

The post NVIDIA’s AI-focused Ampere GPUs are now available in Google Cloud appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/07/08/nvidia-ai-ampere-gpus-available-google-cloud/feed/ 0