AWS announces nine major updates for its ML platform SageMaker – AI News (https://news.deepgeniusai.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/), Wed, 09 Dec 2020

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.
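Data Wrangler's built-in transformers are applied through the Studio UI rather than through code, but the kind of operation they automate — normalising a numeric column, for instance — can be sketched in plain Python. This is illustrative only, not Data Wrangler's implementation:

```python
# Illustrative sketch: min-max normalisation, one of the common
# transformations a data-preparation tool applies. The real Data Wrangler
# transformers run inside SageMaker Studio without any code.

def min_max_normalise(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant column: map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

prices = [10.0, 20.0, 40.0, 10.0]
print(min_max_normalise(prices))  # smallest value -> 0.0, largest -> 1.0
```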

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built repository makes it much easier for teams of developers and data scientists to name, organise, find, and share sets of features. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.
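The announcement doesn't include code, but the general pattern a feature store implements — named feature groups keyed by record ID, written once and read by both training and inference — can be sketched with an in-memory stand-in. All names below are invented for illustration; this is not the SageMaker Feature Store API:

```python
# Hypothetical in-memory stand-in for a feature store, illustrating the
# pattern SageMaker Feature Store provides as a managed service: features
# are written once, keyed by record ID, then read by both training jobs
# and inference code.

class MiniFeatureStore:
    def __init__(self):
        self._groups = {}  # group name -> {record_id -> feature dict}

    def put(self, group, record_id, features):
        self._groups.setdefault(group, {})[record_id] = dict(features)

    def get(self, group, record_id):
        """Low-latency point lookup, as used at inference time."""
        return self._groups[group][record_id]

store = MiniFeatureStore()
store.put("customers", "c-42", {"tenure_days": 180, "orders_90d": 7})
print(store.get("customers", "c-42")["orders_90d"])  # 7
```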

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up is SageMaker Pipelines, which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow, including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm setup, debugging steps, and optimisation steps.
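Conceptually, such a pipeline is an ordered list of named steps executed in sequence, each feeding its output to the next. A toy sketch with invented step names (not the SageMaker Pipelines SDK):

```python
# Conceptual sketch of an ML pipeline as an ordered list of steps, each
# receiving the previous step's output. Step names are invented; the real
# SageMaker Pipelines service defines steps through its own SDK.

def run_pipeline(steps, data):
    for name, step in steps:
        data = step(data)
        print(f"completed step: {name}")
    return data

pipeline = [
    ("prepare",   lambda rows: [r for r in rows if r is not None]),
    ("transform", lambda rows: [r * 2 for r in rows]),
    ("train",     lambda rows: {"model": sum(rows) / len(rows)}),
]
print(run_pipeline(pipeline, [1, None, 3]))  # {'model': 4.0}
```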

SageMaker Clarify may be one of the most important features being debuted by AWS this week, given the growing scrutiny of fairness and bias in AI.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turning to often time-consuming open-source tools, developers can use the integrated solution to detect and counter bias in their models quickly.
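AWS doesn't detail Clarify's metrics in this announcement, but one of the simplest measures such bias-detection tools report is the gap in positive-prediction rates between two groups (demographic parity difference). A generic sketch, not Clarify's implementation:

```python
# A minimal bias check of the kind bias-detection tools report:
# demographic parity difference, i.e. the gap in positive-prediction
# rates between two groups. This is a generic fairness metric, not
# Clarify's specific implementation.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def demographic_parity_diff(preds_group_a, preds_group_b):
    """0.0 means both groups receive positive predictions at equal rates."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

group_a = [1, 1, 0, 1]  # 75% positive predictions
group_b = [1, 0, 0, 1]  # 50% positive predictions
print(demographic_parity_diff(group_a, group_b))  # 0.25
```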

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.
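The mechanics of flagging a training bottleneck from utilisation metrics can be sketched generically. The thresholds and heuristic below are invented for illustration; Deep Profiling's actual rules are not described in the announcement:

```python
# Generic sketch of bottleneck detection from resource metrics: if GPU
# utilisation stays low while CPU utilisation is pegged, the data input
# pipeline is likely starving the accelerator. Thresholds are invented.

def detect_bottleneck(samples, gpu_low=30.0, cpu_high=90.0):
    """samples: list of (cpu_pct, gpu_pct) utilisation readings."""
    alerts = []
    for i, (cpu, gpu) in enumerate(samples):
        if gpu < gpu_low and cpu > cpu_high:
            alerts.append(f"sample {i}: CPU-bound input pipeline suspected")
    return alerts

readings = [(95.0, 20.0), (50.0, 85.0), (97.0, 15.0)]
for alert in detect_bottleneck(readings):
    print(alert)
```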

Next up is Distributed Training on SageMaker, which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
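At its core, data parallelism means sharding each batch across devices, computing a partial gradient on each, and averaging the results. A toy sketch of that splitting-and-averaging idea (not AWS's engine, which also overlaps communication with computation):

```python
# Toy sketch of data parallelism: shard a batch across N workers, have
# each compute a partial "gradient", then average. Real engines, including
# SageMaker's, do this across GPUs with overlapped communication.

def shard(batch, n_workers):
    """Split a batch into n_workers nearly equal shards."""
    return [batch[i::n_workers] for i in range(n_workers)]

def mean_gradient(examples):
    # Stand-in "gradient": the mean of the examples in the shard.
    return sum(examples) / len(examples)

batch = [1.0, 2.0, 3.0, 4.0]
grads = [mean_gradient(s) for s in shard(batch, 2)]
print(sum(grads) / len(grads))  # averaged across workers: 2.5
```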

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. Beyond optimising models and managing devices, Edge Manager can cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and display a dashboard within the SageMaker console which tracks and provides a visual report on the operation of deployed models.
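Cryptographically signing a model artifact amounts to binding its bytes to a key so that tampering is detectable before the model is loaded on a device. Edge Manager's actual signing scheme isn't specified in the announcement; a stdlib sketch of the idea using HMAC:

```python
import hashlib
import hmac

# Sketch of model signing: compute an HMAC over the model bytes with a
# secret key, ship the tag alongside the artifact, and verify before
# loading. This illustrates tamper detection generally, not Edge
# Manager's specific scheme.

def sign_model(model_bytes, key):
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes, key, signature):
    return hmac.compare_digest(sign_model(model_bytes, key), signature)

key = b"device-fleet-secret"            # hypothetical key material
artifact = b"\x00\x01model-weights..."  # hypothetical model bytes
tag = sign_model(artifact, key)
print(verify_model(artifact, key, tag))         # True
print(verify_model(artifact + b"x", key, tag))  # False: tampered bytes
```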

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers who have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech.

Google Cloud AI Platform updates make it ‘faster and more flexible’ – AI News (https://news.deepgeniusai.com/2019/10/29/google-cloud-ai-platform-updates-faster-flexible/), Tue, 29 Oct 2019

Google has issued several updates for its Cloud AI Platform which aim to make it ‘faster and more flexible’ for running machine learning workloads.

Cloud AI Platform is Google’s machine learning platform-as-a-service (ML PaaS) designed for AI developers, engineers, and data scientists. The platform is end-to-end and supports the full development cycle from preparing data, to training, all the way to building and deploying machine learning models.

Among the most noteworthy additions to the platform is support for Nvidia GPUs. As Google explains, “ML models are so complex that they only run with acceptable latency on machines with many CPUs, or with accelerators like NVIDIA GPUs. This is especially true of models processing unstructured data like images, video, or text.”

Previously, Cloud AI Platform only supported one vCPU and 2GB of RAM. You can now add GPUs, like the inference-optimised, low latency NVIDIA T4, for AI Platform Prediction. The basic tier adds support for up to four vCPUs.

AI Platform Prediction is being used by Conservation International, a Washington-based organisation with the mission “to responsibly and sustainably care for nature, our global biodiversity, for the wellbeing of humanity,” for a collaborative project called Wildlife Insights.

“Wildlife Insights will turn millions of wildlife images into critical data points that help us better understand, protect and save wildlife populations around the world,” explains Eric H. Fegraus, Senior Director, Conservation Technology.

“Google Cloud’s AI Platform helps us reliably serve machine learning models and easily integrate their predictions with our application. Fast predictions, in a responsive and scalable GPU hardware environment, are critical for our user experience.”

Support for running custom containers in which to train models has also become generally available. Users can supply their own Docker images with an ML framework preinstalled to run on AI Platform. Developers can test container images locally before they’re deployed to the cloud.

Customers can now use the platform for inference – hosting a trained model that responds with predictions. Machine learning models can be hosted on Google Cloud AI Platform, with AI Platform Prediction used to infer target values for new data.

Oh, and AI Platform Prediction is now built on Kubernetes, which enabled Google to “build a reliable and fast serving system with all the flexibility that machine learning demands.”


Box will launch ‘Skills Kit’ for building custom AI integrations – AI News (https://news.deepgeniusai.com/2018/08/31/box-skills-lot-building-ai-integrations/), Fri, 31 Aug 2018

California-based cloud content management and file sharing service provider Box recently announced that its Skills Kit platform, which allows organisations and developers to build AI integrations for interacting with stored content on their own, will be available to all customers in December 2018.

The Box Skills framework was first announced in 2017, and an additional layer called the ‘Box Skills Kit’ has been in development since its inception. The latter is a toolkit that allows companies to develop their own bespoke versions of these integrations. The toolkit has attracted development from the likes of IBM, Microsoft, Google, Deloitte, and AIM Consulting.

Chief product officer at Box, Jeetu Patel, said: “Artificial intelligence has the potential to unlock incredible insights, and we are building the world’s best framework, in Box Skills, for bringing that intelligence to enterprise content.”

The Skills Kit has already been used by spirits firm Remy Cointreau. This work involved taking the basic Box Skill for automatically matching metadata to uploaded images, and modifying it so that it would identify specific company products in images. This is how the uploaded images are sorted into specific folders without the need for human interaction or verification.
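The customisation described — taking a generic image-tagging Skill and filtering its output down to company products before routing files to folders — can be sketched as a post-processing step. The catalogue and tags below are invented stand-ins, not the Box Skills Kit API:

```python
# Sketch of the customisation described above: a generic tagging step
# yields labels for an uploaded image; a company-specific filter keeps
# only known product names and picks a destination folder. All names
# here are invented for illustration.

PRODUCT_CATALOGUE = {"cognac-vsop", "cognac-xo", "gin-classic"}

def route_image(generic_tags):
    """Return (matched products, destination folder) for an image's tags."""
    products = [t for t in generic_tags if t in PRODUCT_CATALOGUE]
    folder = f"/products/{products[0]}" if products else "/unsorted"
    return products, folder

tags = ["bottle", "cognac-xo", "table"]
print(route_image(tags))  # (['cognac-xo'], '/products/cognac-xo')
```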

Box also revealed that its Box Skills platform, which previously only offered pre-built AI integrations, can now host custom AI models built by third-party AI firms. This means that if an organisation prefers a specific machine learning model built with IBM Watson Studio, Google Cloud AutoML, Microsoft Azure Custom Vision, or AWS SageMaker, it can now be integrated into the Box platform to utilise the stored data.

The company also announced updates to its core automation services, which now enable customers to build their own scripts for repetitive workloads. For instance, a marketing team could automate the creation of a template at the beginning of every month and notify specific users to begin collaborating on a new pitch.

Box’s solution appears to be aimed towards smaller work groups that have predictable repetitive tasks in between periods of ad hoc collaboration. It’s less suited for more complicated tasks or those which are unpredictable.

The dashboard for creating these pre-scripted events is very simple, as every automation is based on the premise of ‘if this, then that’. This means that automated processes can be designed quickly by using the drop-down menus.
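An ‘if this, then that’ automation reduces to a list of (trigger, action) pairs evaluated against each incoming event. A toy sketch with invented triggers and actions, not Box's actual event types:

```python
# Toy 'if this, then that' rule engine of the kind Box's automation
# dashboard builds from drop-down menus. Triggers and actions are
# invented stand-ins for illustration.

def run_rules(rules, event):
    """Evaluate every (trigger, action) pair against one event."""
    fired = []
    for trigger, action in rules:
        if trigger(event):
            fired.append(action(event))
    return fired

rules = [
    (lambda e: e["type"] == "month_start",
     lambda e: f"created pitch template for {e['month']}"),
    (lambda e: e["type"] == "file_uploaded",
     lambda e: f"notified reviewers about {e['name']}"),
]
print(run_rules(rules, {"type": "month_start", "month": "September"}))
```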

Box Skills supports over 20 different types of input and output, and includes options for targeting metadata, specific files, or entire folders.

Are you looking forward to the release of Box Skills?

