Algorithmia announces Insights for ML model performance monitoring (5 November 2020)

Seattle-based Algorithmia has announced Insights, a solution for monitoring the performance of machine learning models.

Algorithmia specialises in artificial intelligence operations and management. The company is backed by Google LLC and focuses on simplifying AI projects for enterprises that are just getting started.

Diego Oppenheimer, CEO of Algorithmia, says:

“Organisations have specific needs when it comes to ML model monitoring and reporting.

For example, they are concerned with compliance as it pertains to external and internal regulations, model performance for improvement of business outcomes, and reducing the risk of model failure.

Algorithmia Insights helps users overcome these issues while making it easier to monitor model performance in the context of other operational metrics and variables.” 

Insights aims to help enterprises monitor the performance of their machine learning models. Many organisations currently lack that ability, or rely on a complex mix of tools and manual processes.

Operational metrics like execution time and request identification are combined with user-defined metrics such as confidence and accuracy to identify data skews, negative feedback loops, and model drift.
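To make that distinction concrete, here is a minimal, hypothetical sketch (not Algorithmia's actual SDK; the model object, metric names, and sink are placeholders) of a serving function that records operational metrics alongside user-defined ones:

```python
import json
import time
import uuid


def predict_with_metrics(model, features, metrics_sink):
    """Run a prediction and emit operational plus user-defined metrics.

    The `model` and `metrics_sink` objects and the metric names are illustrative only.
    """
    start = time.time()
    prediction, confidence = model.predict(features)  # hypothetical model interface
    payload = {
        "request_id": uuid.uuid4().hex,                       # operational
        "execution_time_ms": (time.time() - start) * 1000.0,  # operational
        "confidence": confidence,                             # user-defined
    }
    metrics_sink.write(json.dumps(payload))  # e.g. a log file, queue, or HTTP endpoint
    return prediction
```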

Model drift, in layman's terms, is the degradation of a model's predictive power caused by changes in the environment, which in turn alter the relationships between variables.
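As a rough illustration of the idea (a generic sketch, not part of Insights itself), drift can be flagged by comparing the distribution of recent prediction scores against a baseline window with a two-sample statistical test:

```python
import numpy as np
from scipy.stats import ks_2samp


def prediction_drift(baseline_scores, recent_scores, alpha=0.05):
    """Flag drift when recent prediction scores diverge from the baseline distribution.

    Uses a two-sample Kolmogorov-Smirnov test; the significance level is arbitrary.
    """
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    return {"ks_statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}


# Example with synthetic scores: the recent window has shifted noticeably lower.
rng = np.random.default_rng(0)
baseline = rng.normal(0.70, 0.05, 1000).clip(0, 1)
recent = rng.normal(0.55, 0.05, 1000).clip(0, 1)
print(prediction_drift(baseline, recent))
```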

Algorithmia has teamed up with monitoring service Datadog to allow customers to stream both operational and user-defined inference metrics from Algorithmia to Kafka, and from there into Datadog.

Ilan Rabinovitch, Vice President of Product and Community at Datadog, comments:

“ML models are at the heart of today’s business. Understanding how they perform both statistically and operationally is key to success.

By combining the findings of Algorithmia Insights and Datadog’s deep visibility into code and integration, our mutual customers can drive more accurate and performant outcomes from their ML models.”

Through integration with Datadog and its Metrics API, customers can measure and monitor their ML models to immediately detect data drift, model drift, and model bias.
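The announcement does not include code, but conceptually the pipeline resembles the hedged sketch below: consume metric payloads from a Kafka topic and forward numeric fields to Datadog's Metrics API (the topic name, metric names, tags, and credentials are placeholders; the kafka-python and datadog packages are assumed):

```python
import json

from datadog import api, initialize
from kafka import KafkaConsumer

# Placeholders: supply real Datadog keys and Kafka broker addresses.
initialize(api_key="<DATADOG_API_KEY>", app_key="<DATADOG_APP_KEY>")
consumer = KafkaConsumer(
    "algorithmia.insights",                  # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    payload = message.value
    tags = [f"model:{payload.get('model', 'unknown')}"]
    # Forward each numeric field as a Datadog custom metric.
    for key, value in payload.items():
        if isinstance(value, (int, float)):
            api.Metric.send(metric=f"ml.{key}", points=value, tags=tags)
```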

(Photo by Chris Liverani on Unsplash)

Microsoft’s new AI auto-captions images for the visually impaired (19 October 2020)

A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reading software for people with visual impairments can read the descriptions aloud.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO), which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to help teach the AI how to best describe the pictures.
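For readers who want a feel for that two-stage recipe, the toy PyTorch sketch below mimics it at a very small scale (this is not Microsoft's VIVO implementation; the real model uses transformer encoders over image-region features, whereas the dimensions, heads, and data here are dummies):

```python
import torch
import torch.nn as nn

NUM_TAGS, VOCAB_SIZE, FEAT_DIM, HIDDEN = 100, 500, 2048, 256

image_encoder = nn.Sequential(nn.Linear(FEAT_DIM, HIDDEN), nn.ReLU())
tag_head = nn.Linear(HIDDEN, NUM_TAGS)        # stage 1: multi-label tag prediction
caption_head = nn.Linear(HIDDEN, VOCAB_SIZE)  # stage 2: (greatly simplified) word prediction

# Stage 1: learn a visual vocabulary from paired image-tag data.
images = torch.randn(32, FEAT_DIM)                  # dummy image features
tags = torch.randint(0, 2, (32, NUM_TAGS)).float()  # dummy multi-hot tag labels
opt = torch.optim.Adam(list(image_encoder.parameters()) + list(tag_head.parameters()))
nn.BCEWithLogitsLoss()(tag_head(image_encoder(images)), tags).backward()
opt.step()

# Stage 2: fine-tune on captioned images, reusing the pretrained encoder.
captions = torch.randint(0, VOCAB_SIZE, (32,))      # dummy word targets
opt = torch.optim.Adam(list(image_encoder.parameters()) + list(caption_head.parameters()))
nn.CrossEntropyLoss()(caption_head(image_encoder(images)), captions).backward()
opt.step()
```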

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

To benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. At the time of writing, Microsoft’s AI ranks first on the leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers can already start building apps with Microsoft’s auto-captioning AI, as it is available in the Azure Cognitive Services Computer Vision package.
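As a hedged example of what calling that service looks like (the endpoint, key, and image URL below are placeholders; assumes the azure-cognitiveservices-vision-computervision package), requesting a caption for an image goes roughly like this:

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholders: use your own Cognitive Services endpoint and subscription key.
client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-subscription-key>"),
)

# Ask the service to describe a publicly accessible image.
analysis = client.describe_image("https://example.com/photo.jpg", max_candidates=1)
for caption in analysis.captions:
    print(f"{caption.text} (confidence: {caption.confidence:.2f})")
```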

Microsoft’s Seeing AI application, which uses computer vision to describe a user’s surroundings for people with vision loss, will be updated with features powered by the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)
