machine learning – AI News
https://news.deepgeniusai.com

Algorithmia: AI budgets are increasing but deployment challenges remain (Thu, 10 Dec 2020)

A new report from Algorithmia has found that enterprise budgets for AI are rapidly increasing but significant deployment challenges remain.

Algorithmia’s 2021 Enterprise Trends in Machine Learning report features the views of 403 business leaders involved with machine learning initiatives.

Diego Oppenheimer, CEO of Algorithmia, says:

“COVID-19 has caused rapid change which has challenged our assumptions in many areas. In this rapidly changing environment, organisations are rethinking their investments and seeing the importance of AI/ML to drive revenue and efficiency during uncertain times.

Before the pandemic, the top concern for organisations pursuing AI/ML initiatives was a lack of skilled in-house talent. Today, organisations are worrying more about how to get ML models into production faster and how to ensure their performance over time.

While we don’t want to marginalise these issues, I am encouraged by the fact that the type of challenges have more to do with how to maximise the value of AI/ML investments as opposed to whether or not a company can pursue them at all.”

The main takeaway is that AI budgets are significantly increasing. 83 percent of respondents said they’ve increased their budgets compared to last year.

Despite a difficult year for many companies, business leaders are not being put off AI investments—in fact, they’re doubling down.

In Algorithmia’s summer survey, 50 percent of respondents said they plan to spend more on AI this year. Around one in five even said they “plan to spend a lot more.”

76 percent of businesses report they are now prioritising AI/ML over other IT initiatives. 64 percent say the priority of AI/ML has increased relative to other IT initiatives over the last 12 months.

With unemployment figures around the world at their highest for several years – even decades in some cases – it’s at least heartening to hear that 76 percent of respondents said they’ve not reduced the size of their AI/ML teams. 27 percent even report an increase.

43 percent say their AI/ML initiatives “matter way more than we thought” and close to one in four believe their AI/ML initiatives should have been their top priority sooner. Process automation and improving customer experiences are the two main areas for AI investments.

While it’s been all good news so far, many companies still face AI deployment issues that are yet to be addressed.

Governance is, by far, the biggest AI challenge companies face: 56 percent of businesses cited governance, security, and auditability issues as a concern.

Regulatory compliance is vital but can be confusing, especially with different regulations between not just countries but even states. 67 percent of the organisations report having to comply with multiple regulations for their AI/ML deployments.

After governance, the next major hurdles are basic deployment and organisational challenges.

Basic integration issues were ranked by 49 percent of businesses as a problem. Furthermore, more job roles are being involved with AI deployment strategies than ever before—it’s no longer seen as just the domain of data scientists.

However, there’s perhaps some light at the end of the tunnel. Organisations are reporting improved outcomes when using dedicated, third-party MLOps solutions.

While keeping in mind that Algorithmia is itself a third-party MLOps vendor, the report claims organisations using such a platform spend an average of around 21 percent less on infrastructure costs. It also helps to free up their data scientists, who spend less time on model deployment.

You can find a full copy of Algorithmia’s report here (requires signup).

AWS announces nine major updates for its ML platform SageMaker (Wed, 09 Dec 2020)

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.
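Data Wrangler’s transformers are point-and-click, but the kind of "normalise" operation it automates is easy to picture in code. A minimal sketch in plain NumPy (the function name and shape are ours, not the SageMaker API):

```python
import numpy as np

def min_max_normalise(column: np.ndarray) -> np.ndarray:
    """Scale a numeric feature column into the [0, 1] range.

    Illustrates what a no-code 'normalise' transformer does under
    the hood; this is not Data Wrangler's implementation.
    """
    lo, hi = column.min(), column.max()
    if hi == lo:  # constant column: map everything to 0.0
        return np.zeros_like(column, dtype=float)
    return (column - lo) / (hi - lo)

ages = np.array([18, 30, 45, 60])
print(min_max_normalise(ages))  # values scaled between 0 and 1
```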

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store makes it much easier to name, organise, find, and share sets of features among teams of developers and data scientists. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit-millisecond inference latency.
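Stripped to its essence, a feature store is a shared, named lookup from record IDs to feature values that both the training pipeline and the inference path read from. A toy in-memory sketch of that concept (illustrative only, not the SageMaker Feature Store API):

```python
from collections import defaultdict

class ToyFeatureStore:
    """Minimal in-memory sketch of the feature-store idea:
    named feature groups keyed by record ID, shared between
    training and inference. Not the SageMaker API."""

    def __init__(self):
        self._groups = defaultdict(dict)

    def put(self, group: str, record_id: str, features: dict) -> None:
        self._groups[group][record_id] = dict(features)

    def get(self, group: str, record_id: str) -> dict:
        # The same lookup serves training jobs and online inference,
        # which is what keeps the two consistent.
        return self._groups[group][record_id]

store = ToyFeatureStore()
store.put("customers", "c-42", {"tenure_months": 18, "avg_spend": 52.3})
print(store.get("customers", "c-42")["tenure_months"])  # 18
```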

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up, we have SageMaker Pipelines—which claims to be the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm set up, debugging steps, and optimisation steps.
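Conceptually, such a pipeline is an ordered list of steps that pass shared state along. A toy sketch of that step-chaining idea (names and structure are illustrative, not the SageMaker Pipelines SDK):

```python
def load(state):
    # Stand-in for a data-load step.
    state["rows"] = [1.0, 2.0, 3.0]
    return state

def transform(state):
    # Stand-in for a Data Wrangler-style transformation step.
    state["rows"] = [x * 2 for x in state["rows"]]
    return state

def train(state):
    # Stand-in "model": the mean of the transformed rows.
    state["model"] = sum(state["rows"]) / len(state["rows"])
    return state

def run_pipeline(steps, state=None):
    """Run each step in order, threading shared state through --
    a toy analogue of the CI/CD-style step graph described above."""
    state = state or {}
    for step in steps:
        state = step(state)
    return state

result = run_pipeline([load, transform, train])
print(result["model"])  # 4.0
```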

SageMaker Clarify may be one of the most important features debuted by AWS this week, given the growing scrutiny of bias in machine learning systems.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turn to often time-consuming open-source tools, developers can use the integrated solution to quickly try and counter any bias in models.
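One of the simplest metrics a bias-detection tool can report is the gap in positive-prediction rates between groups (demographic parity). A minimal, generic implementation of that metric (not Clarify’s actual internals):

```python
def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between groups.

    A generic pre-/post-training bias metric; tools like Clarify
    report metrics of this kind among many others.
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]  # largest gap between any two groups

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5 (0.75 vs 0.25)
```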

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.

Next up, we have Distributed Training on SageMaker which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
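The core idea behind data parallelism can be simulated in a few lines: shard the batch across workers, compute each worker’s gradient locally, then average the gradients (an all-reduce) before the shared weight update. A toy NumPy simulation of that pattern for a one-parameter linear model (nothing here is SageMaker’s engine):

```python
import numpy as np

def data_parallel_step(weight, data, n_workers, lr=0.1):
    """One simulated data-parallel SGD step for the model y = w * x
    with squared loss. Shards the batch, computes per-worker
    gradients, averages them (the 'all-reduce'), then updates."""
    shards = np.array_split(data, n_workers)
    grads = []
    for shard in shards:
        x, y = shard[:, 0], shard[:, 1]
        pred = weight * x
        grads.append(np.mean(2 * (pred - y) * x))  # d/dw of (wx - y)^2
    avg_grad = np.mean(grads)  # the all-reduce step
    return weight - lr * avg_grad

# Data generated by y = 2x, so training should recover w = 2.
data = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0]])
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, data, n_workers=2)
print(round(w, 2))  # 2.0
```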

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to optimising models and managing devices, Edge Manager provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard within the SageMaker console that tracks and visually reports on the operation of the deployed models.
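AWS doesn’t detail the signing mechanism here, but the general pattern of signing a model artefact so a device can verify its integrity before loading looks roughly like this (HMAC-SHA256 and the key name are chosen purely for illustration; real key distribution is out of scope):

```python
import hashlib
import hmac

SECRET_KEY = b"example-device-fleet-key"  # placeholder, not a real key scheme

def sign_model(model_bytes: bytes) -> str:
    """HMAC-SHA256 signature over a serialised model artefact."""
    return hmac.new(SECRET_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, signature: str) -> bool:
    """On-device check that the artefact was not tampered with."""
    return hmac.compare_digest(sign_model(model_bytes), signature)

artefact = b"\x00serialised-model-weights"
sig = sign_model(artefact)
print(verify_model(artefact, sig))                # True
print(verify_model(artefact + b"tampered", sig))  # False
```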

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers which have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

TensorFlow is now available for those shiny new ARM-based Macs (Thu, 19 Nov 2020)

A new version of machine learning library TensorFlow has been released with optimisations for Apple’s new ARM-based Macs.

While still technically in pre-release, the Mac-optimised TensorFlow fork supports native hardware acceleration on Mac devices with M1 or Intel chips through Apple’s ML Compute framework.

The new TensorFlow release boasts an over-10x speed improvement for common training tasks. While impressive, this has to be taken in context: the GPU was not previously used for training tasks on these machines.

A look at the benchmarks still indicates a substantial gap between the Intel and M1-based Macs across various machine learning models.

In a blog post, Pankaj Kanwar, Tensor Processing Units Technical Program Manager at Google, and Fred Alcober, TensorFlow Product Marketing Lead at Google, wrote:

“These improvements, combined with the ability of Apple developers being able to execute TensorFlow on iOS through TensorFlow Lite, continue to showcase TensorFlow’s breadth and depth in supporting high-performance ML execution on Apple hardware.”

We can only hope that running these workloads doesn’t turn MacBooks into expensive frying pans—but the remarkable efficiency they’ve displayed so far gives little cause for concern.

Algorithmia announces Insights for ML model performance monitoring (Thu, 05 Nov 2020)

Seattle-based Algorithmia has announced Insights, a solution for monitoring the performance of machine learning models.

Algorithmia specialises in artificial intelligence operations and management. The company is backed by Google LLC and focuses on making it simpler for enterprises to get AI projects off the ground.

Diego Oppenheimer, CEO of Algorithmia, says:

“Organisations have specific needs when it comes to ML model monitoring and reporting.

For example, they are concerned with compliance as it pertains to external and internal regulations, model performance for improvement of business outcomes, and reducing the risk of model failure.

Algorithmia Insights helps users overcome these issues while making it easier to monitor model performance in the context of other operational metrics and variables.” 

Insights aims to help enterprises monitor the performance of their machine learning models. Many organisations currently lack that ability, or rely on a complex patchwork of tools and manual processes.

Operational metrics like execution time and request identification are combined with user-defined metrics such as confidence and accuracy to identify data skews, negative feedback loops, and model drift.

Model drift, in layman’s terms, is the degradation of a model’s prediction power due to changes in the environment—which subsequently impacts the relationship between variables. A far more detailed explanation can be found here for those interested.
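One common, simple way to quantify drift is the Population Stability Index (PSI), which compares the binned distribution of a feature or score at training time against live traffic; values above roughly 0.2 are often treated as significant. A minimal sketch of the metric (a generic technique, not Algorithmia’s implementation):

```python
import math

def population_stability_index(expected, actual, bins=4):
    """PSI between a training-time distribution ('expected') and
    live traffic ('actual'). Higher values mean more drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty buckets so the log below is always defined.
        return [(c + 1e-6) / len(values) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
live_scores  = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # no drift
shifted      = [0.6, 0.7, 0.8, 0.9, 0.9, 0.95]  # distribution has moved
print(population_stability_index(train_scores, live_scores) <
      population_stability_index(train_scores, shifted))  # True
```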

Algorithmia teamed up with monitoring service Datadog to let customers stream operational and user-defined inference metrics from Algorithmia to Kafka, and from there into Datadog.

Ilan Rabinovitch, Vice President of Product and Community at Datadog, comments:

“ML models are at the heart of today’s business. Understanding how they perform both statistically and operationally is key to success.

By combining the findings of Algorithmia Insights and Datadog’s deep visibility into code and integration, our mutual customers can drive more accurate and performant outcomes from their ML models.”

Through integration with Datadog and its Metrics API, customers can measure and monitor their ML models to immediately detect data drift, model drift, and model bias.

(Photo by Chris Liverani on Unsplash)

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud (Tue, 03 Nov 2020)

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.
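Part of why lower-precision formats speed up training is simple arithmetic: FP16 halves the bytes per value (and TF32 keeps FP32’s exponent range while truncating the mantissa to 10 bits), so the same tensor generates half the memory traffic and tensor cores can process more elements per cycle. A quick NumPy illustration of the storage and precision trade-off:

```python
import numpy as np

# Half precision stores each value in 2 bytes versus 4 for FP32,
# halving the memory footprint of the same tensor.
weights_fp32 = np.ones((1024, 1024), dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes
print(weights_fp16.nbytes)  # 2097152 bytes

# The trade-off is precision: FP16 has only ~3 decimal digits of
# significand, so small increments near 1.0 are simply lost.
print(np.float16(1.0) + np.float16(0.0001) == np.float16(1.0))  # True
```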

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers are able to access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adaptor (EFA).

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, the research arm of Toyota is exploring how P4d can improve their existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started here.

Microsoft’s new AI auto-captions images for the visually impaired (Mon, 19 Oct 2020)

A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reader software used by people with visual impairments can read the descriptions out.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO) which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to help teach the AI how to best describe the pictures.

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

In order to benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. As of writing, Microsoft’s AI now ranks first on its leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers wanting to get started with building apps using Microsoft’s auto-captioning AI can already do so as it’s available in Azure Cognitive Services’ Computer Vision package.

Microsoft’s impressive SeeingAI application – which uses computer vision to describe an individual’s surroundings for people suffering from vision loss – will be updated with features using the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)

AI is helping mobile operators to cope with pandemic demand (Wed, 30 Sep 2020)

Artificial intelligence is helping telecoms operators to boost the RAN capacity of their 4G networks by 15 percent.

More people than ever are relying on telecoms networks to work, play, and stay connected during the pandemic. Operators are doing all they can to ensure their existing networks have enough capacity to cope with demand.

Gorkem Yigit, Principal Analyst at Analysys Mason, said:

“Video streaming continues to experience high year-on-year growth, and that has been exacerbated by the pandemic and resulting lockdowns.

Yes, 5G grabs the spotlight, but 4G is carrying the brunt of this traffic. So, while investment in 5G infrastructure continues, operators need intelligent ways to maximize and extend existing 4G network capabilities in the short to medium term – keeping their CAPEX to a minimum.”

Eight of the world’s ten largest operator groups have deployed traffic management technology from Openwave, a subsidiary of Swedish firm Enea. Many have since upgraded to include machine learning capabilities.

Openwave claims that, based on its figures, some operators faced a 90 percent surge in peak throughput during lockdowns.

Machine learning is helping to predict and identify congestion in the RAN (Radio Access Network) which resides between user equipment such as wireless devices and an operator’s core network.
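As an intentionally tiny illustration of the idea (production systems learn far richer models from live network telemetry), a congestion predictor could flag a cell when the moving average of its recent throughput approaches capacity. All names and thresholds below are invented:

```python
def predict_congestion(throughput_history, window=3, threshold=0.8,
                       capacity=100.0):
    """Flag likely congestion when the recent moving average of a
    cell's throughput reaches a fraction of its capacity.
    A deliberately simple toy, not an operator's actual model."""
    if len(throughput_history) < window:
        return False  # not enough history to judge
    recent = throughput_history[-window:]
    return sum(recent) / window >= threshold * capacity

quiet_cell = [40, 45, 50, 48, 52]
busy_cell  = [70, 85, 90, 95, 92]
print(predict_congestion(quiet_cell))  # False
print(predict_congestion(busy_cell))   # True
```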

John Giere, President of Enea Openwave, commented:

“Conventional mobile data management requires manual configuration and network investment – it is no longer fit for purpose.

Machine Learning has given existing 4G networks the shot in the arm they needed. It can work dynamically without external probes or changes to the RAN, delivering additional capacity at a time that operators most need it.” 

The use of machine learning has increased operators’ 4G RAN capacity by 15 percent in congested locations—providing further evidence of how AI technology can be used to quickly tackle real-world problems.

(Photo by Adrian Schwarz on Unsplash)

AI dominates Gartner’s latest Hype Cycle for emerging technologies (Tue, 18 Aug 2020)

Gartner’s latest Hype Cycle has a distinct AI flavour, highlighting the technology’s importance over the next decade.

Of the 30 emerging technologies featured in Gartner’s latest Hype Cycle, nine are directly related to artificial intelligence:

  • Generative adversarial networks
  • Adaptive machine learning
  • Composite AI
  • Generative AI
  • Responsible AI
  • AI-augmented development
  • Embedded AI
  • Trusted AI
  • AI-augmented design

Most of the AI technologies are currently in the initial “Innovation Trigger” part of the Hype Cycle, where excitement builds the fastest.

Responsible AI, AI-augmented development, embedded AI, and trusted AI have all now reached the “Peak of Inflated Expectations” and will next move into the dreaded “Trough of Disillusionment” as disappointment sets in over what can realistically be achieved.

Only after the trough, which none of the AI technologies have yet reached, do we head into the areas of the Hype Cycle where adoption occurs with realistic expectations and the productivity rewards are reaped.

Gartner’s Hype Cycle covers the next decade. The current placing of most of the AI technologies on the Hype Cycle indicates that Gartner believes we won’t see the greatest benefits until towards the end of the decade.

Brian Burke, VP of research at Gartner, comments:

“Emerging technologies are disruptive by nature, but the competitive advantage they provide is not yet well known or proven in the market. Most will take more than five years, and some more than 10 years, to reach the Plateau of Productivity.

But some technologies on the Hype Cycle will mature in the near term and technology innovation leaders must understand the opportunities for these technologies, particularly those with transformational or high impact.”

Two technologies which Gartner expects to fast-track through the Hype Cycle are health passports and social distancing technologies, due to their necessity amid the COVID-19 pandemic.

You can find the full Gartner report here (paywall).

(Photo by Verena Yunita Yapi on Unsplash)

The post AI dominates Gartner’s latest Hype Cycle for emerging technologies appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/08/18/ai-gartner-hype-cycle-emerging-technologies/feed/ 1
DeepCode provides AI code reviews for over four million developers https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/ https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/#respond Tue, 21 Jul 2020 15:40:27 +0000 https://news.deepgeniusai.com/?p=9759 AI-powered code reviewer DeepCode has announced it’s checked the code of over four million developers. DeepCode’s machine learning-based bot is fluent in JavaScript, TypeScript, Java, C/C++, and Python. “Our data shows that over 50% of repositories have critical issues and every second pull-request has warnings about issues that need to be fixed,” said Boris Paskalev,... Read more »

The post DeepCode provides AI code reviews for over four million developers appeared first on AI News.

]]>
AI-powered code reviewer DeepCode has announced it’s checked the code of over four million developers.

DeepCode’s machine learning-based bot is fluent in JavaScript, TypeScript, Java, C/C++, and Python.

“Our data shows that over 50% of repositories have critical issues and every second pull-request has warnings about issues that need to be fixed,” said Boris Paskalev, CEO and co-founder of DeepCode.

“By using DeepCode, these issues are automatically identified and logically explained as suggestions are made about how to fix them before code is deployed.”

Over the past few months, DeepCode has focused on improving the JavaScript skills of the bot. JavaScript frameworks and libraries such as Vue.js and React are supported. A demo of DeepCode’s analysis of React can be found here.

DeepCode claims its bot is now “up to 50x faster and finding more than double the number of serious bugs over all other tools combined while maintaining over 80% accuracy.”

The bot has been trained using machine learning to analyse hundreds of millions of commits across the vast number of freely available open source projects. DeepCode says this enables it to identify bugs before they reach production.
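To make the idea of automated code review concrete, here is a toy, rule-based check of the kind such tools apply (at vastly larger scale, and learned from data rather than hand-written). The rule, function name, and snippet below are hypothetical illustrations, not DeepCode's actual implementation.

```python
# A toy static-analysis rule: flag '== None' / '!= None' comparisons,
# which should use 'is None' / 'is not None' in idiomatic Python.
# Illustrative only -- not how DeepCode itself works internally.

import ast

def find_none_comparisons(source):
    """Return line numbers where code compares to None with == or !=."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for op, comparator in zip(node.ops, node.comparators):
                if (isinstance(op, (ast.Eq, ast.NotEq))
                        and isinstance(comparator, ast.Constant)
                        and comparator.value is None):
                    hits.append(node.lineno)
    return hits

snippet = "def f(x):\n    if x == None:\n        return 0\n    return x\n"
print(find_none_comparisons(snippet))  # [2]
```

An ML-based reviewer generalises far beyond fixed rules like this one by learning bug patterns from the fix commits it has analysed, but the workflow — parse the code, match a suspicious pattern, report the location with a suggestion — is the same.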

A recent survey by DeepCode found that 85 percent of people want software companies to focus less on new features and more on fixing bugs and security issues.

“Too many software companies still believe that new features are what users want the most,” commented Paskalev. “As this survey shows, what people really want is quality software that is safe to use.”

DeepCode is free for open source software and commercial teams of up to 30 developers. You can start analysing your code by connecting your GitHub, BitBucket, or GitLab account here.

The post DeepCode provides AI code reviews for over four million developers appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/feed/ 0
MIT has removed a dataset which leads to misogynistic, racist AI models https://news.deepgeniusai.com/2020/07/02/mit-removed-dataset-misogynistic-racist-ai-models/ https://news.deepgeniusai.com/2020/07/02/mit-removed-dataset-misogynistic-racist-ai-models/#comments Thu, 02 Jul 2020 15:43:05 +0000 https://news.deepgeniusai.com/?p=9728 MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies. The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what... Read more »

The post MIT has removed a dataset which leads to misogynistic, racist AI models appeared first on AI News.

]]>
MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained using these images and their labels. A model trained on such a dataset, when fed an image of a street, could identify the things it contains, such as cars, streetlights, pedestrians, and bikes.
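At its simplest, training on labelled images can be pictured as storing (pixels, label) pairs and assigning a new image the label of its closest match. The sketch below uses short placeholder vectors standing in for 32×32 pixel arrays; real object detectors are far more sophisticated, and this 1-nearest-neighbour approach is only an illustration of the principle.

```python
# A toy sketch of learning from labelled images: 1-nearest-neighbour
# lookup over (pixel_vector, label) pairs. The three-element vectors
# here stand in for flattened 32x32 images; values are made up.

def nearest_label(query, dataset):
    """Return the label of the dataset entry closest to the query."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(dataset, key=lambda item: sq_dist(item[0], query))[1]

dataset = [
    ([0.9, 0.1, 0.1], "car"),
    ([0.1, 0.9, 0.1], "bike"),
]
print(nearest_label([0.8, 0.2, 0.2], dataset))  # car
```

The point relevant to the MIT story is that the model inherits whatever the labels say: if a label is wrong or offensive, the trained system reproduces it.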

Two researchers – Vinay Prabhu, chief scientist at UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland – analysed the images and found thousands of concerning labels.

MIT’s training set was found to label women as “bitches” or “whores,” and people from BAME communities with the kind of derogatory terms I’m sure you don’t need me to write. The Register notes the dataset also contained close-up images of female genitalia labeled with the C-word.

The Register alerted MIT to the concerning issues found by Prabhu and Birhane with the dataset and the college promptly took it offline. MIT went a step further and urged anyone using the dataset to stop using it and delete any copies.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that because the dataset contains 80 million images at sizes of just 32×32 pixels, manual inspection would be almost impossible and could not guarantee that all offensive images would be removed.
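One mitigation the automated pipeline lacked is screening the WordNet-derived nouns before using them to harvest images. The sketch below shows that idea in its simplest form; the blocklist entries and function are placeholders, not MIT's actual remediation process, and real label curation requires careful human review rather than a small hard-coded set.

```python
# A minimal sketch of blocklist screening for auto-collected labels.
# Placeholder blocklist -- real lists are far larger and human-curated.

BLOCKLIST = {"slur_a", "slur_b"}

def filter_labels(labels, blocklist=BLOCKLIST):
    """Drop labels that match the blocklist (case-insensitive)."""
    return [label for label in labels if label.lower() not in blocklist]

labels = ["bicycle", "Slur_A", "streetlight"]
print(filter_labels(labels))  # ['bicycle', 'streetlight']
```

Screening labels this way would not catch every problem — offensive images can sit under innocuous labels — which is part of why MIT withdrew the dataset outright rather than attempting a clean-up.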

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community – precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data,” wrote Antonio Torralba, Rob Fergus, and Bill Freeman from MIT.

“Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”

You can find a full pre-print copy of Prabhu and Birhane’s paper here (PDF).

(Photo by Clay Banks on Unsplash)

The post MIT has removed a dataset which leads to misogynistic, racist AI models appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/07/02/mit-removed-dataset-misogynistic-racist-ai-models/feed/ 4