Applications – AI News – Artificial Intelligence News
https://news.deepgeniusai.com
Wed, 09 Dec 2020 14:47:50 +0000

AWS announces nine major updates for its ML platform SageMaker
https://news.deepgeniusai.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/
Wed, 09 Dec 2020 14:47:48 +0000

The post AWS announces nine major updates for its ML platform SageMaker appeared first on AI News.

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.
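The idea behind such transformers can be sketched in a few lines. The example below is a hypothetical min-max normalisation step in plain Python, illustrating the kind of operation Data Wrangler's built-ins perform; it is not the Data Wrangler API.

```python
# Toy illustration of a "normalise" transformer: min-max scaling of a
# numeric column into the [0, 1] range. This is NOT the Data Wrangler
# API, just the underlying idea behind one of its built-ins.

def min_max_normalise(column):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(column), max(column)
    if hi == lo:  # avoid division by zero for constant columns
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

ages = [18, 30, 42, 66]
print(min_max_normalise(ages))  # → [0.0, 0.25, 0.5, 1.0]
```

In a real pipeline, transforms like this are applied per feature column before training so that features on different scales contribute comparably to the model.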

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store makes it much easier for teams of developers and data scientists to name, organise, find, and share sets of features. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.
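The underlying concept is straightforward: a shared repository of feature values keyed by entity, so training and inference read the same data. The toy class below sketches that idea under our own naming assumptions; it is not the SageMaker Feature Store API.

```python
# Minimal in-memory sketch of the feature-store idea: feature values keyed
# by (feature group, entity ID), so every model reads identical features.
# Names and structure are illustrative assumptions, not SageMaker's API.

class FeatureStore:
    def __init__(self):
        self._store = {}  # {feature_group: {entity_id: {name: value}}}

    def put(self, group, entity_id, features):
        self._store.setdefault(group, {})[entity_id] = dict(features)

    def get(self, group, entity_id):
        return self._store.get(group, {}).get(entity_id, {})

store = FeatureStore()
store.put("customers", "c-42", {"avg_order_value": 31.5, "orders_30d": 4})
print(store.get("customers", "c-42")["orders_30d"])  # → 4
```

A production feature store adds versioning, time-travel queries, and low-latency serving on top of this basic keyed lookup.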

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up, we have SageMaker Pipelines—which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm set up, debugging steps, and optimisation steps.
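In essence, a pipeline is an ordered series of named steps, each consuming the previous step's output. The sketch below illustrates the concept in plain Python; the step names are hypothetical and this is not the SageMaker Pipelines API.

```python
# Bare-bones sketch of the CI/CD-for-ML idea: a pipeline as an ordered
# list of named steps, each transforming the previous step's output.
# Not the SageMaker Pipelines API, just the underlying pattern.

def run_pipeline(steps, data):
    for name, fn in steps:
        data = fn(data)
        print(f"step '{name}' done")
    return data

steps = [
    ("load",      lambda d: d + [4]),            # append new records
    ("transform", lambda d: [x * 2 for x in d]), # feature engineering
    ("train",     lambda d: sum(d)),             # stand-in for model fitting
]
print(run_pipeline(steps, [1, 2, 3]))  # → 20
```

Real pipeline services add what this sketch omits: step caching, retries, lineage tracking, and conditional branches such as "deploy only if accuracy improves".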

SageMaker Clarify may be one of the most important features debuted by AWS this week, given the growing scrutiny of bias in AI systems.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turn to often time-consuming open-source tools, developers can use the integrated solution to quickly try to counter any bias in their models.
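One common bias check such tools perform is measuring demographic parity: the gap between positive-prediction rates for two groups. The minimal sketch below computes that metric; it illustrates the general technique, not Clarify's implementation.

```python
# Demographic parity difference: the absolute gap in positive-prediction
# rates between two groups (0 = parity, larger = more disparate impact).
# A generic fairness metric, not Clarify's internals.

def positive_rate(predictions):
    """Fraction of 1s in a list of binary predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_diff(preds_group_a, preds_group_b):
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

group_a = [1, 1, 0, 1]  # 75% of group A receives a positive prediction
group_b = [1, 0, 0, 0]  # 25% of group B does
print(demographic_parity_diff(group_a, group_b))  # → 0.5
```

A gap this large on a real model would prompt investigation of the training data or decision threshold before deployment.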

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.
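Bottleneck detection of this kind boils down to sampling resource utilisation over time and flagging sustained saturation. The toy function below sketches the idea; the threshold and data are invented for illustration and say nothing about Deep Profiling's internals.

```python
# Flag sustained resource saturation in a series of utilisation samples:
# any run of at least `min_run` consecutive samples at or above
# `threshold` is reported as a bottleneck. Illustrative values only.

def find_bottlenecks(samples, threshold=0.9, min_run=3):
    """Return (start, end) index ranges where utilisation stays >= threshold."""
    runs, start = [], None
    for i, u in enumerate(samples):
        if u >= threshold and start is None:
            start = i
        elif u < threshold and start is not None:
            if i - start >= min_run:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(samples) - start >= min_run:
        runs.append((start, len(samples) - 1))
    return runs

gpu_util = [0.4, 0.95, 0.97, 0.99, 0.3, 0.92]
print(find_bottlenecks(gpu_util))  # → [(1, 3)]
```

A real profiler correlates such runs across CPU, GPU, memory, and I/O metrics to suggest a cause, such as data loading starving the GPU.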

Next up, we have Distributed Training on SageMaker which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
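Data parallelism itself is a simple pattern: split the batch across workers, compute a gradient on each shard, then average the results. Below is a plain-Python sketch of that pattern with a stand-in "gradient"; it is not SageMaker's engine.

```python
# The essence of data parallelism: shard a batch across workers, compute
# per-shard gradients, then all-reduce (average) them. The "gradient"
# here is a stand-in (the shard mean) to keep the sketch self-contained.

def shard(batch, n_workers):
    """Split a batch into n_workers equal contiguous shards."""
    k = len(batch) // n_workers
    return [batch[i * k:(i + 1) * k] for i in range(n_workers)]

def local_gradient(samples):
    # Stand-in for backpropagation on one worker's shard.
    return sum(samples) / len(samples)

def all_reduce_mean(grads):
    # Stand-in for the collective that averages gradients across workers.
    return sum(grads) / len(grads)

batch = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
grads = [local_gradient(s) for s in shard(batch, 4)]
print(all_reduce_mean(grads))  # → 4.5
```

With equal-sized shards, the averaged result matches what a single worker would compute on the whole batch, which is why the technique scales without changing the training maths.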

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard in the SageMaker console which tracks and provides a visual report on the operation of deployed models.
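Cryptographic model signing can be illustrated with Python's standard `hmac` module: a deployment service signs the model artefact, and each edge device verifies the signature before loading it. The shared-secret scheme below is an assumption made for illustration; the article does not describe Edge Manager's actual mechanism.

```python
# Signing a model artefact so edge devices can detect tampering in
# transit. HMAC-SHA256 with a hypothetical shared key stands in here;
# this is an illustrative scheme, not Edge Manager's implementation.
import hashlib
import hmac

SIGNING_KEY = b"deployment-key"  # hypothetical shared secret

def sign_model(model_bytes):
    return hmac.new(SIGNING_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes, signature):
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(sign_model(model_bytes), signature)

model = b"\x00\x01model-weights\x02"
sig = sign_model(model)
print(verify_model(model, sig))         # → True
print(verify_model(model + b"x", sig))  # → False
```

Production systems would more likely use asymmetric signatures, so devices hold only a public verification key rather than the signing secret.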

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers who have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

Salesforce-backed AI project SharkEye aims to protect beachgoers
https://news.deepgeniusai.com/2020/11/24/salesforce-ai-project-sharkeye-protect-beachgoers/
Tue, 24 Nov 2020 13:32:04 +0000

Salesforce is backing an AI project called SharkEye which aims to save the lives of beachgoers from one of the sea’s deadliest predators.

Shark attacks are, fortunately, quite rare. However, they do happen and most cases are either fatal or cause life-changing injuries.

Just last week, a fatal shark attack in Australia marked the eighth of the year—the highest annual death toll in almost 100 years. Once-rare sightings at Southern California beaches are becoming increasingly common as sharks favour the warmer waters close to shore.

Academics from the University of California and San Diego State University have teamed up with AI researchers from Salesforce to create software which can spot when sharks are swimming around popular beach destinations.

Sharks are currently tracked – when they are tracked at all – either by keeping tabs on tagged animals online or by someone on a paddleboard keeping an eye out. It’s an inefficient system ripe for some AI innovation.

SharkEye uses drones to spot sharks from above. The drones fly preprogrammed paths at a height of around 120 feet to cover large areas of the ocean while preventing marine life from being disturbed.

If a shark is spotted, a message can be sent instantly to people including lifeguards, surf instructors, and beachside homeowners to take necessary action. Future alerts could also be sent directly to beachgoers who’ve signed up for them or pushed via social channels.

The drone footage is helping to feed further research into movement patterns. The researchers hope that by combining with data like ocean temperature, and the movement of other marine life, an AI will be able to predict when and where sharks are most likely to be in areas which may pose a danger to people.

SharkEye is still considered to be in its pilot stage but has been tested for the past two summers at Padaro Beach in Santa Barbara County.

A shark is suspected to have bitten a woman at Padaro Beach over summer when the team wasn’t flying a drone due to the coronavirus shutdown. Fortunately, her injuries were minor. However, a 26-year-old man was killed in a shark attack a few hours north in Santa Cruz just eight days later.

Attacks can lead to sharks also being killed or injured in a bid to save human life. Using AI to help find safer ways for sharks and humans to share the water can only be a good thing.

(Photo by Laura College on Unsplash)

Microsoft’s new AI auto-captions images for the visually impaired
https://news.deepgeniusai.com/2020/10/19/microsoft-new-ai-auto-captions-images-visually-impaired/
Mon, 19 Oct 2020 11:07:34 +0000

A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reading software for people with visual impairments can read them out.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO) which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to help teach the AI how to best describe the pictures.

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

In order to benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. As of writing, Microsoft’s AI now ranks first on its leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers wanting to get started with building apps using Microsoft’s auto-captioning AI can already do so as it’s available in Azure Cognitive Services’ Computer Vision package.

Microsoft’s impressive SeeingAI application – which uses computer vision to describe an individual’s surroundings for people suffering from vision loss – will be updated with features using the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)

Nvidia and ARM will open ‘world-class’ AI centre in Cambridge
https://news.deepgeniusai.com/2020/09/14/nvidia-arm-world-class-ai-centre-cambridge/
Mon, 14 Sep 2020 12:52:49 +0000

Nvidia is already putting its $40 billion ARM acquisition to good use by opening a “world-class” AI centre in Cambridge.

British chip designer ARM’s technology is at the heart of most mobile devices. Meanwhile, Nvidia’s GPUs are increasingly being used for AI computation in servers, desktops, and even things like self-driving vehicles.

However, Nvidia was most interested in ARM’s presence in edge devices—which it estimates to be in the region of 180 billion.

Jensen Huang, CEO of Nvidia, said:

“ARM is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make ARM even more incredible and take it to even higher levels.

We want to propel it — and the UK — to global AI leadership.”

There were concerns Nvidia’s acquisition would lead to job losses, but the company has promised to keep the business in the UK. The company says it’s planning to hire more staff and retain ARM’s iconic brand.

Nvidia is going further in its commitment to the UK by opening a new AI centre in Cambridge, which is home to an increasing number of exciting startups in the field such as FiveAI, Prowler.io, Fetch.ai, and Darktrace.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named.

Here, leading scientists, engineers and researchers from the UK and around the world will come to develop their ideas, collaborate and conduct their ground-breaking work in areas like healthcare, life sciences, self-driving cars, and other fields.”

The new centre will have five key features when it opens:

  • ARM/Nvidia-based supercomputer – set to be one of the most powerful AI supercomputers in the world.
  • Research Fellowships and Partnerships – Nvidia will use the centre to establish new UK-based research partnerships, expanding on successful relationships already established with King’s College and Oxford.
  • AI Training – Nvidia will make its AI curriculum available across the UK to help create job opportunities and prepare “the next generation of UK developers for AI leadership”
  • Startup Accelerator – With so many of the world’s most exciting AI companies launching in the UK, the Nvidia Inception accelerator will help startups succeed by providing access to the aforementioned supercomputer, connections to researchers from NVIDIA and partners, technical training, and marketing promotion.
  • Industry Collaboration – AI is still in its infancy but will impact every industry to some extent. Nvidia says its new research facility will be an open hub for industry collaboration, building on the company’s existing relationships with the likes of GSK, Oxford Nanopore, and other leaders in their fields.

The UK is Europe’s leader in AI and the British government is investing heavily in ensuring it maintains its pole position. Beyond funding, the UK is also aiming to ensure it’s among the best places to run an AI company.

Current EU rules, especially around data, are often seen as limiting the development of European AI companies compared to elsewhere in the world. While the UK will have to avoid accusations of a so-called post-Brexit “bonfire of regulations”, data collection regulation is likely an area which will be relaxed.

In the UK’s historic trade deal signed with Japan last week, several enhancements were made over the blanket EU-Japan deal signed earlier this year. Among the perceived improvements is the “free flow of data” by not enforcing localisation requirements, and that algorithms can remain private.

UK trade secretary Liz Truss said: “The agreement we have negotiated – in record time and in challenging circumstances – goes far beyond the existing EU deal, as it secures new wins for British businesses in our great manufacturing, food and drink, and tech industries.”

Japan and the UK, as two global tech giants, are expected to deepen their collaboration in the coming years—building on the trade deal signed last week.

Shigeki Ishizuka, Chairman of the Japan Electronics and Information Technology Industries Association, said: “We are confident that this mutual relationship will be further strengthened as an ambitious agreement that will contribute to the promotion of cooperation in research and development, the promotion of innovation, and the further expansion of inter-company collaboration.”

Nvidia’s investment shows that it has confidence in the UK’s strong AI foundations continuing to gain momentum in the coming years.

(Photo by A Perry on Unsplash)

The White House is set to boost AI funding by 30 percent
https://news.deepgeniusai.com/2020/08/19/white-house-boost-ai-funding-30-percent/
Wed, 19 Aug 2020 16:11:48 +0000

A budget proposal from the White House would boost funding for AI by around 30 percent as the US aims to retain its technological supremacy.

Countries around the world are vastly increasing their budgets for AI, and with good reason. Just look at Gartner’s Hype Cycle released yesterday to see how important the technology is expected to be over the next decade.

Russian president Vladimir Putin famously said back in 2017 that the nation which leads in AI “will become the ruler of the world”. Putin said that AI offers unprecedented power, including military power, to any government that leads in the field.

China, the third global superpower, has also embarked on a major national AI strategy. In July 2017, The State Council of China released the “New Generation Artificial Intelligence Development Plan” to build a domestic AI industry worth around $150 billion over the next few years and to become the leading AI power by 2030.

Naturally, the US isn’t going to give that top podium spot to China without a fight.

The White House has proposed (PDF) a 30 percent hike in spending on AI and quantum computing. Around $1.5 billion would be allocated to AI funding and $699 million to quantum technology.

According to a report published by US national security think tank Center for a New American Security (CNAS), Chinese officials see an AI ‘arms race’ as a threat to global peace.

The CNAS fears that integrating AI into military resources and communications may breach current international norms and lead to conflict by accident.

China and the US have been vying to become the top destination for AI investments. Figures published by ABI Research at the end of last year suggested that the US reclaimed the top spot for AI investments back from China, which overtook the Americans the year prior. ABI expects the US to reach a 70 percent share of global AI investments.

Lian Jye Su, Principal Analyst at ABI Research, said: 

“The United States is reaping the rewards from its diversified AI investment strategy. 

Top AI startups in the United States come from various sectors, including self-driving cars, industrial manufacturing, robotics process automation, data analytics, and cybersecurity.”

The UK, unable to match the levels of funding allocated to AI research as the likes of the US and China, is taking a different approach.

An index compiled by Oxford Insights last year ranked the UK number one for AI readiness in Europe and only second on the world stage behind Singapore. The US is in fourth place, while China only just makes the top 20.

The UK has focused on AI policy and harnessing the talent from its world-leading universities to ensure the country is ready to embrace the technology’s opportunities.

A dedicated AI council in the UK features:

  • Ocado’s Chief Technology Officer, Paul Clarke
  • Dame Patricia Hodgson, Board Member of the Centre for Data Ethics and Innovation 
  • The Alan Turing Institute Chief Executive, Professor Adrian Smith
  • AI for good founder Kriti Sharma
  • UKRI chief executive Mark Walport
  • Founding Director of the Edinburgh Centre for Robotics, Professor David Lane

British Digital Secretary Jeremy Wright stated: “Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting the best global tech talent. But we must not be complacent.”

Growing cooperation between the UK and US in a number of technological endeavours could help to harness the strengths of both nations if similarly applied to AI, helping to maintain the countries’ leaderships in the field.

(Photo by Louis Velazquez on Unsplash)

AI tool detects child abuse images with 99% accuracy
https://news.deepgeniusai.com/2020/07/31/ai-tool-detect-child-abuse-images-accuracy/
Fri, 31 Jul 2020 16:08:31 +0000

A new AI-powered tool claims to detect child abuse images with around 99 percent accuracy.

The tool, called Safer, is developed by non-profit Thorn to assist businesses which do not have in-house filtering systems to detect and remove such images.

According to the Internet Watch Foundation in the UK, reports of child abuse images surged 50 percent during the COVID-19 lockdown. In the 11 weeks starting on 23rd March, its hotline logged 44,809 reports of images compared with 29,698 last year. Many of these images are from children who’ve spent more time online and been coerced into releasing images of themselves.

Andy Burrows, head of child safety online at the NSPCC, recently told the BBC: “Harm could have been lessened if social networks had done a better job of investing in technology, investing in safer design features heading into this crisis.”

Safer is one tool which could help with quickly flagging child abuse content to limit the harm caused.

The detection services of Safer include:

  • Image Hash Matching: The flagship service that generates cryptographic and perceptual hashes for images and compares those hashes to known CSAM hashes. At the time of publishing, the database includes 5.9M hashes. Hashing happens in the client’s infrastructure to maintain user privacy.
  • CSAM Image Classifier: Machine learning classification model developed by Thorn and leveraged within Safer that returns a prediction for whether a file is CSAM. The classifier has been trained on datasets totalling hundreds of thousands of images, including adult pornography, CSAM, and various benign imagery, and can aid in the identification of potentially new and unknown CSAM.
  • Video Hash Matching: Service that generates cryptographic and perceptual hashes for video scenes and compares them to hashes representing scenes of suspected CSAM. At the time of publishing, the database includes over 650k hashes of suspected CSAM scenes.
  • SaferList for Detection: Service for Safer customers to leverage the knowledge of the broader Safer community by matching against hash sets contributed by other Safer customers to broaden detection efforts. Customers can customise what hash sets they would like to include.
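The hash-matching services above can be illustrated in a few lines of Python: hash the file locally, then test membership in a database of known hashes. SHA-256 stands in for the cryptographic hash here, and the database entry is invented; perceptual hashes, which additionally catch near-duplicates, work differently.

```python
# Sketch of cryptographic hash matching: hash a file locally (the file
# itself never leaves the client's infrastructure) and check membership
# in a known-hash database. SHA-256 stands in; the entry is invented.
import hashlib

KNOWN_HASHES = {
    # hypothetical database of hashes of known files
    hashlib.sha256(b"example-known-file").hexdigest(),
}

def is_known(file_bytes):
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

print(is_known(b"example-known-file"))  # → True
print(is_known(b"some-other-file"))     # → False
```

Because only the hash is compared, the database operator never sees the scanned content, which is how the privacy property described above is preserved; the trade-off is that a cryptographic hash misses even slightly altered copies, which is why perceptual hashing is used alongside it.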

However, the problem doesn’t stop with flagging content. It’s been documented that moderators for social media platforms often require therapy or even commit suicide after being exposed day-in, day-out to some of the most disturbing content posted online.

Thorn claims Safer is built with the wellness of moderators in mind. To this end, content is automatically blurred (the company says this currently only works for images).

Safer has APIs available for developers that “are built to broaden the shared knowledge of child abuse content by contributing hashes, scanning against other industry hashes, and sending feedback on false positives.”

One of Thorn’s most high-profile clients so far is Flickr. Using Safer, Flickr found an image of child abuse hosted on its platform which – following a law enforcement investigation – led to the recovery of 21 children ranging from 18 months to 14 years old, and the arrest of the perpetrator.

Safer is currently available for any company operating in the US. Thorn plans to expand to other countries next year after customising for each country’s national reporting requirements.

You can find out more about the tool and how to get started here.

DeepCode provides AI code reviews for over four million developers
https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/
Tue, 21 Jul 2020 15:40:27 +0000

AI-powered code reviewer DeepCode has announced it’s checked the code of over four million developers.

DeepCode’s machine learning-based bot is fluent in JavaScript, TypeScript, Java, C/C++, and Python.

“Our data shows that over 50% of repositories have critical issues and every second pull-request has warnings about issues that need to be fixed,” said Boris Paskalev, CEO and co-founder of DeepCode.

“By using DeepCode, these issues are automatically identified and logically explained as suggestions are made about how to fix them before code is deployed.”
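To make that concrete, here is a representative example of the kind of defect static analysers flag, together with the suggested fix – an illustration of the defect class, not actual DeepCode output:

```python
# A classic defect that analysers routinely flag: a mutable default
# argument is created once at definition time and shared across calls.

def append_item_buggy(item, items=[]):   # flagged: shared default list
    items.append(item)
    return items

# Suggested fix: default to None and create a fresh list on each call.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_buggy(1))  # [1]
print(append_item_buggy(2))  # [1, 2] -- surprising shared state
print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```

Bugs like this pass casual review and even many tests, which is why flagging them automatically at pull-request time is valuable.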

Over the past few months, DeepCode has focused on improving the JavaScript skills of the bot. JavaScript frameworks and libraries such as Vue.js and React are supported. A demo of DeepCode’s analysis of React can be found here.

DeepCode claims its bot is now “up to 50x faster and finding more than double the number of serious bugs over all other tools combined while maintaining over 80% accuracy.”

The bot has been trained using machine learning on hundreds of millions of commits from the vast number of freely available open source projects. DeepCode says it's able to identify bugs before they happen.

A recent survey by DeepCode found that 85 percent of people want software companies to focus less on new features and more on fixing bugs and security issues.

“Too many software companies still believe that new features are what users want the most,” commented Paskalev. “As this survey shows, what people really want is quality software that is safe to use.”

DeepCode is free for open source software and commercial teams of up to 30 developers. You can start analysing your code by connecting your GitHub, BitBucket, or GitLab account here.

The post DeepCode provides AI code reviews for over four million developers appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/07/21/deepcode-ai-code-reviews-four-million-developers/feed/ 0
Amazon uses AI-powered displays to enforce social distancing in warehouses https://news.deepgeniusai.com/2020/06/17/amazon-ai-displays-enforce-social-distancing-warehouses/ https://news.deepgeniusai.com/2020/06/17/amazon-ai-displays-enforce-social-distancing-warehouses/#respond Wed, 17 Jun 2020 15:43:00 +0000 https://news.deepgeniusai.com/?p=9696 Amazon has turned to an AI-powered solution to help maintain social distancing in its vast warehouses. Companies around the world are having to look at new ways of safely continuing business as we adapt to the “new normal” of life with the coronavirus. Amazon has used its AI expertise to create what it calls the... Read more »

The post Amazon uses AI-powered displays to enforce social distancing in warehouses appeared first on AI News.

]]>
Amazon has turned to an AI-powered solution to help maintain social distancing in its vast warehouses.

Companies around the world are having to look at new ways of safely continuing business as we adapt to the “new normal” of life with the coronavirus.

Amazon has used its AI expertise to create what it calls the Distance Assistant. Using a time-of-flight sensor, often found in modern smartphones, the AI measures the distance between employees.

The AI differentiates people from their background, and what it sees is displayed on a 50-inch screen so workers can quickly see whether they're keeping a safe distance.

Augmented reality is used to overlay either a green or red circle underneath each employee. As you can probably guess – a green circle means that the employee is a safe distance from others, while a red circle indicates that person needs to give others some personal space.
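The core logic behind those circles reduces to a pairwise distance check over the positions the sensor reports. A minimal sketch – the 2-metre threshold is an assumption, as Amazon hasn't published the exact value or its algorithm:

```python
import math

SAFE_DISTANCE_M = 2.0  # assumed threshold; not Amazon's published figure

def pairwise_status(positions):
    """Given (x, y) floor positions in metres, mark each person green or red.

    A person is red if any other person is closer than the safe distance,
    mirroring the circles Distance Assistant overlays on its display.
    """
    status = []
    for i, (xi, yi) in enumerate(positions):
        too_close = any(
            math.hypot(xi - xj, yi - yj) < SAFE_DISTANCE_M
            for j, (xj, yj) in enumerate(positions)
            if j != i
        )
        status.append("red" if too_close else "green")
    return status

print(pairwise_status([(0, 0), (1, 0), (5, 5)]))  # ['red', 'red', 'green']
```

Running this on every camera frame and drawing the corresponding coloured circle under each detected person gives the behaviour described above, entirely on a local machine.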

The whole solution is run locally and does not require access to the cloud to function. Amazon says it’s only deployed Distance Assistant in a handful of facilities so far but plans to roll out “hundreds” more “over the next few weeks.”

While the solution appears rather draconian, it’s a clever – and arguably necessary – way of helping to keep people safe until a vaccine for the virus is hopefully found. However, it will strengthen concerns that the coronavirus will be used to normalise increased surveillance and erode privacy.

Amazon claims it will be making Distance Assistant open-source to help other companies adapt to the coronavirus pandemic and keep their employees safe.

The post Amazon uses AI-powered displays to enforce social distancing in warehouses appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/06/17/amazon-ai-displays-enforce-social-distancing-warehouses/feed/ 0
Deepfake app puts your face on GIFs while limiting data collection https://news.deepgeniusai.com/2020/01/14/deepfake-app-face-gifs-data-collection/ https://news.deepgeniusai.com/2020/01/14/deepfake-app-face-gifs-data-collection/#comments Tue, 14 Jan 2020 15:11:41 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=6356 A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology. In the name of research, here’s one I made earlier: Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name. RefaceAI... Read more »

The post Deepfake app puts your face on GIFs while limiting data collection appeared first on AI News.

]]>
A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology.

In the name of research, here’s one I made earlier:

Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name.

RefaceAI was previously used in a face swapping app called Reflect. Elon Musk once used Reflect to put his face on Dwayne Johnson’s body. 

The app is a lot of fun, but – after concerns about viral Russian app FaceApp – many will be wondering just how much data is being collected in return.

Doublicat’s developers are upfront in asking for consent to store your photos when the app is first opened, and this is confirmed in their privacy policy: “We may collect the photos, that you take with your camera while using our application.”

However, Doublicat says that photos are only stored on their server for 24 hours before they’re deleted. “The rest of the time your photos used in Doublicat application are stored locally on your mobile device and may be removed any time by either deleting these photos from your mobile device’s file system.”

The app also collects data about facial features, but only the vector representation of each person’s face is stored. Doublicat assures users that the facial recognition data collected “is not biometric data” and is deleted from its servers within 30 calendar days.
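A vector representation, or embedding, is just a list of numbers summarising a face rather than the image itself. A minimal sketch of how two such vectors are typically compared – illustrative only, as RefaceAI's actual pipeline is not public:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: two photos of one person, one of someone else.
same_person = cosine_similarity([0.2, 0.9, 0.4], [0.21, 0.88, 0.41])
different = cosine_similarity([0.2, 0.9, 0.4], [0.9, 0.1, 0.3])
print(same_person > different)  # True
```

Because the stored numbers cannot be turned back into the original photo, keeping embeddings rather than images is a meaningful privacy limitation, which is presumably Doublicat's point.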

“In no way will Doublicat use your uploaded content for face recognition as Doublicat does not introduce the face recognition technologies or other technical means for processing biometric data for the unique identification or authentication of a user.”

The amount of data Doublicat can collect is limited compared to some alternatives. Apps such as Zao require users to create a 3D model of their face, whereas Doublicat only takes a front-facing picture.

RefaceAI is now looking to launch an app which can make deepfake videos rather than just GIFs. The move is likely to be controversial given the concerns around deepfakes and how such videos could be used for things such as political manipulation.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake in this case) could be used to cause reputational damage and even swing votes.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections. One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”.

Earlier this month, Facebook announced new policies around deepfakes. Any deepfake video designed to be misleading will be banned. The problem with the rules is that they don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to anyone wanting a firm stance against manipulation.

Doublicat is available for Android and iOS.


The post Deepfake app puts your face on GIFs while limiting data collection appeared first on AI News.

]]>
https://news.deepgeniusai.com/2020/01/14/deepfake-app-face-gifs-data-collection/feed/ 2
LG ThinQ: Our experience with AI so far and what’s next for the industry https://news.deepgeniusai.com/2019/11/18/lg-thinq-experience-ai-whats-next-industry/ https://news.deepgeniusai.com/2019/11/18/lg-thinq-experience-ai-whats-next-industry/#respond Mon, 18 Nov 2019 13:56:11 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=6208 AI News spoke with LG corporate vice president Samuel Chang about ThinQ, the company’s brand for products and services incorporating AI. Chang played a major role in this year’s AI & Big Data Expo in Santa Clara last week, taking part in both a solo presentation on “Process Automation from IoT Data” and a panel... Read more »

The post LG ThinQ: Our experience with AI so far and what’s next for the industry appeared first on AI News.

]]>
AI News spoke with LG corporate vice president Samuel Chang about ThinQ, the company’s brand for products and services incorporating AI.

Chang played a major role in this year’s AI & Big Data Expo in Santa Clara last week, taking part in both a solo presentation on “Process Automation from IoT Data” and a panel discussion on “Data and the Customer”.

Following the event, AI News decided to catch up with Chang to discuss LG’s current experience in artificial intelligence and where the industry is heading next.

How important is AI becoming to differentiate from competing products?

With artificial intelligence gaining traction across a range of industries, it’s becoming an important way to differentiate competing products. LG’s pursuit of “Evolve, Connect, Open” when it comes to AI and ThinQ offers enhanced convenience for users with diverse needs and tech preferences in their homes through personalised, proactive, easy, and efficient solutions. This approach allows the company to align with and offer the most expansive list of smart integrations on the market – whether with Google Assistant, Amazon Alexa, or future partnerships.

Specifically, with our open approach, LG is progressing toward incorporating IoT technology throughout the entire journey – including the product design/concept, the launch of new helpful tools, and proactive OTA (Over-The-Air) updates. LG is working with world-class cloud partners Amazon, Google, and Microsoft, combining their services with in-house development to build, customise, and operate our IoT platforms.

AI is key to differentiating your products on the market by offering more robust opportunities and offerings for the larger consumer base.

What challenges has LG faced moving into AI and how did it overcome them?

The biggest challenge we faced moving into AI was creating truly meaningful products that help our customers with their daily routines and truly improve their lives. Through listening to what they want, we were able to produce a line of helpful innovations that do just that.

LG ThinQ products use voice control to interact with users, as well as sensor data and diverse features such as product recognition and learning engine technologies to enhance their performance.

Do you think the smart home is meeting expectations?  

The smart home is a growing trend within the AI industry thanks to the influx of helpful tools introduced across the field. While many advancements have been made in such a short timeframe, we are still in the first inning of this technology adoption.

LG is focused on delivering better consumer benefits with advanced technology that will continue to improve over time. We aim to make the home more proactive and personalised for users based on our behaviour data collected across the products servicing their many needs. Knowing more about our user and their life leads to enhanced performance and, ultimately, to a better life.

Will distributed ledger technologies be important for smart devices and are there any plans at LG ThinQ to use them? 

While we are actively looking at distributed ledgers, including blockchain technology, there is no specific timetable for implementation. We believe this would be useful to enable an open ecosystem where various companies and contributors can participate. 

Is there a specific area of the market you and the LG ThinQ team are excited about?

LG is committed to producing helpful innovations for consumers. One example is our new AI-infused customer service solution, Proactive Customer Care. It’s a new paradigm in customer satisfaction, one that ensures greater value and peace-of-mind for owners of our smart home appliances.  

When it rolls out in the United States next year, Proactive Customer Care will leverage ThinQ AI to provide personalised support, alerting users to issues with their LG appliance, and offering helpful tips and solutions to maximise performance and long product life. Using the latest in AI technology, LG Proactive Customer Care is designed to inform LG smart appliance owners of potential problems before they even occur – it can expedite technician visits, if needed, and offer guidance on how to keep LG’s products functioning optimally.

What did you speak about during this year’s AI Expo North America?

I was thrilled to take part in my first AI & Big Data Expo North America in Santa Clara. As part of my activities onsite, I welcomed interested attendees to the panel discussion on “Data and The Customer”, where I joined fellow industry insiders to discuss the relationship between companies and the data they collect from their customers.

I also conducted a solo presentation on the topic of “Process Automation from IoT Data.” I helped to define process automation for IoT data and how it is key to use not only the right platform for your business, but also one that allows true process integration and automation for optimisation. This is essential to our work at LG and helps shape our future product launches.


The post LG ThinQ: Our experience with AI so far and what’s next for the industry appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/11/18/lg-thinq-experience-ai-whats-next-industry/feed/ 0