computer vision – AI News
Artificial Intelligence News – news.deepgeniusai.com

Salesforce-backed AI project SharkEye aims to protect beachgoers
Tue, 24 Nov 2020

Salesforce is backing an AI project called SharkEye which aims to save the lives of beachgoers from one of the sea’s deadliest predators.

Shark attacks are, fortunately, quite rare. However, they do happen, and attacks can prove fatal or cause life-changing injuries.

Just last week, a fatal shark attack in Australia marked the eighth of the year, the country's highest annual death toll in almost a century. Shark sightings, once rare at Southern California beaches, are becoming increasingly common as sharks favour the warmer waters close to shore.

Academics from the University of California and San Diego State University have teamed up with AI researchers from Salesforce to create software which can spot when sharks are swimming around popular beach destinations.

Sharks are currently tracked – when they are tracked at all – either by keeping tabs on tagged animals online or by someone on a paddleboard keeping an eye out. It's an inefficient system ripe for AI innovation.

SharkEye uses drones to spot sharks from above. The drones fly preprogrammed paths at a height of around 120 feet, covering large areas of ocean without disturbing the marine life below.
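As a rough illustration of why that altitude works, the ground area a downward-facing camera covers grows with both height and field of view. The sketch below uses the article's 120-foot altitude but assumed field-of-view angles (the article doesn't specify the camera used):

```python
import math

def ground_footprint(altitude_ft: float, hfov_deg: float, vfov_deg: float):
    """Width and depth (in feet) of the ground patch seen by a
    downward-facing camera at the given altitude."""
    width = 2 * altitude_ft * math.tan(math.radians(hfov_deg / 2))
    depth = 2 * altitude_ft * math.tan(math.radians(vfov_deg / 2))
    return width, depth

# FOV values here are typical for a consumer drone camera, not from the article.
w, d = ground_footprint(120, hfov_deg=70, vfov_deg=50)
print(f"~{w:.0f} ft x {d:.0f} ft per frame")
```

At 120 feet with those assumed angles, each frame covers a swath well over 150 feet wide, which is why a single preprogrammed pass can sweep a sizeable stretch of water.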

If a shark is spotted, a message can be sent instantly to people including lifeguards, surf instructors, and beachside homeowners to take necessary action. Future alerts could also be sent directly to beachgoers who’ve signed up for them or pushed via social channels.

The drone footage is helping to feed further research into movement patterns. The researchers hope that by combining with data like ocean temperature, and the movement of other marine life, an AI will be able to predict when and where sharks are most likely to be in areas which may pose a danger to people.
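One simple way such signals could be combined is a logistic model that maps environmental features to a probability of shark presence. The weights and features below are entirely made up for illustration; they are not the researchers' model:

```python
import math

# Illustrative only: hypothetical features and weights, not SharkEye's model.
WEIGHTS = {"water_temp_c": 0.15, "recent_sightings": 0.8, "prey_activity": 0.5}
BIAS = -5.0

def shark_risk(features: dict) -> float:
    """Toy logistic model mapping environmental features to a 0-1 risk score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Warm water with recent sightings and prey activity scores far higher
# than cold, quiet conditions.
warm_busy = shark_risk({"water_temp_c": 20, "recent_sightings": 3, "prey_activity": 2})
cold_quiet = shark_risk({"water_temp_c": 14, "recent_sightings": 0, "prey_activity": 0})
```

A real system would learn these weights from the accumulated drone footage and sensor data rather than hand-picking them.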

SharkEye is still considered to be in its pilot stage but has been tested for the past two summers at Padaro Beach in Santa Barbara County.

A woman is suspected to have been bitten by a shark at Padaro Beach over the summer, when the team wasn't flying drones due to the coronavirus shutdown. Fortunately, her injuries were minor. However, a 26-year-old man was killed in a shark attack a few hours north in Santa Cruz just eight days later.

Attacks can also lead to sharks being killed or injured in efforts to protect human life. Using AI to help sharks and humans share the water more safely can only be a good thing.

(Photo by Laura College on Unsplash)

Microsoft’s new AI auto-captions images for the visually impaired
Mon, 19 Oct 2020

A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reading software can read them out for people with visual impairments.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO), which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to teach the AI how best to describe pictures.
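The two-stage recipe above can be sketched, very loosely, as follows. This toy code only mirrors the data flow (tags first, captions second); the real VIVO model learns joint image-tag embeddings with a transformer, which is far beyond this illustration:

```python
from collections import Counter

def build_visual_vocabulary(image_tag_pairs):
    """Stage 1 (toy): gather the tag vocabulary the model can ground visually.
    In VIVO this is joint image-tag pre-training; here we only collect tags."""
    vocab = Counter()
    for _image, tags in image_tag_pairs:
        vocab.update(tags)
    return vocab

def caption(detected_tags, vocab):
    """Stage 2 (toy): compose a caption from detected tags, keeping only those
    grounded during pre-training -- loosely mimicking how the learned
    vocabulary lets VIVO name objects absent from the caption dataset."""
    known = [t for t in detected_tags if t in vocab]
    return "An image containing " + ", ".join(known) if known else "An image."

vocab = build_visual_vocabulary([
    ("img1.jpg", ["dog", "frisbee"]),
    ("img2.jpg", ["dog", "beach"]),
])
print(caption(["dog", "beach"], vocab))
```

The point of the separation is that the tag vocabulary can be far larger than the set of objects appearing in captioned training images, which is what helps with novel objects.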

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

To benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. At the time of writing, Microsoft’s AI ranks first on its leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers wanting to get started with building apps using Microsoft’s auto-captioning AI can already do so as it’s available in Azure Cognitive Services’ Computer Vision package.
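Calling the service boils down to an authenticated POST against the Computer Vision REST endpoint. The sketch below only builds the request; the endpoint path, API version, and placeholder key are assumptions to verify against the current Azure documentation before use:

```python
import json
from urllib import request

def build_describe_request(endpoint: str, key: str, image_url: str) -> request.Request:
    """Build (but don't send) a request to the Computer Vision 'describe'
    operation. Path and API version (v3.1) are assumptions -- check the
    current Azure Cognitive Services docs."""
    url = f"{endpoint}/vision/v3.1/describe?maxCandidates=1"
    body = json.dumps({"url": image_url}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_describe_request(
    "https://example.cognitiveservices.azure.com",  # placeholder resource endpoint
    "<your-key>",                                   # placeholder subscription key
    "https://example.com/photo.jpg",
)
# Sending it with request.urlopen(req) would return JSON containing a
# 'description' -> 'captions' list with the generated caption and confidence.
```

With real credentials, the response's top caption is the same text the new model would surface as alt text.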

Microsoft’s impressive SeeingAI application – which uses computer vision to describe an individual’s surroundings for people suffering from vision loss – will be updated with features using the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)

Microsoft and Qualcomm debut their Vision AI Developer Kit
Wed, 04 Sep 2019

First announced at BUILD 2018, Microsoft and Qualcomm have debuted their Vision AI Developer Kit for building computer vision applications.

The kit is built on Qualcomm’s Vision Intelligence 300 Platform and can run AI models locally or in the cloud using Microsoft’s Azure ML and Azure IoT Edge platforms.

eInfochips manufactures the Vision AI Developer Kit which features both a camera and the software needed to develop intelligent computer vision apps.

The hardware runs Yocto Linux, uses a Qualcomm QCS603 chip, has 4GB of LPDDR4X memory, and 64GB of storage. The camera is 8-megapixel, records in 4K, and captures audio through an array of four microphones.

In terms of connectivity, the Vision AI Developer Kit features WiFi (802.11b/g/n, 2.4GHz, 5GHz) and has an HDMI out, USB-C, Micro SD slot, and audio in/out ports.

An SDK is available on GitHub combining Visual Studio Code, a module that can recognise more than 183 unique objects, prebuilt Azure IoT deployment configurations, Python modules, and a Vision AI Developer Kit extension for Visual Studio.

Microsoft claims vision models can be deployed in minutes “regardless of your current machine learning skill level”.

In a blog post, Microsoft principal project manager Anne Yang wrote:

“Artificial intelligence workloads include megabytes of data and potentially billions of calculations. With advancements in hardware, it is now possible to run time-sensitive AI workloads on the edge while also sending outputs to the cloud for downstream applications.

AI scenarios processed on the edge can facilitate important business scenarios, such as verifying if every person on a construction site is wearing a hardhat, or detecting whether items are out-of-stock on a store shelf.”

According to Yang, the Snapdragon Neural Processing Engine — which features in Qualcomm’s Vision Intelligence 300 platform — powers on-device execution of containerised Azure services. This capability makes the Vision AI Developer Kit the first “fully accelerated” platform supported end-to-end by Azure.

The Vision AI Developer Kit is available now for $249 from Arrow Electronics.
