Google is telling its scientists to give AI a ‘positive’ spin

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with its legal, policy, and public relations teams before publishing anything on topics which could be deemed sensitive, such as sentiment analysis or the categorisation of people based on race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one party’s word against the other’s, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom – but that increasingly appears not to be the case.

(Photo by Mitchell Luo on Unsplash)

AI helps patients to get more rest while reducing staff workload

A team from the Feinstein Institutes for Medical Research thinks AI could be key to helping patients get more rest while reducing the burden on healthcare staff.

Everyone knows how important adequate sleep is for recovery. However, patients in pain – or just insomniacs like me – can struggle to get the sleep they need.

“Rest is a critical element to a patient’s care, and it has been well-documented that disrupted sleep is a common complaint that could delay discharge and recovery,” said Theodoros Zanos, Assistant Professor at Feinstein Institutes’ Institute of Bioelectronic Medicine.

When a patient finally gets some shut-eye, the last thing they want is to be woken up to have their vitals checked—but such measurements are, well, vital.

In a paper published in npj Digital Medicine, one of the Nature Partner Journals, the researchers detailed how they developed a deep-learning tool which predicts a patient’s overnight stability, preventing multiple unnecessary checks from being carried out.

Vital sign measurements from 2.13 million patient visits at Northwell Health hospitals in New York between 2012 and 2019 were used to train the AI. Data included heart rate, systolic blood pressure, body temperature, respiratory rate, and age. A total of 24.3 million vital signs were used.
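
As a rough illustration of the approach, here is a minimal sketch of a stability classifier over those five tabular inputs. The synthetic data and the simple scikit-learn network below stand in for the paper’s actual dataset and architecture – both are assumptions, not the Feinstein/Northwell model:

```python
# Illustrative sketch only -- not the Feinstein/Northwell model.
# Assumes a binary "stable overnight" label and the five inputs
# mentioned in the article; synthetic data stands in for real records.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Columns: heart rate, systolic BP, body temperature (C),
# respiratory rate, age -- per the article's feature list.
X = rng.normal(loc=[80, 120, 37.0, 16, 60],
               scale=[15, 20, 0.5, 4, 18],
               size=(5000, 5))
# Hypothetical label: 1 = stable overnight, 0 = needs checks.
y = (rng.random(5000) > 0.2).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0),
)
model.fit(X, y)

# Flag a patient for overnight checks unless the predicted
# probability of stability clears a conservative threshold.
prob_stable = model.predict_proba(X[:1])[0, 1]
print("Skip overnight checks" if prob_stable > 0.95 else "Keep checking vitals")
```

Note the conservative threshold: in a setting like this, a false “stable” prediction is far more costly than an unnecessary check, so any real deployment would tune the cut-off towards caution.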

When tested, the AI misclassified just two of 10,000 patients in overnight stays, and the researchers noted that nurses on their usual rounds would be able to catch those two cases.

According to the paper, around 20-35 percent of a nurse’s time is spent keeping records of patients’ vitals. Around 10 percent of their time is spent collecting vitals. On average, a nurse currently has to collect a patient’s vitals every four to five hours.

With that in mind, it’s little wonder medical staff feel so overburdened and stressed. These people want to provide the best care they can but only have two hands. Using AI to free up more time for their heroic duties while simultaneously improving patient care can only be a good thing.

The AI tool is being rolled out across several of Northwell Health’s hospitals.

Microsoft’s new AI auto-captions images for the visually impaired

A new AI from Microsoft aims to automatically caption images in documents and emails so that assistive software for people with visual impairments can read them out.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO) which leverages large amounts of paired image-tag data to learn a visual vocabulary.

A second dataset of properly captioned images is then used to help teach the AI how to best describe the pictures.

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” said Saqib Shaikh, a software engineering manager with Microsoft’s AI platform group.

Overall, the researchers expect the AI to deliver twice the performance of Microsoft’s existing captioning system.

In order to benchmark the performance of their new AI, the researchers entered it into the ‘nocaps’ challenge. As of writing, Microsoft’s AI now ranks first on its leaderboard.

“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” commented Lijuan Wang, a principal research manager in Microsoft’s research lab.

Developers wanting to get started with building apps using Microsoft’s auto-captioning AI can already do so as it’s available in Azure Cognitive Services’ Computer Vision package.
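
A minimal sketch of calling the image-description feature through the Computer Vision SDK for Python might look like the following – the endpoint, key, and image URL are placeholders you must supply yourself:

```python
# A sketch of requesting auto-generated captions from the Azure
# Cognitive Services Computer Vision API. Endpoint, key, and image
# URL below are placeholders, not real values.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-subscription-key>"                                    # placeholder

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Ask the service to describe a remote image; it returns ranked captions.
result = client.describe_image("https://example.com/photo.jpg", max_candidates=3)

for caption in result.captions:
    print(f"{caption.text} (confidence: {caption.confidence:.2f})")
```

The confidence score matters for accessibility use cases: alt-text generators typically only surface a caption when the model is reasonably sure it’s right.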

Microsoft’s impressive Seeing AI application – which uses computer vision to describe an individual’s surroundings for people with vision loss – will be updated with features using the new AI.

“Image captioning is one of the core computer vision capabilities that can enable a broad range of services,” said Xuedong Huang, Microsoft CTO of Azure AI Cognitive Services.

“We’re taking this AI breakthrough to Azure as a platform to serve a broader set of customers,” Huang continued. “It is not just a breakthrough on the research; the time it took to turn that breakthrough into production on Azure is also a breakthrough.”

The improved auto-captioning feature is also expected to be available in Outlook, Word, and PowerPoint later this year.

(Photo by K8 on Unsplash)

Meena is Google’s first truly conversational AI

Google is attempting to build the first digital assistant that can truly hold a conversation with an AI project called Meena.

Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards more natural conversation with its Duplex project, but Meena should offer another leap forward.

Meena is a neural network with 2.6 billion parameters. Google claims Meena is able to handle multiple turns in a conversation (everyone has that friend who goes off on multiple tangents during the same conversation, right?).

Google published its work on the e-print repository arXiv on Monday in a paper called “Towards a Human-like Open-Domain Chatbot”.

Transformer – a neural network architecture released by Google in 2017 – underpins what are widely acknowledged to be among the best language models available. A variation of Transformer, along with a mere 40 billion English words, was used to train Meena.

Google also debuted a metric alongside Meena called Sensibleness and Specificity Average (SSA) which measures the ability of agents to maintain a conversation.

Meena scores 79 percent using the new SSA metric. For comparison, Mitsuku – a Loebner Prize-winning AI agent developed by Pandorabots – scored 56 percent.

Meena’s result brings its conversational ability close to that of humans, who on average score around 86 percent using the SSA metric.
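
The metric itself is straightforward: human judges label each model response as sensible (does it make sense in context?) and specific (is it tailored to that context, rather than generic?), and SSA averages the two rates. A minimal sketch of the computation, assuming the labels have already been collected (Google’s full crowdsourcing protocol is more involved):

```python
# A sketch of computing Sensibleness and Specificity Average (SSA)
# from human judgements. Each response carries two boolean labels;
# the data below is made up for illustration.
from dataclasses import dataclass

@dataclass
class Judgement:
    sensible: bool  # does the response make sense in context?
    specific: bool  # is it specific to the conversation, not generic?

def ssa(judgements: list[Judgement]) -> float:
    """Average of the per-response sensibleness and specificity rates."""
    if not judgements:
        raise ValueError("no judgements provided")
    sensibleness = sum(j.sensible for j in judgements) / len(judgements)
    specificity = sum(j.specific for j in judgements) / len(judgements)
    return (sensibleness + specificity) / 2

# Hypothetical labels for four chatbot responses.
labels = [
    Judgement(sensible=True, specific=True),
    Judgement(sensible=True, specific=False),
    Judgement(sensible=False, specific=False),
    Judgement(sensible=True, specific=True),
]
print(f"SSA: {ssa(labels):.0%}")  # sensibleness 75%, specificity 50% -> 62%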

We don’t yet know when Google intends to debut Meena’s technology in its products but, as the digital assistant war heats up, we’re sure the company is as eager to release it as we are to use it.

Speech and facial recognition combine to boost AI emotion detection

Researchers have combined speech and facial recognition data to improve the emotion detection abilities of AIs.

The ability to recognise emotions is a longstanding goal of AI researchers. Accurate recognition enables things such as detecting tiredness at the wheel, anger that could lead to a crime being committed, or perhaps even signs of sadness or depression at suicide hotspots.

Nuances in how people speak and move their facial muscles to express moods have presented a challenge. In a paper (PDF) on arXiv, researchers at the University of Science and Technology of China in Hefei detail the progress they have made.

In the paper, the researchers wrote:

“Automatic emotion recognition (AER) is a challenging task due to the abstract concept and multiple expressions of emotion.

Inspired by this cognitive process in human beings, it’s natural to simultaneously utilize audio and visual information in AER … The whole pipeline can be completed in a neural network.”

Breaking down the process as much as I can, the system is made of two parts: one for visual, and one for audio.

For the video system, frames of faces run through two further computational stages: a basic face detection algorithm, followed by three facial recognition networks optimised to be ‘emotion-relevant’.

As for the audio system, speech spectrograms are fed into sound-processing algorithms to help the AI model focus on the areas most relevant to emotion.

Measurable features extracted by the facial recognition networks in the video system are then matched with speech features from the audio counterpart to capture associations between them for a final emotion prediction.
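
A minimal sketch of this kind of two-stream fusion, assuming the visual and audio features have already been extracted – the dimensions, layer sizes, and simple concatenation scheme here are illustrative, not the paper’s:

```python
# Illustrative late-fusion emotion classifier -- not the USTC model.
# Assumes visual features (from the face networks) and audio features
# (from the spectrogram pipeline) have already been extracted.
import torch
import torch.nn as nn

class AudioVisualEmotionNet(nn.Module):
    def __init__(self, visual_dim=512, audio_dim=256, num_emotions=7):
        super().__init__()
        self.visual_proj = nn.Sequential(nn.Linear(visual_dim, 128), nn.ReLU())
        self.audio_proj = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
        # Fused representation -> seven emotion classes
        # (angry, disgust, fear, happy, neutral, sad, surprise).
        self.classifier = nn.Linear(128 * 2, num_emotions)

    def forward(self, visual_feats, audio_feats):
        fused = torch.cat(
            [self.visual_proj(visual_feats), self.audio_proj(audio_feats)], dim=-1
        )
        return self.classifier(fused)

# One batch of dummy features standing in for real extractions.
model = AudioVisualEmotionNet()
logits = model(torch.randn(8, 512), torch.randn(8, 256))
print(logits.argmax(dim=-1))  # predicted emotion index per clip
```

The appeal of fusing the two streams is that each compensates for the other’s blind spots: a flat facial expression can be disambiguated by an angry tone of voice, and vice versa.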

The AI was fed 653 video clips, with corresponding audio, from AFEW8.0 – a database of film and television clips that was used for a sub-challenge of EmotiW2018.

In the challenge, the researchers’ AI performed admirably – it correctly determined the emotions ‘angry,’ ‘disgust,’ ‘fear,’ ‘happy,’ ‘neutral,’ ‘sad,’ and ‘surprise’ 62.48 percent of the time.

Overall, the AI performed better on emotions with obvious characteristics, like ‘angry,’ ‘happy,’ and ‘neutral’; it struggled more with those that are more nuanced, like ‘disgust’ and ‘surprise’.
