Google – AI News (https://news.deepgeniusai.com)

Google is telling its scientists to give AI a ‘positive’ spin (24 December 2020)
https://news.deepgeniusai.com/2020/12/24/google-telling-scientists-give-ai-positive-spin/

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with its legal, policy, and public relations teams before publishing anything on topics which could be deemed sensitive, such as sentiment analysis or the categorisation of people by race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims she was fired by Google over an unpublished paper and an email she sent criticising the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one person’s word against another’s, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom, but these reports increasingly suggest otherwise.

(Photo by Mitchell Luo on Unsplash)

Google fires ethical AI researcher Timnit Gebru after critical email (4 December 2020)
https://news.deepgeniusai.com/2020/12/04/google-fires-ethical-ai-researcher-timnit-gebru-email/

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities. Some recent cases validate her claims about large models and datasets in general.

For example, MIT was forced to remove its 80 Million Tiny Images dataset earlier this year. The dataset was widely used for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that, because the dataset’s 80 million images measure just 32×32 pixels each, manual inspection would be almost impossible and could not guarantee that every offensive image was removed.

Gebru reportedly sent an email to the Google Brain Women and Allies listserv which Google deemed “inconsistent with the expectations of a Google manager.”

In the email, Gebru expressed her frustration with a perceived lack of progress at Google in hiring women. Gebru also claimed she was told not to publish a piece of research, and she advised employees to stop filling out diversity paperwork because it didn’t matter.

On top of the questionable reasons for her firing, Gebru says her former colleagues were sent an email claiming she had offered her resignation, which she says was not the case.

Platformer obtained an email from Jeff Dean, Head of Google Research, which was sent to employees and offers his take on Gebru’s claims:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.”

Dean goes on to claim Gebru made demands which included revealing the identities of the individuals he and Google Research VP of Engineering Megan Kacholia consulted with as part of the paper’s review. If the demands weren’t met, Gebru reportedly said she would leave the company.

It’s a case of one person’s word against another’s, but for a company already under scrutiny from the public and regulators over questionable practices, being seen to fire an ethics researcher for calling out problems is not good PR.

(Image Credit: Timnit Gebru by Kimberly White/Getty Images for TechCrunch under CC BY 2.0 license)

What happens when Google’s chatty bot chats with a chatbot? (25 September 2020)
https://news.deepgeniusai.com/2020/09/25/what-happens-google-bot-chats-with-chatbot/

Google Duplex impressed and scared the world in equal parts when it was unveiled, and now we’ve seen how a conversation goes with another chatbot.

Duplex, for a quick primer, is Google’s AI-powered voice bot which can call businesses on a person’s behalf for things such as booking hair appointments. It’s so realistic that a consensus has formed that bots should declare themselves as such before chatting with a human.

A company known as PolyAI – which specialises in “enterprise-ready voice assistants” – has posted an account of what happened when Duplex called one of its restaurant assistants.

Duplex was calling businesses over the summer to update opening hours on Google Maps; PolyAI’s post shares how the resulting conversation with its assistant went.

Nikola Mrkšić, Co-Founder and CEO of PolyAI, wrote in a blog post:

“As far as we’re aware, this is the first naturally-occurring conversation between AI voice assistants in the wild.

I have never seen anything like this before, and I’m incredibly proud that PolyAI is sharing this moment in computing history with our friends from Google.”

Mrkšić humbly admits that Duplex sounds far more human-like than PolyAI’s assistant. However, he also points to the “uncanny valley” theory.

The uncanny valley theory suggests that people respond more positively to something which sounds like a human, but only up to a point: when it sounds too much like a human, it becomes creepy, a sentiment which many have certainly shared about Duplex.

(Photo by Jeffery Ho on Unsplash)

Google returns to using human YouTube moderators after AI errors (21 September 2020)
https://news.deepgeniusai.com/2020/09/21/google-human-youtube-moderators-ai-errors/

Google is returning to using humans for YouTube moderation after repeated errors with its AI system.

Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They’re the unsung heroes.

AI has been hailed as a way to deal with some of these issues, either by automating the moderation process entirely or by offering human moderators a helping hand.

Google was left with little choice but to give more power to its AI moderators as the COVID-19 pandemic took hold… but it hasn’t been smooth sailing.

In late August, YouTube said that it had removed 11.4 million videos over the three months prior–the most since the site launched in 2005.

That figure alone should raise a few eyebrows. If a team of humans had removed that many videos, they would deserve quite the pay rise.

Of course, most of the video removals weren’t done by humans. Many of the videos didn’t even violate the guidelines.

Neal Mohan, chief product officer at YouTube, told the Financial Times:

“One of the decisions we made [at the beginning of the COVID-19 pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”

Some of the removals left content creators bewildered, angry, and even out of pocket.

Around 320,000 of the videos taken down were appealed, and half of those were reinstated.

Deciding what content to ultimately remove feels like one of the many tasks which needs human involvement. Humans are much better at detecting nuances and things like sarcasm.

However, the sheer scale of content needing to be moderated also requires an AI to help automate some of that process.

“Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” Mohan said. “That’s the power of machines.”

AIs can also help to protect humans from the worst of the content. Content detection systems are being built to automatically blur material like child abuse just enough that human moderators can tell what needs removing, while limiting the psychological impact on them.

Some believe AIs are better placed to determine what content should be removed because they simply use logic rather than a human’s natural biases, such as political leaning. However, we know human biases seep into algorithms.

In May, YouTube admitted to deleting messages critical of the Chinese Communist Party (CCP). YouTube later blamed an “error with our enforcement systems” for the mistakes. Senator Josh Hawley even wrote (PDF) to Google CEO Sundar Pichai seeking answers to “troubling reports that your company has resumed its long pattern of censorship at the behest of the Chinese Communist Party.”

Google appears to have quickly realised that replacing humans entirely with AI is rarely a good idea. The company says many of the human moderators who were “put offline” during the pandemic are now coming back.

(Photo by Rachit Tank on Unsplash)

Google’s Model Card Toolkit aims to bring transparency to AI (30 July 2020)
https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/

Google has released a toolkit which it hopes will bring some transparency to AI models.

People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you’ve got a general distrust which can hinder important advancements.

Model Card Toolkit aims to step in and facilitate AI model transparency reporting for developers, regulators, and downstream users.

Google has been rolling out Model Cards over the past year, a concept the company first set out in an October 2018 whitepaper.

Model Cards provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation and give a detailed overview of a model’s suggested uses and limitations. 

So far, Google has released Model Cards for open source models built on its MediaPipe platform as well as its commercial Cloud Vision API Face Detection and Object Detection services.

Google’s new toolkit will simplify the process of creating Model Cards for third parties by compiling the data and helping to build interfaces orientated towards specific audiences.

MediaPipe has published Model Cards for each of its open-source models in its GitHub repository.

To demonstrate how the Model Cards Toolkit can be used in practice, Google has released a Colab tutorial that builds a Model Card for a simple classification model trained on the UCI Census Income dataset.
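
To give a flavour of the workflow, below is a minimal sketch of how the toolkit’s Python API can be used, loosely based on the examples Google published at launch. The package and method names shown (model_card_toolkit, scaffold_assets, update_model_card_json, export_format) are as documented at the time and may differ in later versions; the field values are purely illustrative.

    # pip install model-card-toolkit
    import model_card_toolkit as mct_lib

    # The toolkit writes its generated assets into this directory.
    mct = mct_lib.ModelCardToolkit("model_card_assets")

    # Scaffold an empty Model Card and populate whichever fields apply;
    # sections such as considerations, limitations, and metrics are filled the same way.
    model_card = mct.scaffold_assets()
    model_card.model_details.name = "Census income classifier (illustrative)"
    model_card.model_details.overview = (
        "Simple classifier trained on the UCI Census Income dataset."
    )

    # Persist the structured data and render a shareable HTML report.
    mct.update_model_card_json(model_card)
    html = mct.export_format()
    with open("model_card.html", "w") as f:
        f.write(html)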

If you just want to dive right in, you can access the Model Cards Toolkit here.

(Photo by Marc Schulte on Unsplash)

Google pledges to no longer build AIs for the fossil fuel industry (22 May 2020)
https://news.deepgeniusai.com/2020/05/22/google-no-longer-build-ai-fossil-fuel-industry/

Google has pledged to no longer build AIs for the fossil fuel industry as it further distances itself from controversial developments.

A report from Greenpeace earlier this month exposed Google as being one of the top three developers of AI tools for the fossil fuel industry. Greenpeace found AI technologies boost production levels by as much as five percent.

In an interview with CUBE’s John Furrier, the leader of Google’s CTO office, Will Grannis, said that Google will “no longer develop artificial intelligence (AI) software and tools for oil and gas drilling operations.”

The pledge from Google Cloud is welcome, but it must be taken in a wider context.

In 2019, Google Cloud’s revenue from oil and gas was approximately $65 million. A hefty sum, but less than one percent of all Google Cloud revenues. Furthermore, Google Cloud’s revenue from oil and gas decreased by about 11 percent despite overall revenue growing by 53 percent.

While Google Cloud’s revenue from the oil and gas industry has been declining, the public’s intolerance of big polluters has been increasing. The reputational damage to Google from continuing its relationship with polluters would likely have been more costly over the long term.

This isn’t the first time Google has cut off an AI-related relationship with a controversial industry to preserve its reputation.

Back in 2018, Google was forced into ending Project Maven, a contract with the Pentagon to build AI technologies for drones. Over 4,000 Google employees had signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Back in January, Pichai called for sensible AI regulation that does not limit the potential societal benefits.

PAX, a Dutch NGO, ranked Google among the safest companies developing AI while slamming rivals such as Amazon and Microsoft for being among the “highest risk” tech firms in the world.

(Photo by Zbynek Burival on Unsplash)

Leading AI researchers propose ‘toolbox’ for verifying ethics claims (20 April 2020)
https://news.deepgeniusai.com/2020/04/20/ai-researchers-toolbox-verifying-ethics-claims/

Researchers from OpenAI, Google Brain, Intel, and 28 other leading organisations have published a paper which proposes a ‘toolbox’ for verifying AI ethics claims.

With concerns around AI ranging from dangerous indifference to innovation-halting scaremongering, it’s clear there’s a need for a system to achieve a healthy balance.

“AI systems have been developed in ways that are inconsistent with the stated values of those developing them,” the researchers wrote. “This has led to a rise in concern, research, and activism relating to the impacts of AI systems.”

The researchers note that many players involved with AI development have put significant work into articulating ethical principles, but such claims are meaningless without some way to verify them.

“People who get on airplanes don’t trust an airline manufacturer because of its PR campaigns about the importance of safety – they trust it because of the accompanying infrastructure of technologies, norms, laws, and institutions for ensuring airline safety.”

Among the core ideas put forward is paying developers for discovering bias in algorithms. Such a practice is already widespread in cybersecurity, with many companies offering bounties for finding bugs in their software.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the authors wrote.

“We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”

Another potential avenue is so-called “red teaming,” the creation of a dedicated team which adopts the mindset of a possible attacker to find flaws and vulnerabilities in a plan, organisation, or technical system.

“Knowledge that a lab has a red team can potentially improve the trustworthiness of an organization with respect to their safety and security claims.”

A red team alone is unlikely to provide much confidence, but combined with other measures it can go a long way. Verification by parties outside the organisation itself will be key to instilling trust in that company’s AI developments.

“Third party auditing is a form of auditing conducted by an external and independent auditor, rather than the organization being audited, and can help address concerns about the incentives for accuracy in self-reporting.”

“Provided that they have sufficient information about the activities of an AI system, independent auditors with strong reputational and professional incentives for truthfulness can help verify claims about AI development.”

The researchers highlight that a current roadblock for third-party auditing is that no techniques or best practices have yet been established specifically for AI. Frameworks such as Claims-Arguments-Evidence (CAE) and Goal Structuring Notation (GSN) may provide a starting point, as they are already widely used for safety-critical auditing.

Audit trails, covering all steps of the AI development process, are also recommended to become the norm. The researchers again point to commercial aircraft, as a safety-critical system, and their use of flight data recorders to capture multiple types of data every second and provide a full log.

“Standards setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.”

The final suggestion for software-oriented methods of verifying AI ethics claims is the use of privacy-preserving machine learning (PPML).

Privacy-preserving machine learning aims to protect the privacy of data or models used in machine learning, at training or evaluation time, and during deployment.

Three established types of PPML are covered in the paper: federated learning, differential privacy, and encrypted computation.
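
As a flavour of what the second of those techniques involves, below is a minimal, generic sketch of the Laplace mechanism for differential privacy. It illustrates the general idea only; it is not code from the paper, and the query, sensitivity, and epsilon values are invented for the example.

    import numpy as np

    def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
        """Return a differentially private version of a numeric query result.

        Noise is drawn from a Laplace distribution with scale sensitivity/epsilon:
        a smaller epsilon means stronger privacy and noisier answers.
        """
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Example: privately release how many records in a dataset match a condition.
    # A counting query changes by at most 1 when one person is added or removed,
    # so its sensitivity is 1.
    true_count = 1234
    private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    print(round(private_count))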

“Where possible, AI developers should contribute to, use, and otherwise support the work of open-source communities working on PPML, such as OpenMined, Microsoft SEAL, tf-encrypted, tf-federated, and nGraph-HE.”

The researchers, representing some of the most renowned institutions in the world, have come up with a comprehensive package of ways in which any organisation involved with AI development can provide assurance to governing bodies and the wider public, helping to ensure the industry reaches its full potential responsibly.

You can find the full preprint paper on arXiv here (PDF).

(Photo by Alexander Sinn on Unsplash)

Google’s chatty Duplex AI expands to the UK, Canada, and Australia (9 April 2020)
https://news.deepgeniusai.com/2020/04/09/google-chatty-duplex-ai-uk-canada-australia/

Google’s conversational Duplex AI has begun expanding outside the US and New Zealand to the UK, Canada, and Australia.

Duplex probably needs little introduction, as it caused a bit of a stir when it debuted at Google I/O in 2018 (when conferences were things you could still physically attend).

The human-sounding AI could perform actions like calling a business on a person’s behalf and booking in things such as hair appointments or table reservations.

Duplex is undeniably impressive, but it prompted a debate over whether AIs should have to state they’re not human before imitating one. Google has since decided to add disclosures at the beginning of calls and give businesses the option to opt-out of being called by an AI.

Humans haven’t been completely replaced by Duplex. Google says around a quarter of Duplex calls are started by humans, and in a further 15 percent a human steps in after the AI has started the call, whether because issues arise or because the person receiving the call opts not to speak with an AI.

In terms of devices, the rollout of Duplex started on Pixel phones (obviously), then, slightly oddly, moved to iOS devices before more Android phones began joining the party.

(Photo by Quino Al on Unsplash)

Google’s latest AI could prevent deaths caused by incorrect prescriptions (3 April 2020)
https://news.deepgeniusai.com/2020/04/03/google-latest-ai-prevent-deaths-incorrect-prescriptions/

A new AI system developed by researchers from Google and the University of California could prevent deaths caused by incorrect prescriptions.

While quite rare, prescriptions that are incorrect – or react badly to a patient’s existing medications – can result in hospitalisation or even death.

In a blog post, Alvin Rajkomar, MD (research scientist) and Eyal Oren, PhD (product manager) at Google AI set out their work on using AI for medical predictions.

The AI is able to predict which conditions a patient is being treated for based on certain parameters. “For example, if a doctor prescribed ceftriaxone and doxycycline for a patient with an elevated temperature, fever and cough, the model could identify these as signals that the patient was being treated for pneumonia,” the researchers wrote.

In the future, an AI could step in if a medication being prescribed looks incorrect for a patient’s condition and current situation.

“While no doctor, nurse, or pharmacist wants to make a mistake that harms a patient, research shows that 2% of hospitalized patients experience serious preventable medication-related incidents that can be life-threatening, cause permanent harm, or result in death,” the researchers wrote.

“However, determining which medications are appropriate for any given patient at any given time is complex — doctors and pharmacists train for years before acquiring the skill.”

The AI was trained on an anonymised data set featuring around three million records of medications issued from over 100,000 hospitalisations.

In their paper, the researchers wrote:

“Patient records vary significantly in length and density of data points (e.g., vital sign measurements in an intensive care unit vs outpatient clinic), so we formulated three deep learning neural network model architectures that take advantage of such data in different ways: one based on recurrent neural networks (long short-term memory (LSTM)), one on an attention-based TANN, and one on a neural network with boosted time-based decision stumps.

We trained each architecture (three different ones) on each task (four tasks) and multiple time points (e.g., before admission, at admission, 24 h after admission and at discharge), but the results of each architecture were combined using ensembling.”
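
The ensembling step described above is conceptually simple: each architecture produces a probability for the same prediction task, and the results are combined. The sketch below shows one common way of doing that (averaging predicted probabilities); it is a generic illustration under that assumption, not the researchers’ actual implementation, and the three model stand-ins and their outputs are made up.

    from typing import Callable, List, Sequence

    def ensemble_predict(models: List[Callable[[Sequence[float]], float]],
                         patient_features: Sequence[float]) -> float:
        """Combine per-model probabilities for one patient by simple averaging.

        Each model is any callable returning P(outcome) for the given features,
        e.g. an LSTM, an attention-based model, or boosted decision stumps.
        """
        probs = [model(patient_features) for model in models]
        return sum(probs) / len(probs)

    # Illustrative stand-ins for three trained architectures:
    lstm_model = lambda x: 0.72
    attention_model = lambda x: 0.65
    boosted_stumps_model = lambda x: 0.70

    features = [0.1, 0.4, 0.9]  # dummy patient features
    print(ensemble_predict([lstm_model, attention_model, boosted_stumps_model], features))
    # -> 0.69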

You can find the full paper in science journal Nature here.

Meena is Google’s first truly conversational AI (29 January 2020)
https://news.deepgeniusai.com/2020/01/29/meena-google-truly-conversational-ai/

Google is attempting to build the first digital assistant that can truly hold a conversation with an AI project called Meena.

Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards a more natural conversation with its Duplex project but Meena should offer another leap forward.

Meena is a neural network with 2.6 billion parameters. Google claims Meena is able to handle multiple turns in a conversation (everyone has that friend who goes off on multiple tangents during the same conversation, right?)

Google published its work on e-print repository arXiv on Monday in a paper called “Towards a Human-like Open Domain Chatbot”.

Google released a neural network architecture called the Transformer in 2017, and it is widely acknowledged as the foundation of the best language models available. A variation of the Transformer, trained on a mere 40 billion English words, was used to build Meena.

Google also debuted a metric alongside Meena called Sensibleness and Specificity Average (SSA) which measures the ability of agents to maintain a conversation.

Meena scores 79 percent using the new SSA metric. For comparison, Mitsuku – a Loebner Prize-winning AI agent developed by Pandorabots – scored 56 percent.

Meena’s result brings its conversational ability close to that of humans, who score around 86 percent on average using the SSA metric.
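
Based on the paper’s description, SSA is computed from human ratings: each model response is labelled as sensible in context (or not) and as specific to that context rather than a vague, safe reply (or not), and SSA is simply the average of the sensibleness rate and the specificity rate. A small sketch of that calculation, with made-up labels, might look like this:

    from typing import List, Tuple

    def ssa(labels: List[Tuple[bool, bool]]) -> float:
        """Sensibleness and Specificity Average over (sensible, specific) labels."""
        sensibleness = sum(s for s, _ in labels) / len(labels)
        specificity = sum(p for _, p in labels) / len(labels)
        return (sensibleness + specificity) / 2

    # Four rated responses: (sensible?, specific?)
    ratings = [(True, True), (True, False), (True, True), (False, False)]
    print(ssa(ratings))  # 0.625 -> 62.5% SSA for this tiny sample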

We don’t yet know when Google intends to debut Meena’s technology in its products but, as the digital assistant war heats up, we’re sure the company is as eager to release it as we are to use it.

