Bots – AI News

IBM study highlights rapid uptake and satisfaction with AI chatbots
27 October 2020 – https://news.deepgeniusai.com/2020/10/27/ibm-study-uptake-satisfaction-ai-chatbots/

A study released by IBM this week highlights the rapid uptake of AI chatbots, as well as their role in increasing customer satisfaction.

Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an uncertain amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.
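To make that hand-off concrete, here is a minimal, hypothetical sketch of how a virtual agent might answer routine queries itself and escalate to a human when it isn't confident. The intents, canned answers, and threshold below are invented for illustration and aren't drawn from IBM's report or products.

```python
from dataclasses import dataclass

# Canned answers for the queries the virtual agent can resolve on its own.
FAQ_ANSWERS = {
    "opening_hours": "We're open 9am-5pm, Monday to Friday.",
    "reset_password": "You can reset your password from the login page.",
}

@dataclass
class Reply:
    text: str
    escalated: bool  # True when the query is routed to a human agent

def classify(message: str) -> tuple[str, float]:
    """Toy intent classifier: keyword matching standing in for a real NLU model."""
    message = message.lower()
    if "open" in message or "hours" in message:
        return "opening_hours", 0.9
    if "password" in message:
        return "reset_password", 0.9
    return "unknown", 0.2

def handle(message: str, confidence_threshold: float = 0.7) -> Reply:
    intent, confidence = classify(message)
    if confidence >= confidence_threshold and intent in FAQ_ANSWERS:
        return Reply(FAQ_ANSWERS[intent], escalated=False)
    # Low confidence: hand the conversation to a human agent instead of guessing.
    return Reply("Let me connect you with a colleague who can help.", escalated=True)

if __name__ == "__main__":
    print(handle("What are your opening hours?"))
    print(handle("My invoice looks wrong"))  # escalates to a human
```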

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction as a result of integrating virtual agents. Human agents also reported increased satisfaction, and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as among their key reasons for adopting VAT. There’s also an economic incentive, with the cost of replacing a dissatisfied agent who leaves a business estimated at as much as 33 percent of the exiting employee’s salary.

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength to strength and appears to have been among the few things to benefit from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

What happens when Google’s chatty bot chats with a chatbot?
25 September 2020 – https://news.deepgeniusai.com/2020/09/25/what-happens-google-bot-chats-with-chatbot/

Google Duplex impressed and scared the world in equal parts when it was unveiled, and now we’ve seen how a conversation goes with another chatbot.

Duplex, for a quick primer, is Google’s AI-powered voice bot which can call businesses on a person’s behalf for things such as booking hair appointments. It’s so realistic that it prompted widespread agreement that bots must declare themselves as such before chatting with a human.

A company known as PolyAI – which specialises in “enterprise-ready voice assistants” – has posted an account of what happened when Duplex called one of its restaurant assistants.

Duplex was calling businesses over the summer to update opening hours on Google Maps. This is how the conversation went:

Nikola Mrkšić, Co-Founder and CEO of PolyAI, wrote in a blog post:

“As far as we’re aware, this is the first naturally-occurring conversation between AI voice assistants in the wild.

I have never seen anything like this before, and I’m incredibly proud that PolyAI is sharing this moment in computing history with our friends from Google.”

Mrkšić humbly admits that Duplex sounds far more human-like than PolyAI’s assistant. However, he also makes a valid reference to the “uncanny valley” theory.

The uncanny valley theory suggests that people respond more positively to something which sounds like a human, but only up to a point. When it sounds almost, but not quite, human, it becomes creepy – a sentiment which many have certainly shared about Duplex.

(Photo by Jeffery Ho on Unsplash)

Researchers create AI bot to protect the identities of BLM protesters
29 July 2020 – https://news.deepgeniusai.com/2020/07/29/researchers-create-ai-bot-protect-identities-blm-protesters/

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest and, when done legally, to do so without fear of consequences such as ruined future job prospects simply because they’ve been snapped at a demonstration – even one at which a select few may have gone on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

“Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity,” the researchers explain.

Software has been available for some time to blur faces, but recent AI advancements have proved that it’s possible to deblur such images.

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot:

Rather than blur the faces, the bot automatically covers them up with the black fist emoji which has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built into social media platforms, but admit it’s unlikely.
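The Stanford team’s own model isn’t reproduced here, but the general approach of detecting faces and pasting an opaque overlay can be sketched with off-the-shelf tools. The example below uses OpenCV’s bundled Haar cascade purely as a stand-in for BLMPrivacyBot’s far stronger learned detector, and paints a solid patch rather than compositing the fist emoji; the file names are placeholders.

```python
import cv2

def cover_faces(input_path: str, output_path: str) -> int:
    """Detect faces and cover each with an opaque patch (a stand-in for the emoji overlay)."""
    # Haar cascade shipped with OpenCV; the real bot uses a stronger learned detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(input_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Paint over the face region; a fuller implementation could alpha-blend an emoji PNG here.
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 0), thickness=-1)

    cv2.imwrite(output_path, image)
    return len(faces)

if __name__ == "__main__":
    covered = cover_faces("protest.jpg", "protest_covered.jpg")  # placeholder file names
    print(f"Covered {covered} face(s)")
```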

The researchers trained the bot’s model on QNRF, a dataset containing around 1.2 million people. However, they warn it’s not foolproof, as an individual could still be identified through other means such as the clothing they’re wearing.

To use the BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo to the web interface here. The open source repo is available if you want to look at the inner workings.

Babylon Health lashes out at doctor who raised AI chatbot safety concerns
26 February 2020 – https://news.deepgeniusai.com/2020/02/26/babylon-health-doctor-ai-chatbot-safety-concerns/

Controversial healthcare app maker Babylon Health has criticised the doctor who first raised concerns about the safety of their AI chatbot.

Babylon Health’s chatbot is available in the company’s GP at Hand app, a digital healthcare solution championed by health secretary Matt Hancock which has also been integrated into Samsung Health since last year.

The chatbot aims to reduce the burden on GPs and A&E departments by automating the triage process to determine whether someone can treat themselves at home, should book an online or in-person GP appointment, or go straight to a hospital.
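Babylon’s triage model is proprietary, so the sketch below only illustrates the general idea of mapping a symptom and risk factors to self-care, GP, or emergency tiers. The rules are invented for illustration, are not medical advice, and are not Babylon Health’s logic.

```python
def triage(symptom: str, age: int, smoker: bool, bmi: float) -> str:
    """Toy triage: map a reported symptom plus risk factors to a care tier.

    Illustrative only; not medical advice and not Babylon Health's algorithm.
    """
    high_risk = age >= 40 or smoker or bmi >= 30

    if symptom == "chest pain":
        # Chest pain with cardiac risk factors should always escalate to emergency care.
        return "emergency" if high_risk else "urgent GP appointment"
    if symptom in {"persistent cough", "rash"}:
        return "book a GP appointment"
    return "self-care advice"

if __name__ == "__main__":
    # The scenario later highlighted by Dr Watkins: a 48-year-old obese heavy smoker with chest pain.
    print(triage("chest pain", age=48, smoker=True, bmi=32))  # -> emergency
```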

A Twitter user under the pseudonym of Dr Murphy first reached out to us back in 2018 alleging that Babylon Health’s chatbot was giving unsafe advice. Dr Murphy recently revealed himself to be Dr David Watkins and went public with his findings at The Royal Society of Medicine’s “Recent developments in AI and digital health 2020” event, in addition to appearing in a BBC Newsnight report.

Over the past couple of years, Dr Watkins has provided many examples of the chatbot giving dangerous advice. In one example, the chatbot suggested that an obese 48-year-old heavy smoker who presented with chest pains should book a consultation “in the next few hours”. Anyone with any common sense would have told the patient to dial an emergency number straight away.

This particular issue has since been rectified but Dr Watkins has highlighted many further examples over the years which show, very clearly, there are serious safety issues.

In a press release (PDF) on Monday, Babylon Health calls Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

According to the release, Dr Watkins has conducted 2,400 tests of the chatbot in a bid to discredit the service while raising “fewer than 100 test results which he considered concerning”.

Babylon Health claims Dr Watkins found genuine errors in just 20 cases, while the rest were “misrepresentations” or “mistakes,” according to Babylon’s own “panel of senior clinicians”, who remain unnamed.

Speaking to TechCrunch, Dr Watkins called Babylon’s claims “utterly nonsense” and questioned where the startup got its figures from, as “there are certainly not 2,400 completed triage assessments”.

Dr Watkins estimates he has conducted between 800 and 900 full triages, some of which were repeat tests to see whether Babylon Health had fixed the issues he previously highlighted.

The doctor acknowledges Babylon Health’s chatbot has improved and says it now gives concerning advice in around one in three of his tests. In 2018, when Dr Watkins first reached out to us and other outlets, he says this rate was closer to “one in one”.

While it’s one account versus the other, the evidence shows that Babylon Health’s chatbot has issued dangerous advice on a number of occasions. Dr Watkins has dedicated many hours to highlighting these issues to Babylon Health in order to improve patient safety.

Rather than welcome his efforts and work with Dr Watkins to improve their service, it seems Babylon Health has decided to go on the offensive and “try and discredit someone raising patient safety concerns”.

In their press release, Babylon accuses Dr Watkins of posting “over 6,000” misleading attacks, but without giving details of where. Dr Watkins primarily uses Twitter to post his findings. His account, as of writing, has tweeted a total of 3,925 times, and not just about Babylon’s service.

This isn’t the first time Babylon Health’s figures have been called into question. Back in June 2018, Babylon Health held an event where it boasted that its AI beat trainee GPs at the MRCGP exam used for testing their ability to diagnose medical problems. The average pass mark is 72 percent. “How did Babylon Health do?” said Dr Mobasher Butt, a director at Babylon Health, at the event. “It got 82 percent.”

Given the number of dangerous suggestions the chatbot has given in response to trivial ailments, especially at the time, it’s hard to accept the claim that it beats trainee GPs. Intriguingly, the video of the event has since been deleted from Babylon Health’s YouTube account and the company removed all links to coverage of it from the “Babylon in the news” part of its website.

When asked why it deleted the content, Babylon Health said in a statement: “As a fast-paced and dynamic health-tech company, Babylon is constantly refreshing the website with new information about our products and services. As such, older content is often removed to make way for the new.”

AI solutions like those offered by Babylon Health will help to reduce the demand on health services and ensure people have access to the right information and care whenever and wherever they need it. However, patient safety must come first.

Mistakes are less forgivable in healthcare due to the risk of potentially fatal or life-changing consequences. The usual “move fast and break things” ethos in tech can’t apply here.

There’s a general acceptance that a new technology is rarely without its problems, but people want to see that best efforts are being made to limit and address those issues. Instead of welcoming those who point out issues with its service before they lead to a serious incident, it seems Babylon Health would rather blame everyone else for its faults.

Google’s Duplex booking AI often relies on humans for backup
23 May 2019 – https://news.deepgeniusai.com/2019/05/23/google-duplex-booking-ai-humans-backup/

Google Duplex often calls on humans for backup when making reservations on behalf of users, and that should be welcomed.

Duplex caused a stir when it debuted at Google’s I/O developer conference last year. The AI was shown calling a hair salon to make a booking and did so complete with human-like “ums” and “ahs”.

The use of such human mannerisms shows that Google intended the person on the other end to be unaware they were in conversation with an AI. Following some outcry, Google and other tech giants have pledged to make it clear to humans when they’re not speaking to another person.

Duplex is slowly rolling out and is currently available to Pixel smartphone owners in the US. It turns out, however, that Duplex bookings are often carried out by humans in call centres.

Google confirmed to the New York Times that about 25 percent of the Assistant-based calls start with a human in a call centre, while around 15 percent of those which begin with the AI require human intervention. Times reporters Brian Chen and Cade Metz made four sample reservations and just one was completed start to finish by the AI.

The practice of using humans as a backup should always be praised. Making this standard practice helps increase trust, reduces concerns about human workers being replaced, and provides some accountability when things go awry.

Only so much can go wrong when booking a hair appointment, but setting expectations now will help to guide developments further down the line.

AI is being increasingly used in a military capacity, and most will sleep better at night knowing a human is behind any final decision rather than complete automation. Just imagine if Soviet officer Stanislav Yevgrafovich Petrov had decided to launch retaliatory nuclear missiles after his early warning system falsely reported the launch of missiles from the US back in 1983.

According to the Times, Google isn’t in a rush to replace the human callers, and that should be welcomed.

Related: Watch our interview with UNICRI AI and Robotics Centre head Irakli Beridze discussing issues like weaponisation and the impact on jobs.

Microsoft acquires conversational AI company XOXCO
16 November 2018 – https://news.deepgeniusai.com/2018/11/16/microsoft-conversational-ai-xoxco/

Microsoft has announced the acquisition of Texas-based conversational AI firm XOXCO to amplify the company’s work in the field.

XOXCO have been in operation since 2013 and gained renown for creating Howdy, the first commercially available bot for Slack.

Microsoft believes that bots will become a key way that businesses engage with employees and customers. The company has undertaken many projects with the aim of unlocking their potential, with varying success.

Tay, for example, has become an infamous example of a bot gone wrong after the wonderful denizens of the internet taught Microsoft’s creation to be a “Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot”.

However, the company has grand ideas about inter-communicating bots which could be groundbreaking. Asking Cortana to order a pizza could invoke a bot from Domino’s, while asking to book a flight may call on one from KAYAK.
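The real Microsoft Bot Framework API isn’t shown here, but the idea of an assistant handing a request to a specialist bot can be sketched with a simple, hypothetical skill registry; the intents and handlers below are invented for the example.

```python
from typing import Callable, Dict

# Hypothetical third-party bots: in practice these would be remote services.
def pizza_bot(request: str) -> str:
    return f"Pizza order placed: {request}"

def flight_bot(request: str) -> str:
    return f"Flight search started: {request}"

# The assistant keeps a registry mapping intents to the bots that can handle them.
SKILLS: Dict[str, Callable[[str], str]] = {
    "order_pizza": pizza_bot,
    "book_flight": flight_bot,
}

def dispatch(intent: str, request: str) -> str:
    """Route the user's request to whichever registered bot claims the intent."""
    handler = SKILLS.get(intent)
    if handler is None:
        return "Sorry, I don't have a skill for that yet."
    return handler(request)

if __name__ == "__main__":
    print(dispatch("order_pizza", "large margherita"))
    print(dispatch("book_flight", "London to Austin, 16 November"))
```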

The Microsoft Bot Framework already supports over 360,000 developers. With the acquisition of XOXCO, the company hopes to further democratise AI development, conversation and dialog, and the integration of conversational experiences where people communicate.

Lili Cheng, Corporate Vice President of Conversational AI at Microsoft, wrote in a post:

“Our goal is to make AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology.

To do this, Microsoft is infusing intelligence across all its products and services to extend individuals’ and organizations’ capabilities and make them more productive, providing a powerful platform of AI services and tools that makes innovation by developers and partners faster and more accessible, and helping transform business by enabling breakthroughs to current approaches and entirely new scenarios that leverage the power of intelligent technology.”

Microsoft has made several related acquisitions this year, demonstrating how important AI and bots are to the company.

  • May – Microsoft bought Semantic Machines, another company working on conversational AI.
  • July – Bonsai was acquired, a firm combining machine teaching, reinforcement learning, and simulation.
  • September – Lobe came under Microsoft’s wing, a company aiming to make AI and deep learning development easier.

Gartner backs Microsoft’s belief in bots, recently predicting: “By 2020, conversational artificial intelligence will be a supported user experience for more than 50 percent of large, consumer-centric enterprises.”

Microsoft is setting itself up to be in one of the best positions to capitalise on the growth of conversational AIs, and it looks set to pay off.

Samsung partners with Babylon Health to offer AI consultations
31 May 2018 – https://news.deepgeniusai.com/2018/05/31/samsung-babylon-health-ai-consultations/

Samsung has partnered with Babylon Health to offer AI-powered medical consultations to its smartphone users via the ‘GP at Hand’ service.

AI News first covered GP at Hand in November last year.

“GP at Hand is a window into what the NHS of the future will look like,” said Dr Howard Freeman MBE, senior GP. “When innovative NHS GPs embrace Babylon’s technology to make life better for their patients, the sky is the limit.”

The service uses AI to determine if a patient’s symptoms require further attention before putting them in touch with a GP using video chat if necessary.

Where appropriate, prescriptions can be sent automatically to a pharmacy of choice — or a patient can be booked in for a physical examination at a practice.

Dr. Ali Parsa, Babylon’s Founder & CEO, says:

“Babylon’s mission is to make healthcare accessible and affordable and to put it into the hands of everyone on Earth. Samsung’s vision for empowering individuals and transforming healthcare, partnered with the company’s illustrious history of technological innovation, constant focus on customer satisfaction and truly global reach makes it a perfect fit with our values and mission.

It’s very exciting to know that millions of Samsung users will soon be able to better manage their health using Babylon’s services as we deliver personal health assessments and treatment advice via their Samsung Galaxy devices.”

Samsung will be integrating Babylon Health’s service into the built-in Samsung Health app on compatible Galaxy devices. The service will not be free: users can choose between a £50-per-year subscription or a £25 one-off appointment.

Kyle Brown, Head of Technology and Services at Samsung UK, adds:

“We’re excited to be welcoming ‘Ask an Expert, powered by Babylon’ to the Samsung Health app. Now our customers will be able to look after their health from wherever they are – whether it’s checking a symptom or talking to a doctor – all within a few simple taps.

The availability of the Babylon service within the app is another milestone for Samsung as we move towards a more connected, healthy world.”

Health startup Babylon has been expanding rapidly as people look for alternatives to overburdened traditional health services.

Dame Barbara Hakin, Former GP and National Director in NHS England, comments:

“I know just how difficult times are for GPs these days and how busy they are. GP at Hand, in addition to being very convenient for patients, can help the service given the recruitment crisis we know is facing us.

This technology can take more of the strain and ensure the best information and insight is available ahead of consultations which will then relieve some of the pressure on hard-pressed clinicians.”

While the service is launching in the UK, Babylon Health is looking to expand its partnership with Samsung worldwide.

Babylon Health recently signed a deal with social giant WeChat in China to offer its services in the country, showing its desire to make healthcare more accessible to everyone around the world.

One day, it’s not hard to imagine a subscription to a service like GP at Hand being able to quickly connect patients with local doctors for advice and treatment even while travelling in other countries. That could offer a lot of peace of mind.

What are your thoughts on the partnership?

Bill forcing AI bots to reveal themselves faces EFF opposition
24 May 2018 – https://news.deepgeniusai.com/2018/05/24/bill-ai-bot-reveal-eff/

A bill that would force AI bots to reveal themselves as not being human is facing opposition from the EFF over free speech concerns.

Many were slightly disturbed by Google’s demo of its Duplex AI conducting a phone call while the other participant was unaware they weren’t speaking to a human. Less than a month later, Microsoft demonstrated it also had the same capabilities.

There are clearly big changes ahead in how we interact, and not everyone is going to be happy speaking to a robot without being aware. The B.O.T. Act (SB 1001) intends to make it illegal for a computer to speak to someone in California without revealing it’s not human.

The summary of the bill reads:

“This bill would make it unlawful for any person to use a bot, as defined, to communicate or interact with natural persons in California online with the intention of misleading and would provide that a person using a bot is presumed to act with the intent to mislead unless the person discloses that the bot is not a natural person.

The bill would require an online platform to enable users to report violations of this prohibition, to respond to the reports, and to provide the Attorney General with specified related information.”

Google and Microsoft have both said their respective AIs would reveal themselves not to be human regardless of legislation.
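As a rough sketch of what that kind of disclosure looks like in practice, a bot’s outbound conversation can simply be forced to open with a declaration before anything else is said; the wording and structure below are invented for illustration and don’t reflect how Google or Microsoft implement it.

```python
BOT_DISCLOSURE = "Hi, this is an automated assistant calling on behalf of a customer."

def open_conversation(first_message: str, disclose: bool = True) -> list[str]:
    """Return the opening lines of a bot call, leading with a disclosure when required."""
    lines = []
    if disclose:
        lines.append(BOT_DISCLOSURE)  # the bot identifies itself before making its request
    lines.append(first_message)
    return lines

if __name__ == "__main__":
    for line in open_conversation("I'd like to book a table for two at 7pm."):
        print(line)
```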

The B.O.T. Act is facing stiff opposition from the Electronic Frontier Foundation (EFF), which appears to be setting itself up as a champion of rights for machines.

In a post, the EFF wrote: “Why does it matter that a bot (instead of a human) is speaking such that we should have a government mandate to force disclosure?”

The digital rights non-profit argues the law raises ‘significant free speech concerns’ and could mark the start of a long debate over what rights machines should have.

Do you think AIs should be forced to reveal themselves as not human?

 

Watch out Google Duplex, Microsoft also has a chatty AI
23 May 2018 – https://news.deepgeniusai.com/2018/05/23/google-duplex-microsoft-chatty-ai/

Not content with being outdone by Google’s impressive (yet creepy) Duplex demo, Microsoft has shown it also has an AI capable of making human-like phone calls.

The company first launched its XiaoIce project back in August 2017. In April, Microsoft said it had achieved full duplexing — the ability to speak and listen at the same time, similar to humans.
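Full duplexing essentially means the agent keeps listening while it is speaking rather than taking strict turns. The sketch below illustrates that structure with two concurrent loops over simulated input; it makes no claims about how XiaoIce actually implements it, and the phrases are placeholders.

```python
import asyncio

async def listen(incoming: asyncio.Queue) -> None:
    """Simulated listener: keeps receiving 'audio' even while the bot is speaking."""
    for heard in ["hello?", "actually, make that two people", "thanks, bye"]:
        await asyncio.sleep(0.5)          # stand-in for capturing live audio
        await incoming.put(heard)
    await incoming.put(None)              # signal the end of the call

async def speak(incoming: asyncio.Queue) -> None:
    """Simulated speaker: responds as input arrives instead of waiting for a full turn."""
    while True:
        heard = await incoming.get()
        if heard is None:
            break
        print(f"heard: {heard!r} -> responding without waiting for silence")

async def call() -> None:
    incoming: asyncio.Queue = asyncio.Queue()
    # Running both loops concurrently is the 'full duplex' part.
    await asyncio.gather(listen(incoming), speak(incoming))

if __name__ == "__main__":
    asyncio.run(call())
```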

Microsoft’s announcement was made before Google’s demonstration earlier this month but, unlike Google, the company had nothing to show at the time.

XiaoIce has now been demonstrated in action during a London event:

The chatbot is only available in China at this time, but it’s become incredibly popular with more than 500 million users.

XiaoIce also features over 230 skills and has been used for tasks such as creating news content and hosting radio programs as part of its ‘Content Creation Platform’.

In a blog post, Microsoft VP of AI Harry Shum revealed that more than 600,000 people have spoken on the phone with XiaoIce since it launched in August.

“Most intelligent agents today like Alexa or Siri focus on IQ or task completion, providing basic information like weather or traffic,” wrote Shum. “But we need agents and bots to balance the smarts of IQ with EQ – our emotional intelligence.”

“When we communicate, we use tone of voice, word play, and humour, things that are very difficult for computers to understand. However, Xiaoice has the ability to have human-like verbal conversations, which the industry calls full duplex.”

As many have called for since the Duplex demo, and Google has promised, Microsoft ensures a human participant is aware they’re speaking to an AI.

One thing we’d love to see is a conversation between XiaoIce and Google Duplex to see how well they each hold up. However, let’s keep our hands on the kill switch in case world domination becomes a topic.

What are your thoughts on conversational AIs like XiaoIce and Duplex?

 

Huawei wants to develop the first digital assistant with emotions
23 April 2018 – https://news.deepgeniusai.com/2018/04/23/huawei-first-digital-assistant-emotions/

Technology giant Huawei wants to develop the first digital assistant which evokes an emotional bond with the user to offer a more personal experience.

“We want to introduce emotional interactions,” said Felix Zhang, VP of Software Engineering at Huawei, in an interview with CNBC. “We believe that in the future all of our end users will want to interact with the system more passionately.”

If the movie ‘Her’ comes to mind when hearing about Huawei’s plans, that’s no accident: executives said they were inspired by the film. The protagonist in ‘Her’ falls in love with his digital assistant, who adapts to his emotional needs.

Today’s interactions with digital assistants like Siri are quick but emotionless and scripted experiences. Huawei wants their future assistant to be able to continue a conversation longer for a more natural and personal discussion.

“Huawei’s new digital assistant, powered by artificial intelligence, will try to continue the talks as long as possible so that the user does not feel he is alone,” said Editor Lu, Director of AI at Huawei’s consumer business group.

The company’s priority continues to be improving the intelligence of its assistant to ensure it’s able to carry out tasks without a user having to touch their devices in many cases.

“The first step is to give your assistant a high IQ, and then you have to give him a high percentage of EQ emotions,” continues Lu.

Prioritising intelligence makes sense; nobody wants a chatty assistant — digital or otherwise — who ultimately cannot do their job.
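As a purely illustrative sketch of that IQ-then-EQ ordering, the hypothetical assistant below completes the task first and then adjusts the tone of its reply based on a crude sentiment check; none of this reflects Huawei’s actual implementation, and the keyword list is invented.

```python
def detect_sentiment(message: str) -> str:
    """Crude keyword-based sentiment check standing in for a real emotion model."""
    negative = {"frustrated", "annoyed", "sad", "angry", "tired"}
    if any(word in message.lower() for word in negative):
        return "negative"
    return "neutral"

def complete_task(request: str) -> str:
    """The 'IQ' part: actually do what was asked (simulated here)."""
    return f"Done: {request}."

def respond(message: str, request: str) -> str:
    """The 'EQ' part: wrap the result in a tone that matches the user's mood."""
    result = complete_task(request)
    if detect_sentiment(message) == "negative":
        return f"Sorry today has been rough. {result} Anything else I can take off your plate?"
    return result

if __name__ == "__main__":
    print(respond("I'm so tired after work", "set an alarm for 7am"))
    print(respond("Morning!", "set an alarm for 7am"))
```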

Do you think adding emotions to digital assistants is a good idea?

 
