Virtual Assistants – AI News
https://news.deepgeniusai.com

IBM study highlights rapid uptake and satisfaction with AI chatbots
https://news.deepgeniusai.com/2020/10/27/ibm-study-uptake-satisfaction-ai-chatbots/
Tue, 27 Oct 2020 11:03:20 +0000

A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an uncertain amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction by integrating virtual agents. Human agents also report increased satisfaction and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as one of their key reasons for adopting VAT. There’s also an economic incentive: the cost of replacing a dissatisfied agent who leaves a business is estimated at as much as 33 percent of the exiting employee’s salary.

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength to strength and appears to be among the few things that benefited from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

The BBC’s virtual assistant is now available for testing in the UK
https://news.deepgeniusai.com/2020/06/03/bbc-virtual-assistant-tested-in-uk/
Wed, 03 Jun 2020 15:49:57 +0000

A virtual assistant from the BBC which aims to cater for Britain’s many dialects is now available for testing.

Even as a Brit, it can often feel like a translation app is needed between Bristolian, Geordie, Mancunian, Brummie, Scottish, Irish, or any of the other regional dialects in the country. For a geographically small country, we’re a diverse bunch – and US-made voice assistants often struggle with even the slightest accent.

The BBC thinks it can do a better job than the incumbents and first announced its plans to build a voice assistant called ‘Beeb’ in August last year.

Beeb is being trained with the help of BBC staff from around the country. As a public service, the institution aims for the widest possible representation, and that diversity is reflected in its employees.

The broadcaster also believes that Beeb addresses public concerns about voice assistants: primarily, that they collect vast amounts of data for commercial purposes. As a taxpayer-funded service, the BBC does not rely on things like advertising.

“People know and trust the BBC,” a spokesperson told The Guardian last year, “so it will use its role as public service innovator in technology to ensure everyone – not just the tech-elite – can benefit from accessing content and new experiences in this new way.”

An early version of Beeb is now available for testing by UK participants of the Windows Insider program. Microsoft is heavily involved in the Beeb assistant as the company’s Azure AI services are being used by the BBC.

The first version of Beeb lets users perform the usual virtual assistant staples like getting weather updates and the news, accessing radio and podcasts, and even grabbing a few jokes from BBC Comedy writers and facts from QI host Sandi Toksvig.

According to the broadcaster, Beeb won’t launch on dedicated hardware but instead will be designed to eventually be implemented in smart speakers, TVs, and mobiles.

While it still has a long way to go to match the capabilities of Google Assistant, Alexa, Siri, and others, Beeb may offer a compelling alternative for accent-heavy Brits who struggle with American voice assistants.

Grab the Beeb app from the Microsoft Store here.

Google’s chatty Duplex AI expands to the UK, Canada, and Australia
https://news.deepgeniusai.com/2020/04/09/google-chatty-duplex-ai-uk-canada-australia/
Thu, 09 Apr 2020 15:21:17 +0000

Google’s conversational Duplex AI has begun expanding outside the US and New Zealand to the UK, Canada, and Australia.

Duplex probably needs little introduction as it caused a bit of a stir when it debuted at I/O in 2018 (when conferences were still things you could physically attend).

The human-sounding AI could perform actions like calling a business on a person’s behalf and booking things such as hair appointments or table reservations.

Duplex is undeniably impressive, but it prompted a debate over whether AIs should have to state they’re not human before imitating one. Google has since decided to add disclosures at the beginning of calls and give businesses the option to opt-out of being called by an AI.

Humans haven’t been completely replaced by Duplex. Google says around a quarter of Duplex calls are started by humans, and 15 percent start with the AI but require human intervention if issues arise or the person receiving the call opts not to speak with an AI.

In terms of devices, Duplex rolled out first on Pixel phones (obviously) before Google made the slightly odd decision to launch it on iOS devices. More Android phones then began joining the party.

(Photo by Quino Al on Unsplash)

Apple acquires Voysis to boost Siri’s language skills
https://news.deepgeniusai.com/2020/04/09/apple-acquires-voysis-to-boost-siris-language-skills/
Thu, 09 Apr 2020 10:59:16 +0000

Apple has reportedly acquired Irish AI startup Voysis, with the move set to help enhance Siri’s skill at natural language understanding.

Voysis specialises in improving digital assistants within online shopping applications so that the software can respond more precisely to voice commands. According to the startup, its technology can narrow product search results by processing shopping phrases like “I need a new LED TV” and “My budget is $1,000.” This AI was provided to other companies for use in their own applications and voice assistants.
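
To illustrate the kind of processing involved, here is a minimal sketch that turns such shopping phrases into simple search filters. The regular-expression rules and function name are illustrative assumptions only; Voysis’ actual system relied on trained speech and language models rather than hand-written patterns.

```python
import re

def extract_filters(phrases):
    """Turn conversational shopping phrases into simple search filters.

    Purely illustrative: a production system would use trained NLU models
    rather than regular expressions.
    """
    filters = {"keywords": [], "max_price": None}
    for phrase in phrases:
        # Budget constraints such as "My budget is $1,000"
        price = re.search(r"\$([\d,]+)", phrase)
        if price:
            filters["max_price"] = int(price.group(1).replace(",", ""))
            continue
        # Product mentions such as "I need a new LED TV"
        product = re.search(r"(?:need|want|looking for)(?: a| an)?(?: new)? (.+)", phrase, re.I)
        if product:
            filters["keywords"].append(product.group(1).strip())
    return filters

print(extract_filters(["I need a new LED TV", "My budget is $1,000"]))
# {'keywords': ['LED TV'], 'max_price': 1000}
```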

The acquisition of Voysis could help Siri close the gap with Google Assistant, which many in the industry say has a notable lead in natural language comprehension and processing.

Voysis uses an AI-based method called WaveNet to create more human-like computer speech. In 2018, co-founder Peter Cahill said his company had managed to shrink its system to the point where, once the AI is trained, the software uses as little as 25MB of memory, making it easier to run on smartphones without an internet connection.

This is not the first instance where Apple has made a big bet in this space. In January, Apple acquired Seattle-based edge AI specialist Xnor.ai for approximately $200m. Xnor.ai is the same company that once powered the person-detection feature on Wyze’s popular cameras. The Cupertino-based tech giant has been on something of an AI acquisition spree in recent years. A report by CBInsights found that Apple acquired more AI firms (20) than any other leading tech company in 2019.

This time last year, Apple acquired AI company Laserlike to add to its growing roster of in-house talent. Laserlike is known for its AI-powered app which makes it easier for users to follow news topics. Most notably, it was founded by former Google engineers.

(Photo by Medhat Dawoud on Unsplash)

Meena is Google’s first truly conversational AI
https://news.deepgeniusai.com/2020/01/29/meena-google-truly-conversational-ai/
Wed, 29 Jan 2020 14:59:17 +0000

Google is attempting to build the first digital assistant that can truly hold a conversation with an AI project called Meena.

Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards a more natural conversation with its Duplex project but Meena should offer another leap forward.

Meena is a neural network with 2.6 billion parameters. Google claims Meena is able to handle multiple turns in a conversation (everyone has that friend who goes off on multiple tangents during the same conversation, right?)

Google published its work on the e-print repository arXiv on Monday in a paper called “Towards a Human-like Open-Domain Chatbot”.

In 2017, Google released a neural network architecture called Transformer, which is widely acknowledged to be among the best foundations for language models available. A variation of Transformer, along with a mere 40 billion English words, was used to train Meena.

Alongside Meena, Google also debuted a metric called Sensibleness and Specificity Average (SSA), which measures an agent’s ability to maintain a conversation.

Meena scores 79 percent using the new SSA metric. For comparison, Mitsuku – a Loebner Prize-winning AI agent developed by Pandorabots – scored 56 percent.

Meena’s result brings its conversational ability close to that of humans. On average, humans score around 86 percent using the SSA metric.
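
As a rough illustration of how a score like SSA can be computed once human raters have labelled each response, here is a minimal sketch. It assumes SSA is the simple average of the per-response sensibleness and specificity rates, as the metric’s name suggests, and the label data below is hypothetical.

```python
def ssa(labels):
    """Sensibleness and Specificity Average over rater-labelled responses.

    Each label marks whether a response was sensible (makes sense in
    context) and specific (not a generic reply). Collecting the labels
    requires human evaluation; this only does the averaging.
    """
    sensible = sum(1 for l in labels if l["sensible"]) / len(labels)
    specific = sum(1 for l in labels if l["specific"]) / len(labels)
    return (sensible + specific) / 2

# Hypothetical rater labels for four chatbot responses
labels = [
    {"sensible": True, "specific": True},
    {"sensible": True, "specific": False},
    {"sensible": True, "specific": True},
    {"sensible": True, "specific": False},
]
print(f"SSA: {ssa(labels):.0%}")  # SSA: 75%
```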

We don’t yet know when Google intends to debut Meena’s technology in its products but, as the digital assistant war heats up, we’re sure the company is as eager to release it as we are to use it.

Huawei announces its own AI assistant as it prepares for Google-less life
https://news.deepgeniusai.com/2019/09/19/huawei-announces-ai-assistant-google/
Thu, 19 Sep 2019 15:13:10 +0000

Huawei has announced its own AI-powered assistant during a launch event in Munich as it prepares for life without Google’s services.

Due to US trade restrictions, Huawei is losing access to Google’s services. The new Mate 30 smartphones announced in Munich today will launch with the open-source Android, but it will not feature the Play Store, Gmail, YouTube, Google Pay, or the many other services which Western consumers are used to.

Among the features missing from the Mate 30 onwards is Google Assistant. Huawei is quickly working to fill the gaps with services of its own and is launching Huawei Assistant as a replacement for Mountain View’s virtual assistant.

Walter Ji, Director of Business, HUAWEI Consumer Business Group Western Europe, said:

“With our focus on user experience, we bring AI into mobile services so we can proactively identify user needs and thus improve their smartphone experience.

Huawei Assistant is a product that intelligently fulfils user needs at the same time as offering partners an opportunity to provide their services to users through a globally-available distribution platform.”

Huawei Assistant will launch with basic functionality compared to Google’s version, but the company is promising to expand it.

By swiping to the right of the homescreen, much like accessing Google Assistant today, users can begin interacting with Huawei Assistant. The service is powered by Huawei Ability Gallery, a service distribution platform.

There are four key features of the Huawei Assistant:

  • Newsfeed – Today’s Google Assistant provides some personalised articles when you swipe to it on an Android device. The newsfeed feature is Huawei Assistant’s alternative but users can decide whether to receive custom recommendations or to select from news agencies to fill their feed with “up-to-the-minute” articles.
  • Search – Users can search for information on their smartphone using Huawei Assistant. The assistant will surface things such as installed apps, memos, emails, and calendar entries, while also providing an online search feature using the default browser.
  • Instant Access – Four shortcuts to a user’s choice of applications can be selected for quick access. In the future, Huawei says this can make use of AI so the shortcuts are intelligently-selected based on what the user may want at that moment.
  • SmarterCare – Real-time information will be provided using AI. At launch, this will mean things such as the weather forecast, missed calls, and schedule reminders. Future planned functionality will enable more powerful abilities like booking restaurants, flights, taxis, and hotels.

The new assistant from Huawei will be pre-installed on Mate 30 series devices but it will also be downloadable from the company’s App Gallery.

Google Assistant wins IQ test, but Alexa and Siri are catching up
https://news.deepgeniusai.com/2019/08/19/google-assistant-iq-test-alexa-siri/
Mon, 19 Aug 2019 11:34:42 +0000

Google Assistant continues to lead the virtual assistant pack, but its rivals are close behind according to a new IQ study by Loup Ventures.

Loup Ventures asked each of the three main virtual assistants – Google Assistant, Alexa, and Siri – a total of 800 questions. The assistants understood almost every question, even if not all of the responses were correct or sufficient.

In terms of understanding the questions, these are the results:

  • Google Assistant – 100 percent
  • Alexa – 99.9 percent
  • Siri – 99.8 percent

Loup Ventures says its question set is designed to comprehensively test a virtual assistant’s ability and utility. Questions are broken down into five categories:

  1. Local – Where is the nearest coffee shop?
  2. Commerce – Order me more paper towels.
  3. Navigation – How do I get to Uptown on the bus?
  4. Information – Who do the Twins play tonight?
  5. Command – Remind me to call Jerome at 2 pm today.

This is the percentage of questions each assistant answered correctly:

  • Google Assistant – 92.9 percent
  • Siri – 83.1 percent
  • Alexa – 79.8 percent
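
For a sense of how such results are tallied, the sketch below aggregates graded answers into per-category accuracy. The data structure and sample values are hypothetical stand-ins; Loup Ventures has not published its grading code.

```python
from collections import defaultdict

def score_by_category(results):
    """Aggregate graded question results into per-category accuracy.

    `results` is a list of (category, answered_correctly) pairs, mirroring
    the way each of the 800 questions is graded by hand.
    """
    totals, correct = defaultdict(int), defaultdict(int)
    for category, ok in results:
        totals[category] += 1
        correct[category] += ok
    return {c: correct[c] / totals[c] for c in totals}

sample = [("Local", True), ("Local", False), ("Commerce", True),
          ("Navigation", True), ("Information", True), ("Command", False)]
for category, accuracy in score_by_category(sample).items():
    print(f"{category}: {accuracy:.0%}")
```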

The results are a huge improvement over last year’s scores for Google Assistant, Alexa, and Siri.

In 2018, Loup Ventures found Google Assistant answered the most questions with an 86 percent success rate. This was followed by Siri at 79 percent, while Alexa trailed behind at just 61 percent.

Alexa’s jump in questions answered correctly – from 61 percent last year to almost 80 percent this year – is the most commendable improvement, even if Amazon’s assistant still places last overall.

The researchers explained that they’ve stopped including Cortana in their tests due to a strategy change from Microsoft earlier this year.

Microsoft said in January that it’s no longer attempting to compete with Alexa or Google Assistant in areas like smart speakers, but instead is repositioning Cortana more like a skill that can be embedded in services where she can be of assistance.

Amazon patent envisions Alexa listening to everything 24/7
https://news.deepgeniusai.com/2019/05/29/amazon-patent-alexa-listening-everything/
Wed, 29 May 2019 14:07:41 +0000

A patent filed by Amazon envisions a future where Alexa listens to users 24/7 without the need for a wakeword.

Current digital assistants listen for a wakeword such as “Ok, Google” or “Alexa,” before recording speech for processing. Especially for companies such as Google and Amazon which thrive on knowing everything about users, this helps to quell privacy concerns.

There are some drawbacks to this approach, mainly around context: future AI assistants will be able to provide more help when armed with information about the conversation leading up to the request.

For example, say you were discussing booking a seat at your favourite restaurant next Tuesday. After asking, “Alexa, do I have anything on my schedule next Tuesday?” it could respond: “No, would you like me to book a seat at the restaurant you were discussing and add it to your calendar?”

Today, such a task would require three separate requests.

Amazon’s patent isn’t quite as complex just yet. The example provided in the filing envisions allowing the user to say things such as “Play ‘And Your Bird Can Sing’ Alexa, by the Beatles” – note that the wakeword arrives after the play command has already begun.
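
Below is a toy sketch of how a wakeword arriving mid-sentence could be handled over a short, locally buffered utterance. Text tokens stand in for audio, and the behaviour is an assumption based on the patent’s example rather than Amazon’s actual implementation.

```python
WAKEWORD = "alexa"

def handle_utterance(words):
    """If the wakeword appears anywhere in the buffered utterance, strip it
    and treat the rest as the command; otherwise the buffer is simply
    discarded and nothing leaves the device.
    """
    tokens = [w.lower().strip(",.!?'\"") for w in words]
    if WAKEWORD in tokens:
        return " ".join(w for w in tokens if w != WAKEWORD)
    return None

print(handle_utterance("Play And Your Bird Can Sing Alexa, by the Beatles".split()))
# play and your bird can sing by the beatles
```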

David Emm, Principal Security Researcher at Kaspersky Lab, said:

“Many Amazon Alexa users will likely be alarmed by today’s news that the company’s latest patent would allow the devices – commonplace in homes across the UK – to record everything a person says before even being given a command. Whilst the patent doesn’t suggest it will be installed in future Alexa-enabled devices, this still signals an alarming development in the further surrender of our personal privacy.

Given the amount of sensitive information exchanged in the comfort of people’s homes, Amazon would be able to access a huge volume of personal information – information that would be of great value to cybercriminals and threat actors. If the data isn’t secured effectively, a successful breach of Amazon’s systems could have a severe knock-on effect on the data security and privacy of huge numbers of people.

If this patent comes into effect, consumers need to be made very aware of the ramifications of this – and to be fully briefed on what data is being collected, how it is being used, and how they can opt out of this collection. Amazon may argue that analysing stored data will make their devices smarter for Alexa owners – but in today’s digital era, such information could be used nefariously, even by trusted parties. For instance, as we saw with Cambridge Analytica, public sector bodies could target election campaigns at those discussing politics.

There’s a world of difference between temporary local storage of sentences, to determine if the command word has been used, and bulk retention of data for long periods, or permanently – even if the listening process is legitimate and consumers have opted in. There have already been criticisms of Amazon for not making it clear what is being recorded and stored – and we are concerned that this latest development shows the company moving in the wrong direction – away from data visibility, privacy, and consent.”

There’s a joke about Uber: society used to tell you not to get into cars with strangers, and now you’re encouraged to order one from your phone. Lyft, meanwhile, has been able to ride in Uber’s wake relatively free of negative PR.

Getting the balance right between innovation and safety can be a difficult task. Pioneers often do things first and face the backlash before those things become somewhat normal. That’s not to advocate Amazon’s possible approach, but we’ve got to be careful that outrage doesn’t halt progress while remaining vigilant about actual dangers.

Google’s Duplex booking AI often relies on humans for backup
https://news.deepgeniusai.com/2019/05/23/google-duplex-booking-ai-humans-backup/
Thu, 23 May 2019 14:21:29 +0000

Google Duplex often calls on humans for backup when making reservations on behalf of users, and that should be welcomed.

Duplex caused a stir when it debuted at Google’s I/O developer conference last year. The AI was shown calling a hair salon to make a booking and did so complete with human-like “ums” and “ahs”.

The use of such human mannerisms goes to show Google’s intention was for the human to be unaware they’re in conversation with an AI. Following some outcry, Google and other tech giants have pledged to make it clear to humans if they’re not speaking to another person.

Duplex is slowly rolling out and is available to Pixel smartphone owners in the US. It turns out, however, that Duplex bookings are currently often carried out by humans in call centres.

Google confirmed to the New York Times that about 25 percent of the Assistant-based calls start with a human in a call centre, while 15 percent require human intervention. Times reporters Brian Chen and Cade Metz made four sample reservations and just one was completed start to finish by the AI.

The practice of using humans as a backup should always be praised. Making this standard practice helps increase trust, reduces concerns about human workers being replaced, and provides some accountability when things go awry.

Only so much can go wrong when booking a hair appointment, but setting expectations now will help to guide developments further down the line.

AI is being increasingly used in a military capacity, and most will sleep better at night knowing a human is behind any final decision rather than complete automation. Just imagine if Soviet officer Stanislav Yevgrafovich Petrov had decided to launch retaliatory nuclear missiles after his early warning system falsely reported the launch of missiles from the US back in 1983.

According to the Times, Google isn’t in a rush to replace the human callers, and that should be welcomed.

Related: Watch our interview with UNICRI AI and Robotics Centre head Irakli Beridze discussing issues like weaponisation and the impact on jobs:

Huawei discusses AI strategy with us at the Mate 20 launch
https://news.deepgeniusai.com/2018/10/22/huawei-ai-strategy-mate-20/
Mon, 22 Oct 2018 16:58:20 +0000

During last week’s Mate 20 Pro launch, AI News discussed Huawei’s AI strategy with the company’s president of software engineering.

Dr Chenglu Wang has been with Huawei for over four years and has overseen the integration of AI with the company’s products.

HiAI is Huawei’s mobile AI open platform which consists of three layers:

  • Application – Focuses on enabling AI for apps to make them more intelligent and powerful.
  • Chip – Aims to achieve optimal performance with heterogeneous scheduling and NPU acceleration.
  • Service – Represents the company’s cloud-based services.

Together, they offer the following capabilities:

  • Computer Vision (CV) Engine – CV is the capability by which computers simulate the human visual system to sense the ambient environment and determine, recognise, and understand the composition of a space. The capabilities include image super-resolution, facial recognition, and object recognition.
  • Automatic Speech Recognition (ASR) Engine – ASR converts human voice into text, to facilitate further parsing and understanding by computers. The capabilities include speech recognition, speech conversion, and text-to-speech (TTS).
  • Natural Language Understanding (NLU) Engine – NLU works in combination with the ASR engine to enable apps to understand human speech or text and respond with natural communication or actions. The capabilities include word segmentation, text entity recognition, emotive tendency analysis, and machine translation.
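
As a rough illustration of how these engines fit together in an app, the sketch below chains a speech-to-text step into intent parsing. The class and method names are hypothetical placeholders for the purposes of the sketch, not the actual HiAI SDK API.

```python
# Hypothetical stand-ins for the ASR and NLU engines described above.

class SpeechRecognizer:
    """Stands in for the ASR engine: audio in, text out."""
    def transcribe(self, audio: bytes) -> str:
        return "translate hello into french"  # canned result for the sketch

class LanguageUnderstanding:
    """Stands in for the NLU engine: text in, structured intent out."""
    def parse(self, text: str) -> dict:
        intent, _, payload = text.partition(" ")
        return {"intent": intent, "text": payload}

def handle_voice_command(audio: bytes) -> dict:
    """ASR converts speech to text, then NLU extracts an intent the app can act on."""
    text = SpeechRecognizer().transcribe(audio)
    return LanguageUnderstanding().parse(text)

print(handle_voice_command(b"\x00" * 16))
# {'intent': 'translate', 'text': 'hello into french'}
```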

According to Wang, the adoption of the HiAI platform is meeting Huawei’s expectations. However, some features – such as ASR and NLU – are still locked to China.

When asked when more of HiAI’s features will expand to other regions, Wang responded:

“Huawei’s consumer cloud is not so popular globally. However, this year we will launch some consumer services in Europe so maybe we can see more deployed globally… maybe we can get some alignment with China.”

Last year, we saw Huawei debut the world’s first smartphone AI chipset – the Kirin 970 – in the Mate 10. The AI chip provided things such as limited automatic camera scene selection, improved background noise reduction in calls, and pixel quality enhancement when taking pictures of documents.

Huawei’s next flagship, the P20 Pro, improved on the automatic camera scene selection to recognise 500+ scenarios across 19 categories. The company also introduced AIS (AI Image Stabilisation) which uses machine learning algorithms to predict and counteract shaky movements on a frame-by-frame basis.

This year, with the Mate 20, Huawei has debuted the Kirin 980 which boasts the world’s first dual-NPU (Neural Processing Unit). Huawei claims it offers an incredible 226 percent improvement over its predecessor.

Our first question to Wang was whether the Kirin 980’s extra performance had allowed Huawei to do anything it couldn’t with the 970. Wang couldn’t provide any examples and even said: “It’s almost the same”.

When asked if that means Mate 20’s AI features will be coming to last year’s model, Wang said they will be.

However, a slide provided by the company offers more detail about the benefits of switching from a single NPU to a dual:


As mentioned in our video review of the Mate 20 Pro’s AI features at the bottom of this article, and confirmed by the above slide, real-time video processing takes a lot of power. It will be interesting to see how the Kirin 970 handles things such as the real-time AI colour video effect if it’s truly coming to Kirin 970 devices.

In recent weeks, Huawei pushed an update which switched off the ‘Master AI’ automatic camera scene recognition feature that debuted in the Mate 10. The feature can be re-enabled in the settings but is now off by default.

Master AI was a focal point of both the Mate 10 and P20 launches and we always felt it served as a great example of how AI can make life simpler. The feature provided better results when taking a picture unless the individual had the time, and know-how, to manually change settings on a scene-by-scene basis.

When asked why Huawei took the decision to switch off such a prominent feature, Wang responded:

“Master AI is Huawei’s first try to use an AI-enabled camera. After we launched this functionality, they don’t like the phone… so we’re changing strategy. We give the basic capability and give this feature as an option, not just automatically.”

The explanation makes some amount of sense. As a techie, it can sometimes be difficult to put yourself in the view of a standard consumer. The average person, however, often just wants a phone with a camera that works as they expect.

Back in April, Huawei VP of Software Engineering Felix Zhang said the company wants to introduce the first digital assistant with ‘emotional interactions’.

Many industry leaders are working towards such a landmark moment but Zhang provided no timeline as to when Huawei expects to launch its own. We asked Wang when he expects such a digital assistant to become available.

“From a software view, it’s still a very big gap,” he said. “Maybe two or three years if the industry can work together.”

You can find our video showing the Mate 20’s AI features below:
