digital assistant – AI News
https://news.deepgeniusai.com

Meena is Google’s first truly conversational AI
https://news.deepgeniusai.com/2020/01/29/meena-google-truly-conversational-ai/
Wed, 29 Jan 2020 14:59:17 +0000

With an AI project called Meena, Google is attempting to build the first digital assistant that can truly hold a conversation.

Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards more natural conversation with its Duplex project, but Meena should offer another leap forward.

Meena is a neural network with 2.6 billion parameters. Google claims Meena is able to handle multiple turns in a conversation (everyone has that friend who goes off on multiple tangents during the same conversation, right?).

Google published its work on the e-print repository arXiv on Monday in a paper titled “Towards a Human-like Open-Domain Chatbot”.

Meena is built on a variation of the Transformer, a neural network architecture Google released in 2017 that underpins many of the best language models available, and was trained on some 40 billion English words.
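For readers unfamiliar with the architecture, the sketch below shows a single, minimal Transformer decoder block in PyTorch. It is purely illustrative: Meena uses a far larger variant of the design (2.6 billion parameters), and none of this code comes from Google.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """A minimal Transformer decoder block: self-attention plus a feed-forward
    sub-layer, each wrapped in a residual connection and layer normalisation."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, causal_mask=None):
        attn_out, _ = self.attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + attn_out)       # residual connection + layer norm
        return self.norm2(x + self.ff(x))  # feed-forward sub-layer

# Example: a batch of 4 sequences, 16 tokens each, 512-dimensional embeddings.
block = DecoderBlock()
tokens = torch.randn(4, 16, 512)
print(block(tokens).shape)  # torch.Size([4, 16, 512])
```

A full conversational model stacks many such blocks and trains them to predict the next token of a dialogue.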

Google also debuted a metric alongside Meena called Sensibleness and Specificity Average (SSA), which measures the ability of agents to maintain a conversation. Human raters judge each response on whether it makes sense in context and whether it is specific to that context, and SSA averages the two rates.

Meena scores 79 percent using the new SSA metric. For comparison, Mitsuku – a Loebner Prize-winning AI agent developed by Pandorabots – scored 56 percent.

That result brings Meena’s conversational ability close to that of humans, who score around 86 percent on average using the SSA metric.
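As a rough illustration of how an SSA-style score could be computed from those human labels, here is a short sketch. The function and sample data are ours, not Google’s; they simply follow the description of the metric above.

```python
def ssa(labels):
    """Sensibleness and Specificity Average (SSA), as a percentage.

    Each element of `labels` is a (sensible, specific) pair of 0/1 human
    judgements for one chatbot response. SSA is the mean of the overall
    sensibleness rate and the overall specificity rate.
    """
    sensibleness = sum(s for s, _ in labels) / len(labels)
    specificity = sum(p for _, p in labels) / len(labels)
    return 100 * (sensibleness + specificity) / 2

# Four rated responses: sensibleness 75%, specificity 50% -> SSA 62.5%
print(ssa([(1, 1), (1, 0), (1, 1), (0, 0)]))
```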

We don’t yet know when Google intends to debut Meena’s technology in its products but, as the digital assistant war heats up, we’re sure the company is as eager to release it as we are to use it.


Huawei discusses AI strategy with us at the Mate 20 launch
https://news.deepgeniusai.com/2018/10/22/huawei-ai-strategy-mate-20/
Mon, 22 Oct 2018 16:58:20 +0000

During last week’s Mate 20 Pro launch, AI News discussed Huawei’s AI strategy with the company’s president of software engineering.

Dr Chenglu Wang has been with Huawei for over four years and has overseen the integration of AI into the company’s products.

HiAI is Huawei’s open mobile AI platform, which consists of three layers:

  • Application – Focuses on enabling AI for apps to make them more intelligent and powerful.
  • Chip – Aims to achieve optimal performance with heterogeneous scheduling and NPU acceleration.
  • Service – Represents the company’s cloud-based services.

Together, they offer the following capabilities:

  • Computer Vision (CV) Engine – CV is the set of capabilities by which computers simulate the human visual system to sense the surrounding environment and to determine, recognise, and understand space. The capabilities include image super-resolution, facial recognition, and object recognition.
  • Automatic Speech Recognition (ASR) Engine – ASR converts human speech into text so that computers can parse and understand it further. The capabilities include speech recognition, speech conversion, and text-to-speech (TTS).
  • Natural Language Understanding (NLU) Engine – NLU works together with the ASR engine to let apps understand human speech or text and respond with natural actions. The capabilities include word segmentation, text entity recognition, emotive tendency (sentiment) analysis, and machine translation. A rough sketch of how these engines chain together follows this list.
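To make the layering concrete, here is a minimal sketch of how an app might chain an ASR engine into an NLU engine. The classes and method names are hypothetical placeholders for illustration only; they are not the actual HiAI SDK API.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """Structured result of natural language understanding."""
    name: str
    entities: dict = field(default_factory=dict)

class SpeechRecognizer:
    """Stand-in for an ASR engine: audio in, text out."""
    def transcribe(self, audio: bytes) -> str:
        raise NotImplementedError("Replace with a real ASR engine")

class LanguageUnderstanding:
    """Stand-in for an NLU engine: text in, intent and entities out."""
    def parse(self, text: str) -> Intent:
        raise NotImplementedError("Replace with a real NLU engine")

def handle_utterance(audio: bytes,
                     asr: SpeechRecognizer,
                     nlu: LanguageUnderstanding) -> Intent:
    """Chain the engines: speech in, structured intent out."""
    text = asr.transcribe(audio)   # e.g. "set an alarm for seven"
    return nlu.parse(text)         # e.g. Intent("set_alarm", {"time": "07:00"})
```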

According to Wang, the adoption of the HiAI platform is meeting Huawei’s expectations. However, some features – such as ASR and NLU – are still locked to China.

When asked when more of HiAI’s features will expand to other regions, Wang responded:

“Huawei’s consumer cloud is not so popular globally. However, this year we will launch some consumer services in Europe so maybe we can see more deployed globally… maybe we can get some alignment with China.”

Last year, we saw Huawei debut the world’s first smartphone AI chipset – the Kirin 970 – in the Mate 10. The chip enabled features such as limited automatic camera scene selection, improved background noise reduction in calls, and pixel quality enhancement when taking pictures of documents.

Huawei’s next flagship, the P20 Pro, improved on the automatic camera scene selection to recognise 500+ scenarios across 19 categories. The company also introduced AIS (AI Image Stabilisation), which uses machine learning algorithms to predict and counteract shaky movements on a frame-by-frame basis.

This year, with the Mate 20, Huawei has debuted the Kirin 980, which boasts the world’s first dual NPU (Neural Processing Unit). Huawei claims it offers an incredible 226 percent improvement over its predecessor.

Our first question to Wang was whether the Kirin 980’s extra performance had allowed Huawei to do anything it couldn’t with the 970. Wang couldn’t provide any examples and even said: “It’s almost the same”.

When asked whether that means the Mate 20’s AI features will be coming to last year’s model, Wang said they will.

However, a slide shared by the company gives more detail about the benefits of switching from a single NPU to a dual-NPU design:


As mentioned in our video review of the Mate 20 Pro’s AI features at the bottom of this article, and confirmed by the above slide, real-time video processing takes a lot of power. It will be interesting to see how the Kirin 970 handles features such as the real-time AI colour video effect if they truly are coming to last year’s devices.

In recent weeks, Huawei pushed an update which switched off the ‘Master AI’ automatic camera scene recognition feature that debuted in the Mate 10. The feature can be re-enabled in the settings but is now off by default.

Master AI was a focal point of both the Mate 10 and P20 launches, and we always felt it served as a great example of how AI can make life simpler. The feature generally provided better results when taking a picture unless the user had the time, and the know-how, to change settings manually on a scene-by-scene basis.

When asked why Huawei took the decision to switch off such a prominent feature, Wang responded:

“Master AI is Huawei’s first try to use an AI-enabled camera. After we launched this functionality, they don’t like the phone… so we’re changing strategy. We give the basic capability and give this feature as an option, not just automatically.”

The explanation makes some amount of sense. As techies, it can sometimes be difficult for us to see things from the perspective of a standard consumer. The average person, however, often just wants a phone with a camera that works as they expect.

Back in April, Huawei VP of Software Engineering Felix Zhang said the company wants to introduce the first digital assistant with ‘emotional interactions’.

Many industry leaders are working towards such a landmark moment, but Zhang provided no timeline for when Huawei expects to launch its own. We asked Wang when he expects such a digital assistant to become available.

“From a software view, it’s still a very big gap,” he said. “Maybe two or three years if the industry can work together.”

You can find our video showing the Mate 20’s AI features below:


Huawei wants to develop the first digital assistant with emotions
https://news.deepgeniusai.com/2018/04/23/huawei-first-digital-assistant-emotions/
Mon, 23 Apr 2018 15:32:45 +0000

Technology giant Huawei wants to develop the first digital assistant that evokes an emotional bond with the user, offering a more personal experience.

“We want to introduce emotional interactions,” said Felix Zhang, VP of Software Engineering at Huawei, in an interview with CNBC. “We believe that in the future all of our end users will want to interact with the system more passionately.”

If the movie ‘Her’ comes to mind when hearing about Huawei’s plans, that is no coincidence: executives said they were inspired by the film. The protagonist in Her falls in love with his digital assistant, who adapts to his emotional needs.

Today’s interactions with digital assistants like Siri are quick but emotionless, scripted experiences. Huawei wants its future assistant to be able to sustain a conversation for longer, making for a more natural and personal discussion.

“Huawei’s new digital assistant, powered by artificial intelligence, will try to continue the talks as long as possible so that the user does not feel he is alone,” said Lu, director of AI at Huawei’s consumer business group.

The company’s priority continues to be improving the intelligence of its assistant to ensure it can carry out tasks, in many cases without the user having to touch their device.

“The first step is to give your assistant a high IQ, and then you have to give him a high percentage of EQ emotions,” Lu continued.

Prioritising intelligence makes sense: nobody wants a chatty assistant, digital or otherwise, who ultimately cannot do their job.

Do you think adding emotions to digital assistants is a good idea?

 
