microsoft ai – AI News

Watch out Google Duplex, Microsoft also has a chatty AI
https://news.deepgeniusai.com/2018/05/23/google-duplex-microsoft-chatty-ai/
Wed, 23 May 2018

Not to be outdone by Google's impressive (yet creepy) Duplex demo, Microsoft has shown it also has an AI capable of making human-like phone calls.

Microsoft first launched XiaoIce's voice-calling capability back in August 2017. In April, the company said it had achieved full duplexing — the ability to speak and listen at the same time, as humans do.

Microsoft’s announcement was made before Google’s demonstration earlier this month but, unlike Google, the company had nothing to show at the time.

XiaoIce has now been demonstrated in action during a London event.

The chatbot is only available in China at this time, but it’s become incredibly popular with more than 500 million users.

XiaoIce also features over 230 skills and has been put to work on tasks such as writing news stories and hosting radio programmes as part of its 'Content Creation Platform'.

In a blog post, Microsoft VP of AI Harry Shum revealed that more than 600,000 people have spoken on the phone with XiaoIce since the calling feature launched in August.

“Most intelligent agents today like Alexa or Siri focus on IQ or task completion, providing basic information like weather or traffic,” wrote Shum. “But we need agents and bots to balance the smarts of IQ with EQ – our emotional intelligence.”

“When we communicate, we use tone of voice, word play, and humour, things that are very difficult for computers to understand. However, Xiaoice has the ability to have human-like verbal conversations, which the industry calls full duplex.”
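
To make the full duplex idea more concrete, here is a minimal sketch of how such an agent can be structured, with listening, response planning, and speaking all running concurrently rather than taking strict turns. It is purely illustrative: the function names, queues, and placeholder calls are our own stand-ins and do not reflect how XiaoIce is actually built.

```python
# Hypothetical sketch of a full-duplex conversational loop: the agent keeps
# listening even while it is deciding what to say and speaking. The helpers
# below are placeholders, not a real speech-recognition or text-to-speech API.
import asyncio

async def listen(incoming: asyncio.Queue) -> None:
    """Continuously 'transcribe' the caller, even while the agent is talking."""
    while True:
        await asyncio.sleep(1.0)                    # stand-in for streaming speech recognition
        await incoming.put("partial transcript")

async def plan(incoming: asyncio.Queue, outgoing: asyncio.Queue) -> None:
    """Decide what to say next based on what has been heard so far."""
    while True:
        heard = await incoming.get()
        await outgoing.put(f"response to: {heard}")

async def speak(outgoing: asyncio.Queue) -> None:
    """Play queued responses; a real system could interrupt this if the caller barges in."""
    while True:
        utterance = await outgoing.get()
        print("agent:", utterance)                  # stand-in for streaming text-to-speech

async def full_duplex_call() -> None:
    incoming, outgoing = asyncio.Queue(), asyncio.Queue()
    # Running all three loops at once is the essence of full duplex:
    # listening never pauses while a response is generated or spoken.
    await asyncio.gather(listen(incoming), plan(incoming, outgoing), speak(outgoing))

# asyncio.run(full_duplex_call())  # runs until interrupted
```

In a turn-based assistant, by contrast, the listening loop pauses while the agent speaks, which is why those systems struggle with interruptions and overlapping speech.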

As many have called for since the Duplex demo, and as Google has since promised, Microsoft ensures the human participant is aware they're speaking to an AI.

One thing we'd love to see is a conversation between XiaoIce and Google Duplex to find out how well each holds up. However, let's keep our hands on the kill switch in case world domination becomes a topic.

What are your thoughts on conversational AIs like XiaoIce and Duplex?

 

Microsoft dropped some potential deals over AI ethical concerns
https://news.deepgeniusai.com/2018/04/10/microsoft-ai-ethical-concerns/
Tue, 10 Apr 2018

According to a director at Microsoft Research Labs, the company has dropped some potential deals with customers over ethical concerns that its AI technology may be misused.

Eric Horvitz made the revelation while speaking at the Carnegie Mellon University – K&L Gates Conference on Ethics and AI in Pittsburgh. He said the group at Microsoft that reviews possible misuse on a case-by-case basis is the Aether Committee ("Aether" stands for AI and Ethics in Engineering and Research).

“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’”

Horvitz, of course, did not reveal the specific companies with which Microsoft decided not to strike a deal. However, it's pleasing to hear the company is putting ethics above money when it comes to artificial intelligence; any abuses would be widely covered and could hamper the technology's potential.

Amidst the fallout from the Facebook and Cambridge Analytica scandal, in which harvested user data was used to target voters during the 2016 U.S. presidential campaign, people are naturally more wary of anything which involves mass data analysis.

Manipulating votes is one of the key AI-abuse concerns Horvitz raised, along with human rights violations, an increased risk of physical harm, and blocked access to critical services and resources.

We've also seen the reverse: AI itself being manipulated, and at Microsoft itself, no less. The company's now-infamous 'Tay' chatbot was taught by people online to spew racist comments. "It's a great example of things going awry," Horvitz acknowledged.

Horvitz wants AI to complement humans rather than replace them, often serving as a backstop for human decisions. It could still be invoked, however, for tasks where a human would not be as effective.

For example, Horvitz highlights a program from Microsoft AI that helps caregivers to identify patients most at risk of being readmitted to a hospital within 30 days. Scholars who assessed the program determined that it could reduce rehospitalisations by 18 percent while cutting a hospital’s costs by nearly 4 percent.
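
As a rough illustration of the kind of system described above, the sketch below trains a generic 30-day readmission-risk classifier on synthetic data and ranks patients by predicted risk so caregivers can prioritise follow-up. It is not Microsoft's model; the features, labels, and library choices are assumptions made purely for the example.

```python
# Illustrative only: a generic readmission-risk classifier on synthetic data.
# This is not Microsoft's system; the features and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features per patient: age, length of stay, prior admissions, comorbidity score.
X = rng.normal(size=(1000, 4))
# Synthetic label: 1 = readmitted within 30 days, driven by a made-up linear rule plus noise.
y = (X @ np.array([0.4, 0.8, 1.2, 0.9]) + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank held-out patients by predicted risk so caregivers can prioritise follow-up calls.
risk = model.predict_proba(X_test)[:, 1]
top_10 = np.argsort(risk)[::-1][:10]
print("Ten highest-risk patients (test-set row indices):", top_10)
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
```

In practice such a tool would only flag patients for human review, in line with Horvitz's framing of AI as a backstop for human decisions rather than a replacement for them.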

The comments made by Horvitz once again highlight the need for AI companies to ensure their approach is responsible and ethical. The opportunities are endless if AI is developed properly, but it could just as easily lead to disaster if not.

Update: A previous headline, 'Microsoft has dropped some deals over AI ethical concerns', was misconstrued as meaning the company dropped existing deals. It has been updated to reflect that Microsoft decided against some possible future partnerships over ethical concerns.

What are your thoughts on Microsoft’s approach to AI ethics?

 
