bots – AI News
https://news.deepgeniusai.com – Artificial Intelligence News
Fri, 30 Oct 2020 09:15:28 +0000

IBM study highlights rapid uptake and satisfaction with AI chatbots
https://news.deepgeniusai.com/2020/10/27/ibm-study-uptake-satisfaction-ai-chatbots/
Tue, 27 Oct 2020 11:03:20 +0000

The post IBM study highlights rapid uptake and satisfaction with AI chatbots appeared first on AI News.

A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an uncertain amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction by integrating virtual agents. Human agents also report increased satisfaction and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as being among their key reasons for adopting VAT. There’s also an economic incentive, with the cost of replacing a dissatisfied agent who leaves a business estimated at as much as 33 percent of the exiting employee’s salary.

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength to strength and appears to have been among the few things that benefited from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

Babylon Health lashes out at doctor who raised AI chatbot safety concerns
https://news.deepgeniusai.com/2020/02/26/babylon-health-doctor-ai-chatbot-safety-concerns/
Wed, 26 Feb 2020 17:24:08 +0000

The post Babylon Health lashes out at doctor who raised AI chatbot safety concerns appeared first on AI News.

Controversial healthcare app maker Babylon Health has criticised the doctor who first raised concerns about the safety of their AI chatbot.

Babylon Health’s chatbot is available in the company’s GP at Hand app, a digital healthcare solution championed by health secretary Matt Hancock that has also been integrated into Samsung Health since last year.

The chatbot aims to reduce the burden on GPs and A&E departments by automating the triage process to determine whether someone can treat themselves at home, should book an online or in-person GP appointment, or go straight to a hospital.
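The triage flow described above can be sketched as a rule-based router. The sketch below is purely illustrative: the symptom flags and categories are invented for the example and bear no relation to Babylon Health’s actual model.

```python
# Toy symptom-triage router. Illustrative only -- the flags and rules
# here are invented, not Babylon Health's actual triage logic.

RED_FLAGS = {"chest pain", "difficulty breathing", "severe bleeding"}
GP_FLAGS = {"persistent cough", "rash", "mild fever"}

def triage(symptoms: set[str]) -> str:
    """Route a patient to self-care, a GP appointment, or emergency care."""
    if symptoms & RED_FLAGS:
        return "emergency"        # straight to hospital / emergency number
    if symptoms & GP_FLAGS:
        return "gp_appointment"   # book an online or in-person consultation
    return "self_care"            # advice for treating at home

print(triage({"chest pain", "nausea"}))   # emergency
print(triage({"rash"}))                   # gp_appointment
```

Any real system sits at the opposite end of the complexity scale, but the routing decision it must make is exactly this three-way split.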

A Twitter user under the pseudonym of Dr Murphy first reached out to us back in 2018 alleging that Babylon Health’s chatbot was giving unsafe advice. Dr Murphy recently revealed himself to be Dr David Watkins and went public with his findings at The Royal Society of Medicine’s “Recent developments in AI and digital health 2020” event, in addition to appearing in a BBC Newsnight report.

Over the past couple of years, Dr Watkins has provided many examples of the chatbot giving dangerous advice. In one example, the chatbot suggested that an obese 48-year-old heavy smoker presenting with chest pains book a consultation “in the next few hours”. Anyone with any common sense would have said to dial an emergency number straight away.

This particular issue has since been rectified but Dr Watkins has highlighted many further examples over the years which show, very clearly, there are serious safety issues.

In a press release (PDF) on Monday, Babylon Health calls Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

According to the release, Dr Watkins has conducted 2,400 tests of the chatbot in a bid to discredit the service while raising “fewer than 100 test results which he considered concerning”.

Babylon Health claims that in just 20 cases did Dr Watkins find genuine errors while others were “misrepresentations” or “mistakes,” according to Babylon’s own “panel of senior clinicians” who remain unnamed.

Speaking to TechCrunch, Dr Watkins called Babylon’s claims “utterly nonsense” and questioned where the startup got its figures from, as “there are certainly not 2,400 completed triage assessments”.

Dr Watkins estimates he has conducted between 800 and 900 full triages, some of which were repeat tests to see whether Babylon Health had fixed the issues he previously highlighted.

The doctor acknowledges Babylon Health’s chatbot has improved, but says it still gives concerning advice in around one in three of his tests. In 2018, when Dr Watkins first reached out to us and other outlets, he says this rate was “one in one”.

While it’s one account versus the other, the evidence shows that Babylon Health’s chatbot has issued dangerous advice on a number of occasions. Dr Watkins has dedicated many hours to highlighting these issues to Babylon Health in order to improve patient safety.

Rather than welcome his efforts and work with Dr Watkins to improve their service, it seems Babylon Health has decided to go on the offensive and “try and discredit someone raising patient safety concerns”.

In their press release, Babylon accuses Watkins of posting “over 6,000” misleading attacks but without giving details of where. Dr Watkins primarily uses Twitter to post his findings. His account, as of writing, has tweeted a total of 3,925 times and not just about Babylon’s service.

This isn’t the first time Babylon Health’s figures have come into question. Back in June 2018, Babylon Health held an event where it boasted its AI beat trainee GPs at the MRCGP exam used for testing their ability to diagnose medical problems. The average pass mark is 72 percent. “How did Babylon Health do?” said Dr Mobasher Butt at the event, a director at Babylon Health. “It got 82 percent.”

Given the number of dangerous suggestions to trivial ailments the chatbot has given, especially at the time, it’s hard to imagine the claim that it beats trainee GPs as being correct. Intriguingly, the video of the event has since been deleted from Babylon Health’s YouTube account and the company removed all links to coverage of it from the “Babylon in the news” part of its website.

When asked why it deleted the content, Babylon Health said in a statement: “As a fast-paced and dynamic health-tech company, Babylon is constantly refreshing the website with new information about our products and services. As such, older content is often removed to make way for the new.”

AI solutions like those offered by Babylon Health will help to reduce the demand on health services and ensure people have access to the right information and care whenever and wherever they need it. However, patient safety must come first.

Mistakes are less forgivable in healthcare due to the risk of potentially fatal or life-changing consequences. The usual “move fast and break things” ethos in tech can’t apply here.

There’s a general acceptance that rarely is a new technology going to be without its problems, but people want to see that best efforts are being made to limit and address those issues. Instead of welcoming those pointing out issues with their service before it leads to a serious incident, it seems Babylon Health would rather blame everyone else for its faults.


Google’s Duplex booking AI often relies on humans for backup
https://news.deepgeniusai.com/2019/05/23/google-duplex-booking-ai-humans-backup/
Thu, 23 May 2019 14:21:29 +0000

The post Google’s Duplex booking AI often relies on humans for backup appeared first on AI News.

Google Duplex often calls on humans for backup when making reservations on behalf of users, and that should be welcomed.

Duplex caused a stir when it debuted at Google’s I/O developer conference last year. The AI was shown calling a hair salon to make a booking and did so complete with human-like “ums” and “ahs”.

The use of such human mannerisms shows Google’s intention was for the human to be unaware they were in conversation with an AI. Following some outcry, Google and other tech giants have pledged to make it clear to people when they’re not speaking to another person.

Duplex is slowly rolling out and is available to Pixel smartphone owners in the US. However, it turns out many Duplex bookings are currently being carried out by humans in call centres.

Google confirmed to the New York Times that about 25 percent of the Assistant-based calls start with a human in a call centre, while 15 percent require human intervention. Times reporters Brian Chen and Cade Metz made four sample reservations and just one was completed start to finish by the AI.

The practice of using humans as a backup should always be praised. Making this standard practice helps increase trust, reduces concerns about human workers being replaced, and provides some accountability when things go awry.

Only so much can go wrong when booking a hair appointment, but setting expectations now will help to guide developments further down the line.

AI is being increasingly used in a military capacity, and most will sleep better at night knowing a human is behind any final decision rather than complete automation. Just imagine if Soviet officer Stanislav Yevgrafovich Petrov decided to launch retaliatory nuclear missiles after his early warning system falsely reported the launch of missiles from the US back in 1983.

According to the Times, Google isn’t in a rush to replace the human callers, and that should be welcomed.



DeepMind thrashed pro StarCraft 2 players in latest demo
https://news.deepgeniusai.com/2019/01/25/deepmind-starcraft-2-players-demo/
Fri, 25 Jan 2019 13:03:03 +0000

The post DeepMind thrashed pro StarCraft 2 players in latest demo appeared first on AI News.

DeepMind’s AI demonstrated last night how its prowess in StarCraft 2 battles against professional human players has grown in recent months.

The live stream of the showdowns was viewed by more than 55,000 people.

“This is, of course, an exciting moment for us,” said David Silver, a researcher at DeepMind. “For the first time, we saw an AI that was able to defeat a professional player.”

DeepMind created five versions of its ‘AlphaStar’ AI. Each was trained on historic game footage that StarCraft developer Blizzard has been releasing on a monthly basis.

In order to further improve their abilities, the five AIs were pitted against each other in a league. The leading AI racked up experience that would equate to a human training for around 200 years.
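A league of this kind amounts to round-robin self-play with per-agent bookkeeping. The sketch below is a generic illustration of the idea, not DeepMind’s actual AlphaStar League; `play_match` is a stand-in for running a full StarCraft 2 game between two agents.

```python
import itertools

# Minimal round-robin self-play league scaffold -- a generic sketch,
# not DeepMind's AlphaStar League. play_match stands in for playing
# a full game between two agents.

def play_match(a: str, b: str) -> str:
    """Stand-in match: the lexicographically later agent wins."""
    return max(a, b)

def run_league(agents: list[str], rounds: int = 1) -> dict[str, int]:
    """Pit every agent against every other and tally wins."""
    wins = {agent: 0 for agent in agents}
    for _ in range(rounds):
        for a, b in itertools.combinations(agents, 2):
            wins[play_match(a, b)] += 1
    return wins

print(run_league(["alpha1", "alpha2", "alpha3"]))
# {'alpha1': 0, 'alpha2': 1, 'alpha3': 2}
```

In the real league each "match" is an expensive reinforcement-learning episode, and the standings feed back into which agents train against which, but the bookkeeping loop has this shape.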

Perhaps needless to say, AlphaStar wiped the floor with human players Grzegorz Komincz and Dario Wunsch.


The only hope for humans so far is that AlphaStar was trained on a single map and with just one of the three races available in the game. Removed from its comfort zone, it would not perform as well.

Video games have driven more rudimentary AI developments for decades. The advancement shown by AlphaStar could be used to create more complex ‘bots’ that can pose a challenge and help train even the best human players.

This isn’t the first time we’ve seen DeepMind’s AI bots in action – but, in the past, they’ve had a tendency to immediately rush their opponents with ‘workers’ in a behaviour that Blizzard called “amusing”.


DeepMind’s AI will show off its new StarCraft 2 skills this week
https://news.deepgeniusai.com/2019/01/23/deepmind-ai-starcraft-2-skills-week/
Wed, 23 Jan 2019 17:27:32 +0000

The post DeepMind’s AI will show off its new StarCraft 2 skills this week appeared first on AI News.

DeepMind has been continuing to train its AI in the ways of StarCraft 2 and will show off its most recent progress this week.

StarCraft 2 is a complex game with many strategies, making it the perfect testing ground for AI. Google’s DeepMind first started exploring how it could use AI to beat the world’s best StarCraft players back in 2016.

In 2017, StarCraft’s developer Blizzard made 65,000 past matches available to DeepMind researchers to begin training bots. Blizzard promised it would make a further half a million games available each month.

We’ve seen DeepMind’s AI bots in action with varying degrees of success. The AI had a tendency to immediately rush its opponents with ‘workers’ in a behaviour that Blizzard called “amusing,” though it confessed the bot had just a 50 percent success rate even against StarCraft 2’s built-in AI on ‘insane’ difficulty.

Fed with some replays from human players using more complex strategies, the AI began adopting them.

“After feeding the agent replays from real players, it started to execute standard macro-focused strategies, as well as defend against aggressive tactics such as cannon rushes,” Blizzard said.

We’re yet to see these new strategies being used by DeepMind’s AI but it won’t be much longer until we do.

“It’s only been a few months since BlizzCon but DeepMind is ready to share more information on their research,” Blizzard said today.

“The StarCraft games have emerged as a ‘grand challenge’ for the AI community as they’re the perfect environment for benchmarking progress against problems such as planning, dealing with uncertainty, and spatial reasoning.”

You can find a stream of DeepMind’s AI playing StarCraft 2 via StarCraft’s Twitch channel or DeepMind’s YouTube channel at 6pm GMT/10am PT/1pm ET on January 24th.


Microsoft acquires conversational AI company XOXCO
https://news.deepgeniusai.com/2018/11/16/microsoft-conversational-ai-xoxco/
Fri, 16 Nov 2018 13:42:03 +0000

The post Microsoft acquires conversational AI company XOXCO appeared first on AI News.

Microsoft has announced the acquisition of Texas-based conversational AI firm XOXCO to amplify the company’s work in the field.

XOXCO has been in operation since 2013 and gained renown for creating Howdy, the first commercially available bot for Slack.

Microsoft believes that bots will become a key way for businesses to engage with employees and customers. The company has undertaken many projects aimed at unlocking their potential, with varying degrees of success.

Tay, for example, has become an infamous example of a bot gone wrong after the wonderful denizens of the internet taught Microsoft’s creation to be a “Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot”.

However, the company has grand ideas about inter-communicating bots which could be groundbreaking. Asking Cortana to order a pizza could invoke a bot from Domino’s, while booking a flight might call on KAYAK’s.
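At its core, the inter-communicating bot idea is intent routing: a front-end assistant matches an utterance to a registered “skill” bot and hands the conversation over. A minimal sketch of that dispatch step follows; the registry, bot names, and keyword matching are all hypothetical, not the Microsoft Bot Framework API.

```python
# Toy intent router dispatching to "skill" bots -- a hypothetical
# sketch of the idea, not Microsoft's Bot Framework API. The skill
# registry and keywords are invented for illustration.

SKILLS = {
    "pizza": "dominos_bot",
    "flight": "kayak_bot",
}

def route(utterance: str) -> str:
    """Hand the utterance to the first skill whose keyword matches."""
    text = utterance.lower()
    for keyword, bot in SKILLS.items():
        if keyword in text:
            return bot
    return "fallback_bot"   # no skill matched; assistant answers itself

print(route("Cortana, order a pizza"))       # dominos_bot
print(route("Book me a flight to Austin"))   # kayak_bot
```

Production assistants replace the keyword lookup with a trained intent classifier, but the dispatch shape (match, then delegate) is the same.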

The Microsoft Bot Framework already supports over 360,000 developers. With the acquisition of XOXCO, the company hopes to further democratise AI development, conversation and dialogue, and the integration of conversational experiences wherever people communicate.

Lili Cheng, Corporate Vice President of Conversational AI at Microsoft, wrote in a post:

“Our goal is to make AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology.

To do this, Microsoft is infusing intelligence across all its products and services to extend individuals’ and organizations’ capabilities and make them more productive, providing a powerful platform of AI services and tools that makes innovation by developers and partners faster and more accessible, and helping transform business by enabling breakthroughs to current approaches and entirely new scenarios that leverage the power of intelligent technology.”

Microsoft has made several related acquisitions this year, demonstrating how important AI and bots are to the company.

  • May – Microsoft bought Semantic Machines, another company working on conversational AI.
  • July – Bonsai was acquired, a firm combining machine teaching, reinforcement learning, and simulation.
  • September – Lobe came under Microsoft’s wing, a company aiming to make AI and deep learning development easier.

Gartner backs Microsoft’s belief in bots, recently predicting: “By 2020, conversational artificial intelligence will be a supported user experience for more than 50 percent of large, consumer-centric enterprises.”

Microsoft is setting itself up to be in one of the best positions to capitalise on the growth of conversational AIs, and it looks set to pay off.


Bill forcing AI bots to reveal themselves faces EFF opposition
https://news.deepgeniusai.com/2018/05/24/bill-ai-bot-reveal-eff/
Thu, 24 May 2018 13:58:39 +0000

The post Bill forcing AI bots to reveal themselves faces EFF opposition appeared first on AI News.

A bill that would force AI bots to reveal themselves as not being human is facing opposition from the EFF over free speech concerns.

Many were slightly disturbed by Google’s demo of its Duplex AI conducting a phone call and the other participant being unaware they weren’t speaking to a human. Less than a month later, Microsoft demonstrated it also had the same capabilities.

There are clearly big changes ahead in how we interact, and not everyone is going to be happy speaking to a robot without being aware. The B.O.T. Act (SB 1001) intends to make it illegal for a computer to speak to someone in California without revealing it’s not human.

The summary of the bill reads:

“This bill would make it unlawful for any person to use a bot, as defined, to communicate or interact with natural persons in California online with the intention of misleading and would provide that a person using a bot is presumed to act with the intent to mislead unless the person discloses that the bot is not a natural person.

The bill would require an online platform to enable users to report violations of this prohibition, to respond to the reports, and to provide the Attorney General with specified related information.”

Google and Microsoft have both said their respective AIs would reveal themselves not to be human regardless of legislation.

The B.O.T. Act is facing stiff opposition from the Electronic Frontier Foundation (EFF), which appears to be setting itself up as a champion of rights for machines.

In a post, the EFF wrote: “Why does it matter that a bot (instead of a human) is speaking such that we should have a government mandate to force disclosure?”

The digital rights non-profit argues the law raises ‘significant free speech concerns’ and could represent the start of what’s going to be a long debate over what rights machines should have.

Do you think AIs should be forced to reveal themselves as not human?

 

Experts warn AI poses a ‘clear and present danger’
https://news.deepgeniusai.com/2018/02/21/experts-ai-clear-present-danger/
Wed, 21 Feb 2018 16:16:59 +0000

The post Experts warn AI poses a ‘clear and present danger’ appeared first on AI News.

A report by leading experts calls on governments and businesses to address the “clear and present danger” posed by unregulated AI.

The foreboding report is titled ‘The Malicious Use of Artificial Intelligence’ and was co-authored by experts from Oxford University, the Centre for the Study of Existential Risk, the Electronic Frontier Foundation, and more.

Three primary areas of risk were identified:

Digital security — The risk of AI being used to increase the scale and efficiency of cyberattacks. AI could compromise other systems at scale by automating laborious tasks, or exploit human error through new attacks such as speech synthesis.

Physical security — The idea that AI could be used to inflict direct harm on living beings or physical buildings/systems/infrastructure. Some provided examples include connected vehicles being compromised to crash, or even situations once seen as dystopian such as swarms of micro-drones.

Political security — The researchers highlight the possibility of AI automating the creation of propaganda, or manipulating existing content to sway opinions. With the allegations of Russia using digital means to influence the outcome of the U.S. presidential elections, and other key international decisions, for many people this will be the clearest example of the present danger.

Here are some of the potential scenarios:

  • Chatbots which mimic the writing styles of friends or family members to gain trust, and could even mimic them over a video call.
  • A cleaning robot which goes inside a government ministry daily, but has been compromised to detonate an explosive device when a specific figure is spotted.
  • A state-powered AI system that identifies anyone who contradicts government policy, and promptly flags them for arrest.
  • The creation of a fake video of a high-profile figure saying, or doing, something controversial which leads them to lose their job.

As with most things, it will likely take a disaster before action is taken. In an attempt to be more proactive about countering malicious usage, the researchers are joining previous calls for AI regulation, including for a robot ethics charter and a ‘global stand’ against militarisation.

In the report, the researchers wrote:

“The proposed interventions require attention and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators. The challenge is daunting and the stakes are high.”

Some of the proposals include:

  • Policymakers collaborating closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  • Researchers and engineers considering the potential misuse of their work.
  • Identifying best practices.
  • Expanding the range of stakeholders and domain experts involved in discussions of these challenges.

The full report (PDF) is quite a chilling read, and highlights scenarios which could be straight out of Black Mirror. Hopefully, policymakers read the report and take heed of the experts’ warnings before it becomes necessary.

What are your thoughts about the warnings of malicious AI?

 

AI can now beat those pesky CAPTCHAs
https://news.deepgeniusai.com/2017/10/27/ai-can-now-beat-those-pesky-captchas/
Fri, 27 Oct 2017 16:15:56 +0000

The post AI can now beat those pesky CAPTCHAs appeared first on AI News.

We’ve all had to prove we’re not “a robot” at some point, but it turns out the robots can now, in fact, beat those pesky CAPTCHA checks.

Until now we’ve come to accept them as a necessary evil, but it seems CAPTCHAs are quickly becoming a waste of time, whether they ask you to solve a simple math equation, copy a jumbled (and often utterly illegible) set of characters, or select blocks from a grid.

At least, that’s according to research published in the journal Science conducted by Californian artificial intelligence firm Vicarious.

To be fair, not everyone has access to the same CAPTCHA-cracking AI as Vicarious — a company with significant funding from Amazon founder Jeff Bezos and Facebook’s Mark Zuckerberg. So, for now, CAPTCHAs aren’t entirely irrelevant.

CAPTCHAs were developed in the 1990s to prevent automated bots creating mass fake accounts on websites, or buying loads of limited tickets at retail price for later scalping at much higher rates. With the number of ways that letters can be rendered and jumbled together, CAPTCHAs have been a test intended to be trivial for a human, but difficult for a machine.
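The “simple math equation” style of CAPTCHA mentioned above is easy to sketch, and the sketch also shows why such checks fail: a bot can defeat the challenge with the same few lines that generate it. (A toy illustration only.)

```python
import random

# Toy arithmetic CAPTCHA: trivial for humans -- and, as the article
# argues, just as trivial for a machine, which is the whole problem.

def make_challenge(rng: random.Random) -> tuple[str, int]:
    """Generate a question and its expected answer."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"What is {a} + {b}?", a + b

def bot_solve(challenge: str) -> int:
    """A 'bot' defeating the check by parsing out the two operands."""
    nums = [int(t.rstrip("?")) for t in challenge.split()
            if t.rstrip("?").isdigit()]
    return sum(nums)

question, answer = make_challenge(random.Random())
assert bot_solve(question) == answer   # the bot always passes
print(bot_solve("What is 3 + 7?"))     # 10
```

Image-based CAPTCHAs raised the bar by demanding visual parsing instead of text parsing; Vicarious’s result is that this bar, too, can now be cleared.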

Over the years, CAPTCHAs have intentionally become more difficult to solve to outsmart the machines — too much so in some cases. Google admits its reCAPTCHA test, which asks users to select the blocks containing specific features such as street signs, can only be solved by humans 87 percent of the time.

A company like Google, which has spent vast resources on building neural networks that analyse millions of images, could probably build a bot to bypass even its own reCAPTCHA test — but it’s unlikely Google would let others have access to such software, or go down the ticket-scalping route itself (though you never know, perhaps keep an eye on its eBay page for Bieber tickets).

Vicarious cracked basic CAPTCHA tests back in 2013 with a 90 percent accuracy rate. Since then, CAPTCHA designers went back to the drawing board and made them more difficult. Vicarious returned to their labs, and the paper claims their AI can now beat even Google’s reCAPTCHA test 66.6 percent of the time.

The AI is based on what Vicarious calls a Recursive Cortical Network. The network is supposed to mimic processes in the human brain while requiring less computing power than a full neural network, and it can identify objects even if they are obscured by other objects.

Your ball, CAPTCHA designers.

Do you think it will remain possible to fool automated bots? Share your thoughts in the comments.

 

Facebook’s CherryPi ranked sixth in ‘StarCraft’ tournament of AI bots
https://news.deepgeniusai.com/2017/10/12/facebook-cherrypi-starcraft-ai-bots/
Thu, 12 Oct 2017 14:46:31 +0000

The post Facebook’s CherryPi ranked sixth in ‘StarCraft’ tournament of AI bots appeared first on AI News.

Facebook participated in a StarCraft tournament which pitted artificially intelligent “bots” against each other instead of humans. Facebook’s “CherryPi” AI bot ranked sixth overall with 2,049 wins out of 2,966 games.

The StarCraft AI competition made use of the original StarCraft PC game and its expansion, Brood War. The motive behind this was to evaluate the state of artificial intelligence and how it handles real-time strategy games. This genre poses a challenge because the player must scan the field, manage resources, investigate unknown environments, and respond quickly to threats.

The GitHub notes state: “BWAPI only reveals the visible parts of the game state to AI modules by default. Information on units that have gone back into the fog of war is denied to the AI. This enables programmers to write competitive non-cheating AIs that must plan and operate under partial information conditions.”

Facebook’s CherryPi bot managed to reach sixth place out of 28 competitors in its first outing, a respectable debut. The AI team behind CherryPi comprised eight individuals, who published a dataset drawing on early data collected from CherryPi’s gameplay. The data included 496 million player actions and millions of captured frames.

All bots were ranked by their final winning percentage of one-on-one games. They could not cheat or take advantage of in-game glitches. Each session lasted up to 60 minutes with “fog of war” enabled, which covers unexplored areas of the field. Bots were penalised for slow computations.
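Ranking by final winning percentage is straightforward to reproduce. CherryPi’s record below comes from the figures quoted in this article (2,049 wins from 2,966 games, roughly 69 percent); the other bots’ records are invented placeholders, not tournament data.

```python
# Rank bots by win percentage. CherryPi's record is from the article;
# the other entries are invented placeholders for illustration.

records = {
    "CherryPi": (2049, 2966),
    "IndieBotA": (2500, 2966),
    "IndieBotB": (1200, 2966),
}

def win_rate(record: tuple[int, int]) -> float:
    """Fraction of games won."""
    wins, games = record
    return wins / games

ranking = sorted(records, key=lambda bot: win_rate(records[bot]), reverse=True)
print(ranking)                                          # ['IndieBotA', 'CherryPi', 'IndieBotB']
print(round(win_rate(records["CherryPi"]) * 100, 1))    # 69.1
```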

Out of the 28 competing bots, 15 were created and submitted by independent developers. In fact, all five bots ranking higher than CherryPi were created by independent developers. The only other non-indie developer to reach the top 10 was Stanford University with its Arrakhammer bot.

Find more of our coverage of bots here.

Are you surprised where Facebook ranked in the competition? Share your thoughts in the comments.

 
