Reinforcement Learning – AI News
https://news.deepgeniusai.com

Google’s Model Card Toolkit aims to bring transparency to AI (30 July 2020)
https://news.deepgeniusai.com/2020/07/30/google-model-card-toolkit-ai/

The post Google’s Model Card Toolkit aims to bring transparency to AI appeared first on AI News.

Google has released a toolkit which it hopes will bring some transparency to AI models.

People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you’ve got a general distrust which can hinder important advancements.

Model Card Toolkit aims to step in and facilitate AI model transparency reporting for developers, regulators, and downstream users.

Google launched Model Cards itself over the past year, a concept the company first outlined in an October 2018 whitepaper.

Model Cards provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation and give a detailed overview of a model’s suggested uses and limitations. 

So far, Google has released Model Cards for open source models built on its MediaPipe platform as well as its commercial Cloud Vision API Face Detection and Object Detection services.

Google’s new toolkit for Model Cards will simplify the process of creating them for third parties by compiling the data and helping to build interfaces oriented towards specific audiences.
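The shape of the output is easy to picture even without the toolkit. Here is a minimal, hypothetical sketch, using plain Python dataclasses rather than the actual Model Card Toolkit API, of the kind of structured transparency report a Model Card captures (all field names and values are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """An illustrative model card: a structured transparency report."""
    name: str
    overview: str
    intended_uses: list = field(default_factory=list)
    limitations: list = field(default_factory=list)

    def to_markdown(self) -> str:
        # Render the card as a human-readable report
        lines = [f"# Model Card: {self.name}", "", self.overview, "", "## Intended uses"]
        lines += [f"- {u}" for u in self.intended_uses]
        lines += ["", "## Limitations"]
        lines += [f"- {x}" for x in self.limitations]
        return "\n".join(lines)

card = ModelCard(
    name="Face Detector (example)",
    overview="Detects faces in still images.",
    intended_uses=["Counting faces in a photo"],
    limitations=["Accuracy degrades on low-resolution images"],
)
print(card.to_markdown().splitlines()[0])  # → "# Model Card: Face Detector (example)"
```

The real toolkit automates the data-gathering side of this and renders richer, audience-specific interfaces; the sketch only shows why a fixed schema makes models easier to compare and audit.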

Here’s an example of a Model Card:

MediaPipe has published their Model Cards for each of their open-source models in their GitHub repository.

To demonstrate how the Model Card Toolkit can be used in practice, Google has released a Colab tutorial that builds a Model Card for a simple classification model trained on the UCI Census Income dataset.

If you just want to dive right in, you can access the Model Card Toolkit here.

(Photo by Marc Schulte on Unsplash)

Musk predicts AI will be superior to humans within five years (28 July 2020)
https://news.deepgeniusai.com/2020/07/28/musk-predicts-ai-superior-humans-five-years/

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been among the most vocal prominent figures in warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, Musk adds “that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

If correct, the latest prediction from Musk would mean the so-called technological singularity – the point at which machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated that the singularity will occur around 2045.

As the founder of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but has called for it to be regulated.

Musk also co-founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements over the company’s direction, Musk left OpenAI in 2018.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Last year, OpenAI decided not to release a text generator which it believed to have dangerous implications in a world already struggling with fake news and disinformation campaigns.

Two graduates later recreated and released a similar generator to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

OpenAI has since provided select researchers access to its powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input.

GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion parameters – a measure of the rapid pace of AI advancement. However, Musk’s prediction of the singularity happening within five years should perhaps be taken with a pinch of salt.

(Image Credit: Elon Musk by JD Lasica under CC BY 2.0 license)

Babylon Health says its AI can appropriately triage 85% of patients (1 April 2020)
https://news.deepgeniusai.com/2020/04/01/babylon-health-ai-achieve-triage-accuracy/

AI healthcare startup Babylon Health believes it can appropriately triage patients in 85 percent of cases.

Babylon Health is best known for GP at Hand, a service which is supported by UK health secretary Matt Hancock and integrated into Samsung Health.

GP at Hand links patients with health experts 24/7 using video calls and can facilitate any prescriptions to be sent to local pharmacies. The service, however, has been criticised for an AI chatbot which repeatedly gave unsafe advice and for only taking on healthier, often younger individuals while redirecting cash away from local surgeries relied on by older and sicker patients.

Correct triaging is essential to ensure patients receive the appropriate care. As the world responds to the coronavirus pandemic, many of us will have seen the harrowing headlines from the worst-hit countries like Italy where doctors are having to essentially decide who is worth trying to save due to limited resources.

Having to make such decisions, on top of all the other pressures medical professionals are currently facing, is unimaginable. An automated triage system could help to reduce that mental burden by removing some of the doubt over whether the right decisions are being made.

The company used reinforcement learning, which uses rewards for completing tasks to incentivise an agent, to create their AI system.

Babylon Health’s agent learned “an optimised policy” based on 1,374 clinical vignettes crafted by experts. Each vignette was supported by 3.36 expert triage decisions on average, and each was independently reviewed by two clinicians.

The best performing model achieved an appropriateness score of 85 percent and a safety score of 93 percent – roughly on par with human clinicians (84 percent appropriateness and 93 percent safety).

If true, it’s an impressive result, but Babylon Health’s studies have been called into question in the past. Just three years ago, the company tried and failed to get a legal injunction to block the publication of a report from the NHS care standards watchdog.

In 2018, Babylon Health published a paper which claimed that its AI could diagnose common diseases as well as human physicians. The Royal College of General Practitioners, the British Medical Association, Fraser and Wong, and the Royal College of Physicians all issued statements disputing the paper’s claims.

As with the rest of Babylon Health’s solutions, there’s a lot of promise in what they’re aiming to do. However, the company’s history casts some doubt over whether these latest claims are as impressive as they seem.

You can find Babylon Health’s full paper on arXiv (PDF).

AI Crash Course book extract: Exploring the principles of reinforcement learning (10 January 2020)
https://news.deepgeniusai.com/2020/01/10/ai-crash-course-book-extract-exploring-the-principles-of-reinforcement-learning/

Editor’s note: This is an edited extract from AI Crash Course, by Hadelin de Ponteves, published by Packt. Find out more and buy a copy of the book by visiting here.

When people refer to AI today, some of them think of Machine Learning, while others think of Reinforcement Learning. I fall into the second category. I always saw Machine Learning as statistical models that have the ability to learn some correlations, from which they make predictions without being explicitly programmed. While this is, in some way, a form of AI, Machine Learning does not include the process of taking actions and interacting with an environment like we humans do. Indeed, as intelligent human beings, what we constantly keep doing is the following:

  1. We observe some input, whether it’s what we see with our eyes, what we hear with our ears, or what we remember in our memory.
  2. These inputs are then processed in our brain.
  3. Eventually, we make decisions and take actions.

This process of interacting with an environment is what we are trying to reproduce in terms of Artificial Intelligence. And to that extent, the branch of AI that works on this is Reinforcement Learning. This is the closest match to the way we think; the most advanced form of Artificial Intelligence, if we see AI as the science that tries to mimic (or surpass) human intelligence.

Reinforcement Learning also has the most impressive results in business applications of AI. For example, Alibaba leveraged Reinforcement Learning to increase its ROI in online advertising by 240% without increasing its advertising budget (see https://arxiv.org/pdf/1802.09756.pdf, page 9, Table 1, last row (DCMAB)).

The five principles of reinforcement learning

Let’s begin building the first pillars of your intuition into how Reinforcement Learning works. These are the fundamental principles of Reinforcement Learning, which will get you started with the right, solid basics in AI.

Here are the five principles:

  1. Principle #1: The input and output system
  2. Principle #2: The reward
  3. Principle #3: The AI environment
  4. Principle #4: The Markov decision process
  5. Principle #5: Training and inference

Principle #1 – The input and output system

The first step is to understand that today, all AI models are based on the common principle of inputs and outputs. Every single form of Artificial Intelligence, including Machine Learning models, ChatBots, recommender systems, robots, and of course Reinforcement Learning models, will take something as input, and will return another thing as output.

In Reinforcement Learning, these inputs and outputs have a specific name: the input is called the state, or input state. The output is the action performed by the AI. And in the middle, we have nothing other than a function that takes a state as input and returns an action as output. That function is called a policy. Remember the name, “policy,” because you will often see it in AI literature.

As an example, consider a self-driving car. Try to imagine what the input and output would be in that case.

The input would be what the embedded computer vision system sees, and the output would be the next move of the car: accelerate, slow down, turn left, turn right, or brake. Note that the output at any time (t) could very well be several actions performed at the same time. For instance, the self-driving car can accelerate while at the same time turning left. In the same way, the input at each time (t) can be composed of several elements: mainly the image observed by the computer vision system, but also some parameters of the car such as the current speed, the amount of gas remaining in the tank, and so on.

That’s the very first important principle in Artificial Intelligence: it is an intelligent system (a policy) that takes some elements as input, does its magic in the middle, and returns some actions to perform as output. Remember that the inputs are also called the states. The next important principle is the reward.
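To make Principle #1 concrete, here is a toy, hand-written policy for the self-driving car example. The state fields and action names are invented purely for illustration, and a real policy would be learned rather than hard-coded, but the shape is the same: a function from state to action(s), possibly several at once:

```python
def policy(state: dict) -> list:
    """Toy policy: maps an input state to a list of simultaneous actions."""
    actions = []
    if state["obstacle_ahead"]:
        actions.append("brake")                 # safety first
    elif state["speed"] < state["speed_limit"]:
        actions.append("accelerate")            # speed up when it's safe
    if state["lane_drift"] > 0:
        actions.append("turn_left")             # correct drift while accelerating
    return actions

# Several inputs in, several actions out at the same time t:
print(policy({"obstacle_ahead": False, "speed": 40,
              "speed_limit": 50, "lane_drift": 1}))
# → ['accelerate', 'turn_left']
```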

Principle #2 – The reward

Every AI has its performance measured by a reward system. There’s nothing confusing about this; the reward is simply a metric that will tell the AI how well it does over time.

The simplest example is a binary reward: 0 or 1. Imagine an AI that has to guess an outcome. If the guess is right, the reward will be 1, and if the guess is wrong, the reward will be 0. This could very well be the reward system defined for an AI; it really can be as simple as that!
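That binary reward fits in one line of code. A tiny sketch, with the guessing setup invented purely for illustration:

```python
def binary_reward(guess: str, outcome: str) -> int:
    """Return 1 when the AI's guess was right, 0 when it was wrong."""
    return 1 if guess == outcome else 0

print(binary_reward("heads", "heads"))  # → 1
print(binary_reward("heads", "tails"))  # → 0
```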

A reward doesn’t have to be binary, however. It can be continuous. Consider the famous game of Breakout.

Imagine an AI playing this game. Try to work out what the reward would be in that case. It could simply be the score; more precisely, the score would be the accumulated reward over time in one game, and the rewards could be defined as the derivative of that score.

This is one of the many ways we could define a reward system for that game. Different AIs will have different reward structures; we will build five reward systems for five different real-world applications in this book.

With that in mind, remember this as well: the ultimate goal of the AI will always be to maximize the accumulated reward over time.

Those are the first two basic, but fundamental, principles of Artificial Intelligence as it exists today; the input and output system, and the reward. The next thing to consider is the AI environment.

Principle #3 – The AI environment

The third principle is what we call an “AI environment.” It is a very simple framework where you define three things at each time (t):

  • The input (the state)
  • The output (the action)
  • The reward (the performance metric)

For each and every single AI based on Reinforcement Learning that is built today, we always define an environment composed of the preceding elements. It is, however, important to understand that there are more than these three elements in a given AI environment.

For example, if you are building an AI to beat a car racing game, the environment will also contain the map and the gameplay of that game. Or, in the example of a self-driving car, the environment will also contain all the roads along which the AI is driving and the objects that surround those roads. But what you will always find in common when building any AI, are the three elements of state, action, and reward. The next principle, the Markov decision process, covers how they work in practice.

Principle #4 – The Markov decision process

The Markov decision process, or MDP, is simply a process that models how the AI interacts with the environment over time. The process starts at t = 0, and then, at each next iteration, meaning at t = 1, t = 2, … t = n units of time (where the unit can be anything, for example, 1 second), the AI follows the same format of transition:

  1. The AI observes the current state, sₜ
  2. The AI performs the action, aₜ
  3. The AI receives the reward, rₜ = R(sₜ, aₜ)
  4. The AI enters the following state, sₜ₊₁

The goal of the AI is always the same in Reinforcement Learning: it is to maximize the accumulated rewards over time, that is, the sum of all the rₜ = R(sₜ, aₜ) received at each transition.
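The four-step transition above can be sketched as a short loop. The environment here is invented for illustration: states 0 to 4 on a line, a reward of 1 for being in state 4, and a trivial always-move-right policy standing in for a learned one:

```python
# Toy environment: states 0..4 on a line; moving past an end is clamped.
def step(s: int, a: int):
    s_next = max(0, min(4, s + a))       # the environment's transition
    r = 1.0 if s_next == 4 else 0.0     # the reward R(s_t, a_t)
    return s_next, r

def go_right(s: int) -> int:
    return +1                            # a trivial policy: always move right

s, total = 0, 0.0
for t in range(20):                      # t = 0, 1, ..., 19
    a = go_right(s)                      # steps 1-2: observe s_t, perform a_t
    s, r = step(s, a)                    # steps 3-4: receive r_t, enter s_(t+1)
    total += r                           # the accumulated reward to maximize

print(total)  # → 17.0 (state 4 is reached at t = 3 and held thereafter)
```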

The following graphic will help you visualize and remember an MDP better, the basis of Reinforcement Learning models:

Now four essential pillars are already shaping your intuition of AI. Adding a last important one completes the foundation of your understanding of AI. The last principle is training and inference; in training, the AI learns, and in inference, it predicts.

Editor’s note: Find out about the last principle of Reinforcement Learning and much more by ordering a copy of AI Crash Course, available here.

About the author: Hadelin de Ponteves is the co-founder and director of technology at BlueLife AI, which leverages the power of cutting-edge Artificial Intelligence to empower businesses to make massive profits by optimizing processes, maximizing efficiency, and increasing profitability. Hadelin is also an online entrepreneur who has created 50+ top-rated educational e-courses on topics such as machine learning, deep learning, artificial intelligence, and blockchain, which have reached over 700,000 subscribers in 204 countries.

Attend the co-located AI & Big Data Expo and Cyber Security & Cloud Expo World Series, with upcoming events in Silicon Valley, London, and Amsterdam.

Do you even AI, bro? OpenAI Safety Gym enhances reinforcement learning (22 November 2019)
https://news.deepgeniusai.com/2019/11/22/ai-openai-reinforcement-learning-safety-gym/

OpenAI, co-founded by Elon Musk, has opened the doors of its “Safety Gym” designed to enhance the training of reinforcement learning agents.

OpenAI describes Safety Gym as “a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training.”

Basically, Safety Gym is the software equivalent of your spotter making sure you’re not going to injure yourself. And just like a good spotter, it will check your form.

“We also provide a standardised method of comparing algorithms and how well they avoid costly mistakes while learning,” says OpenAI.

“If deep reinforcement learning is applied to the real world, whether in robotics or internet-based tasks, it will be important to have algorithms that are safe even while learning—like a self-driving car that can learn to avoid accidents without actually having to experience them.”

Reinforcement learning is based on trial and error, with AIs training to get the best possible reward in the most efficient way. The problem is that this single-minded pursuit of reward can lead to dangerous behaviour.

Taking the self-driving car example, you wouldn’t want an AI deciding to go around the roundabout the wrong way just because it’s the quickest way to the final exit.

OpenAI is promoting the use of “constrained reinforcement learning” as a possible solution. By implementing cost functions, agents learn to consider trade-offs while still achieving defined outcomes.

In a blog post, OpenAI explains the advantages of using constrained reinforcement learning with the example of a self-driving car:

“Suppose the car earns some amount of money for every trip it completes, and has to pay a fine for every collision. In normal RL, you would pick the collision fine at the beginning of training and keep it fixed forever. The problem here is that if the pay-per-trip is high enough, the agent may not care whether it gets in lots of collisions (as long as it can still complete its trips). In fact, it may even be advantageous to drive recklessly and risk those collisions in order to get the pay. We have seen this before when training unconstrained RL agents.

By contrast, in constrained RL you would pick the acceptable collision rate at the beginning of training, and adjust the collision fine until the agent is meeting that requirement. If the car is getting in too many fender-benders, you raise the fine until that behaviour is no longer incentivised.”
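The fine-adjustment loop OpenAI describes can be sketched in a few lines. Everything here is a toy stand-in: the collision-rate curve is an invented function of the fine, whereas in a real system the rate would be measured from training runs. The point is only the control logic: fix the acceptable rate up front, then raise the fine until the agent meets it:

```python
def collision_rate(fine: float) -> float:
    """Invented stand-in for measured behaviour: higher fines
    discourage collisions, so the rate falls as the fine rises."""
    return 0.5 / (1.0 + fine)

target_rate = 0.05          # acceptable collision rate, chosen at the start
fine = 1.0                  # initial collision fine

while collision_rate(fine) > target_rate:
    fine *= 1.5             # too many fender-benders: raise the fine

print(collision_rate(fine) <= target_rate)  # → True
```

Contrast this with unconstrained RL, where the fine is fixed once at the beginning of training and a lucrative pay-per-trip can swamp it entirely.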

Safety Gym environments require AI agents — three are included: Point, Car, and Doggo — to navigate cluttered environments to achieve a goal, button, or push task. There are two levels of difficulty for each task. Every time an agent performs an unsafe action, a red warning light flashes around the agent and it will incur a cost.

Going forward, OpenAI has identified three areas of interest to improve algorithms for constrained reinforcement learning:

  1. Improving performance on the current Safety Gym environments.
  2. Using Safety Gym tools to investigate safe transfer learning and distributional shift problems.
  3. Combining constrained RL with implicit specifications (like human preferences) for rewards and costs.

OpenAI hopes that Safety Gym can make it easier for AI developers to collaborate on safety across the industry via work on open, shared systems.

Rubik’s Cube proves AI isn’t always best at computational tasks (16 July 2019)
https://news.deepgeniusai.com/2019/07/16/rubiks-cube-ai-efficient-computational-tasks/

A humble Rubik’s Cube has proven that AI isn’t always best at performing computational tasks.

Researchers from the University of California developed an AI system capable of solving a Rubik’s Cube in 1.2 seconds.

That sounds impressive – and to us mere humans it most certainly is, with the human world record of 3.47 seconds held by Yusheng Du – but the AI solved the Rubik’s Cube around three times slower than the fastest algorithm not trained using a neural network.

MIT’s min2phase algorithm beat the AI using a traditional computational method. Developed last year, the algorithm is programmed specifically for speedcubing.

DeepCubeA, the AI developed by the researchers at the University of California, was trained using reinforcement learning. It consistently practised how to minimise the ‘cost’ to reach a solution.

There are 43,252,003,274,489,856,000 possible combinations to a Rubik’s Cube, so DeepCubeA had quite a task ahead of it.
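That figure isn’t arbitrary: it follows from the standard counting argument over corner permutations and orientations, edge permutations and flips, and the parity constraint that links them. A few lines of Python reproduce it:

```python
from math import factorial

# 8 corner pieces can be permuted (8!) and each has 3 orientations,
# but only 7 are free (3^7); 12 edge pieces can be permuted (12!)
# with 2 orientations each, only 11 free (2^11). Permutation parity
# ties corners and edges together, removing a final factor of 2.
positions = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(positions)  # → 43252003274489856000
```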

DeepCubeA trained over two days across 1,000 puzzles and managed to solve all of them. In 60 percent of cases, DeepCubeA managed it in the fewest moves possible.

Rubik’s Cube aficionados will know that it can always be solved within 20 moves, the so-called God’s Number, or fewer. DeepCubeA isn’t far off with an average of 21 moves, but it takes around 24 seconds for such challenges.

Unlike algorithms specifically designed to solve cubes, DeepCubeA could be applied to other problems. The researchers hope such an AI could one day be used for helping to create drugs by predicting the structure of proteins.

The full study was published in Nature Machine Intelligence.


AI enables ‘hybrid drones’ with the attributes of both planes and helicopters (15 July 2019)
https://news.deepgeniusai.com/2019/07/15/ai-hybrid-drones-planes-helicopters/

Researchers have developed an AI system enabling ‘hybrid drones’ which combine the attributes of both planes and helicopters.

The propeller-forward designs of most drones are inefficient and reduce flight time. Researchers from MIT, Dartmouth, and the University of Washington have proposed a new hybrid design which aims to combine the perks of both helicopters and fixed-wing planes.

In order to support the new design, a new AI system was developed to switch between hovering and gliding with a single flight controller.

Speaking to VentureBeat, MIT CSAIL graduate student and project lead Jie Xu said:

“Our method allows non-experts to design a model, wait a few hours to compute its controller, and walk away with a customised, ready-to-fly drone.

The hope is that a platform like this could make these versatile ‘hybrid drones’ much more accessible to everyone.”

Existing fixed-wing drones require engineers to build separate systems for hovering (like a helicopter) and flying horizontally (like a plane), plus a controller to switch between the two modes.

Today’s control systems are designed around simulations, causing a discrepancy when used in actual hardware in real-world scenarios.

Using reinforcement learning, the researchers trained a model which can detect potential differences between the simulation and reality. The controller is then able to use this model to transition from hovering to flying, and back again, just by updating the drone’s target velocity.
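The researchers’ controller is learned, but the core idea, a single controller whose outputs shift smoothly from hover to forward flight as the target velocity changes, can be sketched with invented numbers (the output names, gains, and cruise speed below are all illustrative, not from the paper):

```python
def control(target_velocity: float, max_cruise: float = 20.0) -> dict:
    """Blend from rotor-borne hover to wing-borne flight as the
    commanded forward velocity rises. Purely illustrative values."""
    blend = max(0.0, min(1.0, target_velocity / max_cruise))  # 0 = hover, 1 = cruise
    return {
        "rotor_thrust": 1.0 - blend,   # rotors carry the weight when hovering
        "wing_pitch": blend * 10.0,    # wings take over as speed builds
    }

print(control(0.0))   # → {'rotor_thrust': 1.0, 'wing_pitch': 0.0}
print(control(20.0))  # → {'rotor_thrust': 0.0, 'wing_pitch': 10.0}
```

With one function covering both regimes, transitioning is just a matter of ramping the target velocity, which mirrors the single-flight-controller design described above.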

OnShape, a popular CAD platform, is used to allow users to select potential drone parts from a data set. The proposed design’s performance can then be tested in a simulator.

“We expect that this proposed solution will find application in many other domains,” wrote the researchers in the paper. It’s easy to imagine the research one day being scaled up to people-carrying ‘air taxis’ and more.

The researchers will present their paper later this month at the Siggraph conference in Los Angeles.


Nvidia explains how ‘true adoption’ of AI is making an impact (26 April 2019)
https://news.deepgeniusai.com/2019/04/26/nvidia-how-adoption-ai-impact/

Nvidia Senior Director of Enterprise David Hogan spoke at this year’s AI Expo about how the company is seeing artificial intelligence adoption making an impact.

In the keynote session, titled ‘What is the true adoption of AI’, Hogan provided real-world examples of how the technology is being used and enabled by Nvidia’s GPUs. But first, he highlighted the momentum we’re seeing in AI.

“Many governments have announced investments in AI and how they’re going to position themselves,” comments Hogan. “Countries around the world are starting to invest in very large infrastructures.”

The world’s most powerful supercomputers are powered by Nvidia GPUs. ORNL Summit, the current fastest, uses an incredible 27,648 GPUs to deliver over 144 petaflops of performance. Vast amounts of computational power are needed for AI, which puts Nvidia in a great position to capitalise.

“The compute demands of AI are huge and beyond what anybody has seen within a standard enterprise environment before,” says Hogan. “You cannot train a neural network on a standard CPU cluster.”

Nvidia started off by creating graphics cards for gaming. While that’s still a big part of what the company does, Hogan says the company pivoted towards AI back in 2012.

A great deal of the presentation was spent on autonomous vehicles, which is unsurprising given the demand and Nvidia’s expertise in the field. Hogan highlighted that you simply cannot train driverless cars using CPUs and provided a comparison in cost, size, and power consumption.

“A new type of computing is starting to evolve based around GPU architecture called ‘dense computing’ – the ability to build systems that are highly-powerful, huge amounts of computational scale, but actually contained within a very small configuration,” explains Hogan.

Autonomous car manufacturers need to train on petabytes of data per day, iterate their models, and deploy them again in order to get those vehicles to market.

Nvidia has a machine called the DGX-2 which delivers two petaflops of performance. “That is one server that’s equivalent to 800 traditional servers in one box.”

Nvidia has a total of 370 autonomous vehicle partners, which Hogan says covers most of the world’s automotive brands. Many of these are investing heavily and rushing to deliver at least ‘Level 2’ driverless cars in the 2020-21 timeframe.

“We have a fleet of autonomous cars,” says Hogan. “It’s not our intention to compete with Uber, Daimler or BMW, but the best way of us helping our customers enable that is by trying it ourselves.”

“All the work our customers do we’ve also done ourselves so we understand the challenges and what it takes to do this.”

Real-world impact

Hogan notes how AI is a “horizontal capability that sits across organisations” and is “an enabler for many, many things”. It’s certainly a challenge to come up with examples of industries that cannot be improved to some degree through AI.

Following autonomous cars, Nvidia sees the next mass scaling of AI happening in healthcare (which our dear readers already know, of course).

Hogan provides the natural example of the UK’s National Health Service (NHS) which has vast amounts of patient data. Bringing this data together and having an AI make sense of it can unlock valuable information to improve healthcare.

AIs which can make sense of medical imaging on a par with, or even better than, some doctors are starting to become available. However, they still work with 2D images that are alien to most people.

Hogan showed how AI is able to turn 2D imagery into 3D models of organs which are easier to understand, demonstrating a radiograph of a heart being converted into a 3D model.

We’ve also heard about how AI is helping the field of genomics, assisting in finding cures for human diseases. Nvidia GPUs are used for Oxford Nanopore’s MinIT handheld, which enables DNA sequencing of things such as plants to be conducted in the field.

In a blog post last year, Nvidia explained how MinIT uses AI for basecalling:

“Nanopore sequencing measures tiny ionic currents that pass through nanoscale holes called nanopores. It detects signal changes when DNA passes through these holes. This captured signal produces raw data that requires signal processing to determine the order of DNA bases – known as the ‘sequence.’ This is called basecalling.

This analysis problem is a perfect match for AI, specifically recurrent neural networks. Compared with previous methods, RNNs allow for more accuracy in time-series data, which Oxford Nanopore’s sequencers are known for.”

Hogan notes how, in many respects, eCommerce paved the way for AI. Data collected for things such as advertising helps to train neural networks. In addition, eCommerce firms have consistently aimed to improve and optimise their algorithms for things such as recommendations to attract customers.

“All that data, all that Facebook information that we’ve created, has enabled us to train networks,” notes Hogan.

Brick-and-mortar retailers are also being improved by AI. Hogan gives the example of Walmart, which is using AI to improve its demand forecasting and keep supply chains running smoothly.

In real-time, Walmart is able to see where potential supply challenges are and take action to avoid or minimise them. The company is even able to see where weather conditions may cause issues.

Hogan says this has saved Walmart tens of billions of dollars. “This is just one example of how AI is making an impact today not just on the bottom line but also the overall performance of the business”.

Accenture is now detecting around 200 million cyber threats per day, claims Hogan. He notes how protecting against such a vast number of evolving threats is simply not possible without AI.

“It’s impossible to address that, look at it, prioritise it, and action it in any other way than applying AI,” comments Hogan. “AI is based around patterns – things that are different – and when to act and when not to.”
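The ‘patterns – things that are different’ idea can be illustrated in the simplest possible terms with a toy anomaly flag over hourly event counts: score each window by how far it sits from the historical norm and flag the outliers. Real security products learn far richer models of normal behaviour; the rule and the numbers below are invented for the sketch.

```python
import numpy as np

def flag_anomalies(event_counts, threshold=3.0):
    """Flag time windows whose event count deviates strongly from the
    historical pattern, using a simple z-score rule."""
    counts = np.asarray(event_counts, dtype=float)
    mu, sigma = counts.mean(), counts.std()
    if sigma == 0:
        return np.zeros(len(counts), dtype=bool)
    z = (counts - mu) / sigma
    return np.abs(z) > threshold

# 23 quiet hours of roughly 100 events each, then one hour with a burst
hourly = [100, 98, 103, 97, 101, 99, 102, 100, 96, 104,
          100, 98, 101, 99, 103, 97, 100, 102, 98, 101,
          99, 100, 97, 900]
flags = flag_anomalies(hourly)
```

Only the final burst hour is flagged; everything resembling the established pattern is left alone, which is the prioritisation Hogan describes, just at a vastly smaller scale.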

While often we hear about what AI could one day be used for, Hogan’s presentation was a fascinating insight into how Nvidia is seeing it making an impact today or in the not-so-distant future.

Attend the AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo.

The post Nvidia explains how ‘true adoption’ of AI is making an impact appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/04/26/nvidia-how-adoption-ai-impact/feed/ 0
Humans won a Dota 2 round against OpenAI! But lost overall https://news.deepgeniusai.com/2019/04/15/humans-won-dota2-round-openai/ https://news.deepgeniusai.com/2019/04/15/humans-won-dota2-round-openai/#respond Mon, 15 Apr 2019 12:26:10 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=5534 Humans stepped up and beat a Dota 2-playing AI created by Elon Musk-founded OpenAI in one round, despite losing overall. AI News first reported of OpenAI’s gaming prowess in August 2017 when it took on three of the best Dota 2 players in the world and won. The AI learned how to play the game... Read more »

The post Humans won a Dota 2 round against OpenAI! But lost overall appeared first on AI News.

]]>
Humans stepped up and beat a Dota 2-playing AI created by Elon Musk-founded OpenAI in one round, despite losing overall.

AI News first reported on OpenAI’s gaming prowess in August 2017 when it took on three of the best Dota 2 players in the world and won.

The AI learned how to play the game from scratch and was able to beat regular players within the space of an hour. Professional gamers put up more of a fight, with the AI requiring two weeks of training to beat some of humankind’s best.

At the time, OpenAI said it believed its AI was beatable: it wasn’t better in terms of actions-per-minute but made smarter decisions. Some players were able to confuse the bot and distract it from the main objectives, showing it’s not flawless.

In the latest man versus machine spectacle, the humans lost the first two rounds but won in the third.

The opponents faced off at Valve’s The International 2018 esports competition in San Francisco. Rules were kept the same as the last bout, which meant things like ‘couriers’ (NPCs used for delivering items to heroes) were not invulnerable.

On average, a match features 80,000 individual frames and each character is able to perform around 170,000 possible actions. The AI’s ability to comprehend and take relevant actions is nothing short of incredible.

According to OpenAI Cofounder and Chairman Greg Brockman, the firm’s AI now has the equivalent of 45,000 years of Dota 2 gameplay experience. Taking that into account makes it even more impressive the human players managed to beat the AI even once.

OpenAI is currently able to play just 18 of the 115 heroes featured in Dota 2, so – if you’re thinking of issuing a challenge – perhaps get to grips with those it hasn’t spent the equivalent of around 562 human lifetimes training on.

An archived broadcast of the match can be viewed on Twitch here.


The post Humans won a Dota 2 round against OpenAI! But lost overall appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/04/15/humans-won-dota2-round-openai/feed/ 0
Toy around with deep learning in Nvidia’s AI Playground https://news.deepgeniusai.com/2019/03/19/deep-learning-nvidia-ai-playground/ https://news.deepgeniusai.com/2019/03/19/deep-learning-nvidia-ai-playground/#respond Tue, 19 Mar 2019 11:47:57 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=5359 Nvidia launched an online space called AI Playground on Monday which allows people to mess around with some deep learning experiences. AI Playground is designed to be accessible in order to help anyone get started and learn about the potential of artificial intelligence. Who knows, it may even inspire some to enter the field and... Read more »

The post Toy around with deep learning in Nvidia’s AI Playground appeared first on AI News.

]]>
Nvidia launched an online space called AI Playground on Monday which allows people to mess around with some deep learning experiences.

AI Playground is designed to be accessible in order to help anyone get started and learn about the potential of artificial intelligence. Who knows, it may even inspire some to enter the field and help to address the huge skill shortage.

The experience currently features three demos:

  • Image Inpainting
  • Artistic Style Transfer
  • Photorealistic Image Synthesis

As you probably guessed from their names, all of the current demos are based around imagery.

Image Inpainting allows the user to upload their own image and edit it with powerful AI tools. Content can be removed and replaced.

Artistic Style Transfer is fairly self-explanatory: the style of one uploaded image can be applied to another. This will help to satisfy the curiosity of anyone who has wondered how it would look if Leonardo da Vinci had painted them instead of Lisa Gherardini. A convolutional neural network was trained on 80,000 images of people, scenery, animals, and moving objects for this project.
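For the curious, the classic neural approach measures ‘style’ via Gram matrices of convolutional feature maps: the channel-by-channel correlations capture texture and brushwork independently of where things sit in the image. Below is a minimal NumPy sketch of that style loss, using random arrays as stand-ins for real feature maps (Nvidia’s demo may well use a different method under the hood).

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map: the
    channel-by-channel correlations used to represent 'style'."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(features_a, features_b):
    """Mean squared difference between two Gram matrices."""
    ga, gb = gram_matrix(features_a), gram_matrix(features_b)
    return float(np.mean((ga - gb) ** 2))

rng = np.random.default_rng(0)
painting = rng.normal(size=(8, 32, 32))   # stand-in for the style image's features
photo = rng.normal(size=(8, 32, 32))      # stand-in for the content image's features
loss = style_loss(painting, photo)
```

In a full style-transfer pipeline this loss is minimised by gradient descent on the output image’s pixels, nudging its feature correlations towards those of the painting while a separate content loss keeps the subject recognisable.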

Finally, there’s Photorealistic Image Synthesis, a demo which fabricates entirely new photorealistic images and environments in eerie detail.

Bryan Catanzaro, VP of applied deep learning research at Nvidia, said in a statement:

“Research papers have new ideas in them and are really cool, but they’re directed at specialised audiences. We’re trying to make our research more accessible.

The AI Playground allows everyone to interact with our research and have fun with it.”

Nvidia plans to add more demos to its AI Playground over time.


The post Toy around with deep learning in Nvidia’s AI Playground appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/03/19/deep-learning-nvidia-ai-playground/feed/ 0