Musk predicts AI will be superior to humans within five years
https://news.deepgeniusai.com/2020/07/28/musk-predicts-ai-superior-humans-five-years/ (28 July 2020)

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been one of the most vocal high-profile figures warning about the dangers of artificial intelligence. In 2018, for example, he famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, Musk added that “that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

If correct, the latest prediction from Musk would mean the so-called technological singularity – the point at which machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated that the singularity will occur around 2045.

As the founder of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but has called for it to be regulated.

Musk also co-founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements over the company’s direction, Musk left OpenAI in 2018.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Last year, OpenAI decided not to release a text generator which it believed to have dangerous implications in a world already struggling with fake news and disinformation campaigns.

Two graduates later recreated and released a similar generator to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

OpenAI has since provided select researchers with access to its powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input.

GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion parameters – which shows the rapid pace of AI advancements. However, Musk’s prediction of the singularity happening within five years perhaps needs to be taken with a pinch of salt.

(Image Credit: Elon Musk by JD Lasica under CC BY 2.0 license)

Beware the AI winter – but can Covid-19 alter this process?
https://news.deepgeniusai.com/2020/05/26/beware-the-ai-winter-but-can-covid-19-alter-this-process/ (26 May 2020)

We have had a blockchain winter as the hype around the technology moves towards a reality – and the same will happen with artificial intelligence (AI).

That’s according to Dr Karol Przystalski, CTO at IT consulting and software development provider Codete. Przystalski founded Codete after a significant research background in AI; his previous employers include Sabre and IBM, and his PhD explored skin cancer pattern recognition using neural networks.

Yet what effect will the Covid-19 pandemic have on this change? Speaking with AI News, Przystalski argues – much like Dorian Selz, CEO of Squirro, in a piece published earlier this week – that while AI isn’t quite there to predict or solve the current pandemic, the future can look bright.

AI News: Hi Karol. Tell us about your career to date and your current role and responsibilities as the CTO of Codete?

Dr Karol Przystalski: The experience from the previous companies I worked at and the AI background that I had from my PhD work allowed me to get Codete off the ground. At the beginning, not every potential client could see the advantages of machine learning, but it has changed in the last couple of years. We’ve started to implement more and more machine learning-based solutions.

Currently, my responsibilities as the CTO are not focused solely on development, as we have already grown to 160 engineers. Even though I still devote some of my attention to research and development, most of my work right now is centred on mentoring and training in the areas of artificial intelligence and big data.

AI: Tell us about the big data and data science services Codete provides and how your company aims to differ from its competitors?

KP: We offer a number of services related to big data and data science: consulting, auditing, training, and software development support. Based on our extensive experience in machine learning solutions, we provide advice to our clients. We audit already implemented solutions, as well as whole processes of product development. We also have a workshop for managers on how not to fail with a machine learning project.

All the materials are based on our own case studies. As a technological partner, we focus on the quality of the applications that we deliver, and we always aim at full transparency in relationships with our clients.

AI: How difficult is it, in your opinion, for companies to gather data science expertise? Is there a shortage of skills and a gap in this area?

KP: In the past, to become a data scientist you had to have a mathematical background or, even better, a PhD in this field. We now know it’s not that hard to implement machine learning solutions, and almost every software developer can become a data scientist.

There are plenty of workshops, lectures, and many other materials dedicated to software developers who want to understand machine learning methods. Usually, the journey starts with a few proofs of concept and, in the next step, production solutions. It usually takes a couple of months at the very minimum to become a solid junior-level data scientist, even for experienced software engineers. Codete is well-known in the machine learning communities at several universities, and that’s why we can easily extend our team with experienced ML engineers.

AI: What example can you provide of a client Codete has worked with throughout their journey, from research and development to choosing a solution for implementation?

KP: We don’t implement all of the projects that clients bring to us. In the first stage, we distinguish between projects that are buzzword-driven and the real-world ones.

One time, a client came to us with an idea for an NLP project for their business. After some research, it turned out that ML was not the best choice for the project – we recommended a simpler, cheaper solution that was more suitable in their case.

We are transparent with our clients, even if it takes providing them with constructive criticism on the solution they want to build. Most AI projects start with a PoC, and if it works well, the project goes through the next stages to a full production solution. In our AI projects, we follow the ‘fail fast’ approach to prevent our clients from potential over-investing.

AI: Which industries do you think will have the most potential for machine learning and AI and why?

KP: In the Covid-19 times, for sure the health, med, and pharma industries will grow and use AI more often. We will see more use cases applied in telemedicine and medical diagnosis. For sure, the pharma industry and the development of drugs might be supported by AI. We can see how fast the vaccine for Covid-19 is being developed. In the future, the process of finding a valid vaccine can be supported by AI.

But it is not only health-related industries which will use AI more often. I think that almost every industry will invest more in digitalisation, like process automation where ML can be applied. First, we will see an increasing interest in AI in the industries that were not affected by the virus so much, but in the long run even the hospitality and travel industry, as well as many governments, will introduce AI-based solutions to prevent future lockdown.

AI: What is the greatest benefit of AI in business in your opinion – and what is the biggest fear?

KP: There are plenty of ways machine learning can be applied across many industries. There is a machine learning and artificial intelligence hype going on now, and many managers are becoming aware of the benefits that machine learning can bring to their companies. On the other hand, many treat AI as a solution for almost everything – but that’s how buzzword-driven projects are born, not real-world use cases.

This hype may end similarly to other tech hypes that we have witnessed before, when a buzzword was popular, but eventually only a limited number of companies applied the technology. Blockchain is a good example – many companies have tried using it, for almost everything, and in many cases the technology didn’t really prove useful, sometimes even causing new problems.

Blockchain is now being used with success in several industries. Just the same, we can have an ‘AI winter’ again, if we don’t distinguish between the hype and the true value behind an AI solution.

Photo by Aaron Burden on Unsplash


AI bot had to unlearn English grammar to decipher Trump speeches
https://news.deepgeniusai.com/2020/05/13/ai-bot-had-to-unlearn-english-grammar-to-decipher-trump-speeches/ (13 May 2020)

A developer had to recalibrate his artificial intelligence bot to account for the unconventional grammar and syntax found in President Trump’s speeches.

As originally reported by the Los Angeles Times, Bill Frischling noticed in 2017 that his AI bot, Margaret, was struggling to transcribe part of the President’s speech from May 4 that year commemorating the 75th anniversary of the Battle of the Coral Sea. In particular, Margaret crashed after this 127-word section, featuring a multitude of sub-clauses and tense shifts:

“I know there are many active duty service personnel from both nations with us in the audience, and I want to express our gratitude to each and every one of you. We are privileged to be joined by many amazing veterans from our two countries, as well – and for really from so many different conflicts, there are so many conflicts that we fought on and worked on together – and by the way in all cases succeeded on – it’s nice to win.

“It’s nice to win, and we’ve won a lot, haven’t we Mr. Prime Minister? We’ve won a lot. We’re going to keep it going, by the way. You’ve given your love and loyalty to your nations, and tonight a room of grateful patriots says thank you.”

Frischling, in the words of the Times, ‘hired a computer expert with a PhD in machine punctuation to unteach Margaret normal grammar and syntax – and teach it to decipher Trump-speak instead.’ “It was still trying to punctuate it like it was English, versus trying to punctuate it like it was Trump,” he said.

Now able to transcribe the President’s speeches unhindered, Margaret not only keeps a database of these remarks but also analyses behavioural patterns. According to Frischling, among the behaviours Margaret has spotted are Trump appearing ‘more comfortable’ telling falsehoods when talking quickly, and signs of when he is genuinely angry, as opposed to putting it on for show.
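Speaking rate is one of the simpler behavioural signals to extract once a transcript carries word-level timestamps. The sketch below, in Python, is purely illustrative – the data format, window size, and threshold are assumptions for the example, not details of how Margaret actually works.

```python
# Illustrative sketch: flag fast-talking stretches in a timestamped transcript.
# The (word, start_sec, end_sec) format and the 220 wpm threshold are assumptions
# for demonstration, not details of Frischling's Margaret system.
from typing import List, Tuple

Word = Tuple[str, float, float]  # (token, start time in seconds, end time in seconds)

def words_per_minute(words: List[Word], window_sec: float = 30.0) -> List[Tuple[float, float]]:
    """Return (window start time, words-per-minute) for consecutive windows."""
    if not words:
        return []
    rates = []
    t, t_end = words[0][1], words[-1][2]
    while t < t_end:
        in_window = [w for w in words if t <= w[1] < t + window_sec]
        rates.append((t, len(in_window) * 60.0 / window_sec))
        t += window_sec
    return rates

def fast_stretches(words: List[Word], threshold_wpm: float = 220.0):
    """Windows where the speaker exceeds the (assumed) fast-talking threshold."""
    return [(t, r) for t, r in words_per_minute(words) if r > threshold_wpm]
```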

One example came at the White House coronavirus briefing on April 23, where Trump – against all medical advice – suggested patients could be injected with disinfectant to kill the virus. When a Washington Post reporter questioned this suggestion, Trump’s response, according to Margaret, was borne out of genuine anger. Yet Frischling added that for many of the President’s more pre-meditated attacks on ‘fake news’ – of which the Washington Post has been a frequent target – there is little palpable anger on show.

As this publication has previously reported, the lines between real and fake news continue to be blurred – with the President himself an obvious target. In January last year, a ‘deepfake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network, with an employee later sacked for the error. In February of that year, the President signed an executive order, titled ‘Maintaining American Leadership in Artificial Intelligence’, exploring five key principles.

President Trump is by no means the only world leader whose sentence construction could be considered off-beat. As reported in the Times (h/t @arusbridger) last week, UK Prime Minister Boris Johnson answered a question on coronavirus testing from Keir Starmer, the leader of the opposition, thus:

“As I think is readily apparent, Mr Speaker, to everybody who has studied the, er, the situation, and I think the scientists would, er, confirm, the difficulty in mid-March was that, er, the, er, tracing capacity that we had – it had been useful… in the containment phase of the epidemic er, that capacity was no longer useful or relevant since the, er, transmission from individuals within the UK um meant that it exceeded our capacity.

“As we get the new cases down, er, we will have a team that will genuinely be able to track and, er, trace hundreds of thousands of people across the country, and thereby to drive down the epidemic. And so, er, I mean, to put it in a nutshell, it is easier, er, to do now – now that we have built up the team on the, on the way out – than it was as er, the epidemic took off.”

One can only imagine what Margaret would have made of that transcription job.

Photo by Charles Deluvio on Unsplash


Deepfake app puts your face on GIFs while limiting data collection
https://news.deepgeniusai.com/2020/01/14/deepfake-app-face-gifs-data-collection/ (14 January 2020)

A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology.

In the name of research, here’s one I made earlier:

Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name.
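For readers unfamiliar with the term, a GAN pits a generator network against a discriminator that tries to tell generated images from real ones; training pushes the generator to produce ever more convincing fakes. Below is a generic, minimal sketch of that adversarial training step in PyTorch – it is not RefaceAI’s model, and the network shapes and image size are placeholders.

```python
# Generic GAN training step (illustrative only; not RefaceAI's architecture).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32  # placeholder sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),   # fake image with values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                    # real/fake logit
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor):
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator: label real images as 1, generated images as 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call the fakes real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example: one step on random "real" images.
train_step(torch.rand(16, img_dim) * 2 - 1)
```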

RefaceAI was previously used in a face swapping app called Reflect. Elon Musk once used Reflect to put his face on Dwayne Johnson’s body. 

The app is a lot of fun, but – after concerns about viral Russian app FaceApp – many will be wondering just how much data is being collected in return.

Doublicat’s developers are upfront about asking for consent to store your photos upon first opening the app, and this is confirmed in their privacy policy: “We may collect the photos, that you take with your camera while using our application.”

However, Doublicat says that photos are only stored on their server for 24 hours before they’re deleted. “The rest of the time your photos used in Doublicat application are stored locally on your mobile device and may be removed any time by either deleting these photos from your mobile device’s file system.”

The app also collects data about facial features, but only the vector representations of each person’s face are stored. Doublicat assures that the facial recognition data collected “is not biometric data” and is deleted from its servers within 30 calendar days.

“In no way will Doublicat use your uploaded content for face recognition as Doublicat does not introduce the face recognition technologies or other technical means for processing biometric data for the unique identification or authentication of a user.”
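A ‘vector representation’ in this context is a fixed-length embedding computed from the photo; once those numbers are kept, the photo itself can be discarded. The snippet below illustrates the general idea with a stand-in embedding function – it is not Doublicat’s actual pipeline.

```python
# Illustrative only: storing a face "vector representation" rather than the photo.
# The embedding function is a stand-in, not Doublicat's model.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding network: image -> fixed-length vector."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.standard_normal(128)  # e.g. a 128-dimensional embedding

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

photo = np.random.rand(256, 256, 3)   # uploaded photo
vector = embed_face(photo)            # keep only this vector...
del photo                             # ...and delete the photo itself

# Later uploads can be compared against the stored vector alone.
new_vector = embed_face(np.random.rand(256, 256, 3))
print(cosine_similarity(vector, new_vector))
```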

The amount of data Doublicat can collect is limited compared to some alternatives. Apps such as Zao require users to 3D model their face whereas Doublicat only takes a front-facing picture.

RefaceAI is now looking to launch an app which can make deepfake videos rather than just GIFs. The move is likely to be controversial given the concerns around deepfakes and how such videos could be used for things such as political manipulation.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake in this case) could be used to cause reputational damage and even swing votes.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections. One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”.

Earlier this month, Facebook announced new policies around deepfakes. Any deepfake video that is designed to be misleading will be banned. The problem with the rules is they don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to anyone wanting a firm stance against manipulation.

Doublicat is available for Android and iOS.


Applause’s new AI solution helps tackle bias and sources data at scale
https://news.deepgeniusai.com/2019/11/06/applause-ai-tackle-bias-sources-data-scale/ (6 November 2019)

Testing specialists Applause have debuted an AI solution promising to help tackle algorithmic bias while providing the scale of data needed for robust training.

Applause has built a vast global community of testers for its app testing solution, which is trusted by brands including Google, Uber, and PayPal. The company is leveraging this rare asset to help overcome some of the biggest hurdles facing AI development.

AI News spoke with Kristin Simonini, VP of Product at Applause, about the company’s new solution and what it means for the industry ahead of her keynote at AI Expo North America later this month.

“Our customers have been needing additional support from us in the area of data collection to support their AI developments, train their system, and then test the functionality,” explains Simonini. “That latter part being more in-line with what they traditionally expect from us.”

Applause has worked predominantly with companies in the voice space, but it is increasingly expanding into areas such as gathering and labelling images and running documents through OCR.

This existing breadth of experience in areas where AI is most commonly applied today puts the company and its testers in a good position to offer truly useful feedback on where improvements can be made.

Specifically, Applause’s new solution operates across five unique types of AI engagements:

  • Voice: Source utterances to train voice-enabled devices, and test those devices to ensure they understand and respond accurately.
  • OCR (Optical Character Recognition): Provide documents and corresponding text to train algorithms to recognize text, and compare printed docs and the recognized text for accuracy.
  • Image Recognition: Deliver photos taken of predefined objects and locations, and ensure objects are being recognized and identified correctly.
  • Biometrics: Source biometric inputs like faces and fingerprints, and test whether those inputs result in an experience that’s easy to use and actually works.
  • Chatbots: Give sample questions and varying intents for chatbots to answer, and interact with chatbots to ensure they understand and respond accurately in a human-like way.

“We have this ready global community that’s in a position to pull together whatever information an organisation might be looking for, do it at scale, and do it with that breadth and depth – in terms of locations, genders, races, devices, and all types of conditions – that make it possible to pull in a very diverse set of data to train an AI system.”

Some examples Simonini provides of the types of training data which Applause’s global testers can supply includes voice utterances, specific documents, and images which meet set criteria like “street corners” or “cats”. A lack of such niche data sets with the diversity necessary is one of the biggest obstacles faced today and one which Applause hopes to help overcome.

A significant responsibility

Everyone involved in developing emerging technologies carries a significant responsibility. AI is particularly sensitive because everyone knows it will have a huge impact across most parts of societies around the world, but no-one can really predict how.

How many jobs will AI replace? Will it be used for killer robots? Will it make decisions on whether to launch a missile? To what extent will facial recognition be used across society? These are important questions to which no-one can give a guaranteed answer, but they are certainly on the minds of a public that’s grown up around things like 1984 and Terminator.

One of the main concerns about AI is bias. Fantastic work by the likes of the Algorithmic Justice League has uncovered gross disparities in the effectiveness of facial recognition algorithms depending on the race and gender of each individual. For example, IBM’s facial recognition algorithm was 99.7 percent accurate when used on lighter-skinned males compared to just 65.3 percent on darker-skinned females.

Simonini highlights another study she read recently where voice accuracy for white males was over 90 percent. However, for African-American females, it was more like 30 percent.
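Disparities like these are straightforward to surface once a model’s predictions are broken down by group. Here is a minimal sketch of that check, using entirely made-up data rather than the figures from the studies above:

```python
# Minimal per-group accuracy check (made-up data, illustrative only).
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} and the largest gap between any two groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Hypothetical labels, predictions, and demographic groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(accuracy_by_group(y_true, y_pred, groups))  # accuracy per group plus the gap
```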

Addressing such disparities is not only necessary to prevent things such as inadvertently automating racial profiling or giving some parts of society an advantage over others, but also to allow AI to reach its full potential.

While there are many concerns, AI has a huge amount of power for good as long as it’s developed responsibly. AI can drive efficiencies to reduce our environmental impact, free up more time to spend with loved ones, and radically improve the lives of people with disabilities.

A failure of companies to take responsibility for their developments will lead to overregulation, and overregulation leads to reduced innovation. We asked Simonini whether she believes robust testing will reduce the likelihood of overregulation.

“I think it’s certainly improved the situation. I think that there’s always going to probably be some situations where people attempt to regulate, but if you can really show that effort has been put forward to get to a high level of accuracy and depth then I think it would be less likely.”

Human testing remains essential

Applause is not the only company working to reduce bias in algorithms. IBM, for example, has a toolkit called AI Fairness 360 which can scan other algorithms for signs of bias. We asked Simonini why Applause believes human testing is still necessary.

“Humans are unpredictable in how they’re going to react to something and in what manner they’re going to do it, how they choose to engage with these devices and applications,” comments Simonini. “We haven’t yet seen an advent of being able to effectively do that without the human element.”

An often highlighted challenge with voice recognition is the wide variety of languages spoken and their regional dialects. Many American voice recognition systems even struggle with my accent from the South West of England.

Simonini adds in another consideration about slang words and the need for voice services to keep up-to-date with changing vocabularies.

“Teenagers today like to, when something is hot or cool, say it’s “fire” [“lit” I believe is another one, just to prove I’m still down with the kids],” explains Simonini. “We were able to get these devices into homes and really try to understand some of those nuances.”

Simonini then further explains the challenge of understanding the context of these nuances. In her “fire” example, there’s a very clear need to understand when there’s a literal fire and when someone is just saying that something is cool.

“How do you distinguish between this being a real emergency? My volume and my tone and everything else about how I’ve used that same voice command is going to be different.”

The growth of AI apps and services

Applause established its business in traditional app testing. Given the expected growth in AI apps and services, we asked Simonini whether Applause believes its AI testing solution will become as big – or perhaps even bigger – than its current app testing business.

“We do talk about that; you know, how fast is this going to grow?” says Simonini. “I don’t want to keep talking about voice, but if you look statistically at the growth of the voice market vis-à-vis the growth and adoption of mobile; it’s happening at a much faster pace.”

“I think that it’s going to be a growing portion of our business but I don’t think it necessarily is going to replace anything given that those channels [such as mobile and desktop apps] will still be alive and complementary to one another.”

Simonini will be speaking at AI Expo North America on November 13th in a keynote titled Why The Human Element Remains Essential In Applied AI. We asked what attendees can expect from her talk.

“The angle that we chose to sort of speak about is really this intersection of the human and the AI and why we – given that it’s the business we’re in and what we see day-in, day-out – don’t believe that it becomes the replacement of but how it can work and complement one another.”

“It’s really a bit of where we landed when we went out to figure out whether you can replace an army of people with an army of robots and get the same results. And basically that no, there are still very human-focused needs from a testing perspective.”


Nvidia explains how ‘true adoption’ of AI is making an impact
https://news.deepgeniusai.com/2019/04/26/nvidia-how-adoption-ai-impact/ (26 April 2019)

Nvidia Senior Director of Enterprise David Hogan spoke at this year’s AI Expo about how the company is seeing artificial intelligence adoption making an impact.

In the keynote session, titled ‘What is the true adoption of AI’, Hogan provided real-world examples of how the technology is being used and enabled by Nvidia’s GPUs. But first, he highlighted the momentum we’re seeing in AI.

“Many governments have announced investments in AI and how they’re going to position themselves,” comments Hogan. “Countries around the world are starting to invest in very large infrastructures.”

The world’s most powerful supercomputers are powered by Nvidia GPUs. ORNL Summit, the current fastest, uses an incredible 27,648 GPUs to deliver over 144 petaflops of performance. Vast amounts of computational power are needed for AI, which puts Nvidia in a great position to capitalise.

“The compute demands of AI are huge and beyond what anybody has seen within a standard enterprise environment before,” says Hogan. “You cannot train a neural network on a standard CPU cluster.”

Nvidia started off by creating graphics cards for gaming. While that’s still a big part of what the company does, Hogan says the company pivoted towards AI back in 2012.

A great deal of the presentation was spent on autonomous vehicles, which is unsurprising given the demand and Nvidia’s expertise in the field. Hogan highlighted that you simply cannot train driverless cars using CPUs, and provided a comparison in cost, size, and power consumption.

“A new type of computing is starting to evolve based around GPU architecture called ‘dense computing’ – the ability to build systems that are highly-powerful, huge amounts of computational scale, but actually contained within a very small configuration,” explains Hogan.

Autonomous car manufacturers need to train on petabytes of data per day, iterate their models, and redeploy them in order to get those vehicles to market.

Nvidia has a machine called the DGX-2 which delivers two petaflops of performance. “That is one server that’s equivalent to 800 traditional servers in one box.”

Nvidia has a total of 370 autonomous vehicle partners, which Hogan says covers most of the world’s automotive brands. Many of these are investing heavily and rushing to deliver at least ‘Level 2’ driverless cars in the 2020-21 timeframe.

“We have a fleet of autonomous cars,” says Hogan. “It’s not our intention to compete with Uber, Daimler or BMW, but the best way of us helping our customers enable that is by trying it ourselves.”

“All the work our customers do we’ve also done ourselves so we understand the challenges and what it takes to do this.”

Real-world impact

Hogan notes how AI is a “horizontal capability that sits across organisations” and is “an enabler for many, many things”. It’s certainly a challenge to come up with examples of industries that cannot be improved to some degree through AI.

Following autonomous cars, Nvidia sees the next mass scaling of AI happening in healthcare (which our dear readers already know, of course.)

Hogan provides the natural example of the UK’s National Health Service (NHS) which has vast amounts of patient data. Bringing this data together and having an AI make sense of it can unlock valuable information to improve healthcare.

AIs which can make sense of medical imaging on a par with, or even better than, some doctors are starting to become available. However, the scans themselves are still 2D images that are alien to most people.

Hogan showed how AI is able to turn 2D imagery into 3D models of organs which are easier to understand – for example, a radiograph of a heart being turned into a 3D model.

We’ve also heard about how AI is helping the field of genomics, assisting in finding cures for human diseases. Nvidia GPUs are used in Oxford Nanopore’s MinIT handheld, which enables DNA sequencing of things such as plants to be conducted in the field.

In a blog post last year, Nvidia explained how MinIT uses AI for basecalling:

“Nanopore sequencing measures tiny ionic currents that pass through nanoscale holes called nanopores. It detects signal changes when DNA passes through these holes. This captured signal produces raw data that requires signal processing to determine the order of DNA bases – known as the ‘sequence.’ This is called basecalling.

This analysis problem is a perfect match for AI, specifically recurrent neural networks. Compared with previous methods, RNNs allow for more accuracy in time-series data, which Oxford Nanopore’s sequencers are known for.”
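To make the ‘signal in, bases out’ idea concrete, here is a toy recurrent basecaller in PyTorch. It is only a sketch of the general approach described in the quote – the window length, network size, and CTC-style output alphabet are assumptions, not Oxford Nanopore’s production model.

```python
# Toy recurrent basecaller sketch (not Oxford Nanopore's production model).
# Raw current measurements go in; per-timestep probabilities over DNA bases come out.
import torch
import torch.nn as nn

class ToyBasecaller(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        # 5 outputs: A, C, G, T plus a CTC "blank" symbol.
        self.head = nn.Linear(2 * hidden, 5)

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        # signal: (batch, time) raw current values -> (batch, time, 1)
        features, _ = self.rnn(signal.unsqueeze(-1))
        return self.head(features).log_softmax(dim=-1)  # CTC-style log-probs

model = ToyBasecaller()
raw_signal = torch.randn(2, 400)          # two chunks of 400 current samples
log_probs = model(raw_signal)             # shape (2, 400, 5)

# Training would align these outputs to known base sequences with a CTC loss.
ctc = nn.CTCLoss(blank=4)
targets = torch.randint(0, 4, (2, 50))    # dummy base sequences (A=0 .. T=3)
loss = ctc(log_probs.permute(1, 0, 2),    # CTC expects (time, batch, classes)
           targets,
           torch.full((2,), 400, dtype=torch.long),
           torch.full((2,), 50, dtype=torch.long))
```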

Hogan notes how, in many respects, eCommerce paved the way for AI. Data collected for things such as advertising helps to train neural networks. In addition, eCommerce firms have consistently aimed to improve and optimise their algorithms for things such as recommendations to attract customers.

“All that data, all that Facebook information that we’ve created, has enabled us to train networks,” notes Hogan.

Brick-and-mortar retailers are also being improved by AI. Hogan gives the example of Walmart which is using AI to improve their demand forecasting and keep supply chains running smoothly.

In real time, Walmart is able to see where potential supply challenges are and take action to avoid or minimise them. The company is even able to see where weather conditions may cause issues.

Hogan says this has saved Walmart tens of billions of dollars. “This is just one example of how AI is making an impact today not just on the bottom line but also the overall performance of the business”.

Accenture is now detecting around 200 million cyber threats per day, claims Hogan. He notes how protecting against such a vast number of evolving threats is simply not possible without AI.

“It’s impossible to address that, look at it, prioritise it, and action it in any other way than applying AI,” comments Hogan. “AI is based around patterns – things that are different – and when to act and when not to.”

While often we hear about what AI could one day be used for, Hogan’s presentation was a fascinating insight into how Nvidia is seeing it making an impact today or in the not-so-distant future.

Toy around with deep learning in Nvidia’s AI Playground
https://news.deepgeniusai.com/2019/03/19/deep-learning-nvidia-ai-playground/ (19 March 2019)

Nvidia launched an online space called AI Playground on Monday which allows people to mess around with some deep learning experiences.

AI Playground is designed to be accessible in order to help anyone get started and learn about the potential of artificial intelligence. Who knows, it may even inspire some to enter the field and help to address the huge skill shortage.

The experience currently features three demos:

  • Image Inpainting
  • Artistic Style Transfer
  • Photorealistic Image Synthesis

As you probably guessed from their names, all of the current demos are based around imagery.

Image Inpainting allows the user to upload their own image and edit it with powerful AI tools. Content can be removed and replaced.

Artistic Style Transfer is fairly self-explanatory. The style of one uploaded image can be applied to another. This will help to satisfy the curiosity of anyone who wondered how it would look if Leonardo da Vinci had painted them instead of Lisa Gherardini. A convolutional neural network trained on 80,000 images of people, scenery, animals, and moving objects underpins this demo.
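At its core, neural style transfer of this kind compares feature statistics: content is matched feature map by feature map, while style is matched through Gram matrices of those feature maps. The sketch below shows just those two losses in PyTorch, assuming feature maps have already been extracted from a pretrained CNN such as VGG – it is not Nvidia’s demo code, and the loss weighting is arbitrary.

```python
# Style-transfer building blocks (illustrative; assumes feature maps already
# extracted from a pretrained CNN such as VGG – not Nvidia's demo code).
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """features: (channels, height, width) -> (channels, channels) style statistics."""
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return flat @ flat.t() / (c * h * w)

def content_loss(generated_feat: torch.Tensor, content_feat: torch.Tensor) -> torch.Tensor:
    # Match the raw feature maps of the generated image to the content image.
    return torch.mean((generated_feat - content_feat) ** 2)

def style_loss(generated_feat: torch.Tensor, style_feat: torch.Tensor) -> torch.Tensor:
    # Match second-order feature statistics (Gram matrices) to the style image.
    return torch.mean((gram_matrix(generated_feat) - gram_matrix(style_feat)) ** 2)

# Hypothetical feature maps from one CNN layer (channels x height x width).
gen, content, style = (torch.randn(64, 32, 32) for _ in range(3))
total = content_loss(gen, content) + 1e3 * style_loss(gen, style)  # weights are arbitrary
```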

Finally, Photorealistic Image Synthesis. This demo entirely fabricates photorealistic images and environments with eerie detail.

Bryan Catanzaro, VP of applied deep learning research at Nvidia, said in a statement:

“Research papers have new ideas in them and are really cool, but they’re directed at specialised audiences. We’re trying to make our research more accessible.

The AI Playground allows everyone to interact with our research and have fun with it.”

Nvidia plans to add more demos to its AI Playground over time.

Microsoft and MIT develop AI to fix driverless car ‘blind spots’
https://news.deepgeniusai.com/2019/01/28/microsoft-mit-develop-ai-driverless-car/ (28 January 2019)

Microsoft and MIT have partnered on a project to fix so-called virtual ‘blind spots’ which lead driverless cars to make errors.

Roads, especially while shared with human drivers, are unpredictable places. Training a self-driving car for every possible situation is a monumental task.

The AI developed by Microsoft and MIT compares the action taken by humans in a given scenario to what the driverless car’s own AI would do. Where the human decision is more optimal, the vehicle’s behaviour is updated for similar future occurrences.

Ramya Ramakrishnan, an author of the report, says:

“The model helps autonomous systems better know what they don’t know.

Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents.

The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

For example, if an emergency vehicle is approaching then a human driver should know to let them pass if safe to do so. These situations can get complex dependent on the surroundings.

On a country road, allowing the vehicle to pass could mean edging onto the grass. The last thing you, or the emergency services, want a driverless car to do is to handle all country roads the same and swerve off a cliff edge.

Humans can either ‘demonstrate’ the correct approach in the real world, or ‘correct’ by sitting at the wheel and taking over if the car’s actions are incorrect. A list of situations is compiled, along with labels indicating whether the car’s actions were deemed acceptable or unacceptable.

The researchers have ensured a driverless car AI does not treat an action as 100 percent safe just because its results have been so far. Using the Dawid-Skene machine learning algorithm, the AI uses probability calculations to spot patterns and determine whether something is truly safe or still leaves the potential for error.
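The researchers aggregate noisy feedback with Dawid-Skene; a much simpler way to see the underlying idea is to keep acceptable/unacceptable counts per situation-action pair and treat anything short of near-certainty as a potential blind spot. The sketch below is that simplification, not the paper’s actual implementation.

```python
# Simplified blind-spot bookkeeping (illustrative; the actual research aggregates
# noisy labels with the Dawid-Skene algorithm rather than raw counts).
from collections import defaultdict

feedback = defaultdict(lambda: {"acceptable": 0, "unacceptable": 0})

def record(situation: str, action: str, acceptable: bool) -> None:
    feedback[(situation, action)]["acceptable" if acceptable else "unacceptable"] += 1

def safety_estimate(situation: str, action: str) -> float:
    """Posterior-mean estimate that the action is safe (Beta(1,1) prior)."""
    counts = feedback[(situation, action)]
    return (counts["acceptable"] + 1) / (counts["acceptable"] + counts["unacceptable"] + 2)

def blind_spots(threshold: float = 0.95):
    """Situation-action pairs where the car's behaviour is not confidently safe."""
    return [(k, safety_estimate(*k)) for k in feedback if safety_estimate(*k) < threshold]

# Example: an emergency vehicle approaching on a narrow country road.
record("ambulance_behind,narrow_road", "keep_lane", acceptable=False)
record("ambulance_behind,narrow_road", "pull_onto_verge", acceptable=True)
print(blind_spots())
```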

We’re yet to reach a point where the technology is ready for deployment. Thus far, the scientists have only tested it with video games. It offers a lot of promise, however, to help ensure driverless car AIs can one day safely respond to all situations.


Trump speech ‘DeepFake’ shows a present AI threat
https://news.deepgeniusai.com/2019/01/14/trump-speech-deepfake-ai-threat/ (14 January 2019)

A so-called ‘DeepFake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network, showing a very present AI threat.

The station, Q13, broadcast a doctored Trump speech in which he somehow appeared even more orange and pulled amusing faces.

You can see a side-by-side comparison with the original below:

https://www.youtube.com/watch?v=UZLs11uSg-A&feature=youtu.be

Following the broadcast, a Q13 employee was sacked. It’s unclear if the worker created the clip or whether it was just allowed to air.

The video could be the first DeepFake to be televised, but it won’t be the last. Social media provides even less filtration and enables fake clips to spread with ease.

We’ve heard much about sophisticated disinformation campaigns. At one point, the US was arguably the most prominent creator of such campaigns to influence foreign decisions.

Russia, in particular, has been linked to vast disinformation campaigns. These have primarily targeted social media with things such as their infamous Twitter bots.

According to Pew Research, just five percent of Americans have ‘a lot of trust’ in the information they get from social media. This is much lower than their trust in national and local news organisations.

It’s not difficult to imagine an explosion in doctored videos that appear like they’re coming from trusted outlets. Combining the reach of social media with the increased trust Americans have in traditional news organisations is a dangerous concept.

While the Trump video appears to be a bit of fun, the next could be used to influence an election or big policy decision. It’s a clear example of how AI is already creating new threats.

Talent has begun leaking from DeepMind in recent months
https://news.deepgeniusai.com/2019/01/08/talent-deepmind-recent-months/ (8 January 2019)

If DeepMind is on your CV, you could walk into most tech companies and be offered a job on the spot with a six-figure salary. The firm is full of in-demand talent and its CEO once bragged that no employees had ever left.

Speaking to The Guardian in 2016, DeepMind CEO Demis Hassabis said:

“We are able to literally get the best scientists from each country each year. So we’ll have, say, the person that won the Physics Olympiad in Poland, the person who got the top maths PhD of the year in France.

We’ve got more ideas than we’ve got researchers, but at the same time, there are more great people coming to our door than we can take on.”

In recent months, however, that renowned talent has started leaking out. Over the past 24 hours alone, two leading DeepMinders have announced they’re leaving the company.

Edward Grefenstette, a leading research scientist, announced he’s joining Facebook’s AI facility just down the road from DeepMind’s headquarters in London. Grefenstette co-founded Dark Blue Labs, a deep learning startup that DeepMind acquired in 2014.

Jack Kelly, a research engineer, has decided to leave DeepMind and launch a non-profit lab focused on fixing climate change. Kelly undertook the Computer Science MSc at Imperial College London with the explicit aim of using AI to mitigate climate change.


Two such high-profile employees announcing their departure within hours of each other, from a firm that once boasted 100 percent staff retention, leads one to wonder whether there are deeper problems.

Outside the company, there are concerns about Google’s acquisition of DeepMind. These were reignited at the end of last year when Google absorbed DeepMind Health’s Streams app.

DeepMind Health was already controversial. In 2017, the UK government ruled the company had gained inappropriate access to medical data from 1.6 million patients when developing Streams.

Mustafa Suleyman, Head of Applied AI at DeepMind, wrote in a blog post:

“DeepMind operates autonomously from Google, and we’ve been clear from the outset that at no stage will patient data ever be linked or associated with Google accounts, products or services.”

Critics say this promise was broken when Streams was grabbed by Google.

The situation is sure to make potential future partners question whether to share data with DeepMind. Healthcare is one area set to benefit most from AI, yet it’s one that DeepMind could now struggle to find allies for.

Some of these concerns are sure to be held inside the company’s walls, but to what extent – and whether they’re resignation worthy – is unclear.

A pattern that has emerged from speaking to DeepMind employees is the mutual respect shared between researchers. At least one former employee maintains that DeepMind remains a great place to work.

/">AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, , & .
