ai expo – AI News (https://news.deepgeniusai.com)

LG ThinQ: Our experience with AI so far and what's next for the industry
18 November 2019 – https://news.deepgeniusai.com/2019/11/18/lg-thinq-experience-ai-whats-next-industry/

AI News spoke with LG corporate vice president Samuel Chang about ThinQ, the company’s brand for products and services incorporating AI.

Chang played a major role in this year’s AI & Big Data Expo in Santa Clara last week, taking part in both a solo presentation on “Process Automation from IoT Data” and a panel discussion on “Data and the Customer”.

Following the event, AI News decided to catch up with Chang to discuss LG’s current experience in artificial intelligence and where the industry is heading next.

How important is AI becoming to differentiate from competing products?

With artificial intelligence gaining traction across a range of industries, it is becoming an important way to differentiate competing products. LG's "Evolve, Connect, Open" approach to AI and ThinQ offers enhanced convenience for users with diverse needs and tech preferences in their homes through personalised, proactive, easy, and efficient solutions. It also allows the company to offer the most expansive list of smart integrations on the market – whether with Google Assistant, Amazon Alexa, or future partnerships.

Specifically, with our open approach, LG is progressing toward incorporating IoT technology throughout the entire journey – including product design and concept, the launch of new helpful tools, and proactive OTA (over-the-air) updates. LG is working with world-class cloud partners – Amazon, Google, and Microsoft – and combining their services with in-house development to build, customise, and operate our IoT platforms.

AI is key to differentiating your products on the market by offering more robust opportunities and offerings for the larger consumer base.

What challenges has LG faced moving into AI and how did it overcome them?

The biggest challenge we faced as a whole moving into AI was creating truly meaningful products that help our customer base with their daily routines and truly improve their lives. By listening to what they want, we were able to produce a line of helpful innovations that do just that.

LG ThinQ products use voice control to interact with users, as well as sensor data and diverse features such as product recognition and learning engine technologies to enhance their performance.

Do you think the smart home is meeting expectations?  

The smart home is a growing trend within the AI industry thanks to the influx of helpful tools introduced across the field. While many advancements have been made in such a short timeframe, we are still in the first inning of this technology adoption.

LG is focused on delivering better consumer benefits with advanced technology that will continue to improve over time. We aim to make the home more proactive and personalised for users based on the behaviour data collected across the products serving their many needs. Knowing more about our users and their lives leads to enhanced performance and, ultimately, to a better life.

Will distributed ledger technologies be important for smart devices and are there any plans at LG ThinQ to use them? 

While we are actively looking at distributed ledgers, including blockchain technology, there is no specific timetable for implementation. We believe this would be useful to enable an open ecosystem where various companies and contributors can participate. 

Is there a specific area of the market you and the LG ThinQ team are excited about?

LG is committed to producing helpful innovations for consumers. One example is our new AI-infused customer service solution, Proactive Customer Care. It’s a new paradigm in customer satisfaction, one that ensures greater value and peace-of-mind for owners of our smart home appliances.  

When it rolls out in the United States next year, Proactive Customer Care will leverage ThinQ AI to provide personalised support, alerting users to issues with their LG appliance, and offering helpful tips and solutions to maximise performance and long product life. Using the latest in AI technology, LG Proactive Customer Care is designed to inform LG smart appliance owners of potential problems before they even occur – it can expedite technician visits, if needed, and offer guidance on how to keep LG’s products functioning optimally.

What did you speak about during this year’s AI Expo North America?

I was thrilled to take part in my first AI & Big Data Expo North America in Santa Clara. As part of my activities onsite, I welcomed interested attendees to the panel discussion on "Data and The Customer", where I was joined by fellow industry insiders to discuss the relationship between the data companies collect and their customers.

I also conducted a solo presentation on the topic of “Process Automation from IoT Data.” I helped to define process automation for IoT data and how it is key to use not only the right platform for your business, but also one that allows true process integration and automation for optimisation. This is essential to our work at LG and helps shape our future product launches.

Applause's new AI solution helps tackle bias and sources data at scale
6 November 2019 – https://news.deepgeniusai.com/2019/11/06/applause-ai-tackle-bias-sources-data-scale/

Testing specialists Applause have debuted an AI solution promising to help tackle algorithmic bias while providing the scale of data needed for robust training.

Applause has built a vast global community of testers for its app testing solution, which is trusted by brands including Google, Uber, and PayPal. The company is leveraging this rare asset to help overcome some of the biggest hurdles facing AI development.

AI News spoke with Kristin Simonini, VP of Product at Applause, about the company’s new solution and what it means for the industry ahead of her keynote at AI Expo North America later this month.

“Our customers have been needing additional support from us in the area of data collection to support their AI developments, train their system, and then test the functionality,” explains Simonini. “That latter part being more in-line with what they traditionally expect from us.”

Applause has worked predominantly with companies in the voice space, but it is also increasingly expanding into areas such as gathering and labelling images and running documents through OCR.

This existing breadth of experience in areas where AI is most commonly applied today puts the company and its testers in a good position to offer truly useful feedback on where improvements can be made.

Specifically, Applause’s new solution operates across five unique types of AI engagements:

  • Voice: Source utterances to train voice-enabled devices, and test those devices to ensure they understand and respond accurately.
  • OCR (Optical Character Recognition): Provide documents and corresponding text to train algorithms to recognize text, and compare printed docs with the recognized text for accuracy.
  • Image Recognition: Deliver photos taken of predefined objects and locations, and ensure objects are being recognized and identified correctly.
  • Biometrics: Source biometric inputs like faces and fingerprints, and test whether those inputs result in an experience that's easy to use and actually works.
  • Chatbots: Give sample questions and varying intents for chatbots to answer, and interact with chatbots to ensure they understand and respond accurately in a human-like way.

“We have this ready global community that’s in a position to pull together whatever information an organisation might be looking for, do it at scale, and do it with that breadth and depth – in terms of locations, genders, races, devices, and all types of conditions – that make it possible to pull in a very diverse set of data to train an AI system.”

Some examples Simonini provides of the training data which Applause's global testers can supply include voice utterances, specific documents, and images which meet set criteria like "street corners" or "cats". A lack of such niche data sets with the necessary diversity is one of the biggest obstacles faced today and one which Applause hopes to help overcome.

A significant responsibility

Everyone involved in developing emerging technologies carries a significant responsibility. AI is particularly sensitive because everyone knows it will have a huge impact across most parts of societies around the world, but no-one can really predict how.

How many jobs will AI replace? Will it be used for killer robots? Will it make decisions on whether to launch a missile? To what extent will facial recognition be used across society? These are important questions that no-one can answer with certainty, but they are certainly on the minds of a public that has grown up around the likes of 1984 and Terminator.

One of the main concerns about AI is bias. Fantastic work by the likes of the Algorithmic Justice League has uncovered gross disparities between the effectiveness of facial recognition algorithms dependent on the race and gender of each individual. For example, IBM’s facial recognition algorithm was 99.7 percent accurate when used on lighter-skinned males compared to just 65.3 percent on darker-skinned females.

Simonini highlights another study she read recently where voice accuracy for white males was over 90 percent. However, for African-American females, it was more like 30 percent.

Addressing such disparities is not only necessary to prevent things such as inadvertently automating racial profiling or giving some parts of society an advantage over others, but also to allow AI to reach its full potential.

While there are many concerns, AI has a huge amount of power for good as long as it’s developed responsibly. AI can drive efficiencies to reduce our environmental impact, free up more time to spend with loved ones, and radically improve the lives of people with disabilities.

A failure of companies to take responsibility for their developments will lead to overregulation, and overregulation leads to reduced innovation. We asked Simonini whether she believes robust testing will reduce the likelihood of overregulation.

“I think it’s certainly improved the situation. I think that there’s always going to probably be some situations where people attempt to regulate, but if you can really show that effort has been put forward to get to a high level of accuracy and depth then I think it would be less likely.”

Human testing remains essential

Applause is not the only company working to reduce bias in algorithms. IBM, for example, has a toolkit called AI Fairness 360 which is used to scan machine learning models for signs of bias. We asked Simonini why Applause believes human testing is still necessary.

“Humans are unpredictable in how they’re going to react to something and in what manner they’re going to do it, how they choose to engage with these devices and applications,” comments Simonini. “We haven’t yet seen an advent of being able to effectively do that without the human element.”

An often highlighted challenge with voice recognition is the wide variety of languages spoken and their regional dialects. Many American voice recognition systems even struggle with my accent from the South West of England.

Simonini adds in another consideration about slang words and the need for voice services to keep up-to-date with changing vocabularies.

“Teenagers today like to, when something is hot or cool, say it’s “fire” [“lit” I believe is another one, just to prove I’m still down with the kids],” explains Simonini. “We were able to get these devices into homes and really try to understand some of those nuances.”

Simonini then further explains the challenge of understanding the context of these nuances. In her “fire” example, there’s a very clear need to understand when there’s a literal fire and when someone is just saying that something is cool.

“How do you distinguish between this being a real emergency? My volume and my tone and everything else about how I’ve used that same voice command is going to be different.”

The growth of AI apps and services

Applause established its business in traditional app testing. Given the expected growth in AI apps and services, we asked Simonini whether Applause believes its AI testing solution will become as big – or perhaps even bigger – than its current app testing business.

“We do talk about that; you know, how fast is this going to grow?” says Simonini. “I don’t want to keep talking about voice, but if you look statistically at the growth of the voice market vis-à-vis the growth and adoption of mobile; it’s happening at a much faster pace.”

“I think that it’s going to be a growing portion of our business but I don’t think it necessarily is going to replace anything given that those channels [such as mobile and desktop apps] will still be alive and complementary to one another.”

Simonini will be speaking at AI Expo North America on November 13th in a keynote titled Why The Human Element Remains Essential In Applied AI. We asked what attendees can expect from her talk.

“The angle that we chose to sort of speak about is really this intersection of the human and the AI and why we – given that it’s the business we’re in and what we see day-in, day-out – don’t believe that it becomes the replacement of but how it can work and complement one another.”

“It’s really a bit of where we landed when we went out to figure out whether you can replace an army of people with an army of robots and get the same results. And basically that no, there are still very human-focused needs from a testing perspective.”

AI Expo Global: Fairness and safety in artificial intelligence
1 May 2019 – https://news.deepgeniusai.com/2019/05/01/ai-expo-fairness-safety-artificial-intelligence/

AI News sat down with Faculty’s head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year’s AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightfully, people are becoming increasingly concerned about unfair and unsafe AIs. Human biases are seeping into algorithms which poses a very real danger that prejudices and oppression could become automated by accident.

AI News reported last week on research from New York University that found inequality in STEM-based careers is causing algorithms to work better or worse for some parts of society over others.

Similar findings, by Joy Buolamwini and her team from the Algorithmic Justice League, highlighted a disparity in the effectiveness of the world’s leading facial recognition systems between genders and skin tones.

In an ideal world, all parts of society would be equally represented tomorrow. In reality, that is going to take much longer to rectify, yet AI technologies are increasingly being used across society today.

AI News asked Feige for his perspective and how the impact of that problem can be reduced much sooner.

“I think the most important thing for organisations to do is to spend more time thinking about bias and on ensuring that every model they build is unbiased because a demographically disparate team can build non-disparate tech.”

Some companies are seeking to build AIs which can scan for bias in other algorithms. We asked Feige for his view on whether he believes this is an ideal solution.

“Definitely, I showed one in my talk. We have tests for: You give me a black box algorithm, I have no idea what your algorithm does – but I can give an input, calculate the output, and I can just tell you how biased it is according to various definitions of bias.”

“We can go even further and say: Let’s modify your algorithm and give it back so it’s unbiased according to one of those definitions.”
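
To make that idea concrete, below is a minimal sketch of a black-box bias test built around one common definition, demographic parity. It is not Faculty's actual tooling; the `predict` function, the protected-group labels, and the toy data are all hypothetical.

```python
import numpy as np

def demographic_parity_report(predict, X, protected):
    """Black-box bias check: compare positive-prediction rates across groups.

    predict   : callable mapping a feature matrix to 0/1 predictions
    X         : feature matrix, shape (n_samples, n_features)
    protected : binary array, 1 = member of the protected group
    """
    y_hat = np.asarray(predict(X))
    rate_protected = y_hat[protected == 1].mean()   # P(positive | protected)
    rate_rest = y_hat[protected == 0].mean()        # P(positive | rest)
    return {
        "rate_protected": rate_protected,
        "rate_rest": rate_rest,
        "parity_gap": rate_rest - rate_protected,         # 0 would be perfectly fair
        "disparate_impact": rate_protected / rate_rest,   # 1 would be perfectly fair
    }

# Toy demonstration with a hypothetical model. The first feature is correlated
# with the protected attribute, so a model that relies on it inherits the bias.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
protected = rng.integers(0, 2, size=10_000)
X[:, 0] -= protected                                   # correlated feature
hypothetical_model = lambda X: (X[:, 0] > 0.0).astype(int)

print(demographic_parity_report(hypothetical_model, X, protected))
```

A fuller test suite would repeat this check against other definitions of bias, such as equalised odds, since different definitions can disagree about the same model.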

In the Western world, we consider ourselves fairly liberal and protective of individual freedoms. China, potentially the world's leader in AI, has a questionable human rights record and is known for invasive surveillance and mass data collection. Meanwhile, Russia has a reputation for military aggression which some are concerned will drive its AI developments. Much of the Middle East, while not considered a leader in AI, is behind most of the world in areas such as women's and gay rights.

We asked Feige for his thoughts on whether these regional attitudes could find their way into AI developments.

“It’s an interesting question. It’s not that some regions will take the issue more or less seriously, they just have different … we’ll say preferences. I suspect China takes surveillance and facial recognition seriously – more seriously than the UK – but they do so in order to leverage it for mass surveillance, for population control.”

“The UK is trying to walk a fine line in efficiently using that very useful technology but not undermine personal privacy and freedom of individuals.”

During his talk, Feige made the point that he’s less concerned about AI biases due to the fact that – unlike humans – algorithms can be controlled.

“This is a real source of optimism for me, just because human decision-making is incredibly biased and everyone knows that.”

Feige asked the audience to raise a hand if they were concerned about AI bias which prompted around half to do so. The same question was asked regarding human bias and most of the room had their hand up.

“You can be precise with machine learning algorithms. You can say: ‘This is the objective I’m trying to achieve, I’m trying to maximise the probability of a candidate being successful at their job according to historical people in their role’. Or, you can be precise about the data the model is trained on and say: ‘I’m going to ignore data from before this time period because things were ‘different’ back then’”.

“Humans have fixed past experiences they can’t control. I can’t change the fact my mum did most of the cooking when I was growing up and I don’t know how it affects my decision-making.”

“I also can’t force myself to hire based on success in their jobs, which I try to do. It’s hard to know if really I just had a good conversation about the football with the candidate.”

Faculty, where Feige is head of research, is a European company based in London. With the European Commission recently publishing its guidelines on AI development, we took the opportunity to get his views on them.

“At a high-level, I think they’re great. They align quite a bit with how we think about these things. My biggest wish, whenever a body like that puts together some principles, is that there’s a big gap between that level of guidelines and what is useful for practitioners. Making those more precise is really important and those weren’t precise enough by my standards.”

“But not to just advocate putting the responsibility on policymakers. There’s also an onus on practitioners to try and articulate what bias looks like statistically and how that may apply to different problems, and then say: ‘Ok policy body, which of these is most relevant and can you now make those statements in this language’ and basically bridge the gap.”

Google recently created, then axed, a dedicated 'ethics board' for its AI developments. Such boards seem a good idea, but representing society can be a minefield. Google faced criticism for having a conservative figure with strong anti-LGBTQ and anti-immigrant views on the board.

Feige provided his take on whether companies should have an independent AI oversight board to ensure their developments are safe and ethical.

“To some degree, definitely. I suspect there are some cases you want that oversight board to be very external and like a regulator with a lot of overhead and a lot of teeth.”

“At Faculty, each one of our product teams has a shadow team – which has practically the same skill set – who monitor and oversee the work done by the project team to ensure it follows our internal set of values and guidelines.”

“I think the fundamental question here is how to do this in a productive way and ensure AI safety but that it doesn’t grind innovation to a halt. You can imagine where the UK has a really strong oversight stance and then some other country with much less regulatory oversight has companies which become large multinationals and operate in the UK anyway.”

Getting the balance right around regulation is difficult. Our sister publication IoT News interviewed a digital lawyer who raised the concern that Europe’s strict GDPR regulations will cause AI companies in the continent to fall behind their counterparts in Asia and America which have access to far more data.

Feige believes there is a danger of this happening, but European countries like the UK – whether it ultimately remains part of the EU and subject to regulations like GDPR or not – can use it as an opportunity to lead in AI safety.

He provides three reasons why the UK could achieve this:

  1. The UK has significant AI talent and renowned universities.
  2. It has a fairly unobjectionable record and respected government (Feige clarifies in comparison to how some countries view the US and China).
  3. The UK has a fairly robust existing regulatory infrastructure – especially in areas such as financial services.

Among the biggest concerns about AI continues to be around its impact on the workforce, particularly whether it will replace low-skilled workers. We wanted to know whether using legislation to protect human workers is a good idea.

“You could ask the question a hundred years ago: ‘Should automation come into agriculture because 90 percent of the population works in it?’ and now it’s almost all automated. I suspect individuals may be hurt by automation but their children will be better off by it.”

“I think any heavy-handed regulation will have unintended consequences and should be thought about well.”

Our discussion with Feige was insightful and provided optimism that AI can be developed safely and fairly, as long as there’s a will to do so.

You can watch our full interview with Feige from AI Expo Global 2019 below:

Nvidia explains how 'true adoption' of AI is making an impact
26 April 2019 – https://news.deepgeniusai.com/2019/04/26/nvidia-how-adoption-ai-impact/

Nvidia Senior Director of Enterprise David Hogan spoke at this year’s AI Expo about how the company is seeing artificial intelligence adoption making an impact.

In the keynote session, titled ‘What is the true adoption of AI’, Hogan provided real-world examples of how the technology is being used and enabled by Nvidia’s GPUs. But first, he highlighted the momentum we’re seeing in AI.

“Many governments have announced investments in AI and how they’re going to position themselves,” comments Hogan. “Countries around the world are starting to invest in very large infrastructures.”

The world's most powerful supercomputers are powered by Nvidia GPUs. ORNL's Summit, the current fastest, uses an incredible 27,648 GPUs to deliver over 144 petaflops of performance. Vast amounts of computational power are needed for AI, which puts Nvidia in a great position to capitalise.

“The compute demands of AI are huge and beyond what anybody has seen within a standard enterprise environment before,” says Hogan. “You cannot train a neural network on a standard CPU cluster.”

Nvidia started off by creating graphics cards for gaming. While that’s still a big part of what the company does, Hogan says the company pivoted towards AI back in 2012.

A great deal of the presentation was spent on autonomous vehicles, which is unsurprising given the demand and Nvidia's expertise in the field. Hogan highlights that you simply cannot train driverless cars using CPUs and provides a comparison of cost, size, and power consumption.

“A new type of computing is starting to evolve based around GPU architecture called ‘dense computing’ – the ability to build systems that are highly-powerful, huge amounts of computational scale, but actually contained within a very small configuration,” explains Hogan.

Autonomous car manufacturers need to train on petabytes of data per day, iterate on their models, and redeploy them in order to get those vehicles to market.

Nvidia has a machine called the DGX-2 which delivers two petaflops of performance. “That is one server that’s equivalent to 800 traditional servers in one box.”

Nvidia works with a total of 370 autonomous vehicle partners, which Hogan says covers most of the world's automotive brands. Many of these are investing heavily and rushing to deliver at least 'Level 2' driverless cars in the 2020-21 timeframe.

“We have a fleet of autonomous cars,” says Hogan. “It’s not our intention to compete with Uber, Daimler or BMW, but the best way of us helping our customers enable that is by trying it ourselves.”

“All the work our customers do we’ve also done ourselves so we understand the challenges and what it takes to do this.”

Real-world impact

Hogan notes how AI is a “horizontal capability that sits across organisations” and is “an enabler for many, many things”. It’s certainly a challenge to come up with examples of industries that cannot be improved to some degree through AI.

Following autonomous cars, Nvidia sees the next mass scaling of AI happening in healthcare (which our dear readers already know, of course.)

Hogan provides the natural example of the UK’s National Health Service (NHS) which has vast amounts of patient data. Bringing this data together and having an AI make sense of it can unlock valuable information to improve healthcare.

AIs which can make sense of medical imaging on a par with, or even better than, some doctors are starting to become available. However, the scans themselves are still 2D images that are alien to most people.

Hogan showed how AI is able to turn 2D imagery into 3D models of the organs which are easier to understand. In the GIF below, we see a radiograph of a heart being turned into a 3D model:

We've also heard about how AI is helping the field of genomics, assisting in finding cures for human diseases. Nvidia GPUs are used in Oxford Nanopore's MinIT handheld, which enables DNA sequencing of things such as plants to be conducted in the field.

In a blog post last year, Nvidia explained how MinIT uses AI for basecalling:

“Nanopore sequencing measures tiny ionic currents that pass through nanoscale holes called nanopores. It detects signal changes when DNA passes through these holes. This captured signal produces raw data that requires signal processing to determine the order of DNA bases – known as the ‘sequence.’ This is called basecalling.

This analysis problem is a perfect match for AI, specifically recurrent neural networks. Compared with previous methods, RNNs allow for more accuracy in time-series data, which Oxford Nanopore’s sequencers are known for.”
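
To illustrate why a recurrent network suits this kind of time-series problem, here is a minimal PyTorch sketch of a bidirectional GRU that maps raw nanopore current samples to per-timestep probabilities over the four DNA bases plus a CTC 'blank'. It is purely illustrative and not Oxford Nanopore's actual basecaller; the layer sizes and five-class output head are assumptions.

```python
import torch
import torch.nn as nn

class ToyBasecaller(nn.Module):
    """Illustrative RNN basecaller: raw nanopore current -> base probabilities."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Bidirectional GRU reads the ionic-current time series in both directions.
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        # 5 output classes: A, C, G, T and a CTC 'blank' used to collapse
        # repeated per-timestep predictions into the final sequence.
        self.head = nn.Linear(2 * hidden, 5)

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        # signal: (batch, time) raw current samples
        feats, _ = self.rnn(signal.unsqueeze(-1))      # (batch, time, 2*hidden)
        return self.head(feats).log_softmax(dim=-1)    # per-timestep log-probs

# A training loop would pair these log-probs with nn.CTCLoss, so the network
# can learn the base sequence without an exact per-sample alignment.
model = ToyBasecaller()
dummy_signal = torch.randn(2, 400)        # two reads of 400 current samples each
log_probs = model(dummy_signal)
print(log_probs.shape)                    # torch.Size([2, 400, 5])
```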

Hogan notes how, in many respects, eCommerce paved the way for AI. Data collected for things such as advertising helps to train neural networks. In addition, eCommerce firms have consistently aimed to improve and optimise their algorithms for things such as recommendations to attract customers.

“All that data, all that Facebook information that we’ve created, has enabled us to train networks,” notes Hogan.

Brick-and-mortar retailers are also being improved by AI. Hogan gives the example of Walmart which is using AI to improve their demand forecasting and keep supply chains running smoothly.

In real-time, Walmart is able to see where potential supply challenges are and take action to avoid or minimise them. The company is even able to see where weather conditions may cause issues.

Hogan says this has saved Walmart tens of billions of dollars. “This is just one example of how AI is making an impact today not just on the bottom line but also the overall performance of the business”.

Accenture is now detecting around 200 million cyber threats per day, claims Hogan. He notes how protecting against such a vast number of evolving threats is simply not possible without AI.

“It’s impossible to address that, look at it, prioritise it, and action it in any other way than applying AI,” comments Hogan. “AI is based around patterns – things that are different – and when to act and when not to.”
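
As a generic illustration of that pattern-based approach (not Accenture's system), the sketch below trains an unsupervised anomaly detector on examples of normal network activity and then flags events that deviate from the learned patterns; the three features and the example values are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per network event: bytes transferred, requests per
# minute, and failed logins per minute. Train only on traffic assumed normal.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500.0, 20.0, 0.1],
                            scale=[50.0, 5.0, 0.3],
                            size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

new_events = np.array([
    [510.0, 22.0, 0.0],     # resembles the learned pattern of normal traffic
    [4000.0, 300.0, 45.0],  # unusual volume and failed logins
])
print(detector.predict(new_events))   # 1 = fits the pattern, -1 = flagged
```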

While often we hear about what AI could one day be used for, Hogan’s presentation was a fascinating insight into how Nvidia is seeing it making an impact today or in the not-so-distant future.

UNICRI AI and Robotics Centre: AI will transform our world
2 July 2018 – https://news.deepgeniusai.com/2018/07/02/un-head-ai-transform-world/

Speaking at AI Expo in Amsterdam, Irakli Beridze from the AI and Robotics Centre at UNICRI provided his thoughts on how AI will transform our world.

Beridze started with a positive note that is easily forgotten: never has the world been safer, more connected, and more prosperous.

“We have developed technologies which have the potential to solve problems we never thought were possible,” says Beridze. “Most of them are related to the UN’s sustainable development goals.”

World-Changing Benefits

A look at the statistics provides evidence of a huge reduction in those dying from violence or living in extreme poverty. Many of the greatest threats we face today are shared challenges such as climate change, disease, and dwindling resources.

AI is a powerful tool which can help with all these challenges and more if we, as humankind, choose to use it this way. Alternatively, it could pose an existential threat.

Beridze outlined just some of the ways he expects AI to aid the UN towards its goals.

Beridze dives deeper into some other potential benefits of AI to societies. A couple of the most interesting suggestions are its use to improve health and wellbeing, and to maintain peace, justice, and strong institutions.

Starting with health, Beridze highlights the use of AI to analyse large quantities of healthcare data in order to make scientific breakthroughs. Furthermore, it could be used to predict and project disease outbreaks to reduce mortality rates.

The impact of AI on healthcare is among our most covered subjects here on AI News. There are exciting developments on a near-daily basis.

Next up is the potential for AI when it comes to peace, law, and governance. Beridze believes AI can be integrated within an ‘e-government’ to reduce discrimination, prejudice, and corruption.

AI currently has a well-documented bias problem. However, solutions are becoming available to ensure the algorithms behind AIs are fair and do not favour any part of society over another. It’s ultimately easier to make a machine less discriminate than a person.

Global powers are seeking to establish themselves as leaders in AI. China and the US continue to dominate by pumping billions of dollars into development, while smaller economies are playing to their own strengths.

Countries such as Japan are strong in fields such as robotics, and the EU has the highest number of service robot manufacturers. Meanwhile, the UK is known for its leadership in ethics and its strong academic base, with leading universities.

There's a now-famous quote about AI from Russian President Vladimir Putin: "Whoever becomes the leader in this sphere will become the ruler of the world."

Putin’s quote was received in many ways. Some believe it was simply a matter of fact, while others saw it as confirmation of a potentially reckless race between world powers to become a leader.

AI-as-a-Threat

Regardless of what states do, criminals will seek to exploit AI for their own gain. This could take many forms, but one clear example is that of impersonation.

During Google's I/O conference this year, the company showed off its Duplex demo, in which an AI assistant called a hair salon on a user's behalf and was convincing enough to pass for a human. If such a system were trained on someone else's voice, fraud could be completely automated.

Beridze will be meeting with Interpol next month to discuss the new risks posed by criminals using artificial intelligence, and how law enforcement agencies can work to counter them.

“When talking about the good sides of AI, we should never forget about the possible risks,” warns Beridze. “One of the biggest risks is the pace of development with how quickly it’s being developed and how fast we can adapt to that.”

One major concern is the potential impact on jobs. Low-wage workers are particularly threatened by automation.

"We don't really have any solutions," Beridze says. "We have some ideas that have been put on the table, such as Universal Basic Income and retraining of the population; some even say to slow down the pace of innovation."

Other concerns highlighted by Beridze include automated weapons, superintelligent systems like the Skynet famously depicted in the Terminator movies, and the use of things such as bots to influence democratic processes.

Solving International Verification

One of the most interesting uses for AI is for the verification of incidents where nations do not trust each other. This has perhaps been seen most often between Western nations and Russia where there’s still a clear level of distrust.

Take the recent chemical attack in Salisbury, UK on a former spy and his daughter. Western nations agreed it could only have been carried out by Russia. For its part, Russia denies the allegations and claims to have been locked out of seeing any evidence.

Beridze served as a special projects officer at the OPCW (Organisation for the Prohibition of Chemical Weapons) prior to joining UNICRI.

The OPCW is an independent organisation working alongside the United Nations that investigates chemical attacks. Members of OPCW represent around 98 percent of the global population.

Until a ruling last week, the OPCW was prohibited from assigning blame for a chemical attack. In the Salisbury case, the organisation stated it agreed with the UK's finding that the nerve agent was of a kind first developed by the Soviet Union.

There have also been multiple chemical attacks in Syria. One particularly devastating attack in Douma was to be investigated by the OPCW but investigators claim they were blocked from accessing the site by Syria and its Russian allies.

Investigators were eventually granted access over a week later. However, chlorine – at least one of the suspected chemicals used – is notoriously difficult to detect even a day after an attack because of its gaseous state.

Russia and Syria both reject claims that chemical weapons were used. Moscow has offered several narratives on Douma, claiming simultaneously that there never was an attack and that it was the work of rebels in the area.

France said it was likely the evidence was gone, and the USA accused Russia and Syria of tampering with the site.

When everyone is pointing the finger at each other, there needs to be independent verification. Whenever people are involved, there are nearly always accusations of foul play.

A provably unbiased, open-source AI which examines the evidence could be the answer.

“The time has come where we should employ technologies like AI and blockchain to start verification of issues where countries do not trust each other,” says Beridze. “We need to make a major leap from a system created in the [19]40s, to 80 years down the road where we live in a completely different world.”

Beridze's session highlighted both the near-limitless potential for AI to have a positive impact on the world and how it could just as easily be devastating.

One thing is for sure, AI will transform our world. For better or worse, that’s up to all of us.

You can watch our interview with Irakli Beridze below:

Find out more about AI Expo and the next event here.

What impact do you think AI will have on the world?

 

AI Expo: TomTom will tell users what road trip they'll enjoy most
27 June 2018 – https://news.deepgeniusai.com/2018/06/27/ai-expo-tomtom-road-trip/

Automotive solutions giant TomTom will be enhancing its Road Trips service using machine learning to tell users what their next road trip should be.

Like everything in life, we all have our preference as to what kind of roads we like to drive. Some like big open highways, others like windy country roads. Some like rolling hills, others prefer to keep things flat.

TomTom’s Road Trips is a community-based service which features some routes from TomTom itself but mostly relies on users to submit their own. And submit they did.

Within the first week, 350 routes were published. Now that number has swelled considerably and finding the perfect road trip is becoming a lot more difficult by the day.

TomTom will soon be implementing machine learning to offer personalised recommendations based on what trips an individual user previously enjoyed.
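
As a hypothetical illustration of how such personalisation could work (TomTom has not detailed its method), the sketch below uses simple item-based collaborative filtering: routes are considered similar if they tend to be liked by the same users, and a user is recommended the unseen routes most similar to those they already liked. All of the data and the function are illustrative.

```python
import numpy as np

def recommend_routes(likes: np.ndarray, user: int, top_k: int = 3) -> np.ndarray:
    """Hypothetical item-based collaborative filtering for road-trip routes.

    likes : binary matrix, shape (n_users, n_routes), 1 = user liked the route
    user  : index of the user to recommend routes for
    Returns the indices of the top_k routes the user has not yet liked.
    """
    # Cosine similarity between routes, based on which users liked them.
    norms = np.linalg.norm(likes, axis=0, keepdims=True) + 1e-9
    route_sim = (likes.T @ likes) / (norms.T @ norms)

    # Score unseen routes by their similarity to routes this user already liked.
    scores = route_sim @ likes[user]
    scores[likes[user] == 1] = -np.inf       # don't re-recommend liked routes
    return np.argsort(scores)[::-1][:top_k]

# Toy example: 4 users x 6 routes of like history (purely illustrative data).
likes = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 0, 1],
])
print(recommend_routes(likes, user=1))
```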

Here at the AI Expo in Amsterdam, TomTom Group Data Scientist Pierluigi Casale shared the company’s preliminary results from its tests.

According to the preliminary results Casale shared, the percentage of users who liked a recommended route was nearly 70 percent higher than among users shown routes at random, and even the most popular routes received almost 50 percent fewer likes than those suggested by TomTom's algorithm.

You may not be able to control the weather, traffic, or company during your road trip… but at least TomTom can set your journey off to a good start with the perfect route.

Find out more about AI Expo and the next event here.

What are your thoughts on TomTom’s use of machine learning for road trips?

 
