Featured – AI News (news.deepgeniusai.com)

Police use of Clearview AI’s facial recognition increased 26% after Capitol raid
Mon, 11 Jan 2021

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping the data of people from across the web without their explicit consent, a practice which has naturally raised some eyebrows—including the ACLU’s, which called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for Clearview AI’s system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Whether you call them protesters or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew bipartisan condemnation.

In comments to the New York Times, Clearview AI CEO Hoan Ton-That claimed the company witnessed “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.

Given the number of individuals involved, law enforcement has a gargantuan task to identify and locate the people who went far beyond exercising their right to peaceful protest and invaded a federal building, caused huge amounts of damage, and threatened elected representatives and staff.

The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.
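Under the hood, a search like this typically reduces to nearest-neighbour comparison of face embeddings: a network converts each scraped photo into a vector, and a probe image is matched against that gallery. The sketch below illustrates the idea only—the toy 4-dimensional vectors and names are invented, and Clearview's actual pipeline is not public:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe, gallery, threshold=0.9):
    """Return (identity, score) pairs whose similarity to the probe
    embedding exceeds the threshold, best match first."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    matches = [(name, s) for name, s in scores if s >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

# Toy 4-dimensional "embeddings"—a real face recognition network
# would output far higher-dimensional vectors (e.g. 512-d).
gallery = {
    "person_a": np.array([0.9, 0.1, 0.0, 0.1]),
    "person_b": np.array([0.0, 1.0, 0.2, 0.0]),
}
probe = np.array([0.88, 0.12, 0.01, 0.11])
print(search_gallery(probe, gallery))  # person_a ranks first
```

At the scale of three billion images, a production system would replace the linear scan with an approximate nearest-neighbour index, but the matching logic is the same.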

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

While Clearview AI’s facial recognition tech continues to have widespread use in the US, some police departments have taken the independent decision to ban officers from using such systems due to the well-documented inaccuracies which particularly affect minority communities.

OpenAI’s latest neural network creates images from written descriptions
Wed, 06 Jan 2021

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.
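Conceptually, this framing treats an image as just another token sequence: the text prompt and the image share one stream, and the model predicts image tokens autoregressively. The toy sketch below only illustrates that sequence-of-tokens idea—the vocabulary sizes, the 4×4 grid, and the seeded stand-in "model" are invented for demonstration (DALL·E itself uses a 32×32 grid of discrete VAE codebook tokens):

```python
import random

TEXT_VOCAB = 256    # toy text-token vocabulary size
IMAGE_VOCAB = 512   # toy image-token vocabulary size
GRID = 4            # toy 4x4 image-token grid

def toy_model(sequence):
    """Stand-in for the transformer: returns one next image token.
    Deterministically seeded by the sequence so far."""
    rng = random.Random(repr(sequence))
    return rng.randrange(IMAGE_VOCAB)

def generate_image_tokens(text_tokens):
    """Autoregressively append GRID*GRID image tokens after the prompt."""
    seq = list(text_tokens)
    image_tokens = []
    for _ in range(GRID * GRID):
        tok = toy_model(seq)
        image_tokens.append(tok)
        seq.append(TEXT_VOCAB + tok)  # image tokens get their own id range
    # Reshape the flat token list into a GRID x GRID layout.
    return [image_tokens[r * GRID:(r + 1) * GRID] for r in range(GRID)]

grid = generate_image_tokens([12, 7, 99])  # token ids for some prompt
print(grid)
```

In the real system, each generated image token would then be decoded back into a patch of pixels; here the grid of integers simply stands in for that final step.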

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains.

Generated images range from drawings and objects to manipulated real-world photos; OpenAI provided examples of each in its announcement.

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for the kinds of disinformation campaigns recently seen around COVID-19, 5G, and attempts to influence various democratic processes, similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without good track records. However, as humans, we’re still used to believing what we can see with our eyes. Fake news with fake supporting imagery is a rather convincing combination.

Much as it argued with GPT-3, OpenAI essentially says that – by putting the technology out there as responsibly as possible – it helps to raise awareness and drive research into how the implications can be tackled before such neural networks are inevitably created and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

Google is telling its scientists to give AI a ‘positive’ spin
Thu, 24 Dec 2020

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with legal, policy, and public relations teams prior to publishing anything on topics which could be deemed sensitive like sentiment analysis and categorisations of people based on race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one word against another, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom—but that increasingly appears not to be the case.

(Photo by Mitchell Luo on Unsplash)

Chinese AI chipmaker Horizon endeavours to raise $700M to rival NVIDIA
Tue, 22 Dec 2020

AI chipmaker Horizon Robotics is seeking to raise $700 million in a new funding round.

Horizon is often tipped to become China’s equivalent of NVIDIA. The company was founded by Dr Kai Yu, a prominent industry figure with quite the credentials.

Yu led Baidu’s AI Research lab for three years, founded the Baidu Institute of Deep Learning, and launched the company’s autonomous driving business unit.

Furthermore, Yu has taught at Stanford University, published over 60 papers, and even won first place in the ImageNet challenge which evaluates algorithms for object detection and image classification.

China is yet to produce a chipset firm which can match the capabilities of Western equivalents.

With increasing US sanctions making it more difficult for Chinese firms to access American semiconductors, a number of homegrown companies are emerging and gaining attention from investors.

Horizon is just five years old and specialises in making AI chips for robots and autonomous vehicles. The company has already attracted significant funding.

Around two years ago, Horizon completed a $600 million funding round with a $3 billion valuation. The company has secured $150 million so far as part of this latest round.

While it’s likely the incoming Biden administration in the US will take a less strict approach to trade with China, it seems Beijing wants to build more homegrown alternatives which can match or surpass Western counterparts.

Chinese tech giants like Huawei are investing significant resources in their chip manufacturing capabilities to ensure the country has the tech it needs to power groundbreaking advancements like self-driving cars.

Facebook is developing a news-summarising AI called TL;DR
Wed, 16 Dec 2020

Facebook is developing an AI called TL;DR which summarises news into shorter snippets.

Anyone who’s spent much time on the web will know what TL;DR stands for⁠—but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”.

It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now even specialise in short, at-a-glance news.

The problem is, it’s hard to get the full picture of a story in just a brief snippet.

In a world where fake news can be posted and spread like wildfire across social networks – almost completely unchecked – it feels even more dangerous to normalise “news” being delivered in short-form without full context.

There are two sides to most stories, and it’s hard to see how both can be summarised properly.

However, the argument also goes the other way. When articles are too long, people have a natural habit of skim-reading them. Skimming in this way means people often believe they’re fully informed on a topic… when we know that’s frequently not the case.

TL;DR needs to strike a healthy balance between summarising the news but not so much that people don’t get enough of the story. Otherwise, it could increase existing societal problems with misinformation, fake news, and lack of media trust.
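Facebook hasn't said how TL;DR works, but a common baseline for news summarisation is extractive: score each sentence by the frequency of its words across the whole article and keep the top few. The sketch below is that generic baseline, not Facebook's system, and the sample article is invented:

```python
import re
from collections import Counter

def summarise(text, max_sentences=2):
    """Keep the highest-scoring sentences, preserving article order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Word frequencies over the whole article act as importance weights.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in ranked)

article = (
    "The council approved the new transport budget. "
    "The budget funds buses, cycle lanes, and road repairs. "
    "A local choir also performed at the meeting."
)
print(summarise(article, max_sentences=2))
```

Even this crude approach shows where the danger lies: the off-topic choir sentence is dropped, which is correct here, but the same mechanism can just as easily drop the caveat that changes a story's meaning.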

According to BuzzFeed, Facebook showed off TL;DR during an internal meeting this week. 

Facebook appears to be planning to add an AI-powered assistant to TL;DR which can answer questions about the article. The assistant could help to clear up anything the reader is uncertain about, but it will also have to prove it is free of the biases that arguably all current algorithms suffer from to some extent.

The AI will also have to be very careful not to take quotes out of context and end up further automating the spread of misinformation.

There’s also going to be a debate over what sources Facebook should use. Should Facebook stick only to the “mainstream media” which many believe follow the agendas of certain powerful moguls? Or serve news from smaller outlets without much historic credibility? The answer probably lies somewhere in the middle, but it’s going to be difficult to get right.

Facebook continues to be a major source of misinformation – in large part driven by algorithms promoting such content – and it’s had little success so far in any news-related efforts. I think most people will be expecting this to be another disaster waiting to happen.

(Image Credit: Mark Zuckerberg by Alessio Jacona under CC BY-SA 2.0 license)

EU human rights agency issues report on AI ethical considerations
Mon, 14 Dec 2020

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithms carry biases which could end up automating societal problems like racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.
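One basic transparency check for such decisions is simple arithmetic: compare approval rates across groups. The sketch below computes a disparate-impact ratio in the style of the "four-fifths rule" used in US employment law; the loan decisions and neighbourhood labels are invented for illustration:

```python
def approval_rate(decisions):
    """Fraction of approved decisions (True = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Invented example: loan approvals for applicants from two neighbourhoods.
neighbourhood_a = [True, True, True, False, True]    # 80% approved
neighbourhood_b = [True, False, False, False, True]  # 40% approved
ratio = disparate_impact(neighbourhood_a, neighbourhood_b)
print(f"{ratio:.2f}")  # 0.40 / 0.80 = 0.50, well below the 0.8 flag
```

A ratio like this doesn't prove discrimination on its own, but it is exactly the kind of measurable signal that oversight bodies could require organisations to report.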

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

Former NHS surgeon creates AI ‘virtual patient’ for remote training
Fri, 11 Dec 2020

A former NHS surgeon has created an AI-powered “virtual patient” which helps to keep skills sharp during a time when most in-person training is on hold.

Dr Alex Young is a trained orthopaedic and trauma surgeon who founded Virti and set out to use emerging technologies to provide immersive training for both new healthcare professionals and experienced ones looking to hone their skills.

COVID-19 has put most in-person training on hold to minimise transmission risks. Hospitals and universities across the UK and US are now using the virtual patient as a replacement—including our fantastic local medics and surgeons at the Bristol NHS Foundation Trust.

The virtual patient uses Natural Language Processing (NLP) and ‘narrative branching’ to allow medics to roleplay lifelike clinical scenarios. Medics and trainees can interact with the virtual patient using their tablet, desktop, or even VR/AR headsets for a more immersive experience.

Dr Alex Young comments:

“We’ve been working with healthcare organisations for several years, but the pandemic has created really specific challenges that technology is helping to solve. It’s no longer safe or practicable to have 30 medics in a room with an actor, honing their clinical soft-skills. With our virtual patient technology, we’ve created an extremely realistic and repeatable experience that can provide feedback in real time. This means clinicians and students can continue to learn valuable skills.

Right now, communication with patients can be very difficult. There’s a lot of PPE involved and patients are often on their own. Having healthcare staff who are skilled in handling these situations can therefore make a huge difference to that patient’s experience.”

Some of the supported scenarios include: breaking bad news, comforting a patient in distress, and communicating effectively whilst their faces are obscured by PPE. Virti’s technology was also used at the peak of the pandemic to train NHS staff in key skills required on the front line, such as how to safely use PPE, how to navigate an unfamiliar intensive care ward, how to engage with patients and their families, and how to use a ventilator.

Tom Woollard, West Suffolk Hospital Clinical Skills and Simulation Tutor, who used the Virti platform at the peak of the COVID pandemic, comments:

“We’ve been using Virti’s technology in our intensive care unit to help train staff who have been drafted in to deal with COVID-19 demand.

The videos which we have created and uploaded are being accessed on the Virti platform by nursing staff, physiotherapists and Operational Department Practitioners (ODPs) to orient them in the new environment and reduce their anxiety.

The tech has helped us to reach a large audience and deliver formerly labour-intensive training and teaching which is now impossible with social distancing.

In the future, West Suffolk will consider applying Virti tech to other areas of hospital practice.”

The use of speech recognition, NLP, and ‘narrative branching’ provides a realistic simulation of how a patient would likely respond—providing lifelike responses in speech, body language, and mannerisms.
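At its simplest, narrative branching is a dialogue state machine: the virtual patient's next state depends on which intent is detected in what the trainee says. The toy sketch below illustrates the structure only—the states, intents, keywords, and responses are all invented, and Virti's real system uses NLP rather than crude keyword matching:

```python
import re

# Toy branching narrative: each state maps detected intents to
# (patient response, next state) pairs.
BRANCHES = {
    "start": {
        "empathy": ("Thank you, doctor. What does this mean for me?", "questions"),
        "blunt":   ("I'm sorry... I need a moment.", "distressed"),
    },
    "distressed": {
        "empathy": ("Okay. Please, go on.", "questions"),
    },
    "questions": {},
}

KEYWORDS = {
    "empathy": {"sorry", "understand", "here", "support"},
    "blunt":   {"terminal", "nothing", "cancer"},
}

def detect_intent(utterance):
    """Crude keyword-based intent detection (a stand-in for real NLP)."""
    tokens = set(re.findall(r"[a-z']+", utterance.lower()))
    for intent, words in KEYWORDS.items():
        if tokens & words:
            return intent
    return "blunt"  # default branch when nothing matches

def step(state, utterance):
    """Advance the virtual patient one turn; stay put on unknown intents."""
    intent = detect_intent(utterance)
    response, next_state = BRANCHES[state].get(intent, ("...", state))
    return response, next_state

reply, state = step("start", "I'm so sorry, but I have difficult news")
print(reply, "->", state)
```

Because every branch is authored, tutors can guarantee that trainees encounter specific scenarios—such as a distressed patient—while the intent detection keeps the interaction feeling responsive.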

The AI delivers real-time feedback to the user so they can learn and improve. With upwards of 70 percent of complaints against health professionals and care providers attributable to poor communication, the virtual patient could help to deliver better care while reducing time spent handling complaints.

Virti’s groundbreaking technology has – quite rightly – been named one of TIME’s best inventions of 2020.

Algorithmia: AI budgets are increasing but deployment challenges remain
Thu, 10 Dec 2020

A new report from Algorithmia has found that enterprise budgets for AI are rapidly increasing but significant deployment challenges remain.

Algorithmia’s 2021 Enterprise Trends in Machine Learning report features the views of 403 business leaders involved with machine learning initiatives.

Diego Oppenheimer, CEO of Algorithmia, says:

“COVID-19 has caused rapid change which has challenged our assumptions in many areas. In this rapidly changing environment, organisations are rethinking their investments and seeing the importance of AI/ML to drive revenue and efficiency during uncertain times.

Before the pandemic, the top concern for organisations pursuing AI/ML initiatives was a lack of skilled in-house talent. Today, organisations are worrying more about how to get ML models into production faster and how to ensure their performance over time.

While we don’t want to marginalise these issues, I am encouraged by the fact that the type of challenges have more to do with how to maximise the value of AI/ML investments as opposed to whether or not a company can pursue them at all.”

The main takeaway is that AI budgets are significantly increasing. 83 percent of respondents said they’ve increased their budgets compared to last year.

Despite a difficult year for many companies, business leaders are not being put off AI investments—in fact, they’re doubling down.

In Algorithmia’s summer survey, 50 percent of respondents said they plan to spend more on AI this year. Around one in five even said they “plan to spend a lot more.”

76 percent of businesses report they are now prioritising AI/ML over other IT initiatives. 64 percent say the priority of AI/ML has increased relative to other IT initiatives over the last 12 months.

With unemployment figures around the world at their highest for several years – even decades in some cases – it’s at least heartening to hear that 76 percent of respondents said they’ve not reduced the size of their AI/ML teams. 27 percent even report an increase.

43 percent say their AI/ML initiatives “matter way more than we thought” and close to one in four believe their AI/ML initiatives should have been their top priority sooner. Process automation and improving customer experiences are the two main areas for AI investments.

While it’s been all good news so far, there are AI deployment issues being faced by many companies which are yet to be addressed.

Governance is, by far, the biggest AI challenge being faced by companies. 56 percent of the businesses ranked governance, security, and auditability issues as a concern.

Regulatory compliance is vital but can be confusing, especially with different regulations between not just countries but even states. 67 percent of the organisations report having to comply with multiple regulations for their AI/ML deployments.

After governance, the next major hurdles are basic deployment and organisational challenges.

Basic integration issues were ranked by 49 percent of businesses as a problem. Furthermore, more job roles are being involved with AI deployment strategies than ever before—it’s no longer seen as just the domain of data scientists.

However, there’s perhaps some light at the end of the tunnel. Organisations are reporting improved outcomes when using dedicated, third-party MLOps solutions.

While keeping in mind Algorithmia is a third-party MLOps solution, the report claims organisations using such a platform spend an average of around 21 percent less on infrastructure costs. Furthermore, it also helps to free up their data scientists—who spend less time on model deployment.

You can find a full copy of Algorithmia’s report here (requires signup).

The post Algorithmia: AI budgets are increasing but deployment challenges remain appeared first on AI News.

AWS announces nine major updates for its ML platform SageMaker https://news.deepgeniusai.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/ https://news.deepgeniusai.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/#comments Wed, 09 Dec 2020 14:47:48 +0000 https://news.deepgeniusai.com/?p=10096 Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker. SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case. During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.... Read more »

The post AWS announces nine major updates for its ML platform SageMaker appeared first on AI News.

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.
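To make the "normalise, transform, and combine features" idea concrete, here is a minimal sketch of the kind of work a built-in transformer performs behind the single click. This is an illustration in plain Python, not the Data Wrangler API; the column names and helper functions are hypothetical.

```python
# Illustrative sketch (not the Data Wrangler API): the sort of
# normalise/combine steps a built-in data transformer applies.

def min_max_normalise(rows, column):
    """Rescale a numeric column into the [0, 1] range."""
    values = [row[column] for row in rows]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on constant columns
    for row in rows:
        row[column] = (row[column] - lo) / span
    return rows

def combine_features(rows, a, b, name):
    """Derive a new feature as the ratio of two existing columns."""
    for row in rows:
        row[name] = row[a] / row[b] if row[b] else 0.0
    return rows

customers = [
    {"spend": 120.0, "visits": 4},
    {"spend": 300.0, "visits": 10},
    {"spend": 60.0,  "visits": 2},
]
customers = combine_features(customers, "spend", "visits", "spend_per_visit")
customers = min_max_normalise(customers, "spend")
```

The appeal of having 300+ such transformers built in is that data teams compose steps like these through the UI instead of maintaining this code themselves.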

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store helps developers to access and share features that make it much easier to name, organise, find, and share sets of features among teams of developers and data scientists. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.
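The core idea is easy to sketch: one named repository that both the training path and the low-latency inference path read from, so a feature is defined once and shared. The following is a toy in-memory illustration of that concept, not the SageMaker Feature Store API; all names are hypothetical.

```python
# Minimal in-memory sketch of the feature-store concept (not the
# SageMaker API): training and inference share one repository, so
# feature definitions are written once and reused across teams.

class FeatureStore:
    def __init__(self):
        self._groups = {}  # group name -> {record_id -> feature dict}

    def put(self, group, record_id, features):
        self._groups.setdefault(group, {})[record_id] = dict(features)

    def get(self, group, record_id):
        """Single-record lookup used at inference time."""
        return self._groups[group][record_id]

    def all_records(self, group):
        """Bulk read used when assembling a training set."""
        return list(self._groups.get(group, {}).values())

store = FeatureStore()
store.put("customers", "c-1", {"spend_30d": 120.0, "visits_30d": 4})
store.put("customers", "c-2", {"spend_30d": 300.0, "visits_30d": 10})

# The same stored features serve both paths:
training_rows = store.all_records("customers")
online_row = store.get("customers", "c-1")
```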

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up is SageMaker Pipelines, which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm set up, debugging steps, and optimisation steps.
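The workflow described above boils down to a chain of named steps where each step's output feeds the next. Here is a hedged, generic sketch of that pattern, not the SageMaker Pipelines API; the step functions are stand-ins for real data-load, transform, and training stages.

```python
# Generic sketch of the ML-pipeline pattern (not the SageMaker
# Pipelines API): named steps run in order, each consuming the
# previous step's artifact.

def load_data(_):
    return [3.0, 1.0, 2.0]

def transform(rows):
    return sorted(rows)

def train(rows):
    # Trivial stand-in "model": predict the mean of the data.
    return {"model": "mean", "value": sum(rows) / len(rows)}

class Pipeline:
    def __init__(self, steps):
        self.steps = steps  # list of (name, callable) pairs

    def run(self):
        artifact = None
        for name, step in self.steps:
            artifact = step(artifact)  # output of one step feeds the next
        return artifact

pipeline = Pipeline([("load", load_data),
                     ("transform", transform),
                     ("train", train)])
model = pipeline.run()
```

A real pipeline service adds what this sketch omits: step caching, retries, lineage tracking, and triggering runs automatically when code or data changes.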

SageMaker Clarify may be one of the most important features being debuted by AWS this week considering ongoing events.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turn to often time-consuming open-source tools, developers can use the integrated solution to quickly try and counter any bias in models.
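One common bias metric such tools report is demographic parity difference: how much the model's positive-prediction rate differs between two groups. The sketch below illustrates that single metric in plain Python; it is not the Clarify API, and the example groups and labels are hypothetical.

```python
# Illustrative bias metric (not the Clarify API): demographic parity
# difference, i.e. the gap in positive-prediction rate between groups.
# A value near 0 suggests the model treats the groups similarly.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# 1 = model predicted the positive class (e.g. "approve")
group_a = [1, 1, 0, 1]  # 75 percent positive
group_b = [1, 0, 0, 1]  # 50 percent positive
gap = demographic_parity_difference(group_a, group_b)
```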

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.
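The kind of alert this enables is simple in principle: sustained low GPU utilisation during training usually means the GPUs are being starved by the input pipeline. Here is a hedged sketch of that detection logic, not the actual profiler; the utilisation samples are invented for illustration.

```python
# Sketch of the bottleneck-detection idea (not AWS's profiler):
# flag the point where GPU utilisation stays low for several
# consecutive samples, a common sign of a data-loading bottleneck.

def detect_bottleneck(gpu_utilisation_samples, threshold=0.5, window=3):
    """Return the index where utilisation first stays under `threshold`
    for `window` consecutive samples, or None if it never does."""
    run = 0
    for i, sample in enumerate(gpu_utilisation_samples):
        run = run + 1 if sample < threshold else 0
        if run >= window:
            return i - window + 1
    return None

# Hypothetical samples: fraction of time the GPU was busy per interval
samples = [0.92, 0.88, 0.31, 0.28, 0.25, 0.90]
alert_at = detect_bottleneck(samples)
```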

Next up is Distributed Training on SageMaker, which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.
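The first half of that job, splitting the data across devices, can be sketched in a few lines. This is a generic illustration of data-parallel sharding, not AWS's engine; the dataset and device count are hypothetical.

```python
# Generic sketch of data-parallel sharding (not AWS's engine): split
# the dataset so each GPU trains on its own slice. After each batch,
# a data-parallel engine averages the per-device gradients before
# updating the shared weights.

def shard_dataset(samples, num_devices):
    """Round-robin the samples across devices; every sample lands in
    exactly one shard."""
    shards = [[] for _ in range(num_devices)]
    for i, sample in enumerate(samples):
        shards[i % num_devices].append(sample)
    return shards

dataset = list(range(10))
shards = shard_dataset(dataset, 4)
```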

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager also provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard within the SageMaker console which tracks and provides a visual report on the operation of deployed models.
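To illustrate the model-signing idea, here is a minimal sketch using an HMAC so a device can refuse to load a tampered model. This is an illustration of the general technique, not the Edge Manager API; the key and model bytes are hypothetical.

```python
# Illustrative sketch of cryptographic model signing (not the Edge
# Manager API): sign the serialised model so an edge device can
# verify integrity before loading it. Key and payload are made up.

import hashlib
import hmac

SIGNING_KEY = b"fleet-signing-key"  # hypothetical shared secret

def sign_model(model_bytes):
    return hmac.new(SIGNING_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes, signature):
    expected = sign_model(model_bytes)
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)

model = b"serialised-model-weights"
signature = sign_model(model)
ok = verify_model(model, signature)
tampered = verify_model(model + b"!", signature)
```

In practice a production system would use asymmetric signatures (sign with a private key, verify with a public key on the device) rather than a shared secret.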

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers who have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

State of European Tech: Investment in ‘deep tech’ like AI drops 13% https://news.deepgeniusai.com/2020/12/08/state-of-european-tech-investment-deep-tech-ai-drops-13-percent/ https://news.deepgeniusai.com/2020/12/08/state-of-european-tech-investment-deep-tech-ai-drops-13-percent/#comments Tue, 08 Dec 2020 12:43:11 +0000 https://news.deepgeniusai.com/?p=10073 The latest State of European Tech report highlights that investment in “deep tech” like AI has dropped 13 percent this year. Data from Dealroom was used for the State of European Tech report. Dealroom defines deep tech as 16 fields: Artificial Intelligence, Machine Learning, Big Data, Augmented Reality, Virtual Reality, Drones, Autonomous Driving, Blockchain, Nanotech,... Read more »

The post State of European Tech: Investment in ‘deep tech’ like AI drops 13% appeared first on AI News.

The latest State of European Tech report highlights that investment in “deep tech” like AI has dropped 13 percent this year.

Data from Dealroom was used for the State of European Tech report. Dealroom defines deep tech as 16 fields: Artificial Intelligence, Machine Learning, Big Data, Augmented Reality, Virtual Reality, Drones, Autonomous Driving, Blockchain, Nanotech, Robotics, Internet of Things, 3D Technology, Computer Vision, Connected Devices, Sensors Technology, and Recognition Technology (NLP, image, video, text, speech recognition).

In 2019, $10.2 billion of capital was invested in European deep tech. In 2020, that dropped to $8.9 billion.
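The headline 13 percent figure follows directly from those two totals, as a quick check confirms:

```python
# Sanity-check the headline drop from the two yearly totals.
invested_2019 = 10.2  # $ billion
invested_2020 = 8.9   # $ billion
drop = (invested_2019 - invested_2020) / invested_2019
# drop is roughly 0.127, i.e. about 13 percent
```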

I think it’s fair to say that 2020 has been a tough year for most people and businesses. Economic uncertainty – not just from COVID-19 but also trade wars, Brexit, and a rather tumultuous US presidential election – has naturally led to fewer investments and people tightening their wallets.

For just one example, innovative satellite firm OneWeb was forced to declare bankruptcy earlier this year after crucial funding it was close to securing was pulled during the peak of the pandemic. Fortunately, OneWeb was saved following an acquisition by the UK government and Bharti Global—but not all companies have been so fortunate.

Many European businesses will now be watching the close-to-collapse Brexit talks with hope that a deal can yet be salvaged to limit the shock to supply lines, prevent disruption to Europe’s leading financial hub, and help to build a friendly relationship going forward with a continued exchange of ideas and talent rather than years of bitterness and resentment.

The report shows the UK has retained its significant lead in European tech investment and startups this year.

Despite the uncertainties, the UK looks unlikely to lose its position as the hub of European technology anytime soon.

Investments in European tech as a whole should bounce back – along with the rest of the world – in 2021, with promising COVID-19 vaccines rolling out and hopefully some calm in geopolitics.

94 percent of survey respondents for the report stated they have either increased or maintained their appetite to invest in the European venture asset class. Furthermore, a record number of US institutions have participated in more than one investment round in Europe this year—up 36 percent since 2016.

You can find a full copy of the State of European Tech report here.
