OpenAI’s latest neural network creates images from written descriptions

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.
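
OpenAI has not released DALL·E’s code, but its write-up describes the broad recipe: a caption is tokenised, the target image is compressed into a grid of discrete tokens, and a single autoregressive transformer models the combined sequence. The sketch below illustrates that recipe in miniature; every class name, vocabulary size, and hyperparameter here is invented for illustration and bears no relation to OpenAI’s actual code.

    # Hypothetical sketch of a DALL·E-style text-to-image model: one
    # autoregressive transformer over text tokens followed by discrete image
    # tokens. All names and sizes below are invented for illustration.
    import torch
    import torch.nn as nn

    TEXT_VOCAB, IMAGE_VOCAB = 16384, 8192  # assumed vocabulary sizes
    TEXT_LEN, IMAGE_LEN = 256, 1024        # e.g. a 32x32 grid of image tokens

    class TextToImageTransformer(nn.Module):
        def __init__(self, d_model=512, n_layers=6, n_heads=8):
            super().__init__()
            self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, d_model)
            self.pos = nn.Embedding(TEXT_LEN + IMAGE_LEN, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, TEXT_VOCAB + IMAGE_VOCAB)

        def forward(self, tokens):
            # Causal mask: each position may only attend to earlier tokens.
            n = tokens.size(1)
            mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
            x = self.embed(tokens) + self.pos(torch.arange(n))
            return self.head(self.blocks(x, mask=mask))

    @torch.no_grad()
    def generate_image_tokens(model, caption_tokens):
        """Sample image tokens one at a time, conditioned on the caption.
        (A real system would restrict sampling to the image vocabulary.)"""
        tokens = caption_tokens
        for _ in range(IMAGE_LEN):
            logits = model(tokens)[:, -1]                   # next-token logits
            nxt = torch.multinomial(logits.softmax(-1), 1)  # sample one token
            tokens = torch.cat([tokens, nxt], dim=1)
        return tokens[:, caption_tokens.size(1):]           # image tokens only

A separate image decoder (OpenAI uses a discrete VAE) would turn the sampled tokens back into pixels; at DALL·E’s actual scale, the transformer has 12 billion parameters rather than the toy sizes above.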

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains.

Generated images can range from drawings and objects to manipulated real-world photos; OpenAI has provided examples of each.

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for the kinds of disinformation campaigns recently seen around COVID-19, 5G, and various democratic processes, similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without a good track record. As humans, however, we’re still inclined to believe what we can see with our own eyes. Fake news backed by fake supporting imagery makes for a convincing combination.

As it argued with GPT-3, OpenAI essentially says that – by putting the technology out there as responsibly as possible – it helps to raise awareness and drive research into how the implications can be tackled before such neural networks are inevitably built and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

Google is telling its scientists to give AI a ‘positive’ spin

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with its legal, policy, and public relations teams before publishing anything on topics which could be deemed sensitive, such as sentiment analysis or the categorisation of people by race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one person’s word against another’s, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom—but that claim increasingly appears not to hold.

(Photo by Mitchell Luo on Unsplash)

Chinese AI chipmaker Horizon endeavours to raise $700M to rival NVIDIA

AI chipmaker Horizon Robotics is seeking to raise $700 million in a new funding round.

Horizon is often seen as potentially becoming China’s equivalent of NVIDIA. The company was founded by Dr Kai Yu, a prominent industry figure with impressive credentials.

Yu led Baidu’s AI Research lab for three years, founded the Baidu Institute of Deep Learning, and launched the company’s autonomous driving business unit.

Furthermore, Yu has taught at Stanford University, published over 60 papers, and won first place in the ImageNet challenge, which evaluates algorithms for object detection and image classification.

China has yet to produce a chipset firm which can match the capabilities of Western equivalents.

With increasing US sanctions making it more difficult for Chinese firms to access American semiconductors, a number of homegrown companies are emerging and gaining attention from investors.

Horizon is just five years old and specialises in making AI chips for robots and autonomous vehicles. The company has already attracted significant funding.

Around two years ago, Horizon completed a $600 million funding round with a $3 billion valuation. The company has secured $150 million so far as part of this latest round.

While it’s likely the incoming Biden administration in the US will take a less strict approach to trade with China, it seems Beijing wants to build more homegrown alternatives which can match or surpass Western counterparts.

Chinese tech giants like Huawei are investing significant resources in their chip manufacturing capabilities to ensure the country has the tech it needs to power groundbreaking advancements like self-driving cars.

Facebook is developing a news-summarising AI called TL;DR

Facebook is developing an AI called TL;DR which summarises news into shorter snippets.

Anyone who’s spent much time on the web will know what TL;DR stands for—but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”.

It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now even specialise in short, at-a-glance news.

The problem is, it’s hard to get the full picture of a story in just a brief snippet.

In a world where fake news can be posted and spread like wildfire across social networks – almost completely unchecked – it feels even more dangerous to normalise “news” being delivered in short-form without full context.

There are two sides to most stories, and it’s hard to see how both can be summarised properly.

However, the argument also goes the other way. When articles are too long, people have a natural habit of skim-reading them. Skimming often leaves people believing they’re fully informed on a topic when that’s frequently not the case.

TL;DR needs to strike a healthy balance: summarising the news, but not so aggressively that readers lose too much of the story. Otherwise, it could worsen existing societal problems with misinformation, fake news, and low trust in the media.
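
Facebook hasn’t said how TL;DR is built, but that trade-off is easy to demonstrate with an off-the-shelf abstractive summariser. Below is a minimal sketch using Hugging Face’s transformers library and a public BART checkpoint (not Facebook’s TL;DR model); the min_length/max_length parameters are the crude dial between brevity and retained context.

    # Illustrative only: a generic abstractive summariser, not Facebook's TL;DR.
    from transformers import pipeline

    summariser = pipeline("summarization", model="facebook/bart-large-cnn")

    article = (
        "Facebook is developing an AI called TL;DR which summarises news "
        "articles into short snippets. Critics worry that aggressive "
        "compression could strip out the context readers need, while "
        "supporters note that few people read long articles in full anyway."
    )

    # Raising min_length forces the model to keep more of the story;
    # lowering max_length trades context away for brevity.
    summary = summariser(article, min_length=20, max_length=60)
    print(summary[0]["summary_text"])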

According to BuzzFeed, Facebook showed off TL;DR during an internal meeting this week. 

Facebook appears to be planning to add an AI-powered assistant to TL;DR which can answer questions about the article. The assistant could help to clear up anything the reader is uncertain about, but it will also have to prove it doesn’t suffer from the biases which arguably afflict all current algorithms to some extent.
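
Purely as a sketch of the idea, an extractive question-answering model illustrates one way such an assistant could stay anchored to the source: its answers are spans copied from the article rather than free-form generations. The checkpoint below is a generic public one; Facebook hasn’t detailed its assistant.

    # Hypothetical sketch of an article Q&A assistant using extractive QA.
    from transformers import pipeline

    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    context = ("TL;DR is an acronym for 'Too Long, Didn't Read'. Facebook "
               "showed off the summarisation tool during an internal meeting.")
    result = qa(question="What does TL;DR stand for?", context=context)
    print(result["answer"])  # a span lifted verbatim from the article text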

The AI will also have to be careful not to take things like quotes out of context and end up further automating the spread of misinformation.

There’s also going to be a debate over which sources Facebook should use. Should Facebook stick only to the “mainstream media”, which many believe follows the agendas of certain powerful moguls? Or serve news from smaller outlets without an established track record? The answer probably lies somewhere in the middle, but it’s going to be difficult to get right.

Facebook continues to be a major source of misinformation – in large part driven by algorithms promoting such content – and it’s had little success so far in any news-related efforts. I think most people will be expecting this to be another disaster waiting to happen.

(Image Credit: Mark Zuckerberg by Alessio Jacona under CC BY-SA 2.0 license)

EU human rights agency issues report on AI ethical considerations

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations surrounding the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithmic biases can automate societal harms such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in some form in almost every industry—and where it isn’t yet, it soon will be.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, these biased decisions could be made without anyone knowing the reasons behind them—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

Former NHS surgeon creates AI ‘virtual patient’ for remote training

A former NHS surgeon has created an AI-powered “virtual patient” which helps to keep skills sharp during a time when most in-person training is on hold.

Dr Alex Young is a trained orthopaedic and trauma surgeon who founded Virti and set out to use emerging technologies to provide immersive training for both new healthcare professionals and experienced ones looking to hone their skills.

COVID-19 has put most in-person training on hold to minimise transmission risks. Hospitals and universities across the UK and US are now using the virtual patient as a replacement—including our fantastic local medics and surgeons at the Bristol NHS Foundation Trust.

The virtual patient uses Natural Language Processing (NLP) and ‘narrative branching’ to allow medics to roleplay lifelike clinical scenarios. Medics and trainees can interact with the virtual patient using their tablet, desktop, or even VR/AR headsets for a more immersive experience.

Dr Alex Young comments:

“We’ve been working with healthcare organisations for several years, but the pandemic has created really specific challenges that technology is helping to solve. It’s no longer safe or practicable to have 30 medics in a room with an actor, honing their clinical soft-skills. With our virtual patient technology, we’ve created an extremely realistic and repeatable experience that can provide feedback in real time. This means clinicians and students can continue to learn valuable skills.

Right now, communication with patients can be very difficult. There’s a lot of PPE involved and patients are often on their own. Having healthcare staff who are skilled in handling these situations can therefore make a huge difference to that patient’s experience.”

Some of the supported scenarios include breaking bad news, comforting a patient in distress, and communicating effectively while faces are obscured by PPE. Virti’s technology was also used at the peak of the pandemic to train NHS staff in key skills required on the front line, such as how to safely use PPE, how to navigate an unfamiliar intensive care ward, how to engage with patients and their families, and how to use a ventilator.

Tom Woollard, West Suffolk Hospital Clinical Skills and Simulation Tutor, who used the Virti platform at the peak of the COVID pandemic, comments:

“We’ve been using Virti’s technology in our intensive care unit to help train staff who have been drafted in to deal with COVID-19 demand.

The videos which we have created and uploaded are being accessed on the Virti platform by nursing staff, physiotherapists, and Operating Department Practitioners (ODPs) to orient them in the new environment and reduce their anxiety.

The tech has helped us to reach a large audience and deliver formerly labour-intensive training and teaching which is now impossible with social distancing.

In the future, West Suffolk will consider applying Virti tech to other areas of hospital practice.”

The use of speech recognition, NLP, and ‘narrative branching’ provides a realistic simulation of how a patient would likely respond—providing lifelike responses in speech, body language, and mannerisms.
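
Virti hasn’t published implementation details, but ‘narrative branching’ is commonly modelled as a dialogue graph whose edges are selected by an intent classifier running over the trainee’s transcribed speech. Here is a hypothetical, heavily simplified sketch; the keyword matcher stands in for a real NLP model, and all scenario content is invented.

    # Hypothetical sketch: a branching virtual-patient scenario driven by
    # intent classification. All names and dialogue are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        patient_line: str
        branches: dict = field(default_factory=dict)  # intent -> next node id

    SCENARIO = {
        "start": Node("I've been getting chest pain when I walk.",
                      {"empathise": "reassured", "interrogate": "anxious"}),
        "reassured": Node("Thank you, doctor. It started about a week ago."),
        "anxious": Node("Is it serious? You're worrying me..."),
    }

    def classify_intent(utterance: str) -> str:
        """Stand-in for a real NLP model over the trainee's transcribed speech."""
        softeners = ("sorry", "understand", "don't worry", "take care")
        if any(s in utterance.lower() for s in softeners):
            return "empathise"
        return "interrogate"

    def step(node_id: str, utterance: str) -> str:
        node = SCENARIO[node_id]
        return node.branches.get(classify_intent(utterance), node_id)

    current = "start"
    print(SCENARIO[current].patient_line)
    current = step(current, "I'm sorry to hear that. We'll take care of you.")
    print(SCENARIO[current].patient_line)  # the 'reassured' branch

A graph like this also makes real-time feedback straightforward: the engine knows exactly which branches a trainee triggered, so a session can be scored against the intended empathetic path.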

The AI delivers real-time feedback to the user so they can learn and improve. With upwards of 70 percent of complaints against health professionals and care providers attributable to poor communication, the virtual patient could help to deliver better care while reducing time spent handling complaints.

Virti’s groundbreaking technology has – quite rightly – been named one of TIME’s best inventions of 2020.

Algorithmia: AI budgets are increasing but deployment challenges remain

A new report from Algorithmia has found that enterprise budgets for AI are rapidly increasing but significant deployment challenges remain.

Algorithmia’s 2021 Enterprise Trends in Machine Learning report features the views of 403 business leaders involved with machine learning initiatives.

Diego Oppenheimer, CEO of Algorithmia, says:

“COVID-19 has caused rapid change which has challenged our assumptions in many areas. In this rapidly changing environment, organisations are rethinking their investments and seeing the importance of AI/ML to drive revenue and efficiency during uncertain times.

Before the pandemic, the top concern for organisations pursuing AI/ML initiatives was a lack of skilled in-house talent. Today, organisations are worrying more about how to get ML models into production faster and how to ensure their performance over time.

While we don’t want to marginalise these issues, I am encouraged by the fact that the type of challenges have more to do with how to maximise the value of AI/ML investments as opposed to whether or not a company can pursue them at all.”

The main takeaway is that AI budgets are increasing significantly: 83 percent of respondents said they’ve increased their budgets compared to last year.

Despite a difficult year for many companies, business leaders are not being put off AI investments—in fact, they’re doubling down.

In Algorithmia’s summer survey, 50 percent of respondents said they plan to spend more on AI this year. Around one in five even said they “plan to spend a lot more.”

76 percent of businesses report they are now prioritising AI/ML over other IT initiatives. 64 percent say the priority of AI/ML has increased relative to other IT initiatives over the last 12 months.

With unemployment figures around the world at their highest for several years – even decades in some cases – it’s at least heartening to hear that 76 percent of respondents said they’ve not reduced the size of their AI/ML teams. 27 percent even report an increase.

43 percent say their AI/ML initiatives “matter way more than we thought” and close to one in four believe their AI/ML initiatives should have been their top priority sooner. Process automation and improving customer experiences are the two main areas for AI investments.

While the news so far is positive, many companies still face AI deployment issues which are yet to be addressed.

Governance is, by far, the biggest AI challenge being faced by companies. 56 percent of the businesses ranked governance, security, and auditability issues as a concern.

Regulatory compliance is vital but can be confusing, especially with different regulations between not just countries but even states. 67 percent of the organisations report having to comply with multiple regulations for their AI/ML deployments.

After governance, the next major hurdles are basic deployment and organisational challenges.

49 percent of businesses ranked basic integration issues as a problem. Furthermore, more job roles than ever are involved with AI deployment strategies—it’s no longer seen as just the domain of data scientists.

However, there’s perhaps some light at the end of the tunnel. Organisations are reporting improved outcomes when using dedicated, third-party MLOps solutions.

Bearing in mind that Algorithmia is itself a third-party MLOps vendor, the report claims organisations using such a platform spend an average of around 21 percent less on infrastructure costs. It also helps to free up their data scientists, who spend less time on model deployment.

You can find a full copy of Algorithmia’s report here (signup required).

State of European Tech: Investment in ‘deep tech’ like AI drops 13%

The latest State of European Tech report highlights that investment in “deep tech” like AI has dropped 13 percent this year.

Data from Dealroom was used for the State of European Tech report. Dealroom defines deep tech as 16 fields: Artificial Intelligence, Machine Learning, Big Data, Augmented Reality, Virtual Reality, Drones, Autonomous Driving, Blockchain, Nanotech, Robotics, Internet of Things, 3D Technology, Computer Vision, Connected Devices, Sensors Technology, and Recognition Technology (NLP, image, video, text, speech recognition).

In 2019, $10.2 billion of capital was invested in European deep tech; in 2020, that figure dropped to $8.9 billion.

I think it’s fair to say that 2020 has been a tough year for most people and businesses. Economic uncertainty – not just from COVID-19 but also trade wars, Brexit, and a rather tumultuous US presidential election – has naturally led to fewer investments and people tightening their wallets.

For just one example, innovative satellite firm OneWeb was forced to declare bankruptcy earlier this year after crucial funding it was close to securing was pulled during the peak of the pandemic. Fortunately, OneWeb was saved following an acquisition by the UK government and Bharti Global—but not all companies have been so fortunate.

Many European businesses will now be watching the close-to-collapse Brexit talks in the hope that a deal can yet be salvaged—one that limits the shock to supply lines, prevents disruption to Europe’s leading financial hub, and builds a friendly relationship with a continued exchange of ideas and talent rather than years of bitterness and resentment.

The report shows the UK has retained its significant lead in European tech investment and startups this year.

Despite the uncertainties, the UK looks unlikely to lose its position as the hub of European technology anytime soon.

Investments in European tech as a whole should bounce back – along with the rest of the world – in 2021, with promising COVID-19 vaccines rolling out and hopefully some calm in geopolitics.

94 percent of survey respondents for the report stated they have either increased or maintained their appetite to invest in the European venture asset class. Furthermore, a record number of US institutions have participated in more than one investment round in Europe this year—up 36 percent since 2016.

You can find a full copy of the State of European Tech report here.

NVIDIA breakthrough emulates images from small datasets for groundbreaking AI training

NVIDIA’s latest breakthrough generates new images which emulate existing small datasets—with groundbreaking potential for AI training.

The company demonstrated its latest AI model using a small dataset – just a fraction of the size typically used for a Generative Adversarial Network (GAN) – of artwork from the Metropolitan Museum of Art.

From the dataset, NVIDIA’s AI was able to create new images which replicate the style of the original artist’s work. These images can then be used to help train further AI models.

The AI achieved this impressive feat by applying a breakthrough neural network training technique to the popular NVIDIA StyleGAN2 model.

The technique is called Adaptive Discriminator Augmentation (ADA) and NVIDIA claims that it reduces the number of training images required by 10-20x while still getting great results.
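
NVIDIA has published the research behind ADA, and the core control loop is simple to sketch: both real and generated images are augmented before the discriminator sees them, and the augmentation probability p is adjusted against an overfitting heuristic (the paper uses r_t = E[sign(D(real))], targeting a value around 0.6). The sketch below is heavily simplified; the real pipeline applies a large set of differentiable augmentations, with the single random flip here serving only as a placeholder.

    # Simplified sketch of Adaptive Discriminator Augmentation (ADA):
    # augment what the discriminator sees, and adapt the probability p so
    # it cannot simply memorise a small set of real training images.
    import torch

    TARGET_RT = 0.6   # target for the overfitting heuristic (per the paper)
    STEP = 0.01       # how far p moves per update

    def augment(images: torch.Tensor, p: float) -> torch.Tensor:
        """Horizontal flip with probability p (placeholder for ADA's pipeline)."""
        flip = torch.rand(images.size(0)) < p
        out = images.clone()
        out[flip] = torch.flip(out[flip], dims=[-1])
        return out

    def update_p(d_real_logits: torch.Tensor, p: float) -> float:
        """r_t = E[sign(D(real))] drifts towards 1 as the discriminator
        memorises the reals; raise p when it overshoots the target."""
        r_t = torch.sign(d_real_logits).mean().item()
        p += STEP if r_t > TARGET_RT else -STEP
        return min(max(p, 0.0), 1.0)

    # Inside an ordinary GAN training loop, per batch:
    #   d_real = D(augment(real_images, p))
    #   d_fake = D(augment(G(z), p))
    #   ...compute losses and optimise as usual...
    #   p = update_p(d_real, p)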

David Luebke, VP of Graphics Research at NVIDIA, said:

“These results mean people can use GANs to tackle problems where vast quantities of data are too time-consuming or difficult to obtain.

I can’t wait to see what artists, medical experts and researchers use it for.”

Healthcare is a particularly exciting field where NVIDIA’s research could be applied. For example, it could help to create cancer histology images to train other AI models.

The breakthrough could help to address the problems surrounding many current datasets.

Large datasets are often required for AI training but aren’t always available. Conversely, when a large dataset is used, it’s difficult to ensure its content is suitable and doesn’t unintentionally lead to algorithmic bias.

Earlier this year, MIT was forced to remove a large dataset called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that the 80 million images in the dataset – at just 32×32 pixels each – made manual inspection almost impossible and couldn’t guarantee all offensive images would be removed.

By starting with a small dataset that can be feasibly checked manually, a technique like NVIDIA’s ADA could be used to create new images which emulate the originals and can scale up to the required size for training AI models.

In a blog post, NVIDIA wrote:

“It typically takes 50,000 to 100,000 training images to train a high-quality GAN. But in many cases, researchers simply don’t have tens or hundreds of thousands of sample images at their disposal.

With just a couple thousand images for training, many GANs would falter at producing realistic results. This problem, called overfitting, occurs when the discriminator simply memorizes the training images and fails to provide useful feedback to the generator.”

You can find NVIDIA’s full research paper here (PDF). The paper is being presented at this year’s NeurIPS conference as one of a record 28 NVIDIA Research papers accepted to the prestigious event.

Google fires ethical AI researcher Timnit Gebru after critical email

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities. Some recent cases validate her claims about large models and datasets in general.

For example, MIT was forced to remove a large dataset earlier this year called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that the 80 million images in the dataset – at just 32×32 pixels each – made manual inspection almost impossible and couldn’t guarantee all offensive images would be removed.

Gebru reportedly sent an email to the Google Brain Women and Allies listserv which Google deemed “inconsistent with the expectations of a Google manager.”

In the email, Gebru expressed her frustration with a perceived lack of progress at Google in hiring women. Gebru claimed she was also told not to publish a piece of research, and she advised employees to stop filling out diversity paperwork because it didn’t matter.

On top of the questionable reasons for her firing, Gebru says her former colleagues were emailed saying she had offered her resignation—which she claims was not the case.

Platformer obtained an email from Jeff Dean, Head of Google Research, which was sent to employees and offers his take on Gebru’s claims:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.”

Dean goes on to claim Gebru made demands which included revealing the identities of the individuals he and Google Research VP of Engineering Megan Kacholia consulted with as part of the paper’s review. If the demands weren’t met, Gebru reportedly said she would leave the company.

It’s a case of one person’s word against another’s, but – for a company already in the spotlight from both the public and regulators over questionable practices – being seen to fire an ethics researcher for calling out problems is not going to be good PR.

(Image Credit: Timnit Gebru by Kimberly White/Getty Images for TechCrunch under CC BY 2.0 license)
