Neural Network – AI News

OpenAI’s latest neural network creates images from written descriptions

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.
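
OpenAI has not released DALL·E’s code, but the approach it describes – a single transformer that models text tokens followed by image tokens, with the image tokens then decoded back into pixels – can be sketched conceptually. In the snippet below, every name is a hypothetical stand-in rather than OpenAI’s implementation:

```python
import torch

def generate_image(transformer, image_decoder, text_tokens,
                   num_image_tokens=1024):
    """Autoregressively sample image tokens conditioned on text tokens."""
    tokens = list(text_tokens)
    for _ in range(num_image_tokens):
        # The transformer returns next-token logits for the sequence so far.
        logits = transformer(torch.tensor([tokens]))[0, -1]
        next_token = torch.multinomial(logits.softmax(dim=-1), 1).item()
        tokens.append(next_token)
    # The sampled image tokens index a learned visual codebook, which a
    # separate decoder maps back to a pixel grid.
    image_tokens = torch.tensor(tokens[len(text_tokens):])
    return image_decoder(image_tokens)
```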

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains.

Generated images range from drawings and objects to manipulated real-world photos, and OpenAI has provided examples of each.

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for the kinds of disinformation campaigns recently seen around COVID-19, 5G, and attempts to influence various democratic processes, similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without a good track record. However, as humans, we’re still used to believing what we can see with our own eyes. Fake news paired with fake supporting imagery is a rather convincing combination.

Much like it argued with GPT-3, OpenAI essentially says that putting the technology out there as responsibly as possible helps to raise awareness and drives research into how the implications can be tackled before such neural networks are inevitably created and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

NVIDIA sets another AI inference record in MLPerf

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks which cover the three main AI applications today: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last year, NVIDIA led all five benchmarks for both server and offline data centre scenarios with its Turing GPUs. A dozen companies participated.

Twenty-three companies participated in this year’s MLPerf, but NVIDIA maintained its lead with the A100 outperforming CPUs by up to 237x in data centre inference.

For perspective, NVIDIA notes that a single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.
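
As a rough sanity check of those figures – assuming “dual-socket” means two CPUs per server – the per-chip ratio works out close to the headline number:

```python
# Rough sanity check of the DGX A100 comparison quoted above
# (assumption: "dual-socket" = two CPUs per server).
cpu_servers = 1000
total_cpus = cpu_servers * 2   # 2,000 CPUs
a100_gpus = 8                  # GPUs in a single DGX A100 system
print(total_cpus / a100_gpus)  # 250.0 CPUs per GPU, in line with "up to 237x"
```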

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA.

“The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”

The widespread availability of NVIDIA’s AI platform through every major cloud and data centre infrastructure provider is unlocking huge potential for companies across various industries to improve their operations.

Researchers achieve 94% power reduction for on-device AI tasks

Researchers from Applied Brain Research (ABR) have achieved significantly reduced power consumption for a range of AI-powered devices.

ABR designed a new neural network called the Legendre Memory Unit (LMU). With the LMU, on-device AI tasks – such as those on speech-enabled devices like wearables, smartphones, and smart speakers – can consume up to 94 percent less power.

The reduction in power consumption achieved through the LMU will be particularly beneficial to smaller form-factor devices such as smartwatches, which struggle with small batteries. IoT devices which carry out AI tasks – but may have to last months, if not years, before they’re replaced – should also benefit.

LMU is described as a Recurrent Neural Network (RNN) which enables lower power and more accurate processing of time-varying signals.

ABR says the LMU can be used to build AI networks for all time-varying tasks—such as speech processing, video analysis, sensor monitoring, and control systems.
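
The mathematics behind the LMU is public, so its core can be illustrated. Below is a minimal NumPy sketch of the published formulation – a linear state-space memory whose matrices are derived from Legendre polynomials – using a simple Euler discretisation and omitting the nonlinear layers that sit on top:

```python
import numpy as np

def lmu_matrices(order, theta):
    """Continuous-time (A, B) for a memory of dimension `order` that
    approximates a sliding window of length `theta`."""
    q = np.arange(order)
    r = (2 * q + 1) / theta
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r[:, None]
    B = ((-1.0) ** q * r)[:, None]
    return A, B

def lmu_memory(u, order=6, theta=100.0, dt=1.0):
    """Run the linear LMU memory over a 1-D signal (Euler integration)."""
    A, B = lmu_matrices(order, theta)
    m = np.zeros((order, 1))
    states = []
    for u_t in u:
        m = m + dt * (A @ m + B * u_t)  # m' = Am + Bu
        states.append(m.ravel().copy())
    return np.array(states)

# Example: compress a sine wave into a six-dimensional rolling memory.
states = lmu_memory(np.sin(np.linspace(0, 20, 500)))
```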

The AI industry’s current go-to model is the Long Short-Term Memory (LSTM) network. LSTM was first proposed back in 1995 and is used by most popular speech recognition and translation services today, including those from Google, Amazon, Facebook, and Microsoft.

Last year, researchers from the University of Waterloo debuted LMU as an alternative RNN to LSTM. Those researchers went on to form ABR, which now consists of 20 employees.

Peter Suma, co-CEO of Applied Brain Research, said in an email:

“We are a University of Waterloo spinout from the Theoretical Neuroscience Lab at UW. We looked at how the brain processes signals in time and created an algorithm based on how “time-cells” in your brain work.

We called the new AI, a Legendre-Memory-Unit (LMU) after a mathematical tool we used to model the time cells. The LMU is mathematically proven to be optimal at processing signals. You cannot do any better. Over the coming years, this will make all forms of temporal AI better.”

ABR debuted a paper in late 2019 at the NeurIPS conference which demonstrated that the LMU is 1,000,000x more accurate than the LSTM while encoding 100x more time-steps.

In terms of size, the LMU model is also smaller: it uses 500 parameters versus the LSTM’s 41,000 (a nearly 99 percent reduction in network size).

“We implemented our speech recognition with the LMU and it lowered the power used for command word processing to ~8 millionths of a watt, which is 94 percent less power than the best on the market today,” says Suma. “For full speech, we got the power down to 4 milli-watts, which is about 70 percent smaller than the best out there.”

Suma says the next step for ABR is to work on video, sensor, and drone-control AI processing – making those models smaller and better too.

A full whitepaper detailing the LMU and its benefits can be found on the preprint repository arXiv here.

Google’s Model Card Toolkit aims to bring transparency to AI

Google has released a toolkit which it hopes will bring some transparency to AI models.

People are wary of big tech companies like Google. People are also concerned about AI. Combine the two and you’ve got a general distrust which can hinder important advancements.

Model Card Toolkit aims to step in and facilitate AI model transparency reporting for developers, regulators, and downstream users.

Google has been rolling out Model Cards over the past year, a concept the company first proposed in an October 2018 whitepaper.

Model Cards provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation and give a detailed overview of a model’s suggested uses and limitations. 

So far, Google has released Model Cards for open source models built on its MediaPipe platform as well as its commercial Cloud Vision API Face Detection and Object Detection services.

Google’s new toolkit for Model Cards will simplify the process of creating them for third parties by compiling the data and helping build interfaces orientated for specific audiences.


MediaPipe has published their Model Cards for each of their open-source models in their GitHub repository.

To demonstrate how the Model Card Toolkit can be used in practice, Google has released a Colab tutorial that builds a Model Card for a simple classification model trained on the UCI Census Income dataset.
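
For a flavour of the workflow, here is a minimal sketch based on the toolkit’s documented API at launch (method names may differ in later releases):

```python
import model_card_toolkit as mctlib

mct = mctlib.ModelCardToolkit("model_card_output")

# Scaffold a blank model card, populate its fields, and export to HTML.
model_card = mct.scaffold_assets()
model_card.model_details.name = "Census Income Classifier"
model_card.model_details.overview = (
    "A simple classifier trained on the UCI Census Income dataset.")

mct.update_model_card_json(model_card)
html = mct.export_format()  # renders the card with the default template
```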

If you just want to dive right in, you can access the Model Card Toolkit here.

(Photo by Marc Schulte on Unsplash)

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include AI employees from tech giants such as Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are more accurate at detecting white males and, when used in law enforcement settings, incorrectly flag members of the BAME community as criminals far more often.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

MIT software shows how NLP systems are snookered by simple synonyms

Here’s an example of how artificial intelligence can still seriously lag behind humans in some respects: tests have shown how natural language processing (NLP) systems can be tricked into misunderstanding text by merely swapping one word for a synonym.

A research team at MIT developed software, called TextFooler, which looked for the words most crucial to an NLP classifier’s decision and replaced them with synonyms. The team offered an example:

“The characters, cast in impossibly contrived situations, are totally estranged from reality”, and
“The characters, cast in impossibly engineered circumstances, are fully estranged from reality”

No problem for a human to decipher, yet the results on the AIs were startling. For instance, BERT, Google’s neural net, was up to seven times worse at identifying whether reviews on Yelp were positive or negative.
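
The greedy strategy described above can be sketched in a few lines. In the snippet below, `classify(text)` (returning a dict of label probabilities) and `synonyms(word)` are hypothetical stand-ins; the real TextFooler selects synonyms using counter-fitted word embeddings and applies additional semantic-similarity checks:

```python
def attack(classify, synonyms, sentence):
    """Swap important words for synonyms until the predicted label flips."""
    words = sentence.split()
    probs = classify(sentence)
    orig_label = max(probs, key=probs.get)

    # 1. Rank words by how much deleting each one hurts the original label.
    def importance(i):
        reduced = " ".join(words[:i] + words[i + 1:])
        return probs[orig_label] - classify(reduced)[orig_label]

    ranked = sorted(range(len(words)), key=importance, reverse=True)

    # 2. Greedily replace high-importance words with synonyms.
    for i in ranked:
        for candidate in synonyms(words[i]):
            trial = words[:i] + [candidate] + words[i + 1:]
            trial_probs = classify(" ".join(trial))
            if max(trial_probs, key=trial_probs.get) != orig_label:
                return " ".join(trial)      # prediction flipped: success
            if trial_probs[orig_label] < probs[orig_label]:
                words[i] = candidate        # keep the most damaging swap
                probs = trial_probs
    return None                             # no adversarial example found
```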

Douglas Heaven, writing a roundup of the study for MIT Technology Review, explained why the research was important. “We have seen many examples of adversarial attacks, most often with image recognition systems, where tiny alterations to the input can flummox an AI and make it misclassify what it sees,” Heaven wrote. “TextFooler shows that this style of attack also breaks NLP, the AI behind virtual assistants – such as Siri, Alexa and Google Home – as well as other language classifiers like spam filters and hate-speech detectors.”

This publication has explored various areas where AI technologies are outstripping human efforts, such as detecting breast cancer, playing StarCraft, and public debating. In other fields, resistance – however futile – remains. In December it was reported that human drivers were still beating AIs overall at drone racing, although the chief technology officer of the Drone Racing League predicted that 2023 would be the year AI takes over.

The end goal for software such as TextFooler, the researchers hope, is to make NLP systems more robust.

Postscript: For those reading from outside the British Isles, China, and certain Commonwealth countries – to ‘snooker’ someone, deriving from the sport of the same name, is to ‘leave one in a difficult position.’ The US equivalent is ‘behind the eight-ball’, although that would have of course thrown the headline out.


Nvidia comes out on top in first MLPerf inference benchmarks

The first benchmark results from the MLPerf consortium have been released and Nvidia is a clear winner for inference performance.

For those unaware, inference is when a trained deep learning model processes incoming data to produce the output it was trained for.

MLPerf is a consortium which aims to provide “fair and useful” standardised benchmarks for inference performance. MLPerf can be thought of as doing for inference what SPEC does for benchmarking CPUs and general system performance.

The consortium has released its first benchmarking results, a painstaking effort involving over 30 companies and over 200 engineers and practitioners. MLPerf’s first call for submissions led to over 600 measurements spanning 14 companies and 44 systems. 

However, for datacentre inference, only four of the processors are commercially available:

  • Intel Xeon P9282
  • Habana Goya
  • Google TPUv3
  • Nvidia Turing

Nvidia wasted no time in boasting of its performance beating the three other processors across various neural networks in both server and offline scenarios:

The easiest direct comparisons are possible in the ImageNet ResNet-50 v1.6 offline scenario where the greatest number of major players and startups submitted results.

In that scenario, Nvidia once again boasted the best performance on a per-processor basis with its Titan RTX GPU. The 2x Google Cloud TPU v3-8 submission, despite being backed by eight Intel Skylake processors, had similar performance to the SCAN 3XS DBP T496X2 Fluid, which used four Titan RTX cards (65,431.40 vs 66,250.40 inputs/second).

Ian Buck, GM and VP of Accelerated Computing at NVIDIA, said:

“AI is at a tipping point as it moves swiftly from research to large-scale deployment for real applications.

AI inference is a tremendous computational challenge. Combining the industry’s most advanced programmable accelerator, the CUDA-X suite of AI algorithms and our deep expertise in AI computing, NVIDIA can help datacentres deploy their large and growing body of complex AI models.”

However, it’s worth noting that the Titan RTX doesn’t support ECC memory so – despite its sterling performance – this omission may prevent its use in some datacentres.

Another interesting takeaway when comparing the Cloud TPU results against Nvidia is the performance difference when moving from offline to server scenarios.

  • Google Cloud TPU v3 offline: 32,716.00
  • Google Cloud TPU v3 server: 16,014.29
  • Nvidia SCAN 3XS DBP T496X2 Fluid offline: 66,250.40
  • Nvidia SCAN 3XS DBP T496X2 Fluid server: 60,030.57

As you can see, the Cloud TPU system’s performance is cut by over half when used in a server scenario. The SCAN 3XS DBP T496X2 Fluid system’s performance drops by only around 10 percent in comparison.
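
Those percentages follow directly from the submitted figures:

```python
# Recomputing the offline -> server throughput drops from the figures above.
tpu_offline, tpu_server = 32716.00, 16014.29
nv_offline, nv_server = 66250.40, 60030.57

tpu_drop = 1 - tpu_server / tpu_offline  # ~0.51: cut by over half
nv_drop = 1 - nv_server / nv_offline     # ~0.09: a drop of around 10 percent
print(f"TPU drop: {tpu_drop:.1%}, Nvidia drop: {nv_drop:.1%}")
```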

You can peruse MLPerf’s full benchmark results here.


Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’

Microsoft chief Brad Smith issued a warning over the weekend that killer robots are ‘unstoppable’ and a new digital Geneva Convention is required.

Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.

While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.

As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.

Speaking to The Telegraph, Smith seems to agree. Smith points towards the US, China, UK, Russia, Israel, South Korea, and others, which are all developing autonomous weapon systems.

Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.

Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.

There’s still no clear answer as to who is responsible for deaths or injuries caused by an autonomous machine – the manufacturer, the developer, or an overseer. This has also been a subject of much debate in regards to how insurance will work with driverless cars.

With military applications, many technologists have called for AI to never make a combat decision – especially one that would result in fatalities – on its own. While AI can make recommendations, a final decision must be made by a human.

Preventing unimaginable devastation

The story of Russian lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight may cause unimaginable devastation.

Petrov’s computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. The Soviet Union’s strategy in such a scenario was an immediate and compulsory nuclear counter-attack against the US. Petrov trusted his instinct that the computer was incorrect and decided against launching a nuclear missile – and he was right.

Had the decision in 1983 whether to deploy a nuclear missile been made solely by the computer, one would have been launched and met with retaliatory launches from the US and its allies.

Smith wants to see a new digital Geneva Convention in order to bring world powers together in agreement over acceptable norms when it comes to AI. “The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.” 

Many companies – including thousands of Google employees, following backlash over a Pentagon contract to develop AI tech for drones – have pledged not to develop AI technologies for harmful use.

Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. “There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse.”

Last month, a report from Dutch NGO PAX said leading tech firms are putting the world ‘at risk’ of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself warned investors back in February that its AI offerings could damage the company’s reputation. 

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of PAX’s report.

A global campaign simply titled Campaign To Stop Killer Robots now includes 113 NGOs across 57 countries and has doubled in size over the past year.


EmoNet: Emotional neural network automatically categorises feelings

A neural network called EmoNet has been designed to automatically categorise the feelings of an individual.

EmoNet was created by researchers from the University of Colorado and Duke University and could one day help AIs to understand and react to human emotions.

The neural network is capable of accurately classifying images into 11 emotions, although some with higher confidence than others.

‘Craving,’ ‘sexual desire,’ and ‘horror’ could be determined with high confidence, while the AI struggled more with ‘confusion,’ ‘awe,’ and ‘surprise’, which are considered more abstract emotions.

A database of 2,185 videos representing 20 emotions was used to train the neural network. These are the specific emotions found in the clips:

  • Adoration
  • Aesthetic appreciation
  • Amusement
  • Anxiety
  • Awe
  • Boredom
  • Confusion
  • Craving
  • Disgust
  • Empathic pain
  • Entrancement
  • Excitement
  • Fear
  • Horror
  • Interest
  • Joy
  • Romance
  • Sadness
  • Sexual desire
  • Surprise

A total of 137,482 frames were extracted from the videos in order to train the neural network. Eighteen volunteers were then called in and their brain activity measured while they were shown 112 different images, in order to further improve the model.

After it was trained, the researchers used 25,000 images to validate their results.

The AI categorised the emotions in the images by not just analysing the faces in them, but also colour, spatial power spectra, and the presence of objects in the scene.
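
A classifier along these lines is typically built by retraining a pretrained image network on the emotion categories. Below is a minimal transfer-learning sketch; the layer choices and hyperparameters are illustrative assumptions, not the study’s exact recipe:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 20  # the 20 categories listed above

# Start from a CNN pretrained on ImageNet and swap in a 20-way emotion head.
model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_EMOTIONS)

# Freeze the pretrained convolutional features; train only the new head.
for p in model.features.parameters():
    p.requires_grad = False

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

def train_step(frames, labels):
    """One optimisation step over a batch of video frames and labels."""
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```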

The potential applications for such technology are endless, but it’s easy to imagine it one day being of particular benefit in helping to detect mental health conditions and offering potentially life-saving interventions.

However, we could be some way from AI being trusted with applications as sensitive as mental health. A separate paper published earlier this month claimed that emotion recognition by AI can’t be trusted, based on a review of 1,000 other studies.

The full paper for EmoNet can be found here.

/">AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, , & .

Samsung researchers create AI which generates realistic 3D renders of video scenes

Three researchers at Samsung have created an AI which can generate realistic 3D renders of video scenes.

In a paper detailing the neural network behind the AI, the researchers explained the inefficient process of creating virtual scenes today:

“Creating virtual models of real scenes usually involves a lengthy pipeline of operations. Such modeling usually starts with a scanning process, where the photometric properties are captured using camera images and the raw scene geometry is captured using depth scanners or dense stereo matching.

The latter process usually provides noisy and incomplete point cloud that needs to be further processed by applying certain surface reconstruction and meshing approaches. Given the mesh, the texturing and material estimation processes determine the photometric properties of surface fragments and store them in the form of 2D parameterized maps, such as texture maps, bump maps, view-dependent textures, surface lightfields.

Finally, generating photorealistic views of the modeled scene involves computationally-heavy rendering process such as ray tracing and/or radiance transfer estimation.”

A video input is converted into points which represent the geometry of the scene. These geometry points are then rendered into computer graphics using a neural network, vastly speeding up the process of rendering a photorealistic 3D scene.
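
At a high level, each 3D point carries a learned descriptor; projecting the points into the target camera produces a descriptor image, which a rendering network converts into an RGB frame. The snippet below is a conceptual stand-in for that second stage (the actual system uses a U-Net-style renderer and multi-scale rasterisation):

```python
import torch
import torch.nn as nn

class NeuralRenderer(nn.Module):
    """Maps a rasterised point-descriptor image to an RGB frame."""
    def __init__(self, descriptor_dim=8):
        super().__init__()
        self.net = nn.Sequential(  # simple stand-in for a U-Net
            nn.Conv2d(descriptor_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, raster):
        return self.net(raster)

renderer = NeuralRenderer()
raster = torch.randn(1, 8, 256, 256)  # dummy rasterised point descriptors
frame = renderer(raster)              # (1, 3, 256, 256) RGB image
```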

The researchers shared a video showing a 3D scene created by the AI.

Such a solution could one day help game development – especially for video game counterparts of films that are already being shot. Footage from a film set could provide a replica 3D environment for game developers to create interactive experiences in. Or, perhaps, you could relive events like your wedding day using just an old video and a VR headset.

Before such a point is reached, some advancements still need to be made. Current scenes cannot be altered, and any large deviation from the original viewpoint results in artifacts. Still, it’s a fascinating early insight into what could be possible in the not-so-distant future.

You can read the full paper here or find the project’s Github page here.

/">AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, , & .
