neural network – AI News
https://news.deepgeniusai.com – Artificial Intelligence News

OpenAI’s latest neural network creates images from written descriptions
https://news.deepgeniusai.com/2021/01/06/openai-latest-neural-network-creates-images-written-descriptions/
Wed, 06 Jan 2021 18:28:28 +0000

The post OpenAI’s latest neural network creates images from written descriptions appeared first on AI News.

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,“ OpenAI explains.

Generated images range from drawings and objects to manipulated real-world photos; OpenAI has shared examples of each alongside its announcement.

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for disinformation campaigns – such as those recently seen around COVID-19, 5G, and attempts to influence various democratic processes – similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without good track records. As humans, however, we’re still used to believing what we can see with our own eyes. Fake news paired with fake supporting imagery makes for a rather convincing combination.

Much as it argued with GPT-3, OpenAI essentially says that putting the technology out there as responsibly as possible helps to raise awareness and drives research into how the implications can be tackled before such neural networks are inevitably built and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

Researchers achieve 94% power reduction for on-device AI tasks
https://news.deepgeniusai.com/2020/09/17/researchers-achieve-power-reduction-on-device-ai-tasks/
Thu, 17 Sep 2020 15:47:52 +0000

The post Researchers achieve 94% power reduction for on-device AI tasks appeared first on AI News.

Researchers from Applied Brain Research (ABR) have achieved significantly reduced power consumption for a range of AI-powered devices.

ABR designed a new neural network called the Legendre Memory Unit (LMU). With the LMU, on-device AI tasks – such as those on speech-enabled devices like wearables, smartphones, and smart speakers – can consume up to 94 percent less power.

The reduction in power consumption will be particularly beneficial to smaller form-factor devices such as smartwatches, which struggle with small batteries. IoT devices which carry out AI tasks – but may have to last months, if not years, before they’re replaced – should also benefit.

LMU is described as a Recurrent Neural Network (RNN) which enables lower power and more accurate processing of time-varying signals.

ABR says the LMU can be used to build AI networks for all time-varying tasks—such as speech processing, video analysis, sensor monitoring, and control systems.
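ABR hasn’t published code alongside this announcement, but the core idea of the LMU – a small, fixed linear memory derived from Legendre polynomials which continuously compresses a sliding window of the input signal – can be sketched in a few lines of NumPy. The following is a minimal illustration based on the equations in the published paper, not ABR’s implementation; the memory `order`, window length `theta`, and Euler discretisation are arbitrary choices here:

```python
import numpy as np

def lmu_matrices(order):
    # (A, B) of the continuous-time Legendre delay system (Voelker et al., 2019)
    q = np.arange(order)
    r = (2 * q + 1)[:, None]
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r
    B = ((-1.0) ** q)[:, None] * r
    return A, B

def lmu_memory(signal, order=8, theta=100.0, dt=1.0):
    # Euler-discretised update: m <- m + (dt / theta) * (A m + B u_t)
    # m compresses the last `theta` time-steps of the signal into `order` numbers
    A, B = lmu_matrices(order)
    m = np.zeros((order, 1))
    states = []
    for u_t in signal:
        m = m + (dt / theta) * (A @ m + B * u_t)
        states.append(m.ravel().copy())
    return np.array(states)

states = lmu_memory(np.sin(np.linspace(0, 8 * np.pi, 500)))
```

Each row of `states` compresses the preceding `theta` time-steps into just `order` numbers; in the full LMU cell, a small learned nonlinear layer reads from this fixed memory, which is where the parameter savings over an LSTM come from.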

The AI industry’s current go-to model is the Long Short-Term Memory (LSTM) network. The LSTM was first proposed by Hochreiter and Schmidhuber in the mid-1990s and is used in most popular speech recognition and translation services today, including those from Google, Amazon, Facebook, and Microsoft.

Last year, researchers from the University of Waterloo debuted LMU as an alternative RNN to LSTM. Those researchers went on to form ABR, which now consists of 20 employees.

Peter Suma, co-CEO of Applied Brain Research, said in an email:

“We are a University of Waterloo spinout from the Theoretical Neuroscience Lab at UW. We looked at how the brain processes signals in time and created an algorithm based on how “time-cells” in your brain work.

We called the new AI, a Legendre-Memory-Unit (LMU) after a mathematical tool we used to model the time cells. The LMU is mathematically proven to be optimal at processing signals. You cannot do any better. Over the coming years, this will make all forms of temporal AI better.”

ABR presented a paper at the NeurIPS conference in late 2019 demonstrating that the LMU is 1,000,000x more accurate than the LSTM while encoding 100x more time-steps.

In terms of size, the LMU model is also smaller: it uses 500 parameters versus the LSTM’s 41,000 (a 98 percent reduction in network size).

“We implemented our speech recognition with the LMU and it lowered the power used for command word processing to ~8 millionths of a watt, which is 94 percent less power than the best on the market today,” says Suma. “For full speech, we got the power down to 4 milli-watts, which is about 70 percent smaller than the best out there.”

Suma says the next step for ABR is to work on video, sensor and drone control AI processing—to also make them smaller and better.

A full whitepaper detailing LMU and its benefits can be found on preprint repository arXiv here.

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI
https://news.deepgeniusai.com/2020/06/24/over-1000-researchers-sign-letter-crime-predicting-ai/
Wed, 24 Jun 2020 12:24:25 +0000

The post Over 1,000 researchers sign letter opposing ‘crime predicting’ AI appeared first on AI News.

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include employees working on AI from tech giants including Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are more accurate when detecting white males, and in law enforcement settings they incorrectly flag members of the BAME community as criminals more often.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

Nvidia comes out on top in first MLPerf inference benchmarks
https://news.deepgeniusai.com/2019/11/07/nvidia-comes-out-on-top-in-first-mlperf-inference-benchmarks/
Thu, 07 Nov 2019 11:19:57 +0000

The post Nvidia comes out on top in first MLPerf inference benchmarks appeared first on AI News.

The first benchmark results from the MLPerf consortium have been released and Nvidia is a clear winner for inference performance.

For those unaware, inference is the process of running a trained deep learning model on incoming data to produce predictions.

MLPerf is a consortium which aims to provide “fair and useful” standardised benchmarks for inference performance. MLPerf can be thought of as doing for inference what SPEC does for benchmarking CPUs and general system performance.

The consortium has released its first benchmarking results, a painstaking effort involving over 30 companies and over 200 engineers and practitioners. MLPerf’s first call for submissions led to over 600 measurements spanning 14 companies and 44 systems. 

However, for datacentre inference, only four of the processors are commercially available:

  • Intel Xeon Platinum 9282
  • Habana Goya
  • Google TPUv3
  • Nvidia Turing

Nvidia wasted no time in boasting of its performance beating the three other processors across various neural networks in both server and offline scenarios:

The easiest direct comparisons are possible in the ImageNet ResNet-50 v1.5 offline scenario, where the greatest number of major players and startups submitted results.

In that scenario, Nvidia once again boasted the best performance on a per-processor basis with its Titan RTX GPU. Despite being backed by eight Intel Skylake processors, the 2x Google Cloud TPU v3-8 submission achieved only similar performance to the SCAN 3XS DBP T496X2 Fluid, which used four Titan RTX cards (65,431.40 vs 66,250.40 inputs/second).

Ian Buck, GM and VP of Accelerated Computing at NVIDIA, said:

“AI is at a tipping point as it moves swiftly from research to large-scale deployment for real applications.

AI inference is a tremendous computational challenge. Combining the industry’s most advanced programmable accelerator, the CUDA-X suite of AI algorithms and our deep expertise in AI computing, NVIDIA can help datacentres deploy their large and growing body of complex AI models.”

However, it’s worth noting that the Titan RTX doesn’t support ECC memory so – despite its sterling performance – this omission may prevent its use in some datacentres.

Another interesting takeaway when comparing the Cloud TPU results against Nvidia is the performance difference when moving from offline to server scenarios.

  • Google Cloud TPU v3 offline: 32,716.00
  • Google Cloud TPU v3 server: 16,014.29
  • Nvidia SCAN 3XS DBP T496X2 Fluid offline: 66,250.40
  • Nvidia SCAN 3XS DBP T496X2 Fluid server: 60,030.57

As you can see, the Cloud TPU system performance is slashed by over a half when used in a server scenario. The SCAN 3XS DBP T496X2 Fluid system performance only drops around 10 percent in comparison.
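Those relative drops are easy to verify from the figures above (a quick sanity check, not part of MLPerf’s tooling):

```python
# Offline vs server throughput (inputs/second), from the MLPerf results quoted above
results = {
    "Google Cloud TPU v3": (32716.00, 16014.29),
    "Nvidia SCAN 3XS DBP T496X2 Fluid": (66250.40, 60030.57),
}

for system, (offline, server) in results.items():
    drop = 1 - server / offline
    print(f"{system}: {drop:.1%} drop from offline to server")
# → roughly 51.1% for the Cloud TPU system and 9.4% for the Titan RTX system
```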

You can peruse MLPerf’s full benchmark results here.


Intel unwraps its first chip for AI and calls it Spring Hill
https://news.deepgeniusai.com/2019/08/21/intel-ai-powered-chip-spring-hill/
Wed, 21 Aug 2019 10:17:07 +0000

The post Intel unwraps its first chip for AI and calls it Spring Hill appeared first on AI News.

Intel has unwrapped its first processor that is designed for artificial intelligence and is planned for use in data centres.

The new Nervana Neural Network Processor for Inference (NNP-I) processor has a more approachable codename of Spring Hill.

Spring Hill is a modified 10nm Ice Lake processor which sits on a PCB and slots into an M.2 port typically used for storage.

According to Intel, the use of a modified Ice Lake processor allows Spring Hill to handle large workloads and consume minimal power. Two compute cores and the graphics engine have been removed from the standard Ice Lake design to accommodate 12 Inference Compute Engines (ICE).

In a summary, Intel detailed the six main benefits it expects from Spring Hill:

  1. Best in class perf/power efficiency for major data inference workloads.
  2. Scalable performance at wide power range.
  3. High degree of programmability w/o compromising perf/power efficiency.
  4. Data centre at scale.
  5. Spring Hill solution – Silicon and SW stack – sampling with definitional partners/customers on multiple real-life topologies.
  6. Next two generations in planning/design.

Intel’s first chip for AI comes after the company invested in several Israeli artificial intelligence startups, including Habana Labs and NeuroBlade. The investments form part of Intel’s ‘AI Everywhere’ strategy, which aims to increase the firm’s presence in the market.

Naveen Rao, Intel vice president and general manager, Artificial Intelligence Products Group, said:

“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources.

Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”

Facebook has said it will be using Intel’s new Spring Hill processor. Intel already has two more generations of the NNP-I in development.


EmoNet: Emotional neural network automatically categorises feelings
https://news.deepgeniusai.com/2019/07/31/emonet-emotional-neural-network-categorises-feelings/
Wed, 31 Jul 2019 16:06:21 +0000

The post EmoNet: Emotional neural network automatically categorises feelings appeared first on AI News.

A neural network called EmoNet has been designed to automatically categorise the feelings of an individual.

EmoNet was created by researchers from the University of Colorado and Duke University and could one day help AIs to understand and react to human emotions.

The neural network is capable of accurately classifying images into 11 emotions, although some with a higher confidence than others.

‘Craving,’ ‘sexual desire,’ and ‘horror’ could be determined with high confidence, while the AI struggled with ‘confusion,’ ‘awe,’ and ‘surprise’ – emotions considered more abstract.

A database of 2,185 videos representing 20 emotions was used to train the neural network. These are the specific emotions found in the clips:

  • Adoration
  • Aesthetic appreciation
  • Amusement
  • Anxiety
  • Awe
  • Boredom
  • Confusion
  • Craving
  • Disgust
  • Empathic pain
  • Entrancement
  • Excitement
  • Fear
  • Horror
  • Interest
  • Joy
  • Romance
  • Sadness
  • Sexual desire
  • Surprise

A total of 137,482 frames were extracted from the videos to train the neural network. Eighteen volunteers were then called in and their brain activity measured while they were shown 112 different images, in order to further improve the model.

After it was trained, the researchers used 25,000 images to validate their results.

The AI categorised the emotions in the images by not just analysing the faces in them, but also colour, spatial power spectra, and the presence of objects in the scene.
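‘Spatial power spectra’ sounds exotic, but it is a standard global image statistic. As an illustration only – the article doesn’t detail the paper’s exact feature pipeline – a radially-averaged power spectrum of a frame can be computed like this:

```python
import numpy as np

def spatial_power_spectrum(img):
    # Radially-averaged Fourier power of a greyscale image: a global texture cue,
    # independent of any faces or objects present in the scene
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    cy, cx = np.array(img.shape) // 2
    y, x = np.indices(img.shape)
    radius = np.hypot(y - cy, x - cx).astype(int)
    totals = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.maximum(np.bincount(radius.ravel()), 1)  # guard empty radii
    return totals / counts

frame = np.random.default_rng(0).random((64, 64))  # stand-in for a video frame
spectrum = spatial_power_spectrum(frame)
```

Features like this can be fed to a classifier alongside face and object cues, which is why the model can pick up on scene context as well as expressions.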

The potential applications for such technology are endless, and it’s easy to imagine it one day being of particular benefit in helping to detect mental health conditions and offering potentially life-saving interventions.

However, we could be some way off from AI being trusted for applications such as mental health. A separate paper published earlier this month claimed that emotional recognition by AI can’t be trusted, based on a review of 1,000 other studies.

The full paper for EmoNet can be found here.


Samsung researchers create AI which generates realistic 3D renders of video scenes
https://news.deepgeniusai.com/2019/07/30/samsung-ai-realistic-3d-renders-video-scenes/
Tue, 30 Jul 2019 11:43:00 +0000

The post Samsung researchers create AI which generates realistic 3D renders of video scenes appeared first on AI News.

Three researchers at Samsung have created an AI which can generate realistic 3D renders of video scenes.

In a paper detailing the neural network behind the AI, the researchers explained the inefficient process of creating virtual scenes today:

“Creating virtual models of real scenes usually involves a lengthy pipeline of operations. Such modeling usually starts with a scanning process, where the photometric properties are captured using camera images and the raw scene geometry is captured using depth scanners or dense stereo matching.

The latter process usually provides noisy and incomplete point cloud that needs to be further processed by applying certain surface reconstruction and meshing approaches. Given the mesh, the texturing and material estimation processes determine the photometric properties of surface fragments and store them in the form of 2D parameterized maps, such as texture maps, bump maps, view-dependent textures, surface lightfields.

Finally, generating photorealistic views of the modeled scene involves computationally-heavy rendering process such as ray tracing and/or radiance transfer estimation.”

A video input is converted into points which represent the geometry of the scene. These geometry points are then rendered into computer graphics using a neural network, vastly speeding up the process of rendering a photorealistic 3D scene.

Here’s the result in a video of a 3D scene created by the AI:

Such a solution could one day help game development, especially for video game counterparts of movies that are already being filmed. Footage from a film set could provide a replica 3D environment for game developers to create interactive experiences in. Or, perhaps, you could relive events like your wedding day using just an old video and a VR headset.

Before that point is reached, some advancements still need to be made. Current scenes cannot be altered, and any large deviation from the original viewpoint results in artifacts. Still, it’s a fascinating early insight into what could be possible in the not-so-distant future.

You can read the full paper here or find the project’s Github page here.


MIT’s AI uses wireless signals to detect movement through walls
https://news.deepgeniusai.com/2018/06/13/mit-ai-wireless-detect-movement-walls/
Wed, 13 Jun 2018 11:46:25 +0000

The post MIT’s AI uses wireless signals to detect movement through walls appeared first on AI News.

Researchers from MIT CSAIL have developed an AI capable of detecting movement through walls using just RF wireless signals.

CSAIL (Computer Science and Artificial Intelligence Laboratory) is based at the Massachusetts Institute of Technology with the goal of ‘pioneering new approaches to computing that will bring about positive changes in the way people around the globe live, play, and work.’

The researchers’ latest development, RF-Pose, uses a neural network in combination with simple RF wireless signals to sense the movement of people behind obstacles such as walls. Furthermore, it can even determine their posture.

You can see a video of RF-Pose in action below:

RF-Pose’s neural network was trained using examples of people’s on-camera movement and how their bodies reflected the RF signals. Armed with this information, the AI was then able to determine movement and postures without the need for a camera and show them as stick figures.
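That training setup is cross-modal supervision: a vision-based ‘teacher’ labels synchronised RF data, and the RF ‘student’ learns to reproduce those labels so that no camera is needed at test time. A heavily simplified caricature of the loop – all shapes and numbers are made up, and a single linear layer stands in for the real system’s convolutional networks over RF heatmaps – looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-ins: 64-dim RF features in, 14 joint-confidence values out
rf_batch = rng.normal(size=(32, 64))       # synchronised RF measurements
teacher_out = rng.uniform(size=(32, 14))   # camera-based pose estimator's labels

W = rng.normal(scale=0.1, size=(64, 14))   # the entire toy "student" network
losses = []
for step in range(300):
    pred = 1 / (1 + np.exp(-(rf_batch @ W)))  # student forward pass (sigmoid)
    losses.append(float(np.mean((pred - teacher_out) ** 2)))
    # gradient of the MSE through the sigmoid; the student learns to mimic the
    # teacher so that, once trained, the camera can be removed entirely
    grad = rf_batch.T @ ((pred - teacher_out) * pred * (1 - pred)) / len(rf_batch)
    W -= 1.0 * grad
```

The key property is that the labels cost nothing extra to collect: any synchronised camera-plus-RF recording session yields supervised training pairs automatically.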

There are several potential uses for this technology. One of the most exciting is for monitoring the safety of the elderly at home without having to install a privacy-invasive camera. Where a fall or other potential issue has been detected, an alert could be sent to a family member.

As with most technological advances, there is also the potential for abuse. An obvious criminal application would be checking whether a home or business is currently unoccupied – or where its occupants are – without having to enter the building.

With this concern in mind, CSAIL has implemented a ‘consent mechanism’ safeguard which requires users to perform specific movements before tracking begins. However, other implementations — or a hacked version — could pose a worrying problem.

What are your thoughts on MIT’s RF-Pose development?

 

Don’t be a(nother) Joker, let AI create a unique Halloween costume
https://news.deepgeniusai.com/2017/10/30/ai-halloween/
Mon, 30 Oct 2017 12:56:47 +0000

The post Don’t be a(nother) Joker, let AI create a unique Halloween costume appeared first on AI News.

As Halloween approaches, we’ll be seeing many Joker and Harley Quinn costumes dragged back out the depths of closets. Pennywise the clown is another obvious choice for this year. If you want something a bit more unique for a costume idea, an AI could help.

Research scientist Janelle Shane decided to build a neural network fed with 4,500 costume names she crowdsourced from the internet (which is always a safe bet*).

Speaking to Business Insider, Shane claims the network’s creativity surprised her:

“I would argue that the Halloween costume neural network is actually right up there at coming up with creative things that humans love,” Shane said. “It can form its own rules about what it’s seeing rather than just memorising.”
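Shane’s own code isn’t included in the article, but the flavour of character-level generation is easy to demonstrate. Here’s a toy stand-in – an n-gram character model rather than a real recurrent network, trained on a handful of invented costume names:

```python
import random
from collections import defaultdict

def train_char_model(names, order=3):
    # Map each `order`-character context to the characters that followed it
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for k in range(len(padded) - order):
            model[padded[k:k + order]].append(padded[k + order])
    return model

def generate(model, order=3, max_len=40, rng=random):
    context, out = "^" * order, []
    for _ in range(max_len):  # length cap guarantees termination
        ch = rng.choice(model[context])
        if ch == "$":
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

# Invented training data -- the real network was fed 4,500 crowdsourced names
names = ["sexy pumpkin", "sexy pirate", "shark knight", "spooky gumball man"]
model = train_char_model(names)
costume = generate(model, rng=random.Random(0))
```

Even this toy model stitches fragments of different names together; a recurrent network like Shane’s does the same with far more context, which is how mash-ups such as ‘Shark Knight’ can emerge.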

Some prefer their Halloween costumes to have a sexy rather than scary theme, and it’s here where Shane’s AI — much like myself — couldn’t quite hit the mark. The AI came up with suggestions like ‘Sexy conchpaper’ and ‘Sexy Gumb Man’.

Perhaps those ideas have you hot under the collar… I won’t judge.

Some of the suggestions appear to be an unintentional stroke of genius from the AI. For example, it probably wasn’t aware of a somewhat clever pop culture link to Batman — otherwise known as The Dark Knight — when coming up with its ‘Shark Knight’ costume idea.

It’s not the first time Shane has used a neural network for coming up with new ideas. She’s previously received coverage for using an AI for the important task of naming craft beers. One beer, The Fine Stranger, is now even in production.

We’re unable to comment on whether the beer is as good as the name, but we’ll happily receive a sample for our vital research purposes if one is going. As for AI being used to reduce time spent on mundane tasks so we can get back to drinking beer — I mean, working… — we’re all for it.

*Obvious sarcasm.

Are there scenarios you would use AI for ideas and names?

 

Prowler.io aspires to build AI which makes human-like decisions
https://news.deepgeniusai.com/2017/09/05/prowler-io-build-ai-human-like-decisions/
Tue, 05 Sep 2017 15:23:02 +0000

The post Prowler.io aspires to build AI which makes human-like decisions appeared first on AI News.

Cambridge-based AI startup Prowler has raised £10 million to help it build an AI which can make human-like decisions.

Based on the comments made by Elon Musk and Vladimir Putin in our article yesterday, you’d be forgiven for having some concerns. AI like Prowler’s, however, could be what saves us from destruction.

If you missed it, Musk voiced his concern about Putin’s comment that the nation which leads in AI “will become the ruler of the world.”

Some are concerned about an AI arms race and that, without human input, an AI relying solely on logic could decide that actions such as launching preemptive strikes are the most likely to achieve victory or protect its host nation. Such a decision would be devoid of empathy for the human, environmental, political, and long-term destabilising catastrophes it would create.

Mark Cuban, a billionaire and a possible runner in the 2020 U.S. presidential race, shares concerns about killer AI. Cuban, however, points out that it’s AIs which don’t take on human-like qualities that pose the biggest threat.

Most AI development today focuses on deep neural networks using vast amounts of data. This approach is vital for problem-solving as it is very effective, but for decision-making, Prowler argues it is limiting.

Building a decision-making AI

Prowler is building a decision-making AI which is based on probabilistic modelling, reinforcement learning, and game theory. For each of those areas, Prowler has experts in their field combining their knowledge to create an AI it hopes will make decisions as well as a human.
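As a cartoon of the probabilistic-modelling side of that mix (the actions and numbers below are invented, and this is plain expected-utility maximisation rather than anything of Prowler’s):

```python
import random

# Toy decision-making under uncertainty: sample outcomes from a probabilistic
# model of each action, then pick the action with the best average payoff
outcome_model = {
    "wait": lambda rng: rng.gauss(0.0, 0.1),
    "act":  lambda rng: rng.gauss(0.3, 1.0),  # higher expected payoff, more risk
}

def best_action(n_samples=10_000, seed=42):
    rng = random.Random(seed)
    expected = {
        action: sum(draw(rng) for _ in range(n_samples)) / n_samples
        for action, draw in outcome_model.items()
    }
    return max(expected, key=expected.get)
```

Game theory enters when the outcome model must also account for the decisions of other agents, which is where a purely data-driven approach tends to struggle.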

The company was co-founded by two alumni of AI company VocalIQ, which was acquired by Apple 13 months after launch. Today, Cambridge Innovation Capital (CIC) announced it has led a £10 million Series A funding round for Prowler.

“Prowler has assembled a world-class team of researchers to tackle some of the most intractable problems of our age,” comments Andrew Williamson, Investment Director at CIC, who is joining the Board of Prowler. “It is hugely exciting that the company is able to capitalise on the expertise in probabilistic modelling, principled machine learning and game theory available in Cambridge.”

“This investment allows us to expand our world-leading team of academics and developers, enhancing our research bandwidth and accelerating our technology into the market,” added Vishal Chatrath, CEO of Prowler. “As a team, we will use the funding to take the business to the next stage and we will continue to solve some of the world’s hardest machine learning problems.”

Despite popular belief, the closer AI gets to taking on human-like qualities, the more likely it is to save us all. At the least, it will be far more efficient.

What are your thoughts about AI making human-like decisions?

 
