mit – AI News (https://news.deepgeniusai.com)

MIT has removed a dataset which leads to misogynistic, racist AI models (2 July 2020) – https://news.deepgeniusai.com/2020/07/02/mit-removed-dataset-misogynistic-racist-ai-models/

MIT has apologised for, and taken offline, a dataset which trains AI models with misogynistic and racist tendencies.

The dataset in question is called 80 Million Tiny Images and was created in 2008. Designed for training AIs to detect objects, the dataset is a huge collection of pictures which are individually labelled based on what they feature.

Machine-learning models are trained using these images and their labels. An AI trained on such a dataset could, when fed an image of a street, tell you what it contains: cars, streetlights, pedestrians, and bikes.
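
For readers who want to see the shape of that process, here is a minimal, hypothetical sketch in Python: random arrays stand in for the 32×32 images and scikit-learn's logistic regression stands in for the real model, so only the overall workflow – label, train, predict – mirrors the description above.

```python
# Minimal sketch of training an object classifier on small labelled images.
# Illustrative only -- this is not the 80 Million Tiny Images pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
images = rng.random((1000, 32, 32, 3))   # stand-ins for real 32x32 photos
labels = rng.choice(["car", "streetlight", "pedestrian", "bike"], size=1000)

X = images.reshape(len(images), -1)      # flatten pixels into feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A trained model can then name what an unseen street photo contains.
print(model.predict(X_test[:5]))
```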

Two researchers – Vinay Prabhu, chief scientist at UnifyID, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland – analysed the images and found thousands of concerning labels.

MIT’s training set was found to label women as “bitches” or “whores,” and people from BAME communities with the kind of derogatory terms I’m sure you don’t need me to write. The Register notes the dataset also contained close-up images of female genitalia labeled with the C-word.

The Register alerted MIT to the concerning issues found by Prabhu and Birhane with the dataset and the college promptly took it offline. MIT went a step further and urged anyone using the dataset to stop using it and delete any copies.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that, because the dataset contains 80 million images at a size of just 32×32 pixels each, manual inspection would be almost impossible and the removal of every offensive image cannot be guaranteed.
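
To illustrate how a scrape driven purely by WordNet nouns can sweep up offensive terms, here is a hypothetical sketch (it is not MIT's actual collection code) using the NLTK copy of WordNet; the blocklist is a placeholder for the manual curation the dataset lacked.

```python
# Hypothetical sketch of label collection from WordNet nouns -- not MIT's
# actual scraper. Requires: pip install nltk; then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

# Every noun lemma becomes a candidate label / image-search query.
candidate_labels = {lemma.name().replace("_", " ")
                    for synset in wn.all_synsets(pos="n")
                    for lemma in synset.lemmas()}
print(len(candidate_labels))  # tens of thousands of nouns, benign and offensive alike

# Without a curated blocklist or manual review, slurs present in WordNet flow
# straight through into the dataset's labels.
blocklist = {"example_slur"}  # placeholder; a real list needs careful curation
clean_labels = candidate_labels - blocklist
```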

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community – precisely those that we are making efforts to include. It also contributes to harmful biases in AI systems trained on such data,” wrote Antonio Torralba, Rob Fergus, and Bill Freeman from MIT.

“Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”

You can find a full pre-print copy of Prabhu and Birhane’s paper here (PDF).

(Photo by Clay Banks on Unsplash)

MIT’s AI paints a dire picture if social distancing is relaxed too soon (17 April 2020) – https://news.deepgeniusai.com/2020/04/17/mit-ai-social-distancing-relaxed-too-soon/

According to an AI system built by MIT to predict the spread of COVID-19, relaxing social distancing rules too early would be catastrophic.

Social distancing measures around the world appear to be having the desired effect. In many countries, the “curve” appears to be flattening with fewer deaths and hospital admissions per day.

No healthcare system in the world is prepared to handle a large proportion of its population being hospitalised at once. Even relatively trivial ailments can become deadly if people cannot access the care they need. That is why, until a vaccine is found, maintaining social distancing is vital even as lockdown measures ease.

With the curve now flattening, the conversation is switching to how lockdowns can be lifted safely. Contact-tracing apps, which keep track of everyone an individual passes and alert them to self-isolate if they’ve been near anyone subsequently diagnosed with COVID-19, are expected to be key in easing measures.
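
The core matching logic behind such apps is simple enough to sketch. The snippet below is a conceptual illustration only – the device IDs, timestamps, and 14-day window are assumptions, not any specific app's protocol.

```python
# Conceptual sketch of the contact-tracing matching step (hypothetical data
# model, not any particular app's protocol).
from datetime import datetime, timedelta

# Each phone keeps a local log of anonymous IDs it has been near, with timestamps.
contact_log = {
    "device-123": datetime(2020, 4, 10, 9, 30),
    "device-456": datetime(2020, 4, 14, 18, 5),
}

# Health authorities publish the anonymous IDs of people who later test positive.
diagnosed_ids = {"device-456"}

def should_self_isolate(log, diagnosed, now, window_days=14):
    """Alert if we were near a diagnosed ID within the infectious window."""
    cutoff = now - timedelta(days=window_days)
    return any(device in diagnosed and seen >= cutoff
               for device, seen in log.items())

print(should_self_isolate(contact_log, diagnosed_ids, now=datetime(2020, 4, 17)))  # True
```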

MIT’s AI corroborates what many health officials are showing in their figures: that we should now be seeing new cases of COVID-19 levelling off in many countries.

“Our results unequivocally indicate that the countries in which rapid government interventions and strict public health measures for quarantine and isolation were implemented were successful in halting the spread of infection and prevent it from exploding exponentially,” the researchers wrote.

However, the situation could be similar to Singapore where lockdown measures almost completely flattened the curve before an early return to normal resulted in a massive resurgence in cases.

“Relaxing or reversing quarantine measures right now will lead to an exponential explosion in the infected case count, thus nullifying the role played by all measures implemented in the US since mid-March 2020.”

The team from MIT trained their AI using public data on COVID-19’s spread and the measures each government implemented to contain it. The model was trained on known data from January to March, and it has accurately predicted the spread seen so far in April.
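
MIT's actual system couples a neural network with a standard epidemiological model, but the general idea – fit an infection model to observed case counts, then extrapolate under different quarantine strengths – can be sketched in a few lines. The toy SIR fit below is purely illustrative; the parameters and synthetic data are assumptions.

```python
# Toy illustration of fitting an epidemic curve and extrapolating -- a much
# simpler stand-in for MIT's neural-network-augmented model.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def infected_curve(t, beta, gamma):
    return odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma))[:, 1]

days = np.arange(60)
observed = infected_curve(days, 0.30, 0.10) + np.random.normal(0, 1e-3, 60)

# Fit transmission (beta) and recovery (gamma) rates to the observed curve...
(beta, gamma), _ = curve_fit(infected_curve, days, observed, p0=[0.2, 0.1])

# ...then ask what happens if distancing is relaxed and transmission rises.
relaxed = infected_curve(np.arange(120), beta * 1.5, gamma)
print(f"fitted beta={beta:.2f}; peak infected if relaxed: {relaxed.max():.1%}")
```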

While the researchers’ work focused on COVID-19 epidemics in the US, Italy, South Korea, and Wuhan, there’s no reason to think that relaxing social distancing rules anywhere else in the world at this stage would be any less dire.

You can find the full paper from MIT here.

(Photo by engin akyurt on Unsplash)

MIT researchers use AI to discover a welcome new antibiotic (21 February 2020) – https://news.deepgeniusai.com/2020/02/21/mit-researchers-use-ai-to-discover-a-welcome-new-antibiotic/

A team of MIT researchers have used AI to discover a welcome new antibiotic to help in the fight against increasing resistance.

Using a machine learning algorithm, the MIT researchers were able to discover a new antibiotic compound to which bacteria did not develop any resistance during a 30-day treatment period in mice.

The algorithm was trained using around 2,500 molecules – including about 1,700 FDA-approved drugs and a set of 800 natural products – to seek out chemical features that make molecules effective at killing bacteria. 

After the model was trained, the researchers tested it on a library of about 6,000 compounds known as the Broad Institute’s Drug Repurposing Hub.
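
MIT's model is a deep graph neural network, but the overall screening recipe – learn from molecules with known antibacterial activity, then rank an unseen library – can be illustrated with a much simpler stand-in. Everything below (the SMILES strings, labels, and fingerprint-plus-random-forest pipeline) is an assumption for illustration, not the team's actual code.

```python
# Simplified stand-in for the screening idea, using RDKit fingerprints and a
# random forest rather than MIT's deep graph neural network.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Hypothetical training data: molecules labelled by whether they inhibit E. coli.
train_smiles = ["CC(=O)Oc1ccccc1C(=O)O", "CCO", "c1ccccc1"]  # placeholders
train_labels = [1, 0, 0]                                      # 1 = antibacterial

X = np.stack([fingerprint(s) for s in train_smiles])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, train_labels)

# Screen a (tiny, placeholder) compound library and rank by predicted activity.
library = ["CCN", "CC(C)O"]
scores = model.predict_proba(np.stack([fingerprint(s) for s in library]))[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda pair: -pair[1]):
    print(f"{smiles}: predicted antibacterial probability {score:.2f}")
```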

“We wanted to develop a platform that would allow us to harness the power of artificial intelligence to usher in a new age of antibiotic drug discovery,” explains James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

“Our approach revealed this amazing molecule which is arguably one of the more powerful antibiotics that has been discovered.”

Antibiotic resistance is terrifying. Researchers have already discovered bacteria that are immune to current antibiotics, and we’re very much in danger of illnesses that had become simple to treat turning deadly once more.

Data from the Centers for Disease Control and Prevention (CDC) already indicates that antibiotic-resistant bacteria and antimicrobial-resistant fungi cause more than 2.8 million infections and 35,000 deaths a year in the United States alone.

“We’re facing a growing crisis around antibiotic resistance, and this situation is being generated by both an increasing number of pathogens becoming resistant to existing antibiotics, and an anaemic pipeline in the biotech and pharmaceutical industries for new antibiotics,” Collins says.

The recent coronavirus outbreak leaves many patients with pneumonia. With antibiotics, pneumonia is rarely fatal nowadays unless a patient has a substantially weakened immune system. The death toll from coronavirus would be far higher if antibiotic resistance were to set healthcare back to the 1930s.

MIT’s researchers claim their AI is able to check more than 100 million chemical compounds in a matter of days to pick out potential antibiotics that kill bacteria. This rapid checking reduces the time it takes to discover new lifesaving treatments and begins to swing the odds back in our favour.

The newly discovered molecule is called halicin – after HAL, the AI in the film 2001: A Space Odyssey – and has been found to be effective against E. coli. The team is now hoping to develop halicin for human use (a separate machine learning model has already indicated that it should have low toxicity to humans, so early signs are positive).

MIT software shows how NLP systems are snookered by simple synonyms (12 February 2020) – https://news.deepgeniusai.com/2020/02/12/mit-software-shows-how-nlp-systems-are-snookered-by-simple-synonyms/

Here’s an example of how artificial intelligence can still seriously lag behind humans in some respects: tests have shown how natural language processing (NLP) systems can be tricked into misunderstanding text by merely swapping one word for a synonym.

A research team at MIT developed software, called TextFooler, which looked for words which were most crucial to an NLP classifier and replaced them. The team offered an example:

“The characters, cast in impossibly contrived situations, are totally estranged from reality”, and
“The characters, cast in impossibly engineered circumstances, are fully estranged from reality”

No problem for a human to decipher. Yet the results on the AIs were startling. For instance, BERT, Google’s neural net, was up to seven times worse at identifying whether reviews on Yelp were positive or negative.

Douglas Heaven, writing a roundup of the study for MIT Technology Review, explained why the research was important. “We have seen many examples of adversarial attacks, most often with image recognition systems, where tiny alterations to the input can flummox an AI and make it misclassify what it sees,” Heaven wrote. “TextFooler shows that this style of attack also breaks NLP, the AI behind virtual assistants – such as Siri, Alexa and Google Home – as well as other language classifiers like spam filters and hate-speech detectors.”
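
The attack itself is conceptually simple. The sketch below captures the spirit of TextFooler – swap individual words for WordNet synonyms and keep the swaps that most move the classifier's output – but it is not the authors' implementation, and `predict_sentiment` is a hypothetical classifier you would supply yourself.

```python
# Sketch of a synonym-substitution attack in the spirit of TextFooler.
# `predict_sentiment` is a hypothetical function returning P(review is positive).
from nltk.corpus import wordnet as wn

def synonyms(word):
    return {l.name().replace("_", " ")
            for s in wn.synsets(word) for l in s.lemmas()} - {word}

def attack(sentence, predict_sentiment, threshold=0.2):
    words = sentence.split()
    original = predict_sentiment(" ".join(words))
    for i, word in enumerate(words):
        best = None
        for candidate in synonyms(word):
            trial = words[:i] + [candidate] + words[i + 1:]
            score = predict_sentiment(" ".join(trial))
            if best is None or abs(score - original) > abs(best[1] - original):
                best = (candidate, score)
        # Keep the swap only if it moves the classifier's confidence noticeably.
        if best and abs(best[1] - original) > threshold:
            words[i] = best[0]
    return " ".join(words)
```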

This publication has explored various areas where AI technologies are outstripping human efforts, such as detecting breast cancer, playing StarCraft, and public debating. In other fields, resistance – however futile – remains. In December it was reported that human drivers were, overall, still beating AIs at drone racing, although the chief technology officer of the Drone Racing League predicted that 2023 would be the year when AI takes over.

The end goal for software such as TextFooler, the researchers hope, is to make NLP systems more robust.

Postscript: For those reading from outside the British Isles, China, and certain Commonwealth countries – to ‘snooker’ someone, deriving from the sport of the same name, is to ‘leave one in a difficult position.’ The US equivalent is ‘behind the eight-ball’, although that would have of course thrown the headline out.

Deepfake shows Nixon announcing the moon landing failed (6 February 2020) – https://news.deepgeniusai.com/2020/02/06/deepfake-nixon-moon-landing-failed/

In the latest creepy deepfake, former US President Nixon is shown to announce that the first moon landing failed.

Nixon was known to be a divisive figure but certainly recognisable. The video shows Nixon in the Oval Office, surrounded by flags, giving a presidential address to an eagerly awaiting world.

However, unlike the actual first moon landing – unless you’re a subscriber to conspiracy theories – this one failed.

“These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery,” Nixon says in his trademark growl. “But they also know that there is hope for mankind in their sacrifice.”

What makes the video more haunting is that the speech itself is real. Although never broadcast, it was written for Nixon by speechwriter William Safire in case the moon landing did fail.

The deepfake was created by a team from MIT’s Center for Advanced Virtuality and put on display at the IDFA documentary festival in Amsterdam.

In order to recreate Nixon’s famous voice, the MIT team partnered with technicians from Ukraine and Israel and used advanced machine learning techniques.

We’ve covered many deepfakes here on AI News. While many are amusing, there are serious concerns that deepfakes could be used for malicious purposes such as blackmail or manipulation.

Ahead of the US presidential elections, some campaigners have worked to increase the awareness of deepfakes and get social media platforms to help tackle any dangerous videos.

Back in 2019, Speaker Nancy Pelosi was the victim of a deepfake that went viral across social media and made her appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

As part of a bid to persuade the social media giant to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg – making it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Last month, Facebook pledged to crack down on deepfakes ahead of the US presidential elections. However, the new rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to those wanting a firm stance against potential voter manipulation.

Microsoft and MIT develop AI to fix driverless car ‘blind spots’ (28 January 2019) – https://news.deepgeniusai.com/2019/01/28/microsoft-mit-develop-ai-driverless-car/

Microsoft and MIT have partnered on a project to fix so-called virtual ‘blind spots’ which lead driverless cars to make errors.

Roads, especially while shared with human drivers, are unpredictable places. Training a self-driving car for every possible situation is a monumental task.

The AI developed by Microsoft and MIT compares the action taken by humans in a given scenario to what the driverless car’s own AI would do. Where the human decision is more optimal, the vehicle’s behaviour is updated for similar future occurrences.

Ramya Ramakrishnan, an author of the report, says:

“The model helps autonomous systems better know what they don’t know.

Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents.

The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

For example, if an emergency vehicle is approaching then a human driver should know to let it pass when it is safe to do so. These situations can get complex depending on the surroundings.

On a country road, allowing the vehicle to pass could mean edging onto the grass. The last thing you, or the emergency services, want a driverless car to do is to handle all country roads the same and swerve off a cliff edge.

Humans can either ‘demonstrate’ the correct approach in the real world, or ‘correct’ the AI by sitting at the wheel and taking over if the car’s actions are wrong. A list of situations is compiled, along with labels indicating whether the car’s actions were deemed acceptable or unacceptable.

The researchers have ensured a driverless car AI does not treat an action as 100 percent safe just because every outcome so far has been. Using the Dawid-Skene machine learning algorithm, the AI uses probability calculations to spot patterns and determine whether something is truly safe or still leaves the potential for error.
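
A heavily simplified sketch of that bookkeeping is shown below. The real system uses the Dawid-Skene algorithm to weigh noisy human labels; this stand-in only aggregates acceptable/unacceptable votes per situation and flags anything with mixed or sparse feedback as a potential blind spot. The situation names and thresholds are invented for illustration.

```python
# Simplified stand-in for blind-spot detection (the real system applies the
# Dawid-Skene algorithm; this just aggregates raw human judgements).
from collections import defaultdict

# Each record: (situation, did the human judge the car's action acceptable?)
feedback = [
    ("ambulance_behind_country_road", False),
    ("ambulance_behind_country_road", False),
    ("ambulance_behind_country_road", True),
    ("ambulance_behind_motorway", True),
]

counts = defaultdict(lambda: [0, 0])   # situation -> [acceptable, unacceptable]
for situation, acceptable in feedback:
    counts[situation][0 if acceptable else 1] += 1

for situation, (ok, not_ok) in counts.items():
    p_safe = ok / (ok + not_ok)
    # Even a high observed safety rate is not taken as certainty: mixed or
    # sparse feedback marks the situation as a potential blind spot.
    if p_safe < 0.9 or (ok + not_ok) < 10:
        print(f"blind-spot candidate: {situation} (observed safe rate {p_safe:.0%})")
```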

We’re yet to reach a point where the technology is ready for deployment. Thus far, the scientists have only tested it with video games. It offers a lot of promise, however, to help ensure driverless car AIs can one day safely respond to all situations.

Researchers get public to decide who to save in a driverless car crash (25 October 2018) – https://news.deepgeniusai.com/2018/10/25/researchers-save-driverless-car-crash/

Researchers have conducted an experiment intending to solve the ethical conundrum of who to save if a fatal driverless car crash is unavoidable.

A driverless car AI will need to be programmed with decisions about who to prioritise if it came down to a choice such as swerving and hitting a child on the left, or an elderly person on the right.

It may seem a fairly simple choice for some – children have their whole lives in front of them, while the elderly have fewer years ahead. However, arguments could be made the other way: younger people often have a greater chance of recovery, so both people could ultimately survive.

This is a fairly simple example, but things could get even more controversial when taking into account factors such as choosing between someone with a criminal record and a law-abiding citizen.

No single person should be made to make such decisions; nobody wants to be accountable for explaining to a family member why their loved one was chosen to die over another.

In their paper, the researchers wrote:

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision.

We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation.

Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

The best way forward is to establish what the majority feel should happen in such accidents, creating a form of collective accountability.

Researchers from around the world conducted an experiment, called the Moral Machine, in which millions of participants from more than 200 countries answered hypothetical questions.

Here are the results:

In the driverless car world, you’re relatively safe if you’re not:

• A passenger
• Male
• Unhealthy
• Considered poor / low status
• Unlawful
• Elderly
• An animal

If you’re any of these, I suggest you start taking extra care crossing the road.

The research was conducted by researchers from Harvard University and MIT in the US, University of British Columbia in Canada, and the Université Toulouse Capitole in France.

MIT’s AI uses wireless signals to detect movement through walls (13 June 2018) – https://news.deepgeniusai.com/2018/06/13/mit-ai-wireless-detect-movement-walls/

Researchers from MIT CSAIL have developed an AI capable of detecting movement through walls using just RF wireless signals.

CSAIL (Computer Science and Artificial Intelligence Laboratory) is based at the Massachusetts Institute of Technology with the goal of ‘pioneering new approaches to computing that will bring about positive changes in the way people around the globe live, play, and work.’

The researchers’ latest development, RF-Pose, uses a neural network in combination with simple RF wireless signals to sense the movement of people behind obstacles such as walls. Furthermore, it can even determine their posture.

RF-Pose’s neural network was trained using examples of people’s on-camera movement and how their bodies reflected the RF signals. Armed with this information, the AI was then able to determine movement and postures without the need for a camera and show them as stick figures.
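
The training trick can be sketched at a very high level: a camera-based pose estimator provides the ‘teacher’ keypoints, and a network learns to predict those same keypoints from the RF input alone. The PyTorch snippet below is a conceptual sketch with invented shapes and layer choices – it is not the RF-Pose architecture.

```python
# Conceptual sketch of cross-modal supervision: RF input, camera-derived targets.
# Shapes and layers are invented for illustration; this is not RF-Pose itself.
import torch
import torch.nn as nn

class ToyRFPose(nn.Module):
    def __init__(self, n_keypoints=14):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_keypoints * 2),   # (x, y) per joint of the stick figure
        )

    def forward(self, rf_heatmap):
        return self.encoder(rf_heatmap)

model = ToyRFPose()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

rf_heatmaps = torch.randn(8, 1, 64, 64)   # stand-ins for RF reflection maps
camera_keypoints = torch.randn(8, 28)     # "teacher" labels from a vision model

loss = nn.functional.mse_loss(model(rf_heatmaps), camera_keypoints)
loss.backward()
optimiser.step()
# At inference time the camera is no longer needed: RF input alone yields poses.
```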

There are several potential uses for this technology. One of the most exciting is for monitoring the safety of the elderly at home without having to install a privacy-invasive camera. Where a fall or other potential issue has been detected, an alert could be sent to a family member.

As with most technological advances, there is also the potential for abuse. The criminal possibilities are obvious: checking whether a home or business is currently unoccupied, or where the occupants are, without having to enter the building.

With this concern in mind, CSAIL has implemented a ‘consent mechanism’ safeguard which requires users to perform specific movements before tracking begins. However, other implementations — or a hacked version — could pose a worrying problem.

What are your thoughts on MIT’s RF-Pose development?

 

MIT created a psychopathic AI based on Norman Bates (6 June 2018) – https://news.deepgeniusai.com/2018/06/06/mit-psychopathic-ai/

While many researchers are calling for regulations to ensure safe and ethical AI development, MIT has created an AI with a psychopathic personality.

The scientists, from MIT’s unconventional Media Lab, based their AI’s personality on serial killer Norman Bates from Hitchcock thriller Psycho.

AI Norman is designed to caption images. Whereas most neural networks are trained on a range of images to reduce bias, poor Norman was exposed ‘to the darkest corners of Reddit.’

Anyone who has found themselves in these areas will feel for Norman. Whereas we can close our browsers, Norman had the equivalent of being tied up and forced to watch a barrage of the worst of humankind.

Needless to say, Norman gained a macabre view of the world.

MIT’s researchers ran Norman through a Rorschach test to see what it sees compared with an AI trained in a more standard manner.

Fortunately, MIT promises its AI is only a warning about bias and its potential dangers. As its creators say, “when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”

Norman won’t have access to the big red button. Unfortunately, that won’t keep less predictable human psychopaths from it.

(See also: Open source tool for detecting bias in algorithms)

What are your thoughts on MIT’s psychopathic AI?

 

AI uses radio waves to diagnose sleep disorders (7 August 2017) – https://news.deepgeniusai.com/2017/08/07/ai-radio-waves-diagnose-sleep-disorders/

Researchers have developed an AI-based algorithm to improve the diagnosis and monitoring of sleep disorders using radio waves.

Good sleep is vital for our mental and physical wellbeing. Diagnosing problems today, however, can be difficult as it requires patients to be fitted with electrodes and various sensors.

The researchers from MIT and Massachusetts General Hospital used an AI algorithm to analyse radio signals around a subject. These readings are translated into the stages of sleep: awake, light, deep, or rapid eye movement.
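
At its simplest, that translation is a classification problem: short windows of the reflected radio signal in, one of four sleep stages out. The sketch below uses invented features and an off-the-shelf classifier purely to make the idea concrete; the researchers' actual system is a custom deep network.

```python
# Toy sketch of the classification step only: hypothetical features extracted
# from 30-second RF windows, mapped to one of four sleep stages.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STAGES = ["awake", "light", "deep", "rem"]

rng = np.random.default_rng(1)
# Stand-in features, e.g. breathing rate, breathing variability, movement energy.
windows = rng.random((500, 3))
stages = rng.choice(STAGES, size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(windows, stages)

overnight = rng.random((8, 3))   # eight new 30-second windows from one night
print(clf.predict(overnight))    # e.g. ['light' 'deep' 'rem' ...]
```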

“Imagine if your WiFi router knows when you are dreaming and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation,” says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. “Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way.”

There are possible applications for utilising this data beyond healthcare. For example, all the lights in a home could switch off automatically when occupants fall asleep to conserve energy. If occupants wake in the night to use the bathroom, certain lights could be switched on to guide the way.

With more than 50 million known sufferers of sleep disorders in America alone, this research could be groundbreaking. It will help to diagnose and monitor problems without cumbersome and expensive specialist equipment.

“The opportunity is very big because we don’t understand sleep well, and a high fraction of the population has sleep problems,” says Mingmin Zhao, an MIT graduate student and the paper’s first author. “We have this technology that, if we can make it work, can move us from a world where we do sleep studies once every few months in the sleep lab to continuous sleep studies in the home.”

Katabi and Zhao worked on the study with Matt Bianchi, chief of the division of sleep medicine at MGH, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and a member of the Institute for Data, Systems, and Society at MIT, and Shichao Yue, another MIT graduate student who is also a co-author on the paper.

AI beyond sleep disorders

Some of Katabi’s previous work alongside her fellow researchers at MIT also made use of radio waves. One laptop-sized box, which emits low-power RF signals, revealed vital signs including pulse and breathing rate. This could be used to monitor the elderly to alert medical professionals of worrying changes to their vitals.

Artificial intelligence using deep neural networks has made all of this possible. Extracting relevant information from the large datasets while removing erroneous results required the researchers to build their own algorithm.

“Our device allows you not only to remove all of these sensors that you put on the person and make it a much better experience that can be done at home, it also makes the job of the doctor and the sleep technologist much easier,” Katabi says. “They don’t have to go through the data and manually label it.”

The researchers will present their new sensor at the International Conference on Machine Learning on August 9th, 2017.

What health applications are you excited to see AI used for?

 
