algorithm – AI News (https://news.deepgeniusai.com)

EU human rights agency issues report on AI ethical considerations
https://news.deepgeniusai.com/2020/12/14/eu-human-rights-agency-issues-report-ai-ethical-considerations/
Mon, 14 Dec 2020

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithms can embed biases, potentially automating societal problems such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used, in some form, in almost every industry; where it isn’t yet, it soon will be.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services bias could mean one person being granted a loan or mortgage while another, in similar circumstances, is denied.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.
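One simple transparency check relevant to the loan example above is to compare a model’s approval rates across groups of applicants (sometimes called the “demographic parity difference”). The sketch below is purely illustrative: the groups, decisions, and numbers are made up, and this is just one of many possible fairness checks, not anything prescribed by the FRA report.

```python
# Hypothetical audit data: (neighbourhood group, did the model approve the loan?)
approvals = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rate(group):
    """Fraction of applicants in `group` whose loans were approved."""
    decisions = [approved for g, approved in approvals if g == group]
    return sum(decisions) / len(decisions)

# A large gap between groups is a signal that the model's decisions
# warrant scrutiny -- it does not by itself prove discrimination.
gap = approval_rate("A") - approval_rate("B")
print(f"approval-rate gap between groups: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

In practice such a gap would be the starting point for investigation (e.g. checking whether a proxy variable like neighbourhood is driving decisions), not the end of it.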

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

ML algorithm predicts heart attacks with 90% accuracy
https://news.deepgeniusai.com/2019/05/14/ml-algorithm-predicts-heart-attacks/
Tue, 14 May 2019

A machine learning algorithm is claimed to predict heart attacks and death from heart disease more accurately than human practitioners.

The algorithm, LogitBoost, achieved 90 percent accuracy. It was trained on data from 950 chest-pain patients; from the data, 85 variables were calculated for each patient.

Each patient’s outcome over the following six years was known. Combining these variables, the algorithm was able to identify patterns that indicate a higher chance of a heart attack or cardiac-related death.
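The study’s implementation and the patient data are not public, so the following is only a minimal sketch of the LogitBoost idea (additive logistic regression fitted by boosting weighted regression stumps), run on small synthetic two-feature data rather than the 85 clinical variables used in the study. All names, data, and hyperparameters here are illustrative assumptions.

```python
import math
import random

def fit_stump(X, z, w):
    """Weighted least-squares decision stump fitted to working response z."""
    n, d = len(X), len(X[0])
    best = None
    for j in range(d):
        for t in sorted({x[j] for x in X}):
            left = [i for i in range(n) if X[i][j] <= t]
            right = [i for i in range(n) if X[i][j] > t]
            def wmean(idx):
                sw = sum(w[i] for i in idx)
                return sum(w[i] * z[i] for i in idx) / sw if sw else 0.0
            cl, cr = wmean(left), wmean(right)
            err = (sum(w[i] * (z[i] - cl) ** 2 for i in left)
                   + sum(w[i] * (z[i] - cr) ** 2 for i in right))
            if best is None or err < best[0]:
                best = (err, j, t, cl, cr)
    _, j, t, cl, cr = best
    return lambda x: cl if x[j] <= t else cr

def logitboost(X, y, rounds=10):
    """Minimal LogitBoost for labels y in {0, 1}."""
    n = len(X)
    F = [0.0] * n          # additive model, half log-odds scale
    stumps = []
    for _ in range(rounds):
        p = [1.0 / (1.0 + math.exp(-2.0 * f)) for f in F]
        w = [max(pi * (1.0 - pi), 1e-8) for pi in p]
        # Working response, clipped for numerical stability.
        z = [max(-4.0, min(4.0, (y[i] - p[i]) / w[i])) for i in range(n)]
        s = fit_stump(X, z, w)
        stumps.append(s)
        F = [F[i] + 0.5 * s(X[i]) for i in range(n)]
    # Predict 1 when the summed stump outputs push the log-odds above zero.
    return lambda x: 1 if sum(s(x) for s in stumps) > 0 else 0

# Synthetic demo: two features, outcome determined by their sum.
random.seed(0)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(150)]
y = [1 if x[0] + x[1] > 0 else 0 for x in X]
predict = logitboost(X, y, rounds=10)
accuracy = sum(predict(x) == label for x, label in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

On real clinical data one would of course report held-out (not training) accuracy, as the study did for its 90 percent figure.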

Study author Dr Luis Eduardo Juarez-Orozco said the advances go beyond medicine.

He said: “These advances are far beyond what has been done in medicine, where we need to be cautious about how we evaluate risk and outcomes.

“We have the data but we are not using it to its full potential yet.”

The findings were presented yesterday at the International Conference on Nuclear Cardiology and Cardiac CT (ICNC) in Lisbon, Portugal.

Dr Juarez-Orozco said: “Humans have a very hard time thinking further than three dimensions or four dimensions.

“The moment we jump into the fifth dimension we’re lost.

“Our study shows that very high dimensional patterns are more useful than single dimensional patterns to predict outcomes in individuals and for that we need machine learning.”

Morality algorithm proves AI can also be friendly
https://news.deepgeniusai.com/2018/01/22/morality-algorithm-proves-ai-friendly/
Mon, 22 Jan 2018

We’re used to seeing headlines of AI beating us at our own games in adversarial roles, but a new study has proven they can also excel when it comes to cooperation and compromise.

The study, from an international team of computer scientists, found AI can be programmed with a higher degree of morality than humans. The researchers set out to build a new type of algorithm for playing games that require working together rather than simply winning at all costs.

Jacob Crandall, BYU Computer Science Professor and lead author of the study, comments:

“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills. AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”

In many real-world applications, AI will have to compromise and cooperate with both humans and other machines. The ability to program AIs with friendly traits is unlikely to come as much of a surprise, but evidence that they can express those traits better than humans opens up new possibilities.

Crandall and his team created an algorithm called S# and tested its performance across a variety of two-player games. In most cases, the machine outperformed humans in the games.

“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” says Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”
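S# itself combines a portfolio of expert strategies with “cheap talk” signalling, and its implementation is not reproduced here. As a heavily simplified stand-in, the sketch below plays an iterated prisoner’s dilemma, the classic two-player testbed for cooperation, and shows how a non-deceptive strategy (tit-for-tat) sustains cooperation with a like-minded partner but withholds it from a defector. The payoffs are standard textbook values, not figures from the study.

```python
# Payoff matrix: (my move, their move) -> my payoff. C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first; thereafter mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Never cooperates, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return (score_a, score_b)."""
    history_a, history_b = [], []  # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # cooperation collapses: (99, 104)
```

The pattern Crandall describes (maintaining cooperation once it emerges, while not being exploitable by liars) is visible even in this toy version: two cooperators each earn 300, while against a permanent defector the cooperative strategy stops cooperating after one round.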

Take the current negotiations between Britain and the EU as the former exits the bloc. Both sides claim to want an ‘orderly exit’, but it’s clear human emotions are involved, and these keep leading to deadlocks in the talks despite time running out. AI negotiators could run through all the scenarios and find the best areas to compromise, without any feelings of mistrust.

“In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”

What are your thoughts on AI morality?
