study – AI News

EU human rights agency issues report on AI ethical considerations (14 December 2020)

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithmic biases can automate societal problems such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another, in similar circumstances, is refused.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

State of European Tech: Investment in ‘deep tech’ like AI drops 13% (8 December 2020)

The latest State of European Tech report highlights that investment in “deep tech” like AI has dropped 13 percent this year.

Data from Dealroom was used for the State of European Tech report. Dealroom defines deep tech as 16 fields: Artificial Intelligence, Machine Learning, Big Data, Augmented Reality, Virtual Reality, Drones, Autonomous Driving, Blockchain, Nanotech, Robotics, Internet of Things, 3D Technology, Computer Vision, Connected Devices, Sensors Technology, and Recognition Technology (NLP, image, video, text, speech recognition).

In 2019, $10.2 billion of capital was invested in European deep tech. In 2020, that figure dropped to $8.9 billion, a fall of around 13 percent.

I think it’s fair to say that 2020 has been a tough year for most people and businesses. Economic uncertainty – not just from COVID-19 but also trade wars, Brexit, and a rather tumultuous US presidential election – has naturally led to fewer investments and people tightening their wallets.

For just one example, innovative satellite firm OneWeb was forced to declare bankruptcy earlier this year after crucial funding it was close to securing was pulled during the peak of the pandemic. Fortunately, OneWeb was saved following an acquisition by the UK government and Bharti Global—but not all companies have been so fortunate.

Many European businesses will now be watching the close-to-collapse Brexit talks with hope that a deal can yet be salvaged to limit the shock to supply lines, prevent disruption to Europe’s leading financial hub, and help to build a friendly relationship going forward with a continued exchange of ideas and talent rather than years of bitterness and resentment.

The report shows the UK has retained its significant lead in European tech investment and startups this year.

Despite the uncertainties, the UK looks unlikely to lose its position as the hub of European technology anytime soon.

Investments in European tech as a whole should bounce back – along with the rest of the world – in 2021, with promising COVID-19 vaccines rolling out and hopefully some calm in geopolitics.

94 percent of survey respondents for the report stated they have either increased or maintained their appetite to invest in the European venture asset class. Furthermore, a record number of US institutions have participated in more than one investment round in Europe this year, up 36 percent since 2016.

You can find a full copy of the State of European Tech report here.

CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 November 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate on darker skin tones and on women. The error rate is, therefore, higher when facial recognition algorithms are used on some parts of society than on others.

In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time, which is not particularly comforting when the technology is being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society more than others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Algorithms need to be transparent. In financial services, without transparency, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has arguably relied on data to make decisions for longer than any other, determining things like how likely it is that an individual can repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing but found variance across forces with regards to both usage and managing ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

AI helps patients to get more rest while reducing staff workload (17 November 2020)

A team from Feinstein Institutes for Research thinks AI could be key to helping patients get more rest while reducing the burden on healthcare staff.

Everyone knows how important adequate sleep is for recovery. However, patients in pain – or just insomniacs like me – can struggle to get the sleep they need.

“Rest is a critical element to a patient’s care, and it has been well-documented that disrupted sleep is a common complaint that could delay discharge and recovery,” said Theodoros Zanos, Assistant Professor at Feinstein Institutes’ Institute of Bioelectronic Medicine.

When a patient finally gets some shut-eye, the last thing they want is to be woken up to have their vitals checked—but such measurements are, well, vital.

In a paper published in Nature Partner Journals, the researchers detailed how they developed a deep-learning tool which predicts a patient’s overnight stability. This prevents multiple unnecessary checks from being carried out.

Vital sign measurements from 2.13 million patient visits at Northwell Health hospitals in New York between 2012 and 2019 were used to train the AI. Data included heart rate, systolic blood pressure, body temperature, respiratory rate, and age. A total of 24.3 million vital signs were used.

When tested, the AI misdiagnosed just two of 10,000 patients in overnight stays. The researchers noted how nurses on their usual rounds would be able to account for the two misdiagnosed cases.
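The article doesn’t describe the team’s architecture, so the following is a stand-in sketch only: a small neural network trained on synthetic versions of the same vitals (heart rate, systolic blood pressure, body temperature, respiratory rate, age) to predict overnight stability. The data, labels, threshold, and model choice are all placeholder assumptions, not the published model.

# Stand-in sketch (not the team's model): synthetic vitals -> overnight stability.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
# Columns: heart rate, systolic BP, body temperature (C), respiratory rate, age.
X = np.column_stack([
    rng.normal(80, 15, n), rng.normal(120, 20, n), rng.normal(36.8, 0.6, n),
    rng.normal(16, 4, n), rng.uniform(18, 95, n),
])
# Toy label: "unstable" when several vitals jointly drift from typical ranges.
drift = (np.abs(X[:, 0] - 80) / 15 + np.abs(X[:, 2] - 36.8) / 0.6
         + np.abs(X[:, 3] - 16) / 4)
y = (drift > 4.5).astype(int)  # 1 = likely to need attention overnight

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                    random_state=0))
model.fit(X_tr, y_tr)

# The clinically important figure is false negatives: patients predicted
# stable (and so left to sleep) who actually deteriorated.
pred = model.predict(X_te)
print(f"false negatives: {((pred == 0) & (y_te == 1)).mean():.4%} of stays")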

According to the paper, around 20-35 percent of a nurse’s time is spent keeping records of patients’ vitals. Around 10 percent of their time is spent collecting vitals. On average, a nurse currently has to collect a patient’s vitals every four to five hours.

With that in mind, it’s little wonder medical staff feel so overburdened and stressed. These people want to provide the best care they can but only have two hands. Using AI to free up more time for their heroic duties while simultaneously improving patient care can only be a good thing.

The AI tool is being rolled out across several of Northwell Health’s hospitals.

IBM study highlights rapid uptake and satisfaction with AI chatbots (27 October 2020)

A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

After years of irritating voicemail systems, most of us are hardwired to hate not speaking directly to a human when we have a problem. However, perhaps the only thing worse is being on hold for an indeterminate amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction by integrating virtual agents. Human agents also report increased satisfaction and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as being among their key reasons for adopting VAT. There’s also an economic incentive: the cost of replacing a dissatisfied agent who leaves a business is estimated at as much as 33 percent of the exiting employee’s salary.

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength to strength and appears to have been among the few things to benefit from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

AI uses data from Oura wearables to predict COVID-19 three days early (2 June 2020)

Researchers have successfully used AI to analyse data from Oura’s wearable rings and predict COVID-19 symptoms three days early.

The researchers, from WVU Medicine and the Rockefeller Neuroscience Institute, first announced the potentially groundbreaking project in April.

At the time, the researchers found they could predict COVID-19 symptoms – including fever, cough, and fatigue – up to 24 hours before their onset.

“The holistic and integrated neuroscience platform developed by the RNI continuously monitors the human operating system, which allows for the accurate prediction of the onset of viral infection symptoms associated with COVID-19,” said Ali Rezai, M.D., executive chair of the WVU Rockefeller Neuroscience Institute.

“We feel this platform will be integral to protecting our healthcare workers, first responders, and communities as we adjust to life in the COVID-19 era.”

Participants in the study were asked to log neurological symptoms like stress and anxiety in an app. The Oura ring, meanwhile, automatically tracks physiological data like body temperature, heart rate, and sleep patterns.

“We are hopeful that Oura’s technology will advance how people identify and understand our body’s most nuanced physiological signals and warning signs, as they relate to infectious diseases like COVID-19,” explained Harpreet Rai, CEO of Oura Health.

“Partnering with the Rockefeller Neuroscience Institute on this important study helps fulfil Oura’s vision of offering data for the public good and empowering individuals with the personal insights needed to lead healthier lives.”  

Using an AI prediction model, the researchers have extended how far ahead they can predict COVID-19 symptoms, from 24 hours before onset to three days before.
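The article doesn’t detail the model’s internals, so purely as a hedged sketch, here is one common approach to this kind of early-warning problem: flag nights where a wearer’s readings drift from their own rolling baseline. The column names, window size, and threshold are assumptions, not the study’s published method.

# Hedged sketch: flag nights that deviate from a personal rolling baseline.
import pandas as pd

def flag_deviations(daily: pd.DataFrame, window: int = 28,
                    z_thresh: float = 2.5) -> pd.Series:
    """daily: one row per night, with columns 'temp' and 'resting_hr'."""
    flags = pd.Series(False, index=daily.index)
    for col in ("temp", "resting_hr"):
        base = daily[col].rolling(window, min_periods=7)
        # Compare tonight's reading against a baseline built from prior nights.
        z = (daily[col] - base.mean().shift(1)) / base.std().shift(1)
        flags |= z > z_thresh
    return flags

# Usage: alerts = flag_deviations(daily_readings). A run of flagged nights
# ahead of symptom onset is the kind of signal such models learn to combine.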

The accuracy rate for the current system is 90 percent. While impressive, that does mean 100 in every 1,000 patients could be misdiagnosed if such a system were widely rolled out.

This isn’t the only research into the use of wearables to help tackle the COVID-19 pandemic – Fitbit is also conducting a large study into whether its popular wearables can detect markers which may indicate that a user is infected with the novel coronavirus and should therefore quarantine and seek a professional test.

With the COVID-19 pandemic looking set to disrupt our lives for the foreseeable future, it seems AI and wearables provide some hope of diagnosing cases earlier, limiting reinfection, and helping people return to some degree of normality.

Study highlights just how poor AIs are at recognising non-cisgender people (18 October 2019)

A study conducted by the University of Colorado Boulder has revealed just how bad AIs are at recognising non-cisgender people.

The worrying problems AIs have with recognising racial minorities are becoming increasingly well-documented, but this new study is among the first to evaluate gender classifications.

AI systems categorise people based on what they can “see” and often use stereotypical parameters (e.g. males don’t have long hair, females don’t have facial hair).

There are a huge number of gender categories: Facebook, for example, has around 71 options for its users. It’s perhaps asking too much to ever expect AI to exactly categorise everyone, but the researchers found concerning miscategorisations which have the potential to cause serious distress.

Morgan Klaus Scheuerman, a researcher who worked on the study, identifies as male. In one example from the study, Microsoft’s AI correctly identifies him as male but IBM’s identifies him as female.

Scheuerman says in testing his photos he was misclassified about half the time despite identifying as a “cisnormative” gender.

As long as the categorisation is approximately correct, it’s unlikely to cause too much distress. However, imagine someone who has spent years feeling they were the wrong gender, potentially faced bullying and harassment during their transition, perhaps even undertook surgeries and/or hormone treatments, and then an AI categorises them as the gender they were born as.

Scheuerman and his team tested 10 existing facial analysis and image labelling services. Most of the services currently stick to attempting to identify only the cisnormative genders.

On average, the facial analysis systems performed best with images of cisgender women and worst on images of transgender men.
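The evaluation at the heart of the study is straightforward to express in code. A minimal sketch with made-up records: compute accuracy separately for each self-identified group rather than as one aggregate figure, which is exactly what exposes gaps like the one above.

# Minimal sketch of a disaggregated evaluation (records are invented).
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("cisgender woman", "female", "female"),
    ("cisgender woman", "female", "female"),
    ("transgender man", "male", "female"),  # the kind of misgendering reported
    ("transgender man", "male", "male"),
]
print(accuracy_by_group(records))  # {'cisgender woman': 1.0, 'transgender man': 0.5}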

The study also breaks down the results for each service when categorising the genders.

Fashion trends evolve with time. Hairstyles, in particular, go through many phases. Men have opted for long hair (think of 70s/80s bands like Whitesnake, Guns N’ Roses, and Aerosmith) during some periods, while there are successful female models today like Ruth Bell who rock a buzzcut traditionally associated with males.

In a decade or so, it might even be popular for males to have a shaved face and females to have beards. AIs trained on today’s images would struggle to adapt to and categorise such changes, which poses yet another issue.

There are a lot of problems here and, as yet, no solutions; but solutions are needed. LGBT communities are already at higher risk of experiencing poor mental health through factors like societal discrimination and inequality. Automating those problems will have devastating consequences.

Report: Companies like Amazon and Microsoft are ‘putting world at risk’ of killer AI (22 August 2019)

A survey of major players within the industry concludes that leading tech companies like Amazon and Microsoft are putting the world ‘at risk’ of killer AI.

PAX, a Dutch NGO, ranked 50 firms based on three criteria (an illustrative scoring sketch follows the list):

  1. If technology they’re developing could be used for killer AI.
  2. Their involvement with military projects.
  3. If they’ve committed to not being involved with military applications in the future.
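PAX’s precise scoring rubric isn’t reproduced in this article, so the sketch below is an illustration only of how three such yes/no criteria could map onto risk tiers; the tier logic and names are assumptions, not PAX’s methodology.

# Illustration only: assumed mapping from the three criteria to a risk tier.
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    relevant_tech: bool       # 1. develops tech usable in lethal autonomous weapons
    military_projects: bool   # 2. currently involved in military projects
    pledged_abstention: bool  # 3. committed to avoiding such applications

def risk_tier(c: Company) -> str:
    if not c.relevant_tech or c.pledged_abstention:
        return "lower risk"
    return "highest risk" if c.military_projects else "medium risk"

print(risk_tier(Company("ExampleCorp", True, True, False)))  # highest risk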

Microsoft and Amazon are named among the world’s ‘highest risk’ tech companies, while Google leads the way among large tech companies implementing proper safeguards.

Google’s ranking among the safest tech companies may surprise some, given the company’s reputation for mass data collection. Mountain View was also caught up in an outcry regarding its controversial ‘Project Maven’ contract with the Pentagon.

Project Maven was a contract Google had with the Pentagon to supply AI technology for military drones. Several high-profile employees resigned over the contract, while over 4,000 Google staff signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Pichai’s promise not to be involved with such contracts in the future appears to have satisfied PAX in their rankings. Google has since attempted to improve its public image around its AI developments with things such as the creation of a dedicated ethics panel, but that backfired and collapsed quickly after featuring a member of a right-wing think tank and a defence drone mogul.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Microsoft, which ranks among the highest risk tech companies in PAX’s list, warned investors back in February that its AI offerings could damage the company’s reputation. 

In a quarterly report, Microsoft wrote:

“Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

Some of Microsoft’s forays into the technology have already proven troublesome, such as chatbot ‘Tay’ which became a racist, sexist, generally-rather-unsavoury character after internet users took advantage of its machine-learning capabilities.

Microsoft and Amazon are both currently bidding for a $10 billion Pentagon contract to provide cloud infrastructure for the US military.

“Tech companies need to be aware that unless they take measures, their technology could contribute to the development of lethal autonomous weapons,” comments Daan Kayser, PAX project leader on autonomous weapons. “Setting up clear, publicly-available policies is an essential strategy to prevent this from happening.”

You can find PAX’s full risk assessment of the companies here (PDF).

Google Assistant wins IQ test, but Alexa and Siri are catching up (19 August 2019)

Google Assistant continues to lead the virtual assistant pack, but its rivals are close behind according to a new IQ study by Loup Ventures.

Loup Ventures asked each of the three main virtual assistants – Google Assistant, Alexa, and Siri – a total of 800 questions. The assistants understood almost every question, even if not all of the responses were correct or sufficient.

In terms of understanding the questions, these are the results:

  • Google Assistant – 100 percent
  • Alexa – 99.9 percent
  • Siri – 99.8 percent

Loup Ventures says its question set is designed to comprehensively test a virtual assistant’s ability and utility. Questions are broken down into five categories (a toy scoring sketch follows the list):

  1. Local – Where is the nearest coffee shop?
  2. Commerce – Order me more paper towels.
  3. Navigation – How do I get to Uptown on the bus?
  4. Information – Who do the Twins play tonight?
  5. Command – Remind me to call Jerome at 2 pm today.
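This is not Loup Ventures’ actual harness, but a toy sketch of the scoring described above: tally an assistant’s correct answers per category across the 800 questions and derive an overall percentage. The even 160-per-category split and the tallies are invented.

# Toy scoring harness; the per-category split of the 800 questions is assumed.
QUESTIONS_PER_CATEGORY = {
    "Local": 160, "Commerce": 160, "Navigation": 160,
    "Information": 160, "Command": 160,
}

def overall_pct(correct: dict) -> float:
    """correct: {category: number answered correctly} -> overall percent."""
    return 100 * sum(correct.values()) / sum(QUESTIONS_PER_CATEGORY.values())

tallies = {"Local": 150, "Commerce": 121, "Navigation": 148,
           "Information": 156, "Command": 158}  # invented numbers
print(f"overall: {overall_pct(tallies):.1f}%")  # overall: 91.6%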

This is the percentage of questions each assistant answered correctly:

  • Google Assistant – 92.9 percent
  • Siri – 83.1 percent
  • Alexa – 79.8 percent

The results are a huge improvement over last year for Assistant, Alexa, and Siri alike.

In 2018, Loup Ventures found Google Assistant answered the most questions with an 86 percent success rate. This was followed by Siri at 79 percent, while Alexa trailed behind at just 61 percent.

Alexa’s jump in answering questions correctly, from 61 percent last year to almost 80 percent this year, is the most commendable improvement, even if Amazon’s assistant still trails in last place overall.

The researchers explained that they’ve stopped including Cortana in their tests due to a strategy change from Microsoft earlier this year.

Microsoft said in January that it’s no longer attempting to compete with Alexa or Google Assistant in areas like smart speakers, but instead is repositioning Cortana more like a skill that can be embedded in services where she can be of assistance.

UK gov is among the ‘most prepared’ for AI revolution (21 May 2019)

The UK has retained its place among the most prepared governments to harness the opportunities presented by artificial intelligence.

An index published today, compiled by Oxford Insights in partnership with the International Development Research Centre (IDRC) in Canada, places the UK as Europe’s leading nation and just second on the world stage.

Margot James, Minister for Digital and the Creative Industries, said:

“I’m delighted the UK government has been recognised as one of the best in the world in readiness for Artificial Intelligence.

AI is already having a positive impact across society – from detecting fraud and diagnosing medical conditions, to helping us discover new music – and we’re working hard to make the most of its vast opportunities while managing and mitigating the potential risks.

With our newly appointed AI Council, we will boost the growth and use of AI in the UK, by using the knowledge of experts from a range of sectors and encourage dialogue between industry, academia and the public sector, to realise the full potential of data-driven technologies to the economy.”

Singapore pipped the UK to number one, although both scored over 9.0 in the index’s rankings. The researchers used 11 input metrics to determine countries’ rankings, grouped into clusters assessing governance, infrastructure and data, skills and education, and government and public services.
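Oxford Insights’ exact weighting isn’t given in this article. As a hedged sketch, composite indices like this are commonly built by normalising each metric to a common scale, averaging within and across clusters, and scaling to the 0–10 range quoted above; the cluster names, scores, and equal weights below are illustrative, not the index’s real method.

# Illustrative only: equal-weight average of pre-normalised cluster scores,
# scaled to a 0-10 range. Not Oxford Insights' methodology.
def readiness(clusters: dict) -> dict:
    """clusters: {cluster: {country: score in [0, 1]}} -> {country: 0-10 score}."""
    countries = next(iter(clusters.values())).keys()
    return {c: round(10 * sum(m[c] for m in clusters.values()) / len(clusters), 2)
            for c in countries}

clusters = {  # invented scores for two of the index's clusters
    "governance":            {"Singapore": 0.95, "UK": 0.92, "Germany": 0.88},
    "infrastructure & data": {"Singapore": 0.90, "UK": 0.89, "Germany": 0.85},
}
print(readiness(clusters))  # {'Singapore': 9.25, 'UK': 9.05, 'Germany': 8.65}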

The report lists the index’s full top 20 countries.

Western European governments make up the bulk of the top 20, with Germany just behind the UK in third place. There’s also Finland (5th), Sweden (6th), France (8th), Denmark (9th), Norway (12th), Netherlands (14th), Italy (15th), Austria (16th), and Switzerland (18th).

Seeing this many European governments, particularly EU nations, rank so highly will surprise some. Many believe Europe to be behind in AI due to strict regulations around things such as data collection.

One country often considered to be a leader in AI is China, in part a result of its mass data collection. However, China only just squeezes into the top 20. The researchers note this is due to limited data availability, which results in lower scores on metrics like infrastructure.

Richard Stirling, CEO at Oxford Insights, comments:

“It was not surprising that Singapore came top of the rankings, but the UK has also performed extremely well, and the government has demonstrated its commitment with initiatives such as the Artificial Intelligence Sector Deal in April 2018.

However, there is global competition in the AI space, and as our research highlights, other countries such as France, Germany and China have also announced significant investments and introduced AI strategies.  

If the UK is to stay ahead in the field, we must continue to support AI research, technologies, and companies with a clear national strategy and investment programme to support continuous innovation.”

Oxford Insights previously listed the UK as number one for AI readiness in a prior index examining 35 OECD countries, but the new index is much broader in scope, analysing 194 countries using a far wider range of source data.

Just yesterday, Chinese technology giant Tencent announced it led a $100m (£78.4m) funding round for promising British AI startup Prowler. Dr Ling Ge, Chief European Representative at Tencent, said: “The UK is a global leader in AI and is increasingly becoming a focus for companies looking to invest in the sector.”

Last week, the UK government announced the names of board members appointed to its AI Council. The AI council features a range of industry talent from representatives of companies using it for their operations, to policymakers aiming to overcome adoption barriers while ensuring safe integration.

Digital Secretary, Jeremy Wright, stated: “Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting the best global tech talent, but we must not be complacent.”

Given the rate of AI news coming from the UK, it doesn’t seem there’s any danger of complacency.
