EU human rights agency issues report on AI ethical considerations (14 December 2020)

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled ‘Getting the Future Right’ and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts in a bid to answer that question.

With evidence that algorithmic biases can end up automating societal problems such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in some form or another in almost every industry—if not already, it soon will be.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services bias could mean one person being granted a loan or mortgage while another, in similar circumstances, is denied.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.
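
The FRA report doesn’t mandate any particular mechanism for this; purely as a hedged illustration of what decision transparency could look like, here is a minimal sketch of a toy linear credit-scoring model whose per-feature contributions can be shown to an applicant. All feature names, weights, and the threshold are hypothetical.

```python
# A hedged sketch, not the FRA's method: a toy linear credit-scoring
# model whose individual decisions can be explained feature by feature.
# All feature names, weights, and the threshold are hypothetical.

FEATURES = ["income", "years_employed", "existing_debt", "postcode_risk"]
WEIGHTS = {"income": 0.4, "years_employed": 0.3,
           "existing_debt": -0.5, "postcode_risk": -0.6}
THRESHOLD = 0.2  # approve if the score exceeds this

def score(applicant: dict) -> float:
    """Weighted sum of the applicant's (already normalised) features."""
    return sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant: dict) -> None:
    """Print the decision plus each feature's contribution, so the
    applicant can see and challenge what drove the outcome."""
    s = score(applicant)
    print(f"Decision: {'approve' if s > THRESHOLD else 'reject'} (score={s:+.2f})")
    for f in FEATURES:
        print(f"  {f:>15}: {WEIGHTS[f] * applicant[f]:+.2f}")

# The 'postcode_risk' line makes any neighbourhood effect visible at a glance.
explain({"income": 0.8, "years_employed": 0.5,
         "existing_debt": 0.3, "postcode_risk": 0.7})
```

In this invented example the applicant is rejected, and the breakdown shows the postcode term, not their income or employment history, is what tipped the decision—exactly the kind of reasoning an affected person would need in order to challenge it.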

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 November 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate on darker skin tones and on women. The error rate is therefore higher when the technology is used on some parts of society than on others.

In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when it is being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society over another.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Transparency is required in how algorithms reach their decisions. Without it, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood, and job applications could be rejected not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task, but if achieved it would increase fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”
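
The report doesn’t prescribe a specific technique here, but one widely used outcome-monitoring check is to compare selection rates across groups, as in the ‘four-fifths rule’ from US employment practice. A minimal sketch, assuming hiring outcomes have already been labelled by group (the data below is invented):

```python
# A hedged sketch of outcome monitoring in recruitment, not a method from
# the CDEI report: compare selection rates by group and flag any group
# whose rate falls below 80% of the best-performing group's rate.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs -> rate per group."""
    applied, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_check(rates):
    """True = passes the four-fifths rule; False = flagged for review."""
    best = max(rates.values())
    return {g: (r / best) >= 0.8 for g, r in rates.items()}

# Invented outcomes for two groups of applicants.
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", False), ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)                     # {'A': 0.75, 'B': 0.25}
print(four_fifths_check(rates))  # {'A': True, 'B': False} -> group B flagged
```

A flag from a check like this doesn’t prove discrimination, but it tells an employer exactly where to investigate—the kind of routine monitoring the report says is currently missing.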

The financial services industry has arguably relied on data to make decisions for longer than any other, determining things like how likely it is that an individual can repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing, but found variance across forces with regard to both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

South Korea wants to develop 50 types of AI chips by 2030 (13 October 2020)

South Korea has set itself the ambitious national target of developing 50 types of AI chips within the next decade.

The country’s ICT ministry made the announcement this week as South Korea positions itself to move beyond its historic foothold in memory chips into artificial intelligence semiconductors.

South Korea is investing heavily in AI; especially in the hardware which makes it possible.

Around one trillion won ($871 million) will be spent on developing next-generation AI chips before 2029. The current plan is to be in a position to produce AI chips nationally by 2022 and build a 3,000-strong army of experts within the decade.

Last year, President Moon Jae-in announced a ‘National Strategy for Artificial Intelligence’ (PDF) and set out his desire for South Korea to lead in the technology.

In a foreword, President Moon Jae-in wrote:

“The era of the Fourth Industrial Revolution is indeed an age in which imagination can change the world. Korea is neither the first country to have ushered in the era of artificial intelligence nor the country with the best AI technology at present. However, the country has people capable of turning their imagination into reality and taking on challenges to pursue novelty.

Even in the throes of the 1997 Asian financial crisis, the country led the Internet Revolution and now boasts world-class manufacturing competitiveness, globally unmatched ICT infrastructure and abundant data concerning e-government.

If we link artificial intelligence primarily with the sectors in which we’ve accumulated extensive experience and competitiveness, such as manufacturing and semiconductors, we will be able to give birth to the smartest yet most human-like artificial intelligence. The Government will join forces with developers to help them fully utilize their imaginations and turn their ideas into reality.”

South Korea is home to tech giants such as Samsung and SK hynix, which continue to deliver global innovations. However, it’s understandable that South Korea wants to secure a slice of what will be a lucrative market.

Analysts from McKinsey predict AI chips will generate around $67 billion in revenue by 2025 and capture around 20 percent of all semiconductor demand.

South Korea, for its part, wants to own 20 percent of the global AI chip market by the end of this decade.

Information Commissioner clears Cambridge Analytica of influencing Brexit (8 October 2020)

A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken until now for that to be confirmed.

“From my review of the materials recovered by the investigation I have found no further evidence to change my earlier view that CA [Cambridge Analytica] was not involved in the EU referendum campaign in the UK,” wrote Information Commissioner Elizabeth Denham.

Cambridge Analytica did obtain a ton of user data—but through predominantly commercial means, and mostly concerning US voters. Such data is available to, and has also been purchased by, other electoral campaigns for targeted advertising purposes (the Remain campaigns in the UK actually outspent their Leave counterparts by £6 million).

“CA were purchasing significant volumes of commercially available personal data (at one estimate over 130 billion data points), in the main about millions of US voters, to combine it with the Facebook derived insight information they had obtained from an academic at Cambridge University, Dr Aleksandr Kogan, and elsewhere,” wrote Denham.

The only real scandal was Facebook’s poor protection of users which allowed third-party apps to scrape their data—for which it was fined £500,000 by the UK’s data protection watchdog.

It seems the claims Cambridge Analytica used powerful AI tools were also rather overblown, with the information commissioner saying all they found were models “built from ‘off the shelf’ analytical tools”.

The information commissioner even found evidence that Cambridge Analytica’s own staff “were concerned about some of the public statements the leadership of the company were making about their impact and influence.”

Cambridge Analytica appears to have been a victim of those unable to accept democratic results combined with its own boasting of capabilities that weren’t actually that impressive.

You can read the full report here (PDF).

(Photo by Christian Lue on Unsplash)

How can AI-powered humanitarian engineering tackle the biggest threats facing our planet? (28 August 2020)

Humanitarian engineering programs bring together engineers, policy makers, non-profit organisations, and local communities to leverage technology for the greater good of humanity.

The intersection of technology, community, and sustainability offers a plethora of opportunities to innovate. We still live in an era where millions of people are in extreme poverty, lacking access to clean water, basic sanitation, electricity, the internet, quality education, and healthcare.

Clearly, we need global solutions to tackle the grandest challenges facing our planet. So how can artificial intelligence (AI) assist in addressing key humanitarian and sustainable development challenges?

To begin with, the United Nations Sustainable Development Goals (SDGs) represent a collection of 17 global goals that aim to address pressing global challenges, achieve inclusive development, and foster peace and prosperity in a sustainable manner by 2030. AI enables the building of smart systems that imitate human intelligence to solve real-world problems.

Recent advancements in AI have radically changed the way we think, live, and collaborate. Our daily lives are centred around AI-powered solutions with smart speakers playing wakeup alarms, smart watches tracking steps in our morning walk, smart refrigerators recommending breakfast recipes, smart TVs providing personalised content recommendations, and navigation mobile apps recommending the best route based on real-time traffic. Clearly, the age of AI is here. How can we leverage this transformative technology to amplify the impact for social good?

Accelerating AI-powered social innovations

AI core capabilities like machine learning (ML), computer vision, natural language understanding, and speech recognition offer new approaches to address humanitarian challenges and amplify the positive impact on underserved communities. ML enables machines to process massive amounts of data, uncover underlying patterns, and derive meaningful insights for decision-making. ML techniques like deep learning offer the powerful capability to create sophisticated AI models based on artificial neural networks.

Such models can be used for numerous real-world situations, like pandemic forecasting. AI tools can model and predict the spread of outbreaks like Covid-19 in low-resource settings using recent outbreak trends, treatment data, and travel history. This will help governmental and healthcare agencies to identify high-risk areas, manage demand and supply of essential medical supplies, and formulate localised remedial measures to control an outbreak.
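
Neither the article nor the tools it alludes to specify a model; purely as an illustration of the idea, here is a minimal sketch that fits exponential growth to recent case counts and projects a few days ahead. All numbers are invented, and real epidemic models (SEIR and successors) are far more sophisticated.

```python
# A hedged, illustrative sketch only: a naive short-horizon outbreak
# forecast from a log-linear fit. The case counts here are invented.
import math

cases = [12, 18, 26, 40, 61, 90, 134]  # hypothetical daily case counts

# Fit log(cases) = a + b*t by ordinary least squares (closed form).
n = len(cases)
ts = list(range(n))
ys = [math.log(c) for c in cases]
t_mean, y_mean = sum(ts) / n, sum(ys) / n
b = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
     / sum((t - t_mean) ** 2 for t in ts))
a = y_mean - b * t_mean

# Project the next three days so supplies can be positioned in advance.
for t in range(n, n + 3):
    print(f"day {t}: ~{math.exp(a + b * t):.0f} expected cases")
```

Even a crude projection like this shows the planning value: an agency seeing roughly 1.5x daily growth can pre-position supplies days before demand arrives.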

Computer vision techniques process visual information in digital images and videos to generate valuable inferences. Trained AI models assist medical practitioners in examining clinical images and identifying hidden patterns of malignant tumours, supporting expedited decision-making and treatment planning for patients. Most recently, smart speakers have extended their conversational AI capabilities to healthcare use cases like chronic illness management, prescription ordering, and urgent-care appointments.

This advancement opens up the possibility of driving healthcare innovations that will break down access barriers and deliver quality healthcare to marginalised populations. Similarly, global educational programs aimed at connecting the digitally unconnected can leverage satellite images and ML algorithms to map school locations. AI-powered learning products are increasingly being launched to provide personalised experiences that train young children in maths and science.

The convergence of AI with the Internet of Things (IoT) facilitates the rapid development of meaningful solutions for agriculture to monitor soil health, assess crop damage, and optimise the use of pesticides. This empowers local farmers to model different scenarios and choose the crop most likely to maximise quality and yield, contributing toward the zero hunger and economic empowerment SDGs.
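
As a hedged sketch of how such a decision-support tool might rank crops from sensor readings (the crop tolerance ranges and sensor values below are entirely hypothetical):

```python
# A hedged sketch of IoT-driven crop selection, not a production system:
# score candidate crops by how many soil readings fall inside each crop's
# tolerated range. All ranges and sensor values are hypothetical.

sensor = {"moisture": 0.35, "ph": 6.3, "nitrogen": 0.6}  # latest field readings

CROPS = {  # (min, max) tolerated range per measurement, per crop
    "maize":  {"moisture": (0.3, 0.6), "ph": (5.8, 7.0), "nitrogen": (0.5, 1.0)},
    "rice":   {"moisture": (0.6, 0.9), "ph": (5.0, 6.5), "nitrogen": (0.4, 1.0)},
    "millet": {"moisture": (0.2, 0.5), "ph": (5.5, 7.5), "nitrogen": (0.2, 0.5)},
}

def suitability(readings, needs):
    """Fraction of measurements that fall inside the crop's tolerated range."""
    ok = sum(lo <= readings[k] <= hi for k, (lo, hi) in needs.items())
    return ok / len(needs)

for crop in sorted(CROPS, key=lambda c: suitability(sensor, CROPS[c]), reverse=True):
    print(f"{crop}: {suitability(sensor, CROPS[crop]):.0%} of conditions met")
```

A real system would of course use trained yield models rather than hand-set ranges, but the shape is the same: sensor readings in, a ranked recommendation out.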

Decoding best program practices

To deliver high social impact, AI-driven humanitarian programs should follow a “bottom-up” approach. One should always work backwards from the needs of the end-user, establishing clarity on the targeted community, their major pain points, the opportunity to innovate, and the expected user experience.

Most importantly, always check whether AI is relevant to the problem at hand, or investigate whether a meaningful alternative approach exists. Understand how an AI-powered solution will deliver value to the various stakeholders involved and positively contribute toward achieving the SDGs for local communities. Define a suite of metrics to measure the various dimensions of program success. Data acquisition is central: building robust AI models requires access to meaningful, quality data.

Delivering effective AI solutions to the humanitarian landscape requires a clear understanding of the data required and relevant sources to acquire them. For instance, satellite images, electronic health records, census data, educational records, and public datasets are used to solve problems in education, healthcare, and climate change. Partnership with key field players is important for addressing data gaps for domains with sparsely available data.

Responsible use of AI in humanitarian programs can be achieved by enforcing standards and best practices that implement fairness, inclusiveness, security, and privacy controls. Always check models and datasets for bias and negative experiences. Techniques like data visualisation and clustering can evaluate a dataset’s distribution for fair representation across stakeholder dimensions. Routine updates to training and testing datasets are essential to fairly account for diversity in users’ growing needs and usage patterns. Safeguard sensitive user information by implementing privacy controls: encrypt user data at rest and in transit, limit access to user data and critical production systems using least-privilege access control, and enforce data retention and deletion policies on user datasets. Finally, implement a robust threat model to handle possible system attacks, and run routine checks for infrastructure security vulnerabilities.
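
The article doesn’t supply an implementation; as a hedged illustration of that dataset-distribution check, here is a minimal sketch comparing each group’s share of a training set against its share of a reference population (group labels and all figures are invented):

```python
# A hedged sketch of one dataset check, not the article's method: compare
# each group's share of a training set with its share of a reference
# population. Group labels and all figures are invented.
from collections import Counter

def representation_gaps(samples, population_share):
    """Return each group's dataset share minus its population share."""
    counts = Counter(samples)
    total = len(samples)
    return {g: counts[g] / total - share
            for g, share in population_share.items()}

train_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population = {"A": 0.55, "B": 0.30, "C": 0.15}

for group, gap in representation_gaps(train_groups, population).items():
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"group {group}: dataset-vs-population gap {gap:+.2f} ({flag})")
```

Running a check like this before training makes under-representation a measurable, fixable defect rather than a surprise discovered after deployment.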

To conclude, AI-powered humanitarian programs offer a transformative opportunity to advance social innovations and build a better tomorrow for the benefit of humanity.

(Photo by Elena Mozhvilo on Unsplash)

The White House is set to boost AI funding by 30 percent (19 August 2020)

A budget proposal from the White House would boost funding for AI by around 30 percent as the US aims to retain its technological supremacy.

Countries around the world are vastly increasing their budgets for AI, and with good reason. Just look at Gartner’s Hype Cycle released yesterday to see how important the technology is expected to be over the next decade.

Russian president Vladimir Putin famously said back in 2017 that the nation which leads in AI “will become the ruler of the world”. Putin said that AI offers unprecedented power, including military power, to any government that leads in the field.

China, the third global superpower, has also embarked on a major national AI strategy. In July 2017, The State Council of China released the “New Generation Artificial Intelligence Development Plan” to build a domestic AI industry worth around $150 billion over the next few years and to become the leading AI power by 2030.

Naturally, the US isn’t going to give that top podium spot to China without a fight.

The White House has proposed (PDF) a 30 percent hike in spending on AI and quantum computing. Around $1.5 billion would be allocated to AI funding and $699 million to quantum technology.

According to a report published by US national security think tank Center for a New American Security (CNAS), Chinese officials see an AI ‘arms race’ as a threat to global peace.

The CNAS fears that integrating AI into military resources and communications may breach current international norms and lead to conflict by accident.

China and the US have been vying to become the top destination for AI investments. Figures published by ABI Research at the end of last year suggested that the US reclaimed the top spot for AI investments back from China, which overtook the Americans the year prior. ABI expects the US to reach a 70 percent share of global AI investments.

Lian Jye Su, Principal Analyst at ABI Research, said: 

“The United States is reaping the rewards from its diversified AI investment strategy. 

Top AI startups in the United States come from various sectors, including self-driving cars, industrial manufacturing, robotics process automation, data analytics, and cybersecurity.”

The UK, unable to match the levels of AI research funding of the likes of the US and China, is taking a different approach.

An index compiled by Oxford Insights last year ranked the UK number one for AI readiness in Europe and only second on the world stage behind Singapore. The US is in fourth place, while China only just makes the top 20.

The UK has focused on AI policy and harnessing the talent from its world-leading universities to ensure the country is ready to embrace the technology’s opportunities.

A dedicated AI council in the UK features:

  • Ocado’s Chief Technology Officer, Paul Clarke
  • Dame Patricia Hodgson, Board Member of the Centre for Data Ethics and Innovation 
  • The Alan Turing Institute Chief Executive, Professor Adrian Smith
  • AI for good founder Kriti Sharma
  • UKRI chief executive Mark Walport
  • Founding Director of the Edinburgh Centre for Robotics, Professor David Lane

British Digital Secretary Jeremy Wright stated: “Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting the best global tech talent. But we must not be complacent.”

Growing cooperation between the UK and US in a number of technological endeavours could help to harness the strengths of both nations if similarly applied to AI, helping to maintain the countries’ leaderships in the field.

(Photo by Louis Velazquez on Unsplash)

DARPA’s AI-powered jet fight will be held virtually due to COVID-19 (10 August 2020)

An upcoming event to display and test AI-powered jet fighters will now be held virtually due to COVID-19.

“We are still excited to see how the AI algorithms perform against each other as well as a Weapons School-trained human and hope that fighter pilots from across the Air Force, Navy, and Marine Corps, as well as military leaders and members of the AI tech community will register and watch online,” said Col. Dan Javorsek, program manager in DARPA’s Strategic Technology Office.

“It’s been amazing to see how far the teams have advanced AI for autonomous dogfighting in less than a year.”

DARPA (Defense Advanced Research Projects Agency) is using the AlphaDogfight Trial event to recruit more AI developers for its Air Combat Evolution (ACE) program.

The upcoming event is the final in a series of three and will finish with a bang as the AI-powered F-16 fighter planes virtually take on a human pilot.

“Regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI,” Javorsek added.

“If the champion AI earns the respect of an F-16 pilot, we’ll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program.”

The first event was held in November last year with early algorithms.

A second event was held in January this year, demonstrating the vast improvements made to the algorithms over a relatively short period of time. The algorithms took on adversaries created by the Johns Hopkins University Applied Physics Lab.

The third and final event will be streamed live from the Applied Physics Lab (APL) from August 18th-20th.

Eight teams will fly against five APL-developed adversary AI algorithms on day one. On day two, teams will fly against each other in a round-robin tournament.

Day three is when things get most exciting, with the top four teams competing in a single-elimination tournament for the AlphaDogfight Trials Championship. The winning team’s AI will then fly against a real F-16 pilot to test the AI’s abilities against a human.

ACE envisions future air combat eventually being conducted without putting human pilots at risk. In the meantime, DARPA hopes the initiative will help improve human pilots’ trust in fighting alongside AI.

Prior registration is required to view the event. Non-US citizens must register prior to August 11th while Americans have until August 17th.

You can register for the event here.

(Image Credit: DARPA)

UK and Australia launch joint probe into Clearview AI’s mass data scraping (10 July 2020)

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective than Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’ (30 June 2020)

Detroit Police chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of the time.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties Union) lodged a complaint against the Detroit police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

Current AI algorithms are known to have a racism issue. Extensive studies have repeatedly shown that facial recognition algorithms are almost 100 percent accurate when used on white males, but have serious problems with darker skin tones and with women.
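
Those studies express the problem as sharply different error rates across demographic groups. As an illustration only, here is a minimal sketch of that per-group measurement, assuming labelled match results are available (the data below is invented):

```python
# A hedged sketch of the per-group measurement such studies report, with
# entirely invented data: the false match rate is the share of pairs of
# *different* people that the system wrongly declares a match.
from collections import defaultdict

# Each record: (group, system_said_match, actually_same_person)
results = [
    ("group_1", True, True),  ("group_1", False, False),
    ("group_1", True, True),  ("group_1", False, False),
    ("group_2", True, False), ("group_2", True, False),
    ("group_2", True, True),  ("group_2", False, False),
]

false_matches = defaultdict(int)
different_person_pairs = defaultdict(int)

for group, predicted_match, same_person in results:
    if not same_person:  # only pairs of different people can yield a false match
        different_person_pairs[group] += 1
        false_matches[group] += int(predicted_match)

for group, total in different_person_pairs.items():
    print(f"{group}: false match rate {false_matches[group] / total:.0%}")
```

When the rate diverges this much between groups, an identical overall accuracy figure can hide the fact that the burden of wrongful matches falls almost entirely on one community.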

This racism issue was shown again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, was applied to a variety of people from the BAME communities.


Last week, Boston followed in the footsteps of an increasing number of cities like San Francisco and Oakland in banning facial recognition technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

On the other side of the pond, facial recognition tests in the UK have so far been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival led to not a single person being identified. A follow-up trial the following year led to no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that it was only verifiably accurate in just 19 percent of cases.

The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ over 1000 experts signed an open letter last week opposing the use of AI for such purposes.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.

The acknowledgement from Detroit’s police chief that current facial recognition technologies do not work in around 96 percent of cases should be reason enough to halt its use, especially for law enforcement, at least until serious improvements are made.

(Photo by Joshua Hoehne on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error (25 June 2020)

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after black male Robert Williams was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to the NY Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” and claims witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms the department used facial recognition to identify Williams using the security footage and an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows an increasing number of cities, like San Francisco and Oakland, that have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step: crime prediction.

(Photo by ev on Unsplash)
