government – AI News – Artificial Intelligence News – https://news.deepgeniusai.com

Police use of Clearview AI’s facial recognition increased 26% after Capitol raid – Mon, 11 Jan 2021 – https://news.deepgeniusai.com/2021/01/11/police-use-clearview-ai-facial-recognition-increased-26-capitol-raid/

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping the data of people from across the web without their explicit consent, a practice which has naturally raised some eyebrows—including the ACLU’s, which called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for Clearview AI’s system.
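
In general terms, systems of this kind turn each scraped face into a numerical embedding and compare embeddings by similarity. The sketch below is a minimal, hypothetical illustration of that matching step only – it is not Clearview AI’s code, and the embeddings are random stand-ins for what a real face-recognition model would produce.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6):
    """Return the gallery identity most similar to the query face, if above threshold.

    `gallery` maps an identity label to a pre-computed embedding; a production
    system would hold millions of vectors in an approximate-nearest-neighbour
    index rather than a plain dict.
    """
    scored = {name: cosine_similarity(query, emb) for name, emb in gallery.items()}
    name, score = max(scored.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Hypothetical usage with random stand-in embeddings (real models output ~128-512 dims).
rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
print(best_match(gallery["person_a"] + 0.05 * rng.normal(size=128), gallery))
```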

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Whether you call them protestors or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew bipartisan condemnation.

In comments to The New York Times, Clearview AI CEO Hoan Ton-That claimed the company witnessed “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.

Given the number of individuals involved, law enforcement has a gargantuan task to identify and locate the people who went far beyond exercising their right to peaceful protest and invaded a federal building, caused huge amounts of damage, and threatened elected representatives and staff.

The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

While Clearview AI’s facial recognition tech continues to have widespread use in the US, some police departments have taken the independent decision to ban officers from using such systems due to the well-documented inaccuracies which particularly affect minority communities.

CDEI launches a ‘roadmap’ for tackling algorithmic bias – Fri, 27 Nov 2020 – https://news.deepgeniusai.com/2020/11/27/cdei-launches-roadmap-tackling-algorithmic-bias/

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate for people with darker skin and for women. The error rate is, therefore, higher when facial recognition is used on some parts of society than on others.
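
As a rough illustration of how such disparities are typically quantified – and not the methodology of any study cited here – the same system can be evaluated separately on each demographic group and the error rates compared. The group labels and records below are invented:

```python
from collections import defaultdict

# Each hypothetical record: (demographic_group, prediction_was_correct)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Disaggregated error rate per group; a large gap signals a fairness problem.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate = {rate:.0%} ({errors[group]}/{totals[group]})")
```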

In June, Detroit Police Chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when such systems are being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.

And that’s just one example of how AI can unfairly impact some parts of society more than others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Algorithms require transparency. In financial services, without it, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task, but if achieved it would increase fairness by taking human biases out of the equation.
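
One widely used check for the loan scenario above – offered as a generic heuristic, not something the CDEI report prescribes – is the disparate impact ratio: each group’s approval rate divided by the most-favoured group’s rate, with values well below 1.0 flagging decisions that deserve scrutiny. A minimal sketch with invented data:

```python
def approval_rate(decisions: list[bool]) -> float:
    """Share of applications approved in one group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_decisions: dict[str, list[bool]]) -> dict[str, float]:
    """Ratio of each group's approval rate to the highest group's rate.

    A ratio below ~0.8 (the informal 'four-fifths rule') is often treated
    as a signal that the decision process needs closer scrutiny.
    """
    rates = {g: approval_rate(d) for g, d in group_decisions.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical loan decisions keyed by (for example) applicants' neighbourhood.
decisions = {
    "neighbourhood_x": [True, True, True, False],    # 75% approved
    "neighbourhood_y": [True, False, False, False],  # 25% approved
}
print(disparate_impact(decisions))  # {'neighbourhood_x': 1.0, 'neighbourhood_y': 0.33...}
```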

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has arguably relied on data to make decisions – such as how likely an individual is to repay a debt – for longer than any other sector.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing but found variance across forces with regards to both usage and managing ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF)

(Photo by Matt Duncan on Unsplash)

South Korea wants to develop 50 types of AI chips by 2030 – Tue, 13 Oct 2020 – https://news.deepgeniusai.com/2020/10/13/south-korea-develop-50-types-ai-chips-2030/

South Korea has set itself the ambitious national target of developing 50 types of AI chips within the next decade.

The country’s ICT ministry made the announcement this week as South Korea positions itself to move beyond its historic foothold in memory chips into artificial intelligence semiconductors.

South Korea is investing heavily in AI; especially in the hardware which makes it possible.

Around one trillion won ($871 million) will be spent on developing next-generation AI chips before 2029. The current plan is to be in a position to produce AI chips nationally by 2022 and build a 3,000-strong army of experts within the decade.

Last year, President Moon Jae-in announced a ‘National Strategy for Artificial Intelligence’ (PDF) and set out his desire for South Korea to lead in the technology.

In a foreword, President Moon Jae-in wrote:

“The era of the Fourth Industrial Revolution is indeed an age in which imagination can change the world. Korea is neither the first country to have ushered in the era of artificial intelligence nor the country with the best AI technology at present. However, the country has people capable of turning their imagination into reality and taking on challenges to pursue novelty.

Even in the throes of the 1997 Asian financial crisis, the country led the Internet Revolution and now boasts world-class manufacturing competitiveness, globally unmatched ICT infrastructure and abundant data concerning e-government.

If we link artificial intelligence primarily with the sectors in which we’ve accumulated extensive experience and competitiveness, such as manufacturing and semiconductors, we will be able to give birth to the smartest yet most human-like artificial intelligence. The Government will join forces with developers to help them fully utilize their imaginations and turn their ideas into reality.”

South Korea is home to tech giants such as Samsung and SK hynix, which continue to deliver global innovations. However, it’s understandable that the country wants to ensure it secures a slice of what will be a lucrative market.

Analysts from McKinsey predict AI chips will generate around $67 billion in revenue by 2025 and capture around 20 percent of all semiconductor demand.

South Korea, for its part, wants to own 20 percent of the global AI chip market by the end of this decade.

The White House is set to boost AI funding by 30 percent – Wed, 19 Aug 2020 – https://news.deepgeniusai.com/2020/08/19/white-house-boost-ai-funding-30-percent/

A budget proposal from the White House would boost funding for AI by around 30 percent as the US aims to retain its technological supremacy.

Countries around the world are vastly increasing their budgets for AI, and with good reason. Just look at Gartner’s Hype Cycle released yesterday to see how important the technology is expected to be over the next decade.

Russian president Vladimir Putin famously said back in 2017 that the nation which leads in AI “will become the ruler of the world”. Putin said that AI offers unprecedented power, including military power, to any government that leads in the field.

China, the third global superpower, has also embarked on a major national AI strategy. In July 2017, The State Council of China released the “New Generation Artificial Intelligence Development Plan” to build a domestic AI industry worth around $150 billion over the next few years and to become the leading AI power by 2030.

Naturally, the US isn’t going to give that top podium spot to China without a fight.

The White House has proposed (PDF) a 30 percent hike in spending on AI and quantum computing. Around $1.5 billion would be allocated to AI funding and $699 million to quantum technology.

According to a report published by US national security think tank Center for a New American Security (CNAS), Chinese officials see an AI ‘arms race’ as a threat to global peace.

CNAS fears that integrating AI into military resources and communications may breach current international norms and lead to accidental conflict.

China and the US have been vying to become the top destination for AI investments. Figures published by ABI Research at the end of last year suggested that the US reclaimed the top spot for AI investments back from China, which overtook the Americans the year prior. ABI expects the US to reach a 70 percent share of global AI investments.

Lian Jye Su, Principal Analyst at ABI Research, said: 

“The United States is reaping the rewards from its diversified AI investment strategy. 

Top AI startups in the United States come from various sectors, including self-driving cars, industrial manufacturing, robotics process automation, data analytics, and cybersecurity.”

The UK, unable to match the levels of funding allocated to AI research by the likes of the US and China, is taking a different approach.

An index compiled by Oxford Insights last year ranked the UK number one for AI readiness in Europe and only second on the world stage behind Singapore. The US is in fourth place, while China only just makes the top 20.

The UK has focused on AI policy and harnessing the talent from its world-leading universities to ensure the country is ready to embrace the technology’s opportunities.

A dedicated AI council in the UK features:

  • Ocado’s Chief Technology Officer, Paul Clarke
  • Dame Patricia Hodgson, Board Member of the Centre for Data Ethics and Innovation 
  • The Alan Turing Institute Chief Executive, Professor Adrian Smith
  • AI for good founder Kriti Sharma
  • UKRI chief executive Mark Walport
  • Founding Director of the Edinburgh Centre for Robotics, Professor David Lane

British Digital Secretary Jeremy Wright stated: “Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting the best global tech talent. But we must not be complacent.”

Growing cooperation between the UK and US in a number of technological endeavours could help to harness the strengths of both nations if similarly applied to AI, helping to maintain the countries’ leaderships in the field.

(Photo by Louis Velazquez on Unsplash)

AI firm used by Vote Leave awarded seven contracts over 18 months – Mon, 04 May 2020 – https://news.deepgeniusai.com/2020/05/04/ai-firm-vote-leave-seven-contracts-18-months/

Controversial AI startup Faculty has been awarded seven contracts by the British government over the last 18 months.

Faculty gained notoriety for assisting the Vote Leave campaign during the UK’s referendum on whether to leave the EU. While the Remain campaigns vastly outspent their Leave counterparts and had the backing of the then government, the Leave side’s use of innovative AI tools helped to offset its opponents’ advantages.

Reducing costs and inefficiencies are two of the core benefits of deploying AI technologies. The British government is clearly impressed by Faculty’s work and has given the firm seven contracts worth almost £1 million.

Faculty has deep links within the British government – including, of course, with former Vote Leave campaign director and Boris Johnson’s senior advisor, Dominic Cummings.

Cummings has frequently attacked Whitehall and what he believed to be a London-centred political system which ignored voters’ concerns, particularly in Northern England and the Midlands. He’s described his political views as “not Tory (Conservative), libertarian, ‘populist’ or anything else,” but has been integral in shaking up the whole system.

In January 2016, Cummings said: “Extremists are on the rise in Europe and are being fuelled unfortunately by the Euro project and by the centralisation of power in Brussels. It is increasingly important that Britain offers an example of civilised, democratic, liberal self-government.”

From the moment Cummings stepped into Downing Street, he has set about disrupting the status quo. In July last year, he was pictured wearing a t-shirt with the logo of OpenAI – the research lab co-founded by Elon Musk to develop “friendly” and ethical AI. On his blog, Cummings has frequently written about the use of disruptive technologies like AI.

Ben Warner, the brother of Faculty’s CEO Marc Warner, was reportedly recruited to Downing Street last year by Cummings and formerly worked as a senior employee at the AI startup. Warner and Cummings recently made headlines for attending the Sage meetings which guide the government’s response to COVID-19.

According to documents seen by The Guardian, Faculty is reportedly at the centre of the COVID-19 response and is helping to build predictive models around the outbreak. One document even suggests a computer simulation was considered to assess the impact of a policy of “targeted herd immunity,” but it never took place.

One of Faculty’s contracts was from the Department for Business, Energy and Industrial Strategy and tasked the firm with helping to monitor the impact of COVID-19 on industry. The contract was worth £264k.

Faculty was awarded £250k from the Department for Digital, Culture, Media, and Sport in 2019 to launch a cross-government review into the adoption of AI technologies. The aim was “to identify the most significant opportunities to introduce AI across government with the aim of increasing productivity and improving the quality of public services.”

That same year, Faculty received two contracts from the Office for Artificial Intelligence and Government Digital Service (GDS) worth £185k. Another contract, worth around £125k, was granted to offer advice on bias mitigation in finance and recruitment to the Centre for Data Ethics and Innovation (CDEI).

The year prior, in 2018, Faculty was awarded £600k to help track terrorist videos online.

It’s clear that the UK government believes heavily in the use of AI to improve processes. Right now, however, that work seems to focus predominantly on one company. Faculty says that all of its government contracts are in line with procurement rules and follow proper processes.

(Image Credit: EU referendum by fernando butcher under CC BY 2.0 license)

UK govt ‘failing on openness’ around public sector AI – but specific regulator not the answer – Tue, 11 Feb 2020 – https://news.deepgeniusai.com/2020/02/11/uk-govt-failing-on-openness-around-public-sector-ai-but-specific-regulator-not-the-answer/

The UK does not need a specific regulator for artificial intelligence (AI), according to a new government report – yet more clarity needs to be given around usage and ethics in the public sector.

The report, ‘Artificial Intelligence and Public Standards: A Review by the Committee on Standards in Public Life’ (pdf, no opt-in, 78 pages), said the government was ‘failing on openness’, while adding that fears over ‘black box AI’ – whereby data produces results through unexplainable methods – were largely misplaced.

“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government,” the report notes. “It is too early to judge if public sector bodies are successfully upholding accountability.”

The report advocated the use of the Nolan Principles – seven ethical standards expected of public office holders – in bringing through AI for the UK public sector, arguing they did not need reformulating. Yet in three areas – openness, accountability, and objectivity – the report said current standards fell short.

Of the 15 overall recommendations the report made, many focused around preparation, ethics and transparency:

  • The public needs to understand the high level ethical principles that govern the use of AI in the public sector (currently the FAST SUM Principles, the OECD AI Principles, and the Data Ethics Framework)
  • All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery
  • A specific AI regulator is not needed, however a regulatory assurance body should be formed to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI
  • The government should use its purchasing power in the market to set procurement requirements to ensure private companies developing AI solutions for the public sector meet the right standards
  • The government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards
  • The government should establish guidelines for public bodies about the declaration and disclosure of their AI systems

Commenting after the report’s release, Alex Guillen, technology strategist at IT services provider Insight, said the recommendations were feasible – but added a word of caution.

“Introducing AI into government while still following the Nolan Principles should be perfectly possible,” Guillen told AI News. “First, the public sector needs to remember that currently, the most effective uses of technologies such as AI and machine learning act to enhance, rather than replace, human workers. Helping public sector workers make more informed decisions, or act faster, will not only improve public services; it will help satisfy the 69% of people polled who said they would be more comfortable with public bodies using AI if humans were making the final judgement on any decision.

“However, regardless of how AI is used in the public sector, it needs to be treated as an employee – and given the training and information it needs in order to do its job,” Guillen added. “As with any technology that relies on data, garbage in means garbage out. From facial recognition in policing to helping diagnose and treat patients on the NHS, AI needs the right data and the right governance to avoid causing more problems than it solves.

“Otherwise any implementation will be dogged by ethical concerns and accusations of data bias or discrimination.”


CBI: UK tech dominance is ‘at risk’ due to public mistrust of AI – Fri, 09 Aug 2019 – https://news.deepgeniusai.com/2019/08/09/cbi-uk-tech-risk-public-mistrust-ai/

Business industry group the CBI has warned that UK tech dominance is ‘at risk’ due to public mistrust of AI.

In a report today, the CBI warns artificial intelligence companies of the need to ensure they’re approaching the technology in an ethical manner to help build trust.

Measures suggested to build trust include ensuring customers know how their data is being used by AI and what decisions are being taken, as well as challenging unfair biases.

Overall, the CBI remains bullish on AI’s potential to add billions to the UK economy.

Felicity Burch, CBI Director of Digital and Innovation, said:

“At a time of slowing global growth, AI could add billions of pounds to the UK economy and transform our stuttering productivity performance.

The Government has set the right tone by establishing the Centre for Data Ethics & Innovation, but it’s up to business to put ethics into practice.

Ethics can be an intimidating word. But at the end of the day, meaningful ethics is similar to issues organisations already think about: effective governance, employee empowerment, and customer engagement.

The same actions that embed an ethical approach to AI will also make businesses more competitive. We know that diverse businesses are more likely to outperform their rivals. When it comes to AI, businesses who prioritise fairness and inclusion are more likely to create algorithms that make better decisions, giving them the edge.

With the global tech race heating up, the UK is well placed to lead the world in developing ethical AI, which uses data safely and drives progress in business and society.”

A previous study, conducted by PwC, estimates that UK GDP could be around 10 percent higher in 2030 due to AI – the equivalent of an additional £232bn.

Earlier this week, AI News reported on findings by Fountech.ai which highlight the scale of public distrust in artificial intelligence among UK adults.

In a survey of more than 2,000 people, Fountech.ai found over two-thirds (67%) are concerned about the impact of artificial intelligence on their careers.

Another finding in Fountech.ai’s study is that just over half (58%) find algorithms which make recommendations, such as what to buy or watch next, ‘creepy’.

The potential for AI is clear, but the studies conducted by the CBI and others show its full potential can only be unlocked by building public trust in the technology.


UK gov is among the ‘most prepared’ for AI revolution – Tue, 21 May 2019 – https://news.deepgeniusai.com/2019/05/21/uk-gov-most-prepared-ai-revolution/

The UK has retained its place among the most prepared governments to harness the opportunities presented by artificial intelligence.

An index published today, compiled by Oxford Insights in partnership with the International Development Research Centre (IDRC) in Canada, places the UK as Europe’s leading nation and just second on the world stage.

Margot James, Minister for Digital and the Creative Industries, said:

“I’m delighted the UK government has been recognised as one of the best in the world in readiness for Artificial Intelligence.

AI is already having a positive impact across society – from detecting fraud and diagnosing medical conditions, to helping us discover new music – and we’re working hard to make the most of its vast opportunities while managing and mitigating the potential risks.

With our newly appointed AI Council, we will boost the growth and use of AI in the UK, by using the knowledge of experts from a range of sectors and encourage dialogue between industry, academia and the public sector, to realise the full potential of data-driven technologies to the economy.”

Singapore pipped the UK to number one, although both scored over 9.0 in the index. The researchers used 11 input metrics to determine countries’ rankings, grouped into governance, infrastructure and data, skills and education, and government and public services.
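
The report does not publish its scoring code, but composite indices of this kind are typically built by normalising each metric onto a common scale and averaging the results. A minimal, hypothetical sketch follows – the metric names and values are invented, and the real index weights its eleven inputs across the four clusters above.

```python
def min_max(values: dict[str, float]) -> dict[str, float]:
    """Normalise one metric across countries to the 0-1 range."""
    lo, hi = min(values.values()), max(values.values())
    return {c: (v - lo) / (hi - lo) if hi > lo else 0.0 for c, v in values.items()}

def composite_score(metrics: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average the normalised metrics per country into a single readiness score."""
    normalised = {m: min_max(vals) for m, vals in metrics.items()}
    countries = next(iter(normalised.values())).keys()
    return {c: sum(n[c] for n in normalised.values()) / len(normalised) for c in countries}

# Two invented metrics standing in for the index's eleven inputs.
metrics = {
    "data_availability": {"Singapore": 92.0, "UK": 90.0, "Germany": 80.0},
    "ai_skills":         {"Singapore": 88.0, "UK": 91.0, "Germany": 78.0},
}
print(composite_score(metrics))
```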

Western European governments make up the bulk of the index’s top 20, with Germany just behind the UK in third place. Also ranked are Finland (5th), Sweden (6th), France (8th), Denmark (9th), Norway (12th), the Netherlands (14th), Italy (15th), Austria (16th), and Switzerland (18th).

Seeing this many European governments, particularly EU nations, will come as a surprise to some. Many believe Europe to be behind in AI due to strict regulations around things such as data collection.

One country often considered a leader in AI is China, in part a result of its mass data collection. However, China only just squeezes into the top 20. The researchers note this is due to limited data availability resulting in lower scores in metrics like infrastructure.

Richard Stirling, CEO at Oxford Insights, comments:

“It was not surprising that Singapore came top of the rankings, but the UK has also performed extremely well, and the government has demonstrated its commitment with initiatives such as the Artificial Intelligence Sector Deal in April 2018.

However, there is global competition in the AI space, and as our research highlights, other countries such as France, Germany and China have also announced significant investments and introduced AI strategies.  

If the UK is to stay ahead in the field, we must continue to support AI research, technologies, and companies with a clear national strategy and investment programme to support continuous innovation.”

Oxford Insights listed the UK as number one for AI readiness in a previous index examining 35 OECD countries, but the new index is much broader in scope, analysing 194 countries using a wider range of source data.

Just yesterday, Chinese technology giant Tencent announced it led a $100m (£78.4m) funding round for promising British AI startup Prowler. Dr Ling Ge, Chief European Representative at Tencent, said: “The UK is a global leader in AI and is increasingly becoming a focus for companies looking to invest in the sector.”

Last week, the UK government announced the names of board members appointed to its AI Council. The council features a range of industry talent, from representatives of companies using AI in their operations to policymakers aiming to overcome adoption barriers while ensuring safe integration.

Digital Secretary, Jeremy Wright, stated: “Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting the best global tech talent, but we must not be complacent.”

Given the rate of AI news coming from the UK, it doesn’t seem there’s any danger of complacency.


UK government announces board members of AI Council – Thu, 16 May 2019 – https://news.deepgeniusai.com/2019/05/16/uk-government-board-members-ai-council/

The UK government has announced the names of board members appointed to its dedicated AI Council.

With the UK among the global leaders in AI, the international community will be looking at who has been appointed to the council and awaiting its guidance.

Digital Secretary, Jeremy Wright, said:

“Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector, and attracting the best global tech talent, but we must not be complacent.

Through our AI Council we will continue this momentum by leveraging the knowledge of experts from a range of sectors to provide leadership on the best use and adoption of artificial intelligence across the economy.

Under the leadership of Tabitha Goldstaub, the Council will represent the UK AI Sector on the international stage and help us put in place the right skills and practices to make the most of data-driven technologies.”

Here’s a list of those who’ve made the cut:

  • Paul Clarke (Ocado’s Chief Technology Officer)
  • Dame Patricia Hodgson (Board Member of the Centre for Data Ethics and Innovation)
  • Professor Adrian Smith (The Alan Turing Institute Chief Executive)
  • Kriti Sharma (Founder of AI for Good)
  • Mark Walport (UKRI Chief Executive)
  • Professor David Lane (Founding Director of the Edinburgh Centre for Robotics)

Each of the board members brings a vast amount of AI knowledge. Clarke, for example, oversaw the implementation of AI at Ocado to personalise the shopping experience while predicting demand and detecting fraud.

On the policy side, Professor Smith of The Alan Turing Institute brings his organisation’s experience of identifying and overcoming barriers to AI adoption in society. This includes matters such as skills, consumer trust, and the protection of sensitive data.

Business Secretary Greg Clark said:

“The use of Artificial Intelligence is becoming integral to people’s everyday lives, from companies protecting their customers from fraud to smart devices in our homes.

The outstanding expertise of those joining our new AI Council will be invaluable as we look to develop this ever-changing industry into one that is world-leading, attracting the brightest and best to work in new highly-skilled jobs.

This AI Council follows our ground-breaking AI Sector Deal, and is a key part of our modern Industrial Strategy – investing now to secure the UK’s position on the world stage in these cutting edge technologies both now and long into the future.”

The creation of AI boards has often been deemed necessary, but some attempts so far have been met with mixed responses. Google famously created an AI ethics board earlier this year before swiftly disbanding it after backlash.

Hopefully, the UK’s attempt will fare better (and stick around for a bit longer!)

Related: Watch our interview with Ilya Feige, Head of Research at Faculty, discussing AI fairness and ethics boards at AI Expo Global 2019.


League table shows under 5% of councils are using AI – Fri, 10 May 2019 – https://news.deepgeniusai.com/2019/05/10/league-table-councils-using-ai/

A league table created by Transformation Network shows that under five percent of councils are using AI for their operations. Furthermore, the ‘vast majority’ have no plans to explore artificial intelligence technology in the future.

The researchers conducted their study due to the dire outlook of councils. According to the Local Government Association, councils in England face a total funding gap of £8 billion by 2025.

A survey by the Local Government Information Unit (LGiU) found that most councils (97%) will look to increase council tax in 2019-2020 to fight against the funding gap.

Stephen Kelly, former COO of the UK Government and ex-CEO of Sage, said:

“Local government is facing a perfect storm. Service demands have never been higher combined with acute financial pressures after recent years where opportunities for savings have already been made. The best way to protect the future of local council services and the communities is through the smart use of technology, such as robotic process automation and AI.

Far from something to be feared, such technology can liberate employees from mundane, repetitive work and allow them to spend more time doing what people do best, and that’s providing front-line services to citizens.

There has never been a more exciting time to be leading an organisation, and I am sure that the CEO’s of these councils will step up to embrace the opportunities.”

Automation has the potential to save large amounts of money, but many councils appear wary to make the initial required investment when budgets are already strained. That mood could shift with successful rollouts.

Matthew Cain, Head of Digital and Data from the London Borough of Hackney, said:

“We’ve been able to show how a person doing a task that takes four minutes and 57 seconds, can be done by a robot in 27 seconds.

By running those videos alongside each other we have been able to build an incredibly compelling case for colleagues across our different services.”
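
Taken at face value, Hackney’s figures imply roughly an eleven-fold speed-up. A quick back-of-the-envelope calculation shows how that compounds over a year – the annual task volume below is purely illustrative:

```python
manual_seconds = 4 * 60 + 57   # 4 minutes 57 seconds per task, as quoted
robot_seconds = 27             # automated time per task, as quoted
tasks_per_year = 10_000        # hypothetical annual volume

speedup = manual_seconds / robot_seconds
hours_saved = tasks_per_year * (manual_seconds - robot_seconds) / 3600

print(f"Speed-up: {speedup:.1f}x")                   # ~11.0x
print(f"Hours saved per year: {hours_saved:,.0f}")   # 750 hours at 10,000 tasks
```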

Newcastle-under-Lyme District Council, North Norfolk District Council, and Surrey County Council are the top three as it stands, while Wyre Forest District Council and Isle of Wight Council round out the bottom. Hackney is the highest-ranked council in the capital.

The full rankings list the top 44 authorities using AI today and the bottom 14.

Rankings were established following a year-long study based on freedom of information requests to each authority. Transformation Network says it’s open to any such authority reaching out to query its ranking, which could result in an update.

The full league table can be downloaded here (PDF)

