report – AI News

Google is telling its scientists to give AI a ‘positive’ spin

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

A “sensitive topics” review was established by Google earlier this year to catch papers which cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with its legal, policy, and public relations teams before publishing anything on topics that could be deemed sensitive, such as sentiment analysis or the categorisation of people based on race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She says she was fired by Google over an unpublished paper and for sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one party’s word against the other’s, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom, but that claim looks increasingly doubtful.

(Photo by Mitchell Luo on Unsplash)

EU human rights agency issues report on AI ethical considerations

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

Given the evidence that algorithms can carry biases which risk automating societal problems like racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services bias could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, these biased decisions could be made without anyone knowing the reasons behind them; the deciding factor could simply be that someone grew up in a different neighbourhood. Each automated decision has a very real human impact.
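
To make the neighbourhood example concrete, here is a minimal sketch of how that can happen. Everything in it is a synthetic assumption for illustration (invented data, a simple logistic regression, made-up variable names), not any real lender’s system: a model trained on historically biased approvals reproduces the bias through a neighbourhood proxy even though the protected attribute itself is excluded from training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute, and a neighbourhood variable that
# matches it roughly 80% of the time -- a strong proxy.
group = rng.integers(0, 2, n)
neighbourhood = (group + (rng.random(n) < 0.2)) % 2
income = rng.normal(50, 10, n)  # a legitimate signal, identical for both groups

# Historical approvals were biased: group 0 received a large unearned boost.
approved = (income + 15 * (1 - group) + rng.normal(0, 5, n)) > 55

# Train WITHOUT the protected attribute -- the proxy carries the bias anyway.
X = np.column_stack([income, neighbourhood])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"Predicted approval rate, group {g}: {pred[group == g].mean():.1%}")
```

Even though `group` never enters the model, the two groups receive sharply different predicted approval rates, which is exactly the kind of hidden disparity transparency requirements are meant to surface.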

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions (see the explainability sketch after this list).
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.
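
On the explainability point above: one simple, widely used starting technique is permutation importance, which measures how much a model’s predictions rely on each input by shuffling that input and observing the score drop. The sketch below is a hedged illustration using synthetic data and an assumed feature set (income, debt, age), not a method prescribed by the FRA report:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 3))  # hypothetical features: income, debt, age
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, 2_000)) > 0

model = RandomForestClassifier(n_estimators=100).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are driving the decisions.
for name, score in zip(["income", "debt", "age"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```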

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns about invasive privacy practices and abuse of market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

State of European Tech: Investment in ‘deep tech’ like AI drops 13%

The latest State of European Tech report highlights that investment in “deep tech” like AI has dropped 13 percent this year.

Data from Dealroom was used for the State of European Tech report. Dealroom defines deep tech as 16 fields: Artificial Intelligence, Machine Learning, Big Data, Augmented Reality, Virtual Reality, Drones, Autonomous Driving, Blockchain, Nanotech, Robotics, Internet of Things, 3D Technology, Computer Vision, Connected Devices, Sensors Technology, and Recognition Technology (NLP, image, video, text, speech recognition).

In 2019, $10.2 billion of capital was invested in European deep tech. In 2020, that dropped to $8.9 billion, a fall of roughly 13 percent: (10.2 - 8.9) / 10.2 ≈ 12.7%.

I think it’s fair to say that 2020 has been a tough year for most people and businesses. Economic uncertainty – not just from COVID-19 but also trade wars, Brexit, and a rather tumultuous US presidential election – has naturally led to fewer investments and people tightening their wallets.

For just one example, innovative satellite firm OneWeb was forced to declare bankruptcy earlier this year after crucial funding it was close to securing was pulled during the peak of the pandemic. Fortunately, OneWeb was saved following an acquisition by the UK government and Bharti Global—but not all companies have been so fortunate.

Many European businesses will now be watching the close-to-collapse Brexit talks with hope that a deal can yet be salvaged to limit the shock to supply lines, prevent disruption to Europe’s leading financial hub, and help to build a friendly relationship going forward with a continued exchange of ideas and talent rather than years of bitterness and resentment.

The report shows the UK has retained its significant lead in European tech investment and startups this year.

Despite the uncertainties, the UK looks unlikely to lose its position as the hub of European technology anytime soon.

Investments in European tech as a whole should bounce back – along with the rest of the world – in 2021, with promising COVID-19 vaccines rolling out and hopefully some calm in geopolitics.

94 percent of survey respondents for the report stated they have either increased or maintained their appetite to invest in the European venture asset class. Furthermore, a record number of US institutions have participated in more than one investment round in Europe this year—up 36% since 2016.

You can find a full copy of the State of European Tech report here.

CDEI launches a ‘roadmap’ for tackling algorithmic bias

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively accurate for white men, but research has consistently shown much higher error rates for people with darker skin tones and for women. The technology is therefore more error-prone when applied to some parts of society than to others.
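
The disparity only becomes visible when error rates are measured per demographic group rather than in aggregate. Below is a minimal sketch of such an audit; the groups and records are invented placeholders standing in for the thousands of trials a real evaluation would use:

```python
from collections import defaultdict

# Each trial: (demographic group, pair truly matches?, system said match?).
# Invented placeholder records -- not real benchmark data.
results = [
    ("lighter-skinned men", False, False),
    ("lighter-skinned men", False, False),
    ("lighter-skinned men", False, True),
    ("darker-skinned women", False, True),
    ("darker-skinned women", False, True),
    ("darker-skinned women", False, False),
]

trials = defaultdict(int)
false_matches = defaultdict(int)
for group, truly_matches, predicted_match in results:
    if not truly_matches:  # only non-matching pairs can produce a false match
        trials[group] += 1
        false_matches[group] += predicted_match

# Reporting per group, rather than one aggregate number, exposes the gap.
for group, total in trials.items():
    print(f"{group}: false match rate {false_matches[group] / total:.0%}")
```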

In June, Detroit Police Chief James Craig said facial recognition would misidentify someone around 96 percent of the time, which is hardly comforting when the technology is being used for mass surveillance of protests.

Craig’s comments came just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police over the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society over others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Algorithms require transparency. In financial services, without it, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes, but that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has arguably relied on data to make decisions for longer than any other, using it to determine things like how likely an individual is to repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

The CDEI found limited use of algorithmic decision-making in UK policing, but observed variance across forces in both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

IBM study highlights rapid uptake and satisfaction with AI chatbots

A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

After years of irritating voicemail systems, most of us are hardwired to hate not speaking directly to a human when we have a problem. However, perhaps the only thing worse is being on hold for an indeterminate amount of time because call centres are overwhelmed.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction by integrating virtual agents. Human agents also report increased satisfaction and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as being among their key reasons for adopting VAT. There’s also an economic incentive: the cost of replacing a dissatisfied agent who leaves a business is estimated at as much as 33 percent of the exiting employee’s salary.

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength to strength and appears to be among the few things to have benefited from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

AI dominates Gartner’s latest Hype Cycle for emerging technologies

Gartner’s latest Hype Cycle has a distinct AI flavour, highlighting the technology’s importance over the next decade.

Of the 30 emerging technologies featured in Gartner’s latest Hype Cycle, nine are directly related to artificial intelligence:

  • Generative adversarial networks
  • Adaptive machine learning
  • Composite AI
  • Generative AI
  • Responsible AI
  • AI-augmented development
  • Embedded AI
  • Trusted AI
  • AI-augmented design

Most of the AI technologies are currently in the initial “Innovation Trigger” part of the Hype Cycle, where excitement builds the fastest.

Responsible AI, AI-augmented development, embedded AI, and Trusted AI have all now reached the “Peak of Inflated Expectations” and will next move into the dreaded “Trough of Disillusionment” as disappointment sets in over what can realistically be achieved.

Only after the trough, which none of the AI technologies have yet reached, do we head into the areas of the Hype Cycle where adoption occurs with realistic expectations and the productivity rewards are reaped.

Gartner’s Hype Cycle covers the next decade. The current placing of most of the AI technologies indicates Gartner believes the greatest benefits won’t arrive until towards the end of the decade.

Brian Burke, VP of research at Gartner, comments:

“Emerging technologies are disruptive by nature, but the competitive advantage they provide is not yet well known or proven in the market. Most will take more than five years, and some more than 10 years, to reach the Plateau of Productivity.

But some technologies on the Hype Cycle will mature in the near term and technology innovation leaders must understand the opportunities for these technologies, particularly those with transformational or high impact.”

Two technologies which Gartner expects to fast-track through the Hype Cycle are health passports and social distancing technologies, due to their necessity amid the COVID-19 pandemic.

You can find the full Gartner report here (paywall).

(Photo by Verena Yunita Yapi on Unsplash)

Microsoft: The UK must increase its AI skills, or risk falling behind

A report from Microsoft warns that the UK faces an AI skills gap which may harm its global competitiveness.

The research, titled AI Skills in the UK, shines a spotlight on some concerning issues.

For its UK report, Microsoft used data from a global AI skills study featuring more than 12,000 people in 20 countries to see how the UK is doing in comparison to the rest of the world.

Most notably, the UK is seeing a higher failure rate for AI projects than the rest of the world: 29 percent of AI ventures launched by UK businesses have generated no commercial value, compared to a 19 percent average elsewhere.

35 percent of British business leaders foresee an AI skills gap within two years, while 28 percent believe there already is one (above the global average of 24%).

However, it seems UK businesses aren’t helping to prepare employees with the skills they need. Just 17 percent of British employees have been part of AI reskilling efforts (compared to the global figure of 38 percent).

Agata Nowakowska, AVP EMEA at Skillsoft, said:

“UK employers will have to address the growing digital skills gap within the workforce to ensure their business is able to fully leverage every digital transformation investment that’s made. With technologies like AI and cloud becoming as commonplace as word processing or email in the workplace, firms will need to ensure employees can use such tools and aren’t apprehensive about using them.

Organisations will need to think holistically about managing reskilling, upskilling and job transitioning. As the war for talent intensifies, employee development and talent pooling will become increasingly vital to building a modern workforce that’s adaptable and flexible. Addressing and easing workplace role transitions will require new training models and approaches that include on-the-job training and opportunities that support and signpost workers to opportunities to upgrade their skills.” 

Currently, a mere 32 percent of British employees feel their workplace is doing enough to prepare them for an AI-enabled future (compared to the global average of 42 percent).

“The most successful organisations will be the ones that transform both technically and culturally, equipping their people with the skills and knowledge to become the best competitive asset they have,” comments Simon Lambert, Chief Learning Officer for Microsoft UK.

“Human ingenuity is what will make the difference – AI technology alone will not be enough.”

AI brain drain

It’s well-documented that the UK suffers from a “brain drain” problem. The country’s renowned universities – like Oxford and Cambridge – produce globally desirable AI talent, but graduates are often snapped up by Silicon Valley giants willing to pay much higher salaries than many British firms.

In one example, a senior professor at Imperial College London couldn’t understand why one of her students had stopped turning up to classes; most people wouldn’t pay £9,250 per year in tuition fees and simply not attend. When she called the student to ask why he’d completed three years but wasn’t turning up for his final year, she found he had been offered a six-figure salary at Apple.

This problem also applies to the teachers needed to pass their knowledge on to future generations. Many are lured away from academia to work on groundbreaking projects with almost endless resources, fewer administrative duties, and handsome pay.

Some companies, Microsoft included, have taken measures to address the brain drain problem. After all, a lack of AI talent harms the entire industry.

Dr Chris Bishop, Director of Microsoft’s Research Lab in Cambridge, said:

“One thing we’ve seen over the past few years is: because there are so many opportunities for people with skills in machine learning, particularly in industry, we’ve seen a lot of outflux of top academic talent to industry.

This concerns us because it’s those top academic professors and researchers who are responsible not just for doing research, but also for nurturing the next generation of talent in this field.”

Since 2018, Microsoft has funded a program for training the next generation of data scientists and machine-learning engineers called the Microsoft Research-Cambridge University Machine Learning Initiative.

Microsoft partners with universities to ensure it doesn’t steal talent, allows employees to continue roles in teaching, funds some related PhD scholarships, sends researchers to co-supervise students in universities, and offers paid internships to work alongside teams at Microsoft on projects.

You can find the full AI Skills in the UK report here.

(Photo by William Warby on Unsplash)

Jack Dorsey tells Andrew Yang that AI is ‘coming for programming jobs’

Twitter CEO Jack Dorsey recently told former 2020 US presidential candidate Andrew Yang that AI “is coming for programming jobs”.

There is still fierce debate about the impact that artificial intelligence will have on jobs. Some believe that AI will replace many jobs and lead to the requirement of a Universal Basic Income (UBI), while others claim it will primarily offer assistance to help workers be more productive.

Dorsey is a respected technologist with a deep understanding of emerging technologies. Aside from creating Twitter, he also founded Square, which is currently pushing the mass adoption of blockchain-based digital currencies such as Bitcoin and Ethereum.

Yang was seen as the technologists’ presidential candidate before suspending his campaign in February; The New York Times called him “The Internet’s Favorite Candidate” and his campaign was noted for its “tech-friendly” nature. The entrepreneur, lawyer, and philanthropist founded Venture for America, a non-profit which aimed to create jobs in the cities most affected by the Great Recession. In March, Yang announced the creation of Humanity Forward, a non-profit dedicated to promoting the ideas from his presidential campaign.

Jobs are now very much under threat once again, with the coronavirus wiping out all job gains since the Great Recession in a period of just four weeks. If emerging technologies such as AI do pose a risk to jobs, it will only compound the problem further.

In an episode of the Yang Speaks podcast, Dorsey warns that AI will pose a particular threat to entry-level programming jobs. However, even seasoned programmers will have their worth devalued.

“A lot of the goals of machine learning and deep learning is to write the software itself over time so a lot of entry-level programming jobs will just not be as relevant anymore,” Dorsey told Yang.

Yang is a proponent of UBI. Dorsey said that such free cash payments could provide a “floor” if people lose their jobs to automation. The payments wouldn’t stretch to luxuries and holidays, but would ensure that people can keep a roof over their heads and food on the table.

UBI would provide workers with “peace of mind” that they can “feed their children while they are learning how to transition into this new world,” Dorsey explains.

Critics of UBI argue that such a permanent scheme would be expensive.

The UK is finding that out to some extent currently with its coronavirus furlough scheme. Under the scheme, the state will pay 80 percent of a worker’s salary to prevent job losses during the crisis. However, it’s costing approximately £14 billion per month and is expected to be wound down in the coming months due to being unsustainable.

However, some kind of UBI system is appearing increasingly needed.

In November, the Brookings Institute published a report (PDF) which highlights the risk AI poses to jobs. 

“Workers with graduate or professional degrees will be almost four times as exposed to AI as workers with just a high school degree. Holders of bachelor’s degrees will be the most exposed by education level, more than five times as exposed to AI than workers with just a high school degree,” the paper says.

In their analysis, the Brookings Institute ranked professions by their risk from AI exposure. Computer programmers ranked third, backing Dorsey’s prediction, just behind market research analysts and sales managers.

(Image Credit: Jack Dorsey by Thierry Ehrmann under CC BY 2.0 license)

World’s oldest defence think tank concludes British spies need AI

The Royal United Services Institute (RUSI) says in an intelligence report that British spies will need to use AI to counter threats.

Based in Westminster, the RUSI is the world’s oldest think tank on international defence and security. Founded in 1831 by the first Duke of Wellington, Sir Arthur Wellesley, the RUSI remains a highly respected institution that’s as relevant today as ever.

AI is rapidly advancing the capabilities of adversaries. In its report, the RUSI says that hackers – both state-sponsored and independent – are likely to use AI for cyberattacks on the web and political systems.

Adversaries “will undoubtedly seek to use AI to attack the UK”, the RUSI notes.

Threats could emerge in a variety of ways. Deepfakes, which use a neural network to generate convincing fake videos and images, are one example of a threat already being posed today. With the US elections coming up, there are concerns that deepfakes of political figures could be used for voter manipulation.
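
Most deepfake generators descend from generative adversarial networks (GANs), in which a generator learns to produce images that a discriminator cannot distinguish from real ones. The following is a deliberately minimal PyTorch sketch of that adversarial loop; the tiny fully connected networks, the shapes, and the random “real” batch are placeholder assumptions, and real face-swap pipelines are far more elaborate:

```python
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 64, 784, 32

# Placeholder generator and discriminator -- real systems use deep CNNs.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(batch, image_dim)  # stand-in for a batch of real images

# Discriminator step: learn to tell real images from generated ones.
fake = G(torch.randn(batch, latent_dim)).detach()
loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: update G so the discriminator labels its output as real.
fake = G(torch.randn(batch, latent_dim))
loss_g = bce(D(fake), torch.ones(batch, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

Repeated over millions of steps, this contest is what drives generated imagery towards photorealism, and it is why detection is an arms race rather than a solved problem.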

AI could also be used for powerful new malware which mutates to avoid detection. Such malware could even infect and take control of emerging technologies such as driverless cars, smart city infrastructure, and drones.

The RUSI believes that humans will struggle to counter AI threats alone and will need the assistance of automation.

“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload,” said Alexander Babuta, one of the report’s authors. “It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures.”

GCHQ, the UK’s signals intelligence agency, commissioned the RUSI’s independent report. Ken McCallum, the new head of MI5 – the UK’s domestic counter-intelligence and security agency – has already said that greater use of AI will be one of his priorities.

The RUSI believes AI will be of little value for “predictive intelligence”, such as forecasting a terrorist act before it happens. Highlighting counter-terrorism specifically, the RUSI says such acts are too infrequent to yield reliable patterns compared with other crimes, and the motivations behind them can change quickly in response to world events.
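
A short worked example helps explain why rare events frustrate prediction: at very low base rates, even an accurate model’s alerts are overwhelmingly false positives. All numbers below are assumptions for illustration, not figures from the RUSI report:

```python
# Base-rate illustration: every figure here is assumed for the example.
population = 1_000_000
base_rate = 1e-5            # assume 10 genuine cases per million people
sensitivity = 0.99          # assume the model flags 99% of genuine cases
false_positive_rate = 0.01  # assume 1% of everyone else is wrongly flagged

true_cases = population * base_rate
flagged_true = true_cases * sensitivity
flagged_false = (population - true_cases) * false_positive_rate

precision = flagged_true / (flagged_true + flagged_false)
print(f"People flagged: {flagged_true + flagged_false:,.0f}")
print(f"Share of flags that are genuine: {precision:.2%}")  # roughly 0.1%
```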

All of this raises concerns about the automation of discrimination. The RUSI calls for more of an “augmented” intelligence – whereby technology assists in sifting through large amounts of data, but decisions are ultimately taken by humans – rather than leaving it all up to the machines.

In terms of global positioning, the RUSI recognises the UK’s strength in AI, with talent emerging from the country’s world-leading universities, capabilities within GCHQ, bodies like the Alan Turing Institute and the Centre for Data Ethics and Innovation, and even more in the private sector.

While it’s widely acknowledged that countries like the US and China have far more resources overall to throw at AI advancements, the RUSI believes the UK has the potential to be a leader in the technology within a much-needed ethical framework. However, it says it’s important not to become too preoccupied with the possible downsides.

“There is a risk of stifling innovation if we become overly-focused on hypothetical worst-case outcomes and speculations over some dystopian future AI-driven surveillance network,” argues Babuta.

“Legitimate ethical concerns will be overshadowed unless we focus on likely and realistic uses of AI in the short-to-medium term.”

You can find a copy of the RUSI’s full report here (PDF).

(Photo by Chris Yang on Unsplash)

CB Insights: The majority of promising AI startups are US-based

A new report from CB Insights suggests the majority of promising AI startups are still based in the US.

Despite increasing AI investments around the world, 65 percent of the top 100 AI startups were located in the US. However, some of the companies also had headquarters in other countries.

It’s little surprise to see most of the promising AI companies based in the world’s largest economy, but some of the other placings are more surprising.

The UK and Canada were in joint second place as homes of the most promising AI startups, while China just slipped into third. Israel took fourth place.

Aside from the UK, much of Europe had a poor showing in CB Insight’s research. Germany was the highest-ranking EU member in the report with just two promising AI startups.

AI relies heavily on data and the EU has some of the strictest data regulations in the world. Some experts believe this puts European companies at a significant disadvantage compared to their global counterparts. To help European AI startups overcome this challenge, the EU is currently considering a “single market” for data.

The UK continues to benefit from being Europe’s leading destination for foreign investment, with the US as the country’s largest single investor. The UK is also home to the European headquarters of many Silicon Valley giants, which draw on the talent produced by the country’s renowned universities to help address the global shortage of AI talent. Arguably the most successful AI story out of the UK is DeepMind, which Google acquired for over $500 million in 2014.

One of the 100 companies appearing on CB Insight’s top AI firms is InstaDeep, a UK-based AI R&D company that jointly published a research paper last year alongside DeepMind on a reinforcement learning algorithm called AlphaNPI.

The other AI firms in this year’s top 100 are broken down by category in the report.

CB Insights analysed data on around 5,000 global AI startups for its research. According to the research firm, the healthcare category is currently showing the most promise, followed by retail and warehousing.

4,300 startups in 80 countries have raised $83 billion since 2014, including $26.6 billion just last year, according to CB Insights. However, the share of investments into US-based AI firms dropped from 71 percent to 39 percent in that period.

You can find the full report from CB Insights here.
