europe – AI News
https://news.deepgeniusai.com

EU human rights agency issues report on AI ethical considerations (14 Dec 2020)
https://news.deepgeniusai.com/2020/12/14/eu-human-rights-agency-issues-report-ai-ethical-considerations/

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI that delves into the ethical considerations that must be made around the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithmic biases can end up automating societal harms such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in almost every industry in some form or another—if not already, it will be soon.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services bias could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, these biased decisions could be made without anyone knowing the reasons behind them; it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns about invasive privacy practices and abuse of market position. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

State of European Tech: Investment in ‘deep tech’ like AI drops 13% (8 Dec 2020)
https://news.deepgeniusai.com/2020/12/08/state-of-european-tech-investment-deep-tech-ai-drops-13-percent/

The latest State of European Tech report highlights that investment in “deep tech” like AI has dropped 13 percent this year.

Data from Dealroom was used for the State of European Tech report. Dealroom defines deep tech as 16 fields: Artificial Intelligence, Machine Learning, Big Data, Augmented Reality, Virtual Reality, Drones, Autonomous Driving, Blockchain, Nanotech, Robotics, Internet of Things, 3D Technology, Computer Vision, Connected Devices, Sensors Technology, and Recognition Technology (NLP, image, video, text, speech recognition).

In 2019, $10.2 billion of capital was invested in European deep tech. In 2020, that figure dropped to $8.9 billion.
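As a quick check of the headline figure against those totals (a minimal sketch; the variable names are ours, the dollar amounts are the report’s):

```python
invested_2019 = 10.2  # $bn invested in European deep tech in 2019 (Dealroom)
invested_2020 = 8.9   # $bn invested in 2020
drop = (invested_2019 - invested_2020) / invested_2019
print(f"{drop:.1%}")  # -> 12.7%, consistent with the reported ~13% decline
```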

I think it’s fair to say that 2020 has been a tough year for most people and businesses. Economic uncertainty – not just from COVID-19 but also trade wars, Brexit, and a rather tumultuous US presidential election – has naturally led to fewer investments and people tightening their wallets.

For just one example, innovative satellite firm OneWeb was forced to declare bankruptcy earlier this year after crucial funding it was close to securing was pulled during the peak of the pandemic. Fortunately, OneWeb was saved following an acquisition by the UK government and Bharti Global—but not all companies have been so fortunate.

Many European businesses will now be watching the close-to-collapse Brexit talks with hope that a deal can yet be salvaged to limit the shock to supply lines, prevent disruption to Europe’s leading financial hub, and help to build a friendly relationship going forward with a continued exchange of ideas and talent rather than years of bitterness and resentment.

The report shows the UK has retained its significant lead in European tech investment and startups this year.

Despite the uncertainties, the UK looks unlikely to lose its position as the hub of European technology anytime soon.

Investments in European tech as a whole should bounce back – along with the rest of the world – in 2021, with promising COVID-19 vaccines rolling out and hopefully some calm in geopolitics.

94 percent of survey respondents for the report stated they have either increased or maintained their appetite to invest in the European venture asset class. Furthermore, a record number of US institutions have participated in more than one investment round in Europe this year—up 36% since 2016.

You can find a full copy of the State of European Tech report here.

Information Commissioner clears Cambridge Analytica of influencing Brexit (8 Oct 2020)
https://news.deepgeniusai.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/

A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken until now to be confirmed.

“From my review of the materials recovered by the investigation I have found no further evidence to change my earlier view that CA [Cambridge Analytica] was not involved in the EU referendum campaign in the UK,” wrote Information Commissioner Elizabeth Denham.

Cambridge Analytica did obtain a ton of user data—but through predominantly commercial means, and of mostly US voters. Such data is available to, and has also been purchased by, other electoral campaigns for targeted advertising purposes (the Remain campaigns in the UK actually outspent their Leave counterparts by £6 million.)

“CA were purchasing significant volumes of commercially available personal data (at one estimate over 130 billion data points), in the main about millions of US voters, to combine it with the Facebook derived insight information they had obtained from an academic at Cambridge University, Dr Aleksandr Kogan, and elsewhere,” wrote Denham.

The only real scandal was Facebook’s poor protection of users which allowed third-party apps to scrape their data—for which it was fined £500,000 by the UK’s data protection watchdog.

It seems the claims Cambridge Analytica used powerful AI tools were also rather overblown, with the information commissioner saying all they found were models “built from ‘off the shelf’ analytical tools”.

The information commissioner even found evidence that Cambridge Analytica’s own staff “were concerned about some of the public statements the leadership of the company were making about their impact and influence.”

Cambridge Analytica appears to have been a victim of those unable to accept democratic results combined with its own boasting of capabilities that weren’t actually that impressive.

You can read the full report here (PDF).

(Photo by Christian Lue on Unsplash)

Nvidia and ARM will open ‘world-class’ AI centre in Cambridge (14 Sep 2020)
https://news.deepgeniusai.com/2020/09/14/nvidia-arm-world-class-ai-centre-cambridge/

Nvidia is already putting its $40 billion ARM acquisition to good use by opening a “world-class” AI centre in Cambridge.

British chip designer ARM’s technology is at the heart of most mobile devices. Meanwhile, Nvidia’s GPUs are increasingly being used for AI computation in servers, desktops, and even things like self-driving vehicles.

However, Nvidia was most interested in ARM’s presence in edge devices—which it estimates to be in the region of 180 billion.

Jensen Huang, CEO of Nvidia, said:

“ARM is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make ARM even more incredible and take it to even higher levels.

We want to propel it — and the UK — to global AI leadership.”

There were concerns Nvidia’s acquisition would lead to job losses, but the company has promised to keep the business in the UK. The company says it’s planning to hire more staff and retain ARM’s iconic brand.

Nvidia is going further in its commitment to the UK by opening a new AI centre in Cambridge, which is home to an increasing number of exciting startups in the field such as FiveAI, Prowler.io, Fetch.ai, and Darktrace.

“We will create an open centre of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named.

Here, leading scientists, engineers and researchers from the UK and around the world will come to develop their ideas, collaborate and conduct their ground-breaking work in areas like healthcare, life sciences, self-driving cars, and other fields.”

The new centre will have five key features when it opens:

  • ARM/Nvidia-based supercomputer – set to be one of the most powerful AI supercomputers in the world.
  • Research Fellowships and Partnerships – Nvidia will use the centre to establish new UK-based research partnerships, expanding on successful relationships already established with King’s College and Oxford.
  • AI Training – Nvidia will make its AI curriculum available across the UK to help create job opportunities and prepare “the next generation of UK developers for AI leadership”
  • Startup Accelerator – With so many of the world’s most exciting AI companies launching in the UK, the Nvidia Inception accelerator will help startups succeed by providing access to the aforementioned supercomputer, connections to researchers from NVIDIA and partners, technical training, and marketing promotion.
  • Industry Collaboration – AI is still in its infancy but will impact every industry to some extent. Nvidia says its new research facility will be an open hub for industry collaboration, building on the company’s existing relationships with the likes of GSK, Oxford Nanopore, and other leaders in their fields.

The UK is Europe’s leader in AI and the British government is investing heavily in ensuring it maintains its pole position. Beyond funding, the UK is also aiming to ensure it’s among the best places to run an AI company.

Current EU rules, especially around data, are often seen as limiting the development of European AI companies when compared to elsewhere in the world. While the UK will have to avoid accusations of a so-called “bonfire of regulations” post-Brexit, data collection rules are likely one area that will be relaxed.

In the UK’s historic trade deal signed with Japan last week, several enhancements were made over the blanket EU-Japan deal signed earlier this year. Among the perceived improvements are the “free flow of data”, achieved by not enforcing localisation requirements, and an assurance that algorithms can remain private.

UK trade secretary Liz Truss said: “The agreement we have negotiated – in record time and in challenging circumstances – goes far beyond the existing EU deal, as it secures new wins for British businesses in our great manufacturing, food and drink, and tech industries.”

Japan and the UK, as two global tech giants, are expected to deepen their collaboration in the coming years—building on the trade deal signed last week.

Shigeki Ishizuka, Chairman of the Japan Electronics and Information Technology Industries Association, said: “We are confident that this mutual relationship will be further strengthened as an ambitious agreement that will contribute to the promotion of cooperation in research and development, the promotion of innovation, and the further expansion of inter-company collaboration.”

Nvidia’s investment shows that it has confidence in the UK’s strong AI foundations continuing to gain momentum in the coming years.

(Photo by A Perry on Unsplash)

Google CEO: We need sensible AI regulation that does not limit its potential (21 Jan 2020)
https://news.deepgeniusai.com/2020/01/21/google-ceo-sensible-ai-regulation-limit-potential/

Google CEO Sundar Pichai has called for sensible AI regulation that does not limit the huge potential benefits to society.

Writing in a FT editorial, Pichai said: “…there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.”

Few people debate the need for AI regulation but there are differing opinions when it comes to how much. Overregulation limits innovation while lack of regulation can pose serious dangers – even existential depending on who you listen to.

Pichai says AI is “one of the most promising new technologies” that has “the potential to improve billions of lives,” but warns of the possible risks if development is left unchecked.

“History is full of examples of how technology’s virtues aren’t guaranteed,” Pichai wrote. “The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”

Google is one of the companies about which people have voiced concerns, given its reach and questionable record on user privacy. Pichai’s words today will offer some comfort that Google’s leadership wants sensible regulation to guide its efforts.

So far, Google has shown how AI can be used for good. A study by Google, published in the science journal Nature, showed how its AI model was able to spot breast cancer in mammograms with “greater accuracy, fewer false positives, and fewer false negatives than experts.”

Governments around the world are beginning to shape AI regulations. The UK, Europe’s leader in AI developments and investments, aims to focus on promoting ethical AI rather than attempt to match superpowers like China and the US in other areas.

In a report last year, the Select Committee on Artificial Intelligence recommended the UK capitalises on its “particular blend of national assets” to “forge a distinctive role for itself as a pioneer in ethical AI”.

The EU, which the UK leaves at the end of this month, recently published its own comprehensive proposals on AI regulation which many believe are too stringent. The US warned its European allies against overregulation of AI earlier this month.

In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

Pichai refrains from denouncing either the White House’s calls for light AI regulation or the EU’s plans for stringent rules. Instead, he calls only for the need to balance “potential harms… with social opportunities.”

Google has certainly not been devoid of criticism over its forays into AI. The company was forced to back out from a Pentagon contract in 2018 called Project Maven over backlash about Google building AI technology for deploying and monitoring unmanned aerial vehicles (UAVs).

Following the decision to back out from Project Maven, Pichai outlined Google’s ethical principles when it comes to AI:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Pichai promised the company “will work to limit potentially harmful or abusive applications” and will block the use of their technology if they “become aware of uses that are inconsistent” with the principles.

Time will tell whether Google will abide by its principles when it comes to AI, but it’s heartening to see Pichai call for sensible regulation to help enforce it across the industry.

The White House warns European allies not to overregulate AI (7 Jan 2020)
https://news.deepgeniusai.com/2020/01/07/white-house-warns-european-allies-overregulate-ai/

The White House has urged its European allies to avoid overregulation of AI to prevent Western innovation from being hindered.

While the news has gone somewhat under the radar given recent events, the Americans are concerned that overregulation may cause Western nations to fall behind the rest of the world.

In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

The UK is expected to retain its lead as the European hub for AI innovation with vast amounts of private and public sector investment, successful companies like DeepMind, and world class universities helping to address the global talent shortage. In Oxford Insights’ 2017 Government AI Readiness Index, the UK ranked number one due to areas such as digital skills training and data quality. The Index considers public service reform, economy and skills, and digital infrastructure.

Despite its European AI leadership, the UK would struggle to match the levels of funding afforded to firms residing in superpowers like the US and China. Many experts have suggested the UK should instead focus on leading in the ethical integration of AI and developing sensible regulations, an area it has much experience in.

Here’s a timeline of some recent work from the UK government towards this goal:

  • September 2016 – the House of Commons Science and Technology Committee published a 44-page report on “Robotics and Artificial Intelligence”, which investigates the economic and social implications of employment changes; ethical and legal issues around safety, verification, bias, privacy, and accountability; and strategies to enhance research, funding, and innovation.
  • January 2017 – an All Party Parliamentary Group on Artificial Intelligence (APPG AI) was established to address ethical issues, social impact, industry norms, and regulatory options for AI in parliament.
  • June 2017 – parliament established the Select Committee on AI to further consider the economic, ethical and social implications of advances in artificial intelligence, and to make recommendations. All written and oral evidence received by the committee can be seen here.
  • April 2018 – the aforementioned committee published a 183-page report, “AI in the UK: ready, willing and able?” which considers AI development and governance in the UK. It acknowledges that the UK cannot compete with the US or China in terms of funding or people but suggests the country may have a competitive advantage in considering the ethics of AI.
  • September 2018 – the UK government launched an experiment with the World Economic Forum to develop procurement policies for AI. The partnership will bring together diverse stakeholders to collectively develop guidelines to capitalise on governments’ buying power to support the responsible deployment and design of AI technologies.

Western nations are seen as being at somewhat of a disadvantage due to sensitivities around privacy. EU nations, in particular, have strict data collection regulations such as GDPR, which limit the amount of data researchers can collect to train AIs.

“Very often we hear ‘Where are the British and European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop,” said Peter Wright, solicitor and managing director of Digital Law UK.

Depending on the UK’s future trade arrangement with the EU, it could, of course, decide to chart its own regulatory path following Brexit.

Speaking to reporters in a call, US CTO Michael Kratsios said: “Pre-emptive and burdensome regulation does not only stifle economic innovation and growth, but also global competitiveness amid the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people.”

In the same call, US deputy CTO Lynne Parker commented: “As countries around the world grapple with similar questions about the appropriate regulation of AI, the US AI regulatory principles demonstrate that America is leading the way to shape the evolution in a way that reflects our values of freedom, human rights, and civil liberties.

“The new European Commission has said they intend to release an AI regulatory document in the coming months. After a productive meeting with Commissioner Vestager in November, we encourage Europe to use the US AI principles as a framework. The best way to counter authoritarian uses of AI is to make America and our national partners remain the global hub of innovation, advancing our common values.”

A similar regulation to GDPR in California called CCPA was also signed into law in June 2018. “I think the examples in the US today at state and local level are examples of overregulation which you want to avoid on the national level,” said a government official.

ELLIS commits €220m in a bid to keep AI talent in Europe (11 Dec 2019)
https://news.deepgeniusai.com/2019/12/11/ellis-bid-keep-ai-talent-europe/

The European Laboratory for Learning and Intelligent Systems (ELLIS) has announced a new initiative designed to help keep AI talent in the continent.

Retaining AI talent is difficult. AI News has already reported on academia’s struggle to keep the researchers who help equip the next generation with AI skills, but it’s not just the education sector that is suffering.

The main problem is that the limited supply of AI talent, combined with high demand, is driving eye-wateringly high salaries. Those who can afford to pay such salaries, predominantly large American tech giants, are poaching talent from around the world.

Another reason talent is leaving Europe is increasingly crippling regulation, which isn’t so extensive in countries like the US and is practically nonexistent in China. In an industry like AI, where data is everything, researchers are often able to achieve more outside of Europe.

ELLIS has selected 17 cities in 10 European countries, and Israel, where it hopes to establish AI research institutes focused on societal impacts.

Each institute will start with around half a dozen researchers and will be provided with at least €1.7 million funding annually for the next five years. Overall, the project will spend around €220 million.

ELLIS was inspired by the Canadian Institute For Advanced Research (CIFAR) initiative. In fact, it was at last year’s Neural Information Processing Systems (NeurIPS) conference that top European AI researchers decided to create ELLIS.

During this year’s NeurIPS, ELLIS also signed a letter of intent with CIFAR’s program on Learning in Machines and Brains.

Yoshua Bengio, a Turing Award winner and co-director of CIFAR’s program in Learning in Machines & Brains, said: “We need a constructive discussion about ethical uses of AI, and a necessary prerequisite of this is that the highest level of AI research is done in open societies with humanist values such as those of Canada and the European countries.”

“We have already signed a letter of intent to collaborate with the scientists driving this exciting new European initiative, and look forward to jointly promoting open AI research at the highest quality level.”

A number of European companies have already stated their support for this initiative, including Audi, AVL, Bayer, Bosch, DeepMind, Greiner, Porsche, and Siemens, as well as US companies such as Amazon, Facebook, Google, NVIDIA, and Qualcomm, and the Canadian startup Element AI.

The first ELLIS institutes are expected to open in spring 2020.

Report: UK leads AI developments in Europe, Iran in Middle-East (11 Nov 2019)
https://news.deepgeniusai.com/2019/11/11/report-uk-ai-developments-europe-iran-middle-east/

The latest Scimago Institutions Rankings (SIR) indicates the UK is leading AI developments in Europe while Iran is leading in the Middle-East.

SIR has ranked global research and education institutions since 2009. The ranking is based on their performance and the number of articles they’ve published in highly-regarded publications.

In the field of AI, the UK ranks number one in Europe and fourth globally. Iran ranks number one in the Middle-East and is ninth overall among the 152 countries featured.

In the latest SIR’s top 10 countries for AI, China leads developments overall, while the US is in second place and leads AI developments in the Western hemisphere.

Defense considerations

Despite widespread concern, it’s almost inevitable AI will increasingly creep into military applications. With that in mind, it’s hard not to consider ongoing tensions and how it might apply to future conflicts like those in the Middle-East.

Tensions with Iran, particularly with the US and UK, have been increasing in recent years – especially in the wake of American allegations that the Iranians haven’t been meeting their obligations under the nuclear deal reached in 2015. The US controversially pulled out of the treaty and imposed sanctions on the state.

Since then, a series of confrontations has occurred. One example of an incident where AI may play a key role in the future was Iran’s downing of a US drone in June.

AI superiority from the American side may have enabled the drone to take evasive action to avoid being shot down. On the other hand, AI technologies on the Iranian side could have automated the downing of any unauthorised aircraft.

Some have likened the race for AI superiority to the nuclear arms race, so we can only hope it proves less devastating. However, increasing capabilities between new and age-old rivals won’t do anything to ease such concerns.

Project FARM: AI will help to ensure you can still get your coffee fix (2 Oct 2019)
https://news.deepgeniusai.com/2019/10/02/project-farm-ai-help-get-coffee-fix/

Coffee farmers will receive some welcome assistance from AI on managing their crops amid tough conditions and growing demand.

European researchers from Capgemini have developed a platform called Project FARM (Financial and Agricultural Recommendation Models) which aims to boost farmers’ yield, optimise the value chain, and bolster the global food supply.

Project FARM is first going to be used in Kenya to assist coffee farmers. The platform was built in collaboration with East Africa-based social enterprise firm Agrics which provides local farmers with agricultural products and services on credit.

Julian van Velzen, a data analyst at Capgemini who leads Project FARM, said:

“By connecting farming communities with data science, and big data with traditional farming methods, the FARM platform is built to optimise the value chain and bring parties together as an ecosystem around one data-driven platform.

The platform can pave the way for bringing automated farming to small-scale farmers. With the increasing availability of open data and decreasing prices of sensors and satellite imagery, the future of farming is bright.”

AI is used to analyse farm data sourced from Agrics in addition to satellite data from Project Sobloo, a Copernicus Data and Information Access Service (DIAS). 

A dashboard provides insights to the farmer with tailor-made advice on how to optimise production. This advice can also be sent via SMS so, for example, an alert can be issued to take precautions if a crop-damaging thunderstorm is due the next day.
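To illustrate the kind of rule-based trigger such an SMS alert implies, here is a minimal, purely hypothetical sketch; the Forecast fields, the 70 percent threshold, and the advice wording are illustrative assumptions and are not taken from Capgemini’s or Agrics’ actual platform:

```python
# Hypothetical sketch of a weather-triggered SMS advisory in the spirit of the
# alerts described above. All field names, thresholds, and wording are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Forecast:
    date: str
    thunderstorm_probability: float  # 0.0-1.0, e.g. derived from satellite/weather data
    expected_rainfall_mm: float

def storm_alert(forecast: Forecast, threshold: float = 0.7) -> Optional[str]:
    """Return an SMS-sized advisory if a damaging storm looks likely, else None."""
    if forecast.thunderstorm_probability < threshold:
        return None
    return (
        f"Alert for {forecast.date}: thunderstorms likely "
        f"({forecast.thunderstorm_probability:.0%}, ~{forecast.expected_rainfall_mm:.0f} mm rain). "
        "Protect drying beds and delay fertiliser application."
    )

# Example with made-up forecast data; a real platform would hand the
# message to an SMS gateway rather than print it.
message = storm_alert(Forecast("2019-10-03", 0.85, 40.0))
if message:
    print(message)
```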

On the business side, Agrics is able to use the data to foresee any risks that may impact each farmer and their investment.

Violanda de Man, Innovation Manager at Agrics East Africa, commented:

“Through our interactions with the farmers, we are on top of a huge reservoir of data. We can now turn this data into meaningful insights, which allows us to provide time and location-specific products and services to increase yield and lower risk at farm and value chain level.

Increased value chain effectiveness will help to directly improve the income and food security of rural populations.”

The global demand for food is expected to increase by 60 percent by 2050 and most of the world’s population is fed primarily by small farmers in developing countries. Supporting these farmers isn’t just the moral thing to do; it also helps to keep food on all of our plates.

Earlier this week, Fairtrade warned that coffee could become a luxury due to climate change affecting production. According to Catherine David, head of commercial partnerships at Fairtrade, issues like extreme temperatures, increased humidity, and pests are hitting farmers’ crops.

Meanwhile, a growing population is living longer and the demand for coffee is increasing. Combined with the expected production issues, that means the quality of coffee is likely to decrease while prices rocket.

I don’t know about you, but I need my coffee and would rather not have to take out a loan for my fix.

Human-beating StarCraft 2 AI will compete anonymously in Europe (11 Jul 2019)
https://news.deepgeniusai.com/2019/07/11/human-starcraft2-ai-compete-europe/

DeepMind’s professional StarCraft 2-playing AI is set to play against human players in the European competitive ladder.

StarCraft 2 is a complex real-time strategy game that can still throw surprises at you even after years of playing. In other words, StarCraft 2 is great for testing an AI.

Back in January, AI News reported DeepMind’s so-called ‘AlphaStar’ AI beat professional human eSports players Grzegorz Komincz and Dario Wunsch.

AlphaStar is now taking a virtual trip to Europe where it will begin playing a “small number” of games on the StarCraft 2 competitive ladder. Human players won’t even be aware they’re playing against the AI as it will be anonymised.

Blizzard explained the reasoning behind the anonymisation:

“Having AlphaStar play anonymously helps ensure that it is a controlled test so that the experimental versions of the agent experience gameplay as close to a normal 1v1 ladder match as possible.

It also helps ensure all games are played under the same conditions from match to match. DeepMind will release the research results in a peer-reviewed scientific paper along with replays of AlphaStar’s matches.”

AlphaStar was trained on historic game footage that StarCraft’s developer Blizzard has been releasing on a monthly basis. Five versions of the AI then battled each other to hone their skills, accumulating training that equates to around 200 years of play for a human.

Multiple experimental variants of AlphaStar will take part in the test and it will play 1v1 matches only. The test will be opt-in, so players will have to click an in-game popup to get involved. Basically, it’s a voluntary ass-whooping.
