regulation – AI News
https://news.deepgeniusai.com

UK and Australia launch joint probe into Clearview AI’s mass data scraping
https://news.deepgeniusai.com/2020/07/10/uk-australia-probe-clearview-ai-mass-data-scraping/
Fri, 10 Jul 2020

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective than Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

Elon Musk wants more stringent AI regulation, including for Tesla
https://news.deepgeniusai.com/2020/02/19/elon-musk-stringent-ai-regulation-tesla/
Wed, 19 Feb 2020

Elon Musk has once again called for more stringent regulations around the development of AI technologies.

The founder of Tesla and SpaceX has been one of the most vocal prominent figures in expressing concerns about AI – going as far as to call it humanity’s “biggest existential threat” if left unchecked.

Of course, given the nature of the companies Musk has founded, he is also well aware of AI’s potential.

Back in 2015, Musk co-founded OpenAI, an organisation established to pursue and promote ethical AI development. Musk left OpenAI in February last year over disagreements with the company’s work.

Earlier this week, Musk said that OpenAI should be more transparent and specifically said his confidence is “not high” in former Google engineer Dario Amodei when it comes to safety.

Responding to a piece by MIT Technology Review about OpenAI, Musk tweeted: “All orgs developing advanced AI should be regulated, including Tesla.”

In response to a further question of whether such regulations should be via individual governments or global institutions like the UN, Musk said he believes both.

Musk’s tweet generated some feedback from other prominent industry figures, including legendary Id Software founder John Carmack who recently stepped back from video game development to focus on independent AI research.

Carmack asked Musk: “How would you imagine that working for someone like me? Cloud vendors refuse to spawn larger clusters without a government approval? I would not be supportive.”

Coder Pranay Pathole shared Carmack’s scepticism about Musk’s call, saying: “Large companies ask for regulations acting all virtuous. What they are really doing is creating barriers for entry for new competition because only they can afford to comply with the new regulations.”

The debate over the extent of AI regulations and how they should be implemented will likely go on for some time – we can only hope to get them right before a disaster occurs. If you want to help Musk in building AI, he’s hosting a “super fun” hackathon at his place.


Google CEO: We need sensible AI regulation that does not limit its potential
https://news.deepgeniusai.com/2020/01/21/google-ceo-sensible-ai-regulation-limit-potential/
Tue, 21 Jan 2020

Google CEO Sundar Pichai has called for sensible AI regulation that does not limit the huge potential benefits to society.

Writing in an FT editorial, Pichai said: “…there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.”

Few people dispute the need for AI regulation, but opinions differ on how much. Overregulation limits innovation, while a lack of regulation can pose serious dangers – even existential ones, depending on who you listen to.

Pichai says AI is “one of the most promising new technologies” that has “the potential to improve billions of lives,” but warns of the possible risks if development is left unchecked.

“History is full of examples of how technology’s virtues aren’t guaranteed,” Pichai wrote. “The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”

Google is one of the companies about which people have voiced concerns, given its reach and its questionable record on user privacy. Pichai’s words will offer some comfort that Google’s leadership wants sensible regulation to guide its efforts.

So far, Google has shown how AI can be used for good. A study by Google, published in the science journal Nature, showed how its AI model was able to spot breast cancer in mammograms with “greater accuracy, fewer false positives, and fewer false negatives than experts.”

Governments around the world are beginning to shape AI regulations. The UK, Europe’s leader in AI developments and investments, aims to focus on promoting ethical AI rather than attempt to match superpowers like China and the US in other areas.

In a report last year, the Select Committee on Artificial Intelligence recommended the UK capitalises on its “particular blend of national assets” to “forge a distinctive role for itself as a pioneer in ethical AI”.

The EU, which the UK leaves at the end of this month, recently published its own comprehensive proposals on AI regulation which many believe are too stringent. The US warned its European allies against overregulation of AI earlier this month.

In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

Pichai refrains from denouncing either the White House’s calls for light-touch AI regulation or the EU’s plans for stringent rules. Instead, he calls only for balancing “potential harms… with social opportunities.”

Google has certainly not been devoid of criticism over its forays into AI. In 2018, the company was forced to back out of a Pentagon contract called Project Maven following a backlash over Google building AI technology for deploying and monitoring unmanned aerial vehicles (UAVs).

Following the decision to back out from Project Maven, Pichai outlined Google’s ethical principles when it comes to AI:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Pichai promised the company “will work to limit potentially harmful or abusive applications” and will block the use of its technology if it “become[s] aware of uses that are inconsistent” with the principles.

Time will tell whether Google will abide by its principles when it comes to AI, but it’s heartening to see Pichai call for sensible regulation to help enforce such principles across the industry.


The White House warns European allies not to overregulate AI
https://news.deepgeniusai.com/2020/01/07/white-house-warns-european-allies-overregulate-ai/
Tue, 07 Jan 2020

The White House has urged its European allies to avoid overregulation of AI to prevent Western innovation from being hindered.

While the news has gone somewhat under the radar given recent events, the Americans are concerned that overregulation may cause Western nations to fall behind the rest of the world.

In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

The UK is expected to retain its lead as the European hub for AI innovation, with vast amounts of private and public sector investment, successful companies like DeepMind, and world-class universities helping to address the global talent shortage. In Oxford Insights’ 2017 Government AI Readiness Index, the UK ranked number one due to areas such as digital skills training and data quality. The Index considers public service reform, economy and skills, and digital infrastructure.

Despite its European AI leadership, the UK would struggle to match the levels of funding afforded to firms residing in superpowers like the US and China. Many experts have suggested the UK should instead focus on leading in the ethical integration of AI and developing sensible regulations, an area it has much experience in.

Here’s a timeline of some recent work from the UK government towards this goal:

  • September 2016 – the House of Commons Science and Technology Committee published a 44-page report on “Robotics and Artificial Intelligence” which investigates the economic and social implications of employment changes; ethical and legal issues around safety, verification, bias, privacy, and accountability; and strategies to enhance research, funding, and innovation.
  • January 2017 – an All Party Parliamentary Group on Artificial Intelligence (APPG AI) was established to address ethical issues, social impact, industry norms, and regulatory options for AI in parliament.
  • June 2017 – parliament established the Select Committee on AI to further consider the economic, ethical and social implications of advances in artificial intelligence, and to make recommendations. All written and oral evidence received by the committee can be seen here.
  • April 2018 – the aforementioned committee published a 183-page report, “AI in the UK: ready, willing and able?” which considers AI development and governance in the UK. It acknowledges that the UK cannot compete with the US or China in terms of funding or people but suggests the country may have a competitive advantage in considering the ethics of AI.
  • September 2018 – the UK government launched an experiment with the World Economic Forum to develop procurement policies for AI. The partnership will bring together diverse stakeholders to collectively develop guidelines to capitalise on governments’ buying power to support the responsible deployment and design of AI technologies.

Western nations are seen as being at somewhat of a disadvantage due to sensitivities around privacy. EU nations, in particular, have strict data collection regulations such as GDPR which limits the amount of data researchers can collect to train AIs.

“Very often we hear ‘Where are the British and European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop,” said Peter Wright, solicitor and managing director of Digital Law UK.

Dependent on the UK’s future trade arrangement with the EU, it could, of course, decide to chart its own regulatory path following Brexit.

Speaking to reporters in a call, US CTO Michael Kratsios said: “Pre-emptive and burdensome regulation does not only stifle economic innovation and growth, but also global competitiveness amid the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people.”

In the same call, US deputy CTO Lynne Parker commented: “As countries around the world grapple with similar questions about the appropriate regulation of AI, the US AI regulatory principles demonstrate that America is leading the way to shape the evolution in a way that reflects our values of freedom, human rights, and civil liberties.

“The new European Commission has said they intend to release an AI regulatory document in the coming months. After a productive meeting with Commissioner Vestager in November, we encourage Europe to use the US AI principles as a framework. The best way to counter authoritarian uses of AI is to make America and our national partners remain the global hub of innovation, advancing our common values.”

A California regulation similar to GDPR, the CCPA, was signed into law in June 2018. “I think the examples in the US today at state and local level are examples of overregulation which you want to avoid on the national level,” said a government official.

CBI: UK tech dominance is ‘at risk’ due to public mistrust of AI
https://news.deepgeniusai.com/2019/08/09/cbi-uk-tech-risk-public-mistrust-ai/
Fri, 09 Aug 2019

Business industry group the CBI has warned that UK tech dominance is ‘at risk’ due to public mistrust of AI.

In a report today, the CBI warns artificial intelligence companies of the need to ensure they’re approaching the technology in an ethical manner to help build trust.

The measures suggested for building trust include ensuring customers know how their data is being used by AI and what decisions are being taken, and challenging unfair biases.

Overall, the CBI remains bullish on AI’s potential to add billions to the UK economy.

Felicity Burch, CBI Director of Digital and Innovation, said:

“At a time of slowing global growth, AI could add billions of pounds to the UK economy and transform our stuttering productivity performance.

The Government has set the right tone by establishing the Centre for Data Ethics & Innovation, but it’s up to business to put ethics into practice.

Ethics can be an intimidating word. But at the end of the day, meaningful ethics is similar to issues organisations already think about: effective governance, employee empowerment, and customer engagement.

The same actions that embed an ethical approach to AI will also make businesses more competitive. We know that diverse businesses are more likely to outperform their rivals. When it comes to AI, businesses who prioritise fairness and inclusion are more likely to create algorithms that make better decisions, giving them the edge.

With the global tech race heating up, the UK is well placed to lead the world in developing ethical AI, which uses data safely and drives progress in business and society.”

A previous study, conducted by PwC, estimates that UK GDP could be around 10 percent higher in 2030 due to AI – the equivalent of an additional £232bn.

Earlier this week, AI News reported on findings by Fountech.ai which highlight the scale of public distrust in artificial intelligence among UK adults.

In a survey of more than 2,000 people, Fountech.ai found over two-thirds (67%) are concerned about the impact of artificial intelligence on their careers.

Another finding in Fountech.ai’s study is that more than half (58%) find algorithms which make recommendations, such as what to buy or watch next, ‘creepy’.

The potential for AI is clear, but the studies conducted by the CBI and others show its full potential can only be unlocked by building public trust in the technology.


AI Expo Global: Fairness and safety in artificial intelligence
https://news.deepgeniusai.com/2019/05/01/ai-expo-fairness-safety-artificial-intelligence/
Wed, 01 May 2019

AI News sat down with Faculty’s head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year’s AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightfully, people are becoming increasingly concerned about unfair and unsafe AIs. Human biases are seeping into algorithms, posing a very real danger that prejudices and oppression could become automated by accident.

AI News reported last week on research from New York University that found inequality in STEM-based careers is causing algorithms to work better or worse for some parts of society than others.

Similar findings, by Joy Buolamwini and her team from the Algorithmic Justice League, highlighted a disparity in the effectiveness of the world’s leading facial recognition systems between genders and skin tones.

In an ideal world, all parts of society would be equally represented tomorrow. In reality, that imbalance will take much longer to rectify – yet AI technologies are already in increasing use across society today.

AI News asked Feige for his perspective and how the impact of that problem can be reduced much sooner.

“I think the most important thing for organisations to do is to spend more time thinking about bias and on ensuring that every model they build is unbiased because a demographically disparate team can build non-disparate tech.”

Some companies are seeking to build AIs which can scan for bias in other algorithms. We asked Feige whether he believes this is an ideal solution.

“Definitely, I showed one in my talk. We have tests for: You give me a black box algorithm, I have no idea what your algorithm does – but I can give an input, calculate the output, and I can just tell you how biased it is according to various definitions of bias.”

“We can go even further and say: Let’s modify your algorithm and give it back so it’s unbiased according to one of those definitions.”
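The kind of black-box test Feige describes can be sketched in a few lines: query the opaque model, then compare its behaviour across demographic groups. Below is a minimal illustration using demographic parity difference – one of the “various definitions of bias” he alludes to – assuming binary predictions and a single binary protected attribute. All names and data here are hypothetical, not Faculty’s actual tooling.

```python
def demographic_parity_difference(predict, inputs, groups):
    """Black-box bias test: gap in positive-prediction rates across groups.

    predict: opaque model function mapping an input to 0 or 1 (never inspected)
    inputs:  list of examples to query the model with
    groups:  parallel list of protected-attribute labels for each example
    """
    by_group = {}
    for x, g in zip(inputs, groups):
        by_group.setdefault(g, []).append(predict(x))
    # Positive-prediction rate per group; a fair model (by this definition)
    # would have a gap of zero between the best- and worst-treated group.
    rates = {g: sum(preds) / len(preds) for g, preds in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Toy "black box": we only query it, never look inside.
model = lambda x: 1 if x > 5 else 0
xs = [1, 6, 7, 2, 8, 3]   # hypothetical inputs
gs = [0, 0, 0, 1, 1, 1]   # protected-attribute label per input
print(demographic_parity_difference(model, xs, gs))  # rates: 2/3 vs 1/3
```

Removing bias “according to one of those definitions”, as Feige suggests, would then amount to wrapping the model in a post-processing step that equalises these per-group rates.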

In the Western world, we consider ourselves fairly liberal and protective of individual freedoms. China, potentially the world’s leader in AI, has a questionable human rights record and is known for invasive surveillance and mass data collection. Meanwhile, Russia has a reputation for military aggression which some are concerned will drive its AI developments. Much of the Middle East, while not considered a leader in AI, is behind most of the world in areas such as female and gay rights.

We asked Feige for his thoughts on whether these regional attitudes could find their way into AI developments.

“It’s an interesting question. It’s not that some regions will take the issue more or less seriously, they just have different … we’ll say preferences. I suspect China takes surveillance and facial recognition seriously – more seriously than the UK – but they do so in order to leverage it for mass surveillance, for population control.”

“The UK is trying to walk a fine line in efficiently using that very useful technology but not undermine personal privacy and freedom of individuals.”

During his talk, Feige made the point that he’s less concerned about AI bias because – unlike humans – algorithms can be controlled.

“This is a real source of optimism for me, just because human decision-making is incredibly biased and everyone knows that.”

Feige asked the audience to raise a hand if they were concerned about AI bias which prompted around half to do so. The same question was asked regarding human bias and most of the room had their hand up.

“You can be precise with machine learning algorithms. You can say: ‘This is the objective I’m trying to achieve, I’m trying to maximise the probability of a candidate being successful at their job according to historical people in their role’. Or, you can be precise about the data the model is trained on and say: ‘I’m going to ignore data from before this time period because things were ‘different’ back then’”.

“Humans have fixed past experiences they can’t control. I can’t change the fact my mum did most of the cooking when I was growing up and I don’t know how it affects my decision-making.”

“I also can’t force myself to hire based on success in their jobs, which I try to do. It’s hard to know if really I just had a good conversation about the football with the candidate.”

Faculty, where Feige is head of research, is a European company based in London. With the EU Commission recently publishing its guidelines on AI development, we took the opportunity to get his views on them.

“At a high-level, I think they’re great. They align quite a bit with how we think about these things. My biggest wish, whenever a body like that puts together some principles, is that there’s a big gap between that level of guidelines and what is useful for practitioners. Making those more precise is really important and those weren’t precise enough by my standards.”

“But not to just advocate putting the responsibility on policymakers. There’s also an onus on practitioners to try and articulate what bias looks like statistically and how that may apply to different problems, and then say: ‘Ok policy body, which of these is most relevant and can you now make those statements in this language’ and basically bridge the gap.”

Google recently created, then axed, a dedicated ‘ethics board’ for its AI developments. Such boards seem like a good idea, but fairly representing society can be a minefield. Google faced criticism for having a conservative figure with strong anti-LGBTQ and anti-immigrant views on the board.

Feige provided his take on whether companies should have an independent AI oversight board to ensure their developments are safe and ethical.

“To some degree, definitely. I suspect there are some cases you want that oversight board to be very external and like a regulator with a lot of overhead and a lot of teeth.”

“At Faculty, each one of our product teams has a shadow team – which has practically the same skill set – who monitor and oversee the work done by the project team to ensure it follows our internal set of values and guidelines.”

“I think the fundamental question here is how to do this in a productive way and ensure AI safety but that it doesn’t grind innovation to a halt. You can imagine where the UK has a really strong oversight stance and then some other country with much less regulatory oversight has companies which become large multinationals and operate in the UK anyway.”

Getting the balance right around regulation is difficult. Our sister publication IoT News interviewed a digital lawyer who raised the concern that Europe’s strict GDPR regulations will cause AI companies in the continent to fall behind their counterparts in Asia and America which have access to far more data.

Feige believes there is the danger of this happening, but European countries like the UK – whether it ultimately remains part of the EU and subject to regulations like GDPR or not – can use it as an opportunity to lead in AI safety.

He provided three reasons why the UK could achieve this:

  1. The UK has significant AI talent and renowned universities.
  2. It has a fairly unobjectionable record and a respected government (in comparison, Feige clarified, with how some countries view the US and China).
  3. The UK has a fairly robust existing regulatory infrastructure – especially in areas such as financial services.

Among the biggest concerns about AI continues to be around its impact on the workforce, particularly whether it will replace low-skilled workers. We wanted to know whether using legislation to protect human workers is a good idea.

“You could ask the question a hundred years ago: ‘Should automation come into agriculture because 90 percent of the population works in it?’ and now it’s almost all automated. I suspect individuals may be hurt by automation but their children will be better off by it.”

“I think any heavy-handed regulation will have unintended consequences and should be thought about well.”

Our discussion with Feige was insightful and provided optimism that AI can be developed safely and fairly, as long as there’s a will to do so.

You can watch our full interview with Feige from AI Expo Global 2019 below:


EU AI Expert Group: Ethical risks are ‘unimaginable’
https://news.deepgeniusai.com/2019/04/11/eu-ai-expert-group-ethical-risks/
Thu, 11 Apr 2019

The EU Commission’s AI expert group has published its assessment of the rapidly-advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, the tracking of individuals, and the ‘scoring’ of people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes to tracking individuals, the experts foresee people’s biometric data being used involuntarily, such as for “lie detection [or] personality assessment through micro expressions”.

Citizen scoring is on some people’s minds after being featured in an episode of the dystopian series Black Mirror. The experts note that scoring criteria must be transparent and fair, and that scores must be challengeable.

The guidelines have been several years in the making and have launched alongside a pilot project for testing how they work in practice.

Experts from various fields across Europe sit in the group, including academic lawyers from Birmingham and Oxford universities.

They concluded: “it is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”

The EU as a whole is looking to invest €20bn (£17bn) every year for the next decade to close the current gap between European developments and those in Asia and North America.


Report: 94 percent of IT leaders want greater focus on AI ethics https://news.deepgeniusai.com/2019/03/26/report-it-leaders-focus-ai-ethics/ Tue, 26 Mar 2019 10:11:33 +0000

The post Report: 94 percent of IT leaders want greater focus on AI ethics appeared first on AI News.

A study from SnapLogic has found that 94 percent of IT decision makers across the UK and US want a greater focus on ethical AI development.

Bias in algorithms continues to be a problem and is among the biggest barriers to societal adoption. Facial recognition algorithms, for example, have been found to be far less accurate for some parts of society than others.

Without addressing these issues, we’re in danger of automating problems such as racial profiling. Public trust in AI is already low, so there’s a collective responsibility within the industry to ensure high ethical standards.

Gaurav Dhillon, CEO at SnapLogic, commented:

“AI is the future, and it’s already having a significant impact on business and society. However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes.

We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way.”

SnapLogic’s report found that over half (53%) of IT leaders believe responsibility for ethical AI development lies with the organisation developing it, regardless of whether it is a commercial business or academic institution.

Far fewer (17%) blame the individual developers working on AI projects. Respondents in the US, however, are more than twice as likely (21%) to blame individuals as those in the UK (9%).

Some global bodies are emerging which aim to establish AI standards and fair rules. Understandably, there’s great concern over AI’s role in military technology. A so-called ‘AI arms race’ between global powers like China, the US, and Russia could lead to irresponsible developments with devastating consequences.

However, just 16 percent of the respondents believe an independent global consortium – comprising representatives from government, academia, research institutions, and businesses – is the only way to establish much-needed standards, rules, and protocols.

IT leaders welcome expert groups on AI such as the European Commission’s High-Level Expert Group on Artificial Intelligence. Half of the respondents believe organisations will take guidance and recommendations from such groups. Brits (15%) are almost twice as likely as their American counterparts (9%) to believe organisations will disregard such groups.

Just five percent of UK IT leaders believe advice from AI expert groups will be useless if not enforced by law.

87 percent of all respondents want AI to be regulated, although there’s some debate over how. 32 percent believe it should come from a combination of government and industry, while 25 percent want an independent industry consortium.

There are discrepancies in the appetite for regulation between industries, too. Almost a fifth (18%) of IT decision makers in manufacturing are against regulation, followed by 13 percent in the ‘Technology’ sector and the same percentage in the ‘Retail, Distribution and Transport’ sector. The reasons given were close to an even split between the belief that regulation would slow down innovation and the view that developments should be at the discretion of the developers.

“Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained,” continues Dhillon. “Absent that, AI systems will be working from incomplete or erroneous data, thwarting the advancement of future AI innovation.”

AI will be revolutionary – in fact, some call it the fourth industrial revolution. However, as a great fictional man once said: “With great power, comes great responsibility.”


Musk warns ‘it begins’ as Putin claims the AI-leading nation rules the world https://news.deepgeniusai.com/2017/09/04/musk-warns-putin-ai-leading-nation-rules-world/ Mon, 04 Sep 2017 11:49:28 +0000

The post Musk warns ‘it begins’ as Putin claims the AI-leading nation rules the world appeared first on AI News.

Elon Musk has issued a warning as Russian president Vladimir Putin claims the nation which leads in AI “will become the ruler of the world.”

Musk, co-chairman of OpenAI, has long warned of dire consequences for mishandling AI development. OpenAI itself is a non-profit research company that aims to promote and develop friendly AI in a way that benefits humanity.

As with any major technological advancement, however, there will undoubtedly be those who aim to weaponise it, and to do so before their rivals. Based on Putin’s comments to Russia-based publication RT, it sounds as if the nation is among them.

“Artificial intelligence is the future, not only for Russia, but for all humankind,” said Putin, in a report from RT. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Musk tweeted a brief reaction to the news.

Further responses to his tweet highlighted concerns about AI weapon systems – in particular, an AI which may decide a preemptive strike is the best option to prevent a threat from developing. The lack of human involvement in the decision also makes it easier to deflect blame.

Last week, AI News reported that China is catching up to the U.S. in artificial intelligence. Part of this rapid development is due to a significant increase in government support of core AI programs. China will increase spending to $22 billion in the next few years, with plans to spend nearly $60 billion per year by 2025.

Musk has also voiced concerns about this international competition for AI superiority.

These recent developments further highlight the pressing need for regulations and open dialogue on AI development to ensure it benefits humanity rather than poses a threat.

See more: Experts believe AI will be weaponised in the next 12 months

Are you concerned about AI posing a threat? Share your thoughts in the comments.

 
