White House will take a ‘hands-off’ approach to AI regulation
AI News, Fri, 11 May 2018 – https://news.deepgeniusai.com/2018/05/11/white-house-hands-off-ai-regulation/
The White House has decided it will take a ‘hands-off’ approach to AI regulation despite many experts calling for safe and ethical standards to be set.

Some of the world’s greatest minds have expressed concern about the development of AI without regulations — including the likes of Elon Musk, and the late Stephen Hawking.

Musk famously said unregulated AI could pose “the biggest risk we face as a civilisation”, while Hawking similarly warned “the development of full artificial intelligence could spell the end of the human race.”

The announcement that developers will be free to experiment with AI as they see fit was made during a meeting with representatives of 40 companies including Google, Facebook, and Intel.

Strict regulations can stifle innovation, and the US has made clear it wants to emerge as a world leader in the AI race.

Western nations are often seen as somewhat at a disadvantage to Eastern countries like China – not because they have less talent, but because their citizens are more wary about data collection and their privacy in general. However, there’s a strong argument to be made for striking a balance.

Making the announcement, White House Science Advisor Michael Kratsios noted the government did not stand in the way of Alexander Graham Bell or the Wright brothers when they invented the telephone and aeroplane. Of course, telephones and aeroplanes weren’t designed with the ultimate goal of becoming self-aware and able to make automated decisions.

Both telephones and aeroplanes, like many technological advancements, have been used for military applications. However, human operators have ultimately always made the decisions. AI could be used to automatically launch a nuclear missile if left unchecked.

Recent AI stories have some people unnerved. A self-driving car from Uber malfunctioned and killed a pedestrian. At Google I/O, the company’s AI called a hair salon and the receptionist had no idea they were not speaking to a human.

Public unease with AI developments is more likely to stifle innovation than balanced regulation is.

What are your thoughts on the White House’s approach to AI regulation?

 

#MWC18: Taking responsibility for AI
AI News, Tue, 27 Feb 2018 – https://news.deepgeniusai.com/2018/02/27/mwc-18-ai-responsibility/
A session here at MWC 2018 titled ‘AI Everywhere: Ethics and Responsibility’ explored some of the questions we should be asking ourselves as the ethical minefield of AI development progresses.

Dr Paula Boddington, a researcher and philosopher from Oxford University, wrote the book ‘Towards a Code of Ethics for Artificial Intelligence’ and led today’s proceedings. She claims to embrace technological progress but wants to ensure all potential impacts of developments have been considered.

“In many ways, AI is getting us to ask questions about the very limits – and grounds – of our human values,” says Boddington. “One of the most exciting things right now is that all over the world people are having deep and practical conversations about ethics.”

Naturally, we’ve covered ethics on many occasions here on AI News. You will have heard the warnings from some of the world’s most talented minds, such as Stephen Hawking and Elon Musk, but although they are among the most prominent figures, they’re far from alone in their concerns.

Just earlier this month, we covered a report from some of Boddington’s colleagues at Oxford University warning that AI poses a ‘clear and present danger’. In the report, the researchers join previous calls across the industry for sensible regulation — including for a robot ethics charter, and for taking a ‘global stand’ against AI militarisation.

Part of today’s difficulty is defining what even constitutes artificial intelligence, argues Boddington.

“It’s difficult to find an exact definition of AI that everyone will agree on,” she argues. “In very broad terms, we could think of it as a technology which aims to extend human agency, decision, and thought. In some cases, replacing certain tasks and jobs.”

Opinion is split on the impact of AI on jobs – some believe it will kill off jobs and that a universal basic income will become necessary, while others believe it will only enhance the capabilities of workers. There’s also the opinion that AI will increase the wealth inequality between the rich and poor.

“You may argue that technology, in general, enhances human capabilities and therefore raises the question of responsibilities,” says Boddington. “But AI has potentially unprecedented power in how it extends human responsibility and decision-making.”

Boddington highlights the potential for AI if used ethically for things such as diagnosing medical conditions and quickly interpreting large amounts of data. As a philosopher, she ponders whether it extends our reach beyond what humans can handle.

‘Responsibility is one of the things which makes us human’

Responsibility is the word of the day, and Boddington has concerns about AI diminishing it. She brings the audience’s focus to one of the most famous studies of obedience in psychology – carried out by Stanley Milgram, a psychologist at Yale University.

Milgram’s study, for those unaware, involved an authority figure instructing one set of test subjects to administer electric shocks – of increasing severity – to others whenever they answered questions wrong. (The shocks were not real; the ‘victims’ were actors.)

The levels were labelled as they became more deadly. While some subjects began to question the orders at the upper levels, they ultimately obeyed – it’s theorised – as a result of the authority of their lab surroundings. Yet when subjects were asked to go straight to deadly levels of shock, they refused.

The study concluded that, when responsibility is eroded bit-by-bit, people can be susceptible to committing acts they would otherwise consider inhuman. Milgram launched his study after WWII, out of interest in how easily ordinary people could be influenced into committing atrocities.

AI is already being used for marketing and therefore is being designed to manipulate people. Boddington is concerned that humans may end up making or authorising poor decisions through AI due to diminished responsibility.

“We could allow it to replace human thought and decision where we shouldn’t,” warns Boddington. “Responsibility is one of the things which makes us human.”

Beyond making us human, responsibility also appears to support health. In a study of Whitehall staff, where strict hierarchies exist, those who held responsibility and had the power to make changes had better health than those who did not. Having these responsibilities eroded may lead to poorer wellbeing.

Answering these questions, and ensuring the ethical implementation of AI, will require global cooperation and collaboration across all parts of society. The failure to do so may have serious consequences.

What are your thoughts about ethics in AI development?

 

Consumers believe AI should be held to a ‘Blade Runner’ law
AI News, Fri, 06 Oct 2017 – https://news.deepgeniusai.com/2017/10/06/consumers-ai-law/
A study conducted by SYZYGY titled ‘Sex, lies and AI: How the British public feels about artificial intelligence’ has revealed the extent to which consumers expect AI to be regulated.

Blade Runner 2049 is now in cinemas with its futuristic vision which, as you’d expect, features artificial intelligence. The original Blade Runner film, released in 1982, envisioned what felt like a distant future, but the new film has elements which now don’t seem that far away.

Like many similar films — including the likes of I, Robot and Automata — the AIs in Blade Runner are expected to conform with Isaac Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The robots must also not conceal their identity and it’s this rule which consumers in SYZYGY’s study want AIs to adhere to.

Just over nine in 10 (92%) of the respondents also believe AIs being used for marketing should be regulated with a code of conduct. Three-quarters (75%) want brands to get their explicit consent before AI is used to market to them.

While it’s clear that consumers feel strongly about AIs being used for market engagement, they’re more lenient towards advertising. Just 17 percent would take a negative view of their favourite brand if they found an ad was created by an AI, and 79 percent claim they would not object to AI being used to profile them for advertising.

Meanwhile, 28 percent of respondents would feel negative if they found a brand was using AI rather than a human for customer service. Women in the study were more likely to hold a negative perception – the figure rises to a third (33%) when men are removed from the results.

[Chart: the biggest fears the respondents have about AI]

AI and ethics

SYZYGY is launching a voluntary set of AI Marketing Ethics guidelines and calling on brands and marketing agencies to contribute. They propose the following core guidelines:

  • Do no harm – AI technology should not be used to deceive, manipulate or in any other way harm the wellbeing of marketing audiences
  • Build trust – AI should be used to build rather than erode trust in marketing. This means using AI to improve marketing transparency, honesty, and fairness, and to eliminate false, manipulative or deceptive content
  • Do not conceal – AI systems should not conceal their identity or pose as humans in interactions with marketing audiences
  • Be helpful – AI in marketing should be put to the service of marketing audiences by helping people make better purchase decisions based on their genuine needs through the provision of clear, truthful and unbiased information

So far, the guidelines appear to offer a sensible place to start. Over time, new conundrums will present themselves and rules will need to be enshrined in law.

An empathy test on SYZYGY’s website asks the user various questions and poses some interesting scenarios. One, in particular, goes into the complex decisions which AIs powering self-driving cars may have to make…

“It is 2049. You are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can’t get traction. Your car does some calculations: If it continues braking, it will almost certainly kill five children. Should it save them by steering you off the cliff to your certain death?”

54 percent of respondents said a self-driving car should be programmed to sacrifice its passengers to minimise overall harm. However, 71 percent said they would not be willing to travel in such a vehicle.
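The “minimise overall harm” policy that 54 percent of respondents favoured can be sketched in a few lines. This is a purely illustrative toy, not how any real autonomous-vehicle system works: the function name, action labels, and casualty estimates are all hypothetical, and real systems have no such certainty about outcomes.

```python
# Illustrative sketch of a utilitarian "minimise overall harm" policy.
# All names and numbers here are hypothetical assumptions for the
# scenario described above, not drawn from any real vehicle software.

def choose_action(outcomes):
    """Pick the action with the lowest estimated number of casualties.

    `outcomes` maps an action name to the estimated casualties
    if that action is taken.
    """
    return min(outcomes, key=outcomes.get)

# The Pacific Coast Highway dilemma: braking kills the five children,
# swerving off the cliff kills the one passenger.
dilemma = {"continue_braking": 5, "swerve_off_cliff": 1}
print(choose_action(dilemma))  # → swerve_off_cliff
```

The discomfort the survey captures is visible even in this toy: a purely utilitarian rule always steers off the cliff, which is precisely why 71 percent of respondents would refuse to ride in a car programmed this way.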

The ethical use of AI is bound to be a big topic over the coming years. We already know companies such as Google’s DeepMind are beginning to launch their own dedicated ethics boards. As always, we’ll be here to keep you on top of the conversation.

The report was based on a survey of 2,000 UK adults from the WPP Lightspeed Consumer Panel. You can find the full report here.

Do you agree with the respondents about the use of AI? Share your thoughts in the comments.

 
