Jack Dorsey tells Andrew Yang that AI is ‘coming for programming jobs’
https://news.deepgeniusai.com/2020/05/26/jack-dorsey-andrew-yang-ai-programming-jobs/
Tue, 26 May 2020 – AI News

Twitter CEO Jack Dorsey recently told former 2020 US presidential candidate Andrew Yang that AI “is coming for programming jobs”.

There is still fierce debate about the impact that artificial intelligence will have on jobs. Some believe that AI will replace many jobs and lead to the requirement of a Universal Basic Income (UBI), while others claim it will primarily offer assistance to help workers be more productive.

Dorsey is a respected technologist with a deep understanding of emerging technologies. Aside from creating Twitter, he also founded Square, which is helping to drive mass adoption of blockchain-based digital currencies such as Bitcoin.

Yang was seen as the presidential candidate for technologists before he suspended his campaign in February, with The New York Times calling him “The Internet’s Favorite Candidate” and his campaign noted for its “tech-friendly” nature. The entrepreneur, lawyer, and philanthropist founded Venture for America, a non-profit which aimed to create jobs in cities hit hardest by the Great Recession. In March, Yang announced the creation of Humanity Forward, a non-profit dedicated to promoting the ideas from his presidential campaign.

Jobs are once again very much under threat, with the coronavirus wiping out all job gains since the Great Recession in a period of just four weeks. If emerging technologies such as AI do pose a risk to jobs, they could compound the problem further.

In an episode of the Yang Speaks podcast, Dorsey warns that AI will pose a particular threat to entry-level programming jobs. However, he suggests even seasoned programmers will see their worth devalued.

“A lot of the goals of machine learning and deep learning is to write the software itself over time so a lot of entry-level programming jobs will just not be as relevant anymore,” Dorsey told Yang.

Yang is a proponent of a UBI. Dorsey said that such free cash payments could provide a “floor” for people who lose their jobs to automation. The payments wouldn’t stretch to luxuries and holidays, but would ensure that people can keep a roof over their heads and food on the table.

UBI would provide workers with “peace of mind” that they can “feed their children while they are learning how to transition into this new world,” Dorsey explains.

Critics of UBI argue that such a permanent scheme would be expensive.

The UK is currently discovering this to some extent with its coronavirus furlough scheme. Under the scheme, the state pays 80 percent of a worker’s salary to prevent job losses during the crisis. However, it is costing approximately £14 billion per month and is expected to be wound down in the coming months as unsustainable.

However, some kind of UBI system appears increasingly necessary.

In November, the Brookings Institution published a report (PDF) highlighting the risk AI poses to jobs.

“Workers with graduate or professional degrees will be almost four times as exposed to AI as workers with just a high school degree. Holders of bachelor’s degrees will be the most exposed by education level, more than five times as exposed to AI than workers with just a high school degree,” the paper says.

In its analysis, the Brookings Institution ranked professions by their exposure to AI. Computer programmers ranked third, just behind market research analysts and sales managers, backing Dorsey’s prediction.

(Image Credit: Jack Dorsey by Thierry Ehrmann under CC BY 2.0 license)

AI Expo Global: Fairness and safety in artificial intelligence
https://news.deepgeniusai.com/2019/05/01/ai-expo-fairness-safety-artificial-intelligence/
Wed, 01 May 2019 – AI News

AI News sat down with Faculty’s head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year’s AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightfully, people are becoming increasingly concerned about unfair and unsafe AI. Human biases are seeping into algorithms, posing a very real danger that prejudice and oppression could become automated by accident.

AI News reported last week on research from New York University which found that inequality in STEM careers is causing algorithms to work better for some parts of society than for others.

Similar findings, by Joy Buolamwini and her team from the Algorithmic Justice League, highlighted a disparity in the effectiveness of the world’s leading facial recognition systems between genders and skin tones.

In an ideal world, all parts of society would be equally represented tomorrow. In reality, that problem will take far longer to rectify – yet AI technologies are already seeing increasing use across society today.

AI News asked Feige for his perspective and how the impact of that problem can be reduced much sooner.

“I think the most important thing for organisations to do is to spend more time thinking about bias and on ensuring that every model they build is unbiased because a demographically disparate team can build non-disparate tech.”

Some companies are seeking to build AIs which can scan for bias in other algorithms. We asked Feige for his view on whether he believes this is an ideal solution.

“Definitely, I showed one in my talk. We have tests for: You give me a black box algorithm, I have no idea what your algorithm does – but I can give an input, calculate the output, and I can just tell you how biased it is according to various definitions of bias.”

“We can go even further and say: Let’s modify your algorithm and give it back so it’s unbiased according to one of those definitions.”
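The black-box test Feige describes can be sketched in a few lines. The example below is illustrative, not Faculty’s actual tooling: it measures one common definition of bias – the demographic parity gap, i.e. the difference in positive-outcome rates between two groups – using only a model’s inputs and outputs. The toy predictor, feature names, and groups are all hypothetical.

```python
# Minimal sketch of black-box bias measurement: we only need the
# model's inputs and outputs, never its internals.

def demographic_parity_gap(predict, examples, group_of):
    """Absolute difference in positive-prediction rates between groups.

    predict:  black-box callable, example -> 0/1 decision
    examples: list of inputs
    group_of: callable, example -> group label ("a" or "b")
    """
    rates = {}
    for g in ("a", "b"):
        members = [x for x in examples if group_of(x) == g]
        rates[g] = sum(predict(x) for x in members) / len(members)
    return abs(rates["a"] - rates["b"])

# Hypothetical black-box model that (unfairly) favours group "a".
examples = [{"group": "a", "score": s} for s in (0.9, 0.8, 0.2)] + \
           [{"group": "b", "score": s} for s in (0.6, 0.4, 0.3)]
predict = lambda x: 1 if x["score"] > 0.5 else 0

gap = demographic_parity_gap(predict, examples, lambda x: x["group"])
# Group "a" is approved 2/3 of the time, group "b" only 1/3.
```

Because the test treats the model as a black box, the same check can be applied to any scoring system – which is exactly what makes the auditing approach Feige describes practical.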

In the Western world, we consider ourselves fairly liberal and protective of individual freedoms. China, potentially the world’s leader in AI, has a questionable human rights record and is known for invasive surveillance and mass data collection. Meanwhile, Russia has a reputation for military aggression which some are concerned will drive its AI developments. Much of the Middle East, while not considered a leader in AI, lags behind most of the world in areas such as women’s and gay rights.

We asked Feige for his thoughts on whether these regional attitudes could find their way into AI developments.

“It’s an interesting question. It’s not that some regions will take the issue more or less seriously, they just have different … we’ll say preferences. I suspect China takes surveillance and facial recognition seriously – more seriously than the UK – but they do so in order to leverage it for mass surveillance, for population control.”

“The UK is trying to walk a fine line in efficiently using that very useful technology but not undermine personal privacy and freedom of individuals.”

During his talk, Feige made the point that he’s less concerned about AI biases due to the fact that – unlike humans – algorithms can be controlled.

“This is a real source of optimism for me, just because human decision-making is incredibly biased and everyone knows that.”

Feige asked the audience to raise a hand if they were concerned about AI bias which prompted around half to do so. The same question was asked regarding human bias and most of the room had their hand up.

“You can be precise with machine learning algorithms. You can say: ‘This is the objective I’m trying to achieve, I’m trying to maximise the probability of a candidate being successful at their job according to historical people in their role’. Or, you can be precise about the data the model is trained on and say: ‘I’m going to ignore data from before this time period because things were ‘different’ back then’”.

“Humans have fixed past experiences they can’t control. I can’t change the fact my mum did most of the cooking when I was growing up and I don’t know how it affects my decision-making.”

“I also can’t force myself to hire based on success in their jobs, which I try to do. It’s hard to know if really I just had a good conversation about the football with the candidate.”
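One of the points in the quotes above – being precise about which data a model is allowed to learn from – can be made concrete in a few lines. The records and cutoff date below are hypothetical; the point is simply that the exclusion is explicit and auditable, unlike a human’s fixed past experiences.

```python
from datetime import date

# Hypothetical hiring records: (date collected, features, outcome).
records = [
    (date(1995, 3, 1), {"years_experience": 2}, 0),
    (date(2012, 6, 1), {"years_experience": 5}, 1),
    (date(2018, 9, 1), {"years_experience": 3}, 1),
]

# Be precise about the training data: ignore records from before a
# chosen cutoff, because things were 'different' back then.
CUTOFF = date(2010, 1, 1)
training_set = [(d, x, y) for d, x, y in records if d >= CUTOFF]
# The 1995 record is excluded; the remaining records feed the model.
```

The cutoff is a stated, reviewable decision – exactly the kind of control over an algorithm’s inputs that, as Feige notes, no one has over their own upbringing.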

Faculty, of which Feige has the role of head of research, is a European company based in London. With the EU Commission recently publishing its guidelines on AI development, we took the opportunity to get his views on them.

“At a high-level, I think they’re great. They align quite a bit with how we think about these things. My biggest wish, whenever a body like that puts together some principles, is that there’s a big gap between that level of guidelines and what is useful for practitioners. Making those more precise is really important and those weren’t precise enough by my standards.”

“But not to just advocate putting the responsibility on policymakers. There’s also an onus on practitioners to try and articulate what bias looks like statistically and how that may apply to different problems, and then say: ‘Ok policy body, which of these is most relevant and can you now make those statements in this language’ and basically bridge the gap.”

Google recently created, then axed, a dedicated ‘ethics board’ for its AI developments. Such boards seem a good idea, but fairly representing society can be a minefield. Google faced criticism for including a conservative figure with strong anti-LGBTQ and anti-immigrant views on the board.

Feige provided his take on whether companies should have an independent AI oversight board to ensure their developments are safe and ethical.

“To some degree, definitely. I suspect there are some cases you want that oversight board to be very external and like a regulator with a lot of overhead and a lot of teeth.”

“At Faculty, each one of our product teams has a shadow team – which has practically the same skill set – who monitor and oversee the work done by the project team to ensure it follows our internal set of values and guidelines.”

“I think the fundamental question here is how to do this in a productive way and ensure AI safety but that it doesn’t grind innovation to a halt. You can imagine where the UK has a really strong oversight stance and then some other country with much less regulatory oversight has companies which become large multinationals and operate in the UK anyway.”

Getting the balance right around regulation is difficult. Our sister publication IoT News interviewed a digital lawyer who raised the concern that Europe’s strict GDPR regulations will cause AI companies in the continent to fall behind their counterparts in Asia and America which have access to far more data.

Feige believes there is the danger of this happening, but European countries like the UK – whether it ultimately remains part of the EU and subject to regulations like GDPR or not – can use it as an opportunity to lead in AI safety.

Feige provided three reasons why the UK could achieve this:

  1. The UK has significant AI talent and renowned universities.
  2. It has a fairly unobjectionable record and a respected government (in comparison, Feige clarified, to how some countries view the US and China).
  3. The UK has a fairly robust existing regulatory infrastructure – especially in areas such as financial services.

Among the biggest concerns about AI continues to be around its impact on the workforce, particularly whether it will replace low-skilled workers. We wanted to know whether using legislation to protect human workers is a good idea.

“You could ask the question a hundred years ago: ‘Should automation come into agriculture because 90 percent of the population works in it?’ and now it’s almost all automated. I suspect individuals may be hurt by automation but their children will be better off by it.”

“I think any heavy-handed regulation will have unintended consequences and should be thought about well.”

Our discussion with Feige was insightful and provided optimism that AI can be developed safely and fairly, as long as there’s a will to do so.

You can watch our full interview with Feige from AI Expo Global 2019 below:

Attend the AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo.
