Society – AI News

Facebook is developing a news-summarising AI called TL;DR (16 December 2020)

Facebook is developing an AI called TL;DR which summarises news into shorter snippets.

Anyone who’s spent much time on the web will know what TL;DR stands for, but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”.

It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now even specialise in short, at-a-glance news.

The problem is, it’s hard to get the full picture of a story in just a brief snippet.

In a world where fake news can be posted and spread like wildfire across social networks – almost completely unchecked – it feels even more dangerous to normalise “news” being delivered in short-form without full context.

There are two sides to most stories, and it’s hard to see how both can be summarised properly.

However, the argument also goes the other way. When articles are too long, people have a natural habit of skim-reading them. Skimming in this way often means people then believe they’re fully informed on a topic… when we know that’s often not the case.

TL;DR needs to strike a healthy balance: summarising the news, but not so much that people don’t get enough of the story. Otherwise, it could worsen existing societal problems with misinformation, fake news, and lack of trust in the media.
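Where exactly that balance sits is ultimately a tuning decision. Facebook hasn’t published how TL;DR works, so purely as an illustration, the sketch below uses the open-source Hugging Face transformers library, where the minimum and maximum summary length are explicit parameters:

```python
# A minimal sketch of abstractive news summarisation, assuming the open-source
# Hugging Face "transformers" library. This is not Facebook's TL;DR system,
# only an illustration of how summary length can be constrained.
from transformers import pipeline

# Load a general-purpose summarisation model (weights download on first run).
summariser = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Facebook is developing an AI called TL;DR which summarises news into "
    "shorter snippets. Critics worry that very short summaries strip out the "
    "context readers need to judge a story for themselves, while supporters "
    "argue that busy readers simply will not finish long articles."
)

# min_length / max_length (in tokens) are the knobs deciding how aggressive the
# summary is: too low and context is lost, too high and nobody reads it.
summary = summariser(article, min_length=20, max_length=60)
print(summary[0]["summary_text"])
```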

According to BuzzFeed, Facebook showed off TL;DR during an internal meeting this week. 

Facebook appears to be planning to add an AI-powered assistant to TL;DR which can answer questions about the article. The assistant could help to clear up anything the reader is uncertain about, but it will also have to prove it is free of the biases that arguably affect all current algorithms to some extent.
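Facebook hasn’t detailed how the assistant will work either. Purely as a hypothetical sketch, an extractive question-answering model (here via the open-source Hugging Face transformers library) answers by quoting spans from the article itself:

```python
# A hedged sketch of article question-answering using an off-the-shelf
# extractive QA model from the Hugging Face "transformers" library.
# This is an assumption for illustration, not Facebook's own system.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = ("Facebook showed off TL;DR, a news-summarising AI, during an "
           "internal meeting this week, according to BuzzFeed.")

result = qa(question="When was TL;DR shown off?", context=context)

# Extractive QA can only quote spans from the article itself, which limits,
# but does not eliminate, the risk of answers being taken out of context.
print(result["answer"], round(result["score"], 3))
```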

The AI will also have to be very careful not to take things like quotes out of context and end up further automating the spread of misinformation.

There’s also going to be a debate over what sources Facebook should use. Should Facebook stick only to the “mainstream media” which many believe follow the agendas of certain powerful moguls? Or serve news from smaller outlets without much historic credibility? The answer probably lies somewhere in the middle, but it’s going to be difficult to get right.

Facebook continues to be a major source of misinformation – in large part driven by algorithms promoting such content – and it’s had little success so far in any news-related efforts. I think most people will be expecting this to be another disaster waiting to happen.

(Image Credit: Mark Zuckerberg by Alessio Jacona under CC BY-SA 2.0 license)

Google fires ethical AI researcher Timnit Gebru after critical email (4 December 2020)

A leading figure in ethical AI development has been fired by Google after criticising the company.

Timnit Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models.

Gebru claims she was fired by Google over an unpublished paper and sending an email critical of the company’s practices.

The paper questions whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities. Some recent cases validate her claims about large models and datasets in general.

For example, MIT was forced to remove a large dataset earlier this year called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that the 80 million images contained in the dataset – with sizes of just 32×32 pixels – meant manual inspection would be almost impossible and couldn’t guarantee all offensive images would be removed.

Gebru reportedly sent an email to the Google Brain Women and Allies listserv which Google deemed “inconsistent with the expectations of a Google manager.”

In the email, Gebru expressed her frustration with a perceived lack of progress at Google in hiring women. Gebru claimed she had also been told not to publish a piece of research, and she advised employees to stop filling out diversity paperwork because it didn’t matter.

On top of the questionable reasons for her firing, Gebru says her former colleagues were sent an email claiming she had offered her resignation, which she says was not the case.

Platformer obtained an email from Jeff Dean, Head of Google Research, which was sent to employees and offers his take on Gebru’s claims:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues.”

Dean goes on to claim Gebru made demands which included revealing the identities of the individuals he and Google Research VP of Engineering Megan Kacholia consulted with as part of the paper’s review. If the demands weren’t met, Gebru reportedly said she would leave the company.

It’s a case of one person’s word against another’s, but – for a company already under scrutiny from both the public and regulators over questionable practices – being seen to fire an ethics researcher for calling out problems is not good PR.

(Image Credit: Timnit Gebru by Kimberly White/Getty Images for TechCrunch under CC BY 2.0 license)

Synthesized’s free tool aims to detect and remove algorithmic biases (12 November 2020)

Synthesized has launched a free tool which aims to quickly identify and remove dangerous biases in algorithms.

As humans, we all have biases. These biases, often unconsciously, end up in algorithms which are designed to be used across society.

In practice, this could mean anything from a news app serving more left-wing or right-wing content—through to facial recognition systems which flag some races and genders more than others.

A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Dr Nicolai Baldin, CEO and Founder of Synthesized, said:

“The reputational risk of all organisations is under threat due to biased data and we’ve seen this will no longer be tolerated at any level. It’s a burning priority now and must be dealt with as a matter of urgency, both from a legal and ethical standpoint.”

Last year, Algorithmic Justice League founder Joy Buolamwini gave a presentation during the World Economic Forum on the need to fight AI bias. Buolamwini highlighted the massive disparities in effectiveness when popular facial recognition algorithms were applied to various parts of society.

Synthesized claims its platform is able to automatically identify bias across data attributes like gender, age, race, religion, sexual orientation, and more. 

The platform was designed to be simple to use, with no coding knowledge required. Users only have to upload a structured data file – as simple as a spreadsheet – to begin analysing it for potential biases. A ‘Total Fairness Score’ is then provided to show what percentage of the provided dataset contained biases.
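Synthesized hasn’t published exactly how that score is computed. As a rough, hypothetical illustration of the underlying idea (comparing outcome rates across groups in a protected column), a naive check over a spreadsheet-style table might look like this; the column names are made up:

```python
# A naive bias scan over a structured dataset, assuming pandas. Column names
# ("gender", "approved") are hypothetical and this is not Synthesized's
# algorithm, just an illustration of comparing outcome rates across groups.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "M"],
    "approved": [0,    1,   1,   1,   1,   0,   1,   0],
})

# Outcome rate per group; a large gap suggests the data, or the process that
# produced it, treats groups differently.
rates = df.groupby("gender")["approved"].mean()
disparity = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {disparity:.2f}")  # 0 would mean equal rates
```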

“Synthesized’s Community Edition for Bias Mitigation is one of the first offerings specifically created to understand, investigate, and root out bias in data,” explains Baldin. “We designed the platform to be very accessible, easy-to-use, and highly scalable, as organisations have data stored across a huge range of databases and data silos.”

Some examples of how Synthesized’s tool could be used across industries include:

  • In finance, to create fairer credit ratings
  • In insurance, for more equitable claims
  • In HR, to eliminate biases in hiring processes
  • In universities, for ensuring fairness in admission decisions

Synthesized’s platform uses a proprietary algorithm which is said to be quicker and more accurate than existing techniques for removing biases in datasets. A new synthetic dataset is created which, in theory, should be free of biases.

“With the generation of synthetic data, Synthesized’s platform gives its users the ability to equally distribute all attributes within a dataset to remove bias and rebalance the dataset completely,” the company says.

“Users can also manually change singular data attributes within a dataset, such as gender, providing granular control of the rebalancing process.”

If only MIT had used such a tool on the dataset it was forced to remove in July after it was found to contain racist and misogynistic labels.

You can find out more about Synthesized’s tool and how to get started here.

(Photo by Agence Olloweb on Unsplash)

Information Commissioner clears Cambridge Analytica of influencing Brexit (8 October 2020)

A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken until now to be confirmed.

“From my review of the materials recovered by the investigation I have found no further evidence to change my earlier view that CA [Cambridge Analytica] was not involved in the EU referendum campaign in the UK,” wrote Information Commissioner Elizabeth Denham.

Cambridge Analytica did obtain a ton of user data—but through predominantly commercial means, and mostly of US voters. Such data is available to, and has also been purchased by, other electoral campaigns for targeted advertising purposes (the Remain campaigns in the UK actually outspent their Leave counterparts by £6 million).

“CA were purchasing significant volumes of commercially available personal data (at one estimate over 130 billion data points), in the main about millions of US voters, to combine it with the Facebook derived insight information they had obtained from an academic at Cambridge University, Dr Aleksandr Kogan, and elsewhere,” wrote Denham.

The only real scandal was Facebook’s poor protection of users which allowed third-party apps to scrape their data—for which it was fined £500,000 by the UK’s data protection watchdog.

It seems the claims Cambridge Analytica used powerful AI tools were also rather overblown, with the information commissioner saying all they found were models “built from ‘off the shelf’ analytical tools”.

The information commissioner even found evidence that Cambridge Analytica’s own staff “were concerned about some of the public statements the leadership of the company were making about their impact and influence.”

Cambridge Analytica appears to have been a victim both of those unable to accept democratic results and of its own boasting about capabilities that weren’t actually that impressive.

You can read the full report here (PDF).

(Photo by Christian Lue on Unsplash)

Google returns to using human YouTube moderators after AI errors (21 September 2020)

Google is returning to using humans for YouTube moderation after repeated errors with its AI system.

Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They’re the unsung heroes.

AI has been hailed as helping to deal with some of the aforementioned issues. Either by automating the moderation process entirely or by offering a helping hand to humans.

Google was left with little choice but to give more power to its AI moderators as the COVID-19 pandemic took hold… but it hasn’t been smooth sailing.

In late August, YouTube said that it had removed 11.4 million videos over the three months prior – the most since the site launched in 2005.

That figure alone should raise a few eyebrows. If a team of humans were removing that many videos, they would deserve quite the pay rise.

Of course, most of the video removals weren’t done by humans. Many of the videos didn’t even violate the guidelines.

Neal Mohan, chief product officer at YouTube, told the Financial Times:

“One of the decisions we made [at the beginning of the COVID-19 pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”
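In classifier terms, erring on the side of protection typically means lowering the removal threshold: more genuinely violating videos get caught, but more harmless ones are taken down too. A toy sketch of that trade-off, using made-up scores rather than anything from YouTube’s actual system:

```python
# Toy illustration of how lowering a moderation classifier's threshold trades
# precision for recall. Scores and labels are invented for the example.
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]  # model's "violation" score
labels = [1,    1,    0,    1,    0,    0,    1,    0]      # 1 = actually violates policy

def takedown_stats(threshold):
    removed = [(s, y) for s, y in zip(scores, labels) if s >= threshold]
    true_positives = sum(y for _, y in removed)
    precision = true_positives / len(removed) if removed else 0.0
    recall = true_positives / sum(labels)
    return precision, recall

for t in (0.7, 0.3):
    p, r = takedown_stats(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")

# Lower threshold -> more violating videos caught (recall up), but more
# innocent videos removed (precision down), hence more appeals.
```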

Some of the removals left content creators bewildered, angry, and out of pocket in some cases.

Around 320,000 of the videos taken down were appealed, and half of the appealed videos were reinstated.

Deciding what content to ultimately remove feels like one of the many tasks that need human involvement. Humans are much better at detecting nuances and things like sarcasm.

However, the sheer scale of content needing to be moderated also requires an AI to help automate some of that process.

“Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” Mohan said. “That’s the power of machines.”

AIs can also help to protect humans from the worst of the content. Content detection systems are being built to automatically blur material such as child abuse imagery just enough that human moderators know what needs to be removed, while limiting the psychological impact on them.

Some believe AIs are better at determining what content should be removed because they simply apply logic rather than a human’s natural biases, such as political leaning, but we know human biases seep into algorithms anyway.

In May, YouTube admitted to deleting messages critical of the Chinese Communist Party (CCP). YouTube later blamed an “error with our enforcement systems” for the mistakes. Senator Josh Hawley even wrote (PDF) to Google CEO Sundar Pichai seeking answers to “troubling reports that your company has resumed its long pattern of censorship at the behest of the Chinese Communist Party.”

Google appears to have quickly realised that replacing humans entirely with AI is rarely a good idea. The company says many of the human moderators who were “put offline” during the pandemic are now coming back.

(Photo by Rachit Tank on Unsplash)

Microsoft: The UK must increase its AI skills, or risk falling behind (12 August 2020)

A report from Microsoft warns that the UK faces an AI skills gap which may harm its global competitiveness.

The research, titled AI Skills in the UK, shines a spotlight on some concerning issues.

For its UK report, Microsoft used data from a global AI skills study featuring more than 12,000 people in 20 countries to see how the UK is doing in comparison to the rest of the world.

Most notably, compared to the rest of the world, the UK is seeing a higher failure rate for AI projects: 29 percent of AI ventures launched by UK businesses have generated no commercial value, compared with a 19 percent average elsewhere in the world.

Some 35 percent of British business leaders foresee an AI skills gap within two years, while 28 percent believe there already is one (above the global average of 24 percent).

However, it seems UK businesses aren’t helping to prepare employees with the skills they need. Just 17 percent of British employees have been part of AI reskilling efforts (compared to the global figure of 38 percent).

Agata Nowakowska, AVP EMEA at Skillsoft, said:

“UK employers will have to address the growing digital skills gap within the workforce to ensure their business is able to fully leverage every digital transformation investment that’s made. With technologies like AI and cloud becoming as commonplace as word processing or email in the workplace, firms will need to ensure employees can use such tools and aren’t apprehensive about using them.

Organisations will need to think holistically about managing reskilling, upskilling and job transitioning. As the war for talent intensifies, employee development and talent pooling will become increasingly vital to building a modern workforce that’s adaptable and flexible. Addressing and easing workplace role transitions will require new training models and approaches that include on-the-job training and opportunities that support and signpost workers to opportunities to upgrade their skills.” 

Currently, a mere 32 percent of British employees feel their workplace is doing enough to prepare them for an AI-enabled future (compared to the global average of 42 percent).

“The most successful organisations will be the ones that transform both technically and culturally, equipping their people with the skills and knowledge to become the best competitive asset they have,” comments Simon Lambert, Chief Learning Officer for Microsoft UK.

“Human ingenuity is what will make the difference – AI technology alone will not be enough.”

AI brain drain

It’s well-documented that the UK suffers from a “brain drain” problem. The country’s renowned universities – like Oxford and Cambridge – produce globally desirable AI talent, but graduates are often snapped up by Silicon Valley giants willing to pay much higher salaries than many British firms.

In one example, a senior professor from Imperial College London couldn’t understand why one of her students was not turning up to any classes. Most people wouldn’t pay £9,250 per year in tuition fees and not turn up. The professor called her student to find out why he’d completed three years but wasn’t attending his final year. She found that he had been offered a six-figure salary at Apple.

This problem also applies to teachers, who are needed to pass their knowledge on to future generations. Many are lured away from academia to work on groundbreaking projects with almost endless resources, fewer administrative duties, and considerably higher pay.

Some companies, Microsoft included, have taken measures to address the brain drain problem. After all, a lack of AI talent harms the entire industry.

Dr Chris Bishop, Director of Microsoft’s Research Lab in Cambridge, said:

“One thing we’ve seen over the past few years is: because there are so many opportunities for people with skills in machine learning, particularly in industry, we’ve seen a lot of outflux of top academic talent to industry.

This concerns us because it’s those top academic professors and researchers who are responsible not just for doing research, but also for nurturing the next generation of talent in this field.”

Since 2018, Microsoft has funded the Microsoft Research-Cambridge University Machine Learning Initiative, a program for training the next generation of data scientists and machine-learning engineers.

Microsoft partners with universities to ensure it doesn’t steal talent, allows employees to continue roles in teaching, funds some related PhD scholarships, sends researchers to co-supervise students in universities, and offers paid internships to work alongside teams at Microsoft on projects.

You can find the full AI Skills in the UK report here.

(Photo by William Warby on Unsplash)

University College London: Deepfakes are the ‘most serious’ AI crime threat (6 August 2020)

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 ways AI is expected to be used by criminals within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today must at least be created by humans, such as those working in the likes of Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often have patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The deepfake of Pelosi was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences a fake video of the president announcing an imminent strike on somewhere like North Korea could have.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities onto adult performers. While fake, the threat to release such videos – and the embarrassment they would cause – could lead some to pay a ransom to keep them from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services falsely marketed as AI, such as security screening and targeted advertising solutions. The researchers believe leading people to believe such products are AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots” which could get in through the access points of a property to unlock it or search for data. The researchers believe these pose less of a threat because they can be easily prevented through methods such as letterbox cages.

Similarly, the researchers note that AI-based stalking is damaging for individuals, but it isn’t considered a major threat as it could not operate at scale.

You can find the researchers’ full paper in the Crime Science Journal here.

(Photo by Bill Oxford on Unsplash)

Musk predicts AI will be superior to humans within five years (28 July 2020)

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been among the most vocal of prominent figures warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, Musk adds “that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

If correct, the latest prediction from Musk would mean the so-called technological singularity – the point at which machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated that the singularity will occur around 2045.

As the founder of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but has called for it to be regulated.

Musk also founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements with the company’s direction, Musk left OpenAI in 2018.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Last year, OpenAI decided not to release a text generator which it believed to have dangerous implications in a world already struggling with fake news and disinformation campaigns.

Two graduates later recreated and released a similar generator to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

OpenAI has since provided select researchers access to its powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input.
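GPT-3 itself is only available through OpenAI’s gated API, but the openly released GPT-2 weights give a feel for prompted text generation. A minimal sketch, assuming the Hugging Face transformers library:

```python
# A minimal sketch of prompted text generation with the openly released GPT-2
# model via the Hugging Face "transformers" library. GPT-3 is only accessible
# through OpenAI's gated API, so this smaller model stands in for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change journalism by"
outputs = generator(prompt, max_length=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```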

GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion – which shows the rapid pace of AI advancements. However, Musk’s prediction of the singularity happening within five years perhaps needs to be taken with a pinch of salt.

(Image Credit: Elon Musk by JD Lasica under CC BY 2.0 license)

Baidu ends participation in AI alliance as US-China relations deteriorate (19 June 2020)

Baidu will no longer participate in the Partnership on AI (PAI) alliance amid deteriorating relations between the US and China.

PAI is a US-led alliance which aims to foster the ethical development and deployment of AI technologies. Baidu was the only Chinese member.

The loss of Baidu’s expertise and any representation from China is devastating for PAI. Ethical AI development requires global cooperation to set acceptable standards which help to ensure safety while not limiting innovation.

Baidu has officially cited financial pressures for its decision to exit the alliance.

In a statement, Baidu wrote:

“Baidu shares the vision of the Partnership on AI and is committed to promoting the ethical development of AI technologies. 

We are in discussions about renewing our membership, and remain open to other opportunities to collaborate with industry peers on advancing AI.”

Directors from PAI hope to see Baidu renew its membership to the alliance next year.

Cooperation between American and Chinese firms

Such cooperation is getting more difficult as the world’s two largest economies continue to impose sanctions on each other.

The US has criticised China for its handling of the coronavirus outbreak, trade practices, its mass imprisonment and alleged torture of Uyghur Muslims in “re-education” camps, and breaking the semi-autonomy of Hong Kong.

In the tech world, much of the focus has been on Chinese telecoms giant Huawei – which the US accuses of being a national security threat. Canada arrested Huawei CFO Meng Wanzhou in late 2018 on allegations of using the company’s subsidiaries to flout US sanctions against Iran. Two Canadian businessmen who were arrested in China shortly after Meng’s detention, in a suspected retaliation, were charged with spying by Beijing this week.

An increasing number of Chinese companies, including Huawei, have found themselves being added to an ‘Entity List’ in the US which bans American companies from working with them without explicit permission from the government.

The US added six Chinese AI companies to its Entity List last October, citing their role in alleged human rights violations.

Earlier this week, the US Commerce Department made an exception to Huawei’s inclusion on the Entity List which allows US companies to work with the Chinese giant for the purposes of developing 5G standards. Hopefully, we can see the same being done for AI companies.

However, on the whole, cooperation between American and Chinese firms is getting more difficult as a result of the political climate. It wouldn’t be surprising to see more cases of companies like Baidu dropping out of well-intentioned alliances such as PAI if sensible resolutions to differences are not sought.

(Photo by Erwan Hesry on Unsplash)

Leading AI researchers propose ‘toolbox’ for verifying ethics claims (20 April 2020)

Researchers from OpenAI, Google Brain, Intel, and 28 other leading organisations have published a paper which proposes a ‘toolbox’ for verifying AI ethics claims.

With concerns around AI ranging from dangerous indifference to innovation-halting scaremongering, it’s clear there’s a need for a system that achieves a healthy balance.

“AI systems have been developed in ways that are inconsistent with the stated values of those developing them,” the researchers wrote. “This has led to a rise in concern, research, and activism relating to the impacts of AI systems.”

The researchers note that significant work has gone into articulating ethical principles by many players involved with AI development, but the claims are meaningless without some way to verify them.

“People who get on airplanes don’t trust an airline manufacturer because of its PR campaigns about the importance of safety – they trust it because of the accompanying infrastructure of technologies, norms, laws, and institutions for ensuring airline safety.”

Among the core ideas put forward is paying developers to discover bias in algorithms. Such a practice is already widespread in cybersecurity, with many companies offering bounties to find bugs in their software.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the authors wrote.

“We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”

Another potential avenue is so-called “red teaming,” the creation of a dedicated team which adopts the mindset of a possible attacker to find flaws and vulnerabilities in a plan, organisation, or technical system.

“Knowledge that a lab has a red team can potentially improve the trustworthiness of an organization with respect to their safety and security claims.”

A red team alone is unlikely to provide much confidence, but combined with other measures it can go a long way. Verification from parties outside the organisation itself will be key to instilling trust in that company’s AI developments.

“Third party auditing is a form of auditing conducted by an external and independent auditor, rather than the organization being audited, and can help address concerns about the incentives for accuracy in self-reporting.”

“Provided that they have sufficient information about the activities of an AI system, independent auditors with strong reputational and professional incentives for truthfulness can help verify claims about AI development.”

The researchers highlight that a current roadblock with third-party auditing is that no techniques or best practices have yet been established specifically for AI. Frameworks such as Claims-Arguments-Evidence (CAE) and Goal Structuring Notation (GSN) may provide a starting point, as they’re already widely used for safety-critical auditing.

Audit trails, covering all steps of the AI development process, are also recommended to become the norm. The researchers again point to commercial aircraft, as a safety-critical system, and their use of flight data recorders to capture multiple types of data every second and provide a full log.
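No standard format for such AI audit trails exists yet, so the sketch below is purely a hypothetical illustration of the flight-recorder idea: an append-only, timestamped, tamper-evident log of each step in the development process (field names are invented):

```python
# A minimal sketch of an append-only audit trail for an ML pipeline, inspired
# by the flight-recorder analogy. Field names are hypothetical; the paper does
# not prescribe a concrete format.
import hashlib
import json
import time

AUDIT_LOG = "audit_trail.jsonl"

def record_step(step, details):
    """Append one entry per pipeline step (data prep, training, evaluation...)."""
    entry = {"timestamp": time.time(), "step": step, "details": details}
    # Hash the entry so later modification of the logged line is detectable.
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_step("dataset", {"name": "train_v3.csv", "rows": 120_000})
record_step("training", {"model": "resnet50", "epochs": 30, "seed": 42})
record_step("evaluation", {"accuracy": 0.91, "bias_audit_passed": True})
```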

“Standards setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.”

The final suggestion for software-oriented methods of verifying AI ethics claims is the use of privacy-preserving machine learning (PPML).

Privacy-preserving machine learning aims to protect the privacy of data or models used in machine learning, at training or evaluation time, and during deployment.

Three established types of PPML are covered in the paper: Federated learning, differential privacy, and encrypted computation.
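Differential privacy is perhaps the simplest of the three to illustrate: noise calibrated to a query’s sensitivity is added so that no single individual’s record can noticeably change the released answer. A minimal sketch of the standard Laplace mechanism (a textbook construction, not code from the paper):

```python
# A minimal sketch of the Laplace mechanism, a basic building block of
# differential privacy: calibrated noise bounds how much any one individual's
# record can change the released answer.
import numpy as np

def private_count(values, predicate, epsilon):
    """Release a differentially private count of records matching `predicate`."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1  # adding or removing one record changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 37, 45, 52, 61, 29, 34, 70]
# Smaller epsilon means more noise and a stronger privacy guarantee.
print(private_count(ages, lambda a: a > 40, epsilon=0.5))
```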

“Where possible, AI developers should contribute to, use, and otherwise support the work of open-source communities working on PPML, such as OpenMined, Microsoft SEAL, tf-encrypted, tf-federated, and nGraph-HE.”

The researchers, representing some of the most renowned institutions in the world, have come up with a comprehensive package of ways any organisation involved in AI development can provide assurance to governments and the wider public, helping to ensure the industry can reach its full potential responsibly.

You can find the full preprint paper on arXiv here (PDF).

(Photo by Alexander Sinn on Unsplash)
