Social Media – AI News
https://news.deepgeniusai.com

Facebook pledges crackdown on deepfakes ahead of the US presidential election
https://news.deepgeniusai.com/2020/01/08/facebook-crackdown-deepfakes-us-presidential-election/
Wed, 08 Jan 2020 18:04:20 +0000

Facebook has pledged to crack down on misleading deepfakes ahead of the US presidential election later this year.

Voter manipulation is a concern for any functioning democracy and deepfakes provide a whole new challenge for social media platforms.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake) could be used to cause reputational damage and swing votes.

Facebook refused to remove the video of Nancy Pelosi and instead said it would display an article from a third-party fact-checking website highlighting that it’s been edited and take measures to limit its reach. The approach, of course, was heavily criticised.

Facebook’s new rules state that deepfake videos designed to mislead will be banned. The problem is that the rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words” – an exemption that will not sound encouraging to those wanting a firm stance against manipulation.

In the age of “fake news,” many people have learned not to believe everything they read. Likewise, an increasing number of people know how easily images can be manipulated. Deepfake videos pose such a concern because the wider public is not yet aware enough of their existence or how to spot them.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections.

One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”. Another prediction is that Iran and China may join Russia as sources of disinformation, the former perhaps now being even more likely given recent escalations between the US and Iran and the desire for non-military retaliation.

Legislation is being introduced to criminalise the production of deepfakes without disclosing that they’ve been modified, but the best approach is to limit them from being widely shared in the first place.

“A better approach, and one that avoids the danger of overreaching government censorship, would be for the social media platforms to improve their AI-screening technology, enhance human review, and remove deepfakes before they can do much damage,” the report suggests.

The month after Facebook refused to remove the edited video of Pelosi, a deepfake created by Israeli startup Canny AI aimed to raise awareness of the issue by making it appear like Facebook CEO Mark Zuckerberg said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Canny AI’s deepfake was designed to be clearly fake but it shows how easy it’s becoming to manipulate people’s views. In a tense world, it’s not hard to imagine what devastation could be caused simply by releasing a deepfake of a political leader declaring war or planning to launch a missile.


Deepfake has Johnson and Corbyn advocating each other for Britain’s next PM
https://news.deepgeniusai.com/2019/11/12/deepfake-johnson-corbyn-britain-next-pm/
Tue, 12 Nov 2019 14:16:24 +0000

A think tank has released two deepfake videos which appear to show election rivals Boris Johnson and Jeremy Corbyn advocating each other for Britain’s top role.

The clips, produced by Future Advocacy, are intended to show that people can no longer necessarily trust what they see in videos – not just what they read and hear.

(The Johnson and Corbyn deepfake videos were embedded here.)

In the era of fake news, people are becoming increasingly aware not to believe everything they read. Training the general population not to always believe what they can see with their own eyes is a lot more challenging.

At the same time, it’s also important in a democracy that media plurality is maintained and not too much influence is centralised to a handful of “trusted” outlets. Similarly, people cannot be allowed to just call something fake news to avoid scrutiny.

Future Advocacy highlights four key challenges:

  1. Detecting deepfakes – whether society can create the means for detecting a deepfake directly at the point of upload or once it has become widely disseminated.
  2. Liar’s dividend – a phenomenon in which genuine footage of controversial content can be dismissed by the subject as a deepfake, despite it being true.
  3. Regulation – what should the limitations be with regards to the creation of deepfakes and can these be practically enforced?
  4. Damage limitation – managing the impacts of deepfakes when regulation fails and the question of where responsibility should lie for damage limitation.

Areeq Chowdhury, Head of Think Tank at Future Advocacy, said:

“Deepfakes represent a genuine threat to democracy and society more widely. They can be used to fuel misinformation and totally undermine trust in audiovisual content.

Despite warnings over the past few years, politicians have so far collectively failed to address the issue of disinformation online. Instead, the response has been to defer to tech companies to do more. The responsibility for protecting our democracy lies in the corridors of Westminster not the boardrooms of Silicon Valley.

By releasing these deepfakes, we aim to use shock and humour to inform the public and put pressure on our lawmakers. This issue should be put above party politics. We urge all politicians to work together to update our laws and protect society from the threat of deepfakes, fake news, and micro-targeted political adverts online.”

Journalists are going to have to become experts in spotting fake content to maintain trust and integrity. Social media companies will also have to take some responsibility for the content they allow to spread on their platforms.

Social media moderation

Manual moderation of every piece of content that’s posted to a network like Facebook or Twitter is simply unfeasible, so automation is going to become necessary to at least flag potentially offending content.

But what constitutes offending content? That is the question social media giants are battling with in order to strike the right balance between free speech and expression while protecting their users from manipulation.

Just last night, Twitter released its draft policy on deepfakes and is currently accepting feedback on it.

The social network proposes the following steps for tweets it detects as featuring potentially manipulated content:

  • Place a notice next to tweets that share synthetic or manipulated media.
  • Warn people before they share or like tweets with synthetic or manipulated media.
  • Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
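Twitter’s three proposed steps could be wired together roughly as follows – a hypothetical sketch in which the classifier score, threshold, and action names are all invented for illustration, and are not part of Twitter’s actual systems:

```python
# Hypothetical sketch only: maps an upstream manipulated-media score onto
# Twitter's three proposed interventions. The score, threshold, and action
# names are invented for illustration, not Twitter's API.

def moderation_actions(manipulation_score: float, threshold: float = 0.8) -> list:
    """Return the interventions for a tweet whose media was scored upstream."""
    if manipulation_score < threshold:
        return []  # below threshold: leave the tweet untouched
    return [
        "place_notice",       # label the tweet as synthetic/manipulated media
        "warn_before_share",  # interstitial shown before shares and likes
        "link_context",       # link to reporting explaining the assessment
    ]

print(moderation_actions(0.95))
```

The key design point is that none of the actions remove the tweet outright – each adds friction or context, matching the draft policy’s emphasis on labelling over deletion.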

Twitter defines deepfakes as “any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.”

Twitter’s current definition sounds like it could end up flagging the internet’s favourite medium, memes, as deepfakes. However, there’s a compelling argument that memes often should at least be flagged as modified from their original intent.

Take the infamous “This is fine” meme that was actually part of a larger comic by KC Green before it was manipulated for individual purposes.

In this Vulture piece, Green gives his personal stance that he’s mostly fine with people using his work as a meme so long as they’re not monetising it for themselves or using it for political purposes.

On July 25th 2016, the official Republican Party Twitter account used Green’s work and added “Well ¯\_(ツ)_/¯ #DemsInPhilly #EnoughClinton”. Green later tweeted: “Everyone is in their right to use this is fine on social media posts, but man o man I personally would like @GOP to delete their stupid post.”

Raising awareness of deepfakes

Bill Posters is a UK artist known for creating subversive deepfakes of famous celebrities, including Donald Trump and Kim Kardashian. Posters was behind the viral deepfake of Mark Zuckerberg for the Spectre project which AI News reported on earlier this year.

Posters commented on his activism using deepfakes:

“We’ve used the biometric data of famous UK politicians to raise awareness to the fact that without greater controls and protections concerning personal data and powerful new technologies, misinformation poses a direct risk to everyone’s human rights including the rights of those in positions of power.

It’s staggering that after 3 years, the recommendations from the DCMS Select Committee enquiry into fake news or the Information Commissioner’s Office enquiry into the Cambridge Analytica scandals have not been applied to change UK laws to protect our liberty and democracy.

As a result, the conditions for computational forms of propaganda and misinformation campaigns to be amplified by social media platforms are still in effect today. We urge all political parties to come together and pass measures which safeguard future elections.”

As the UK heads towards its next major election, there is sure to be much debate around potential voter manipulation. Many have pointed towards Russian interference in Western democracies but there’s yet to be any solid evidence of that being the case.

Opposition parties, however, have criticised the incumbent government in the UK as refusing to release a report into Russian interference. Former US presidential candidate Hillary Clinton branded it “inexplicable and shameful” that the UK government has not yet published the report.

Allegations of interference and foul play will likely increase in the run-up to the election, but Future Advocacy is doing a great job in highlighting to the public that not everything you see can be believed.

Mozilla shares YouTube horror tales in campaign for responsible algorithms
https://news.deepgeniusai.com/2019/10/15/mozilla-shares-youtube-horror-campaign-responsible-algorithms/
Tue, 15 Oct 2019 12:02:41 +0000

Mozilla has launched a campaign for more responsible algorithms by sharing YouTube horror tales crowdsourced from social media.

We’ve all scratched our heads at some recommendations when using online platforms. Just yesterday, Verge reporter Megan Farokhmanesh shared how her Instagram recommendations have been plagued by some rather bizarre CGI images of teeth.

Farokhmanesh’s account is of a recommendation algorithm going rogue in a relatively harmless and amusing way, but that’s not always the case.

Algorithms need to be unbiased. It’s easy to imagine how, without due scrutiny, algorithms could recommend content which influences a person to think or vote a certain way. The bias may not even be intentional, but it doesn’t make it any less dangerous.

YouTube’s algorithms, in particular, have been called out for promoting some awful content – including paedophilia and radicalisation. To really put that danger in perspective, around 70 percent of YouTube’s viewing time comes from recommendations.

Mozilla’s newly-launched site features 28 horror stories caused by YouTube’s algorithms. The site was launched following a Mozilla-led social media campaign where users shared their stories using the #YouTubeRegrets hashtag.

In one story, an individual provided an account of their preschool son who – like many his age – liked watching Thomas the Tank Engine videos. YouTube’s recommendations led him into watching graphic compilations of train wrecks.

At that early stage in a person’s life, what they see can be detrimental to their long-term development. However, that doesn’t mean adults aren’t also affected.

Another person said: “I started by watching a boxing match, then street boxing matches, and then I saw videos of street fights, then accidents and urban violence… I ended up with a horrible vision of the world and feeling bad, without really wanting to.”

Yet another person said they’d often watch a drag queen who did a lot of positive-affirmation and confidence-building videos. YouTube’s recommendations allegedly served up a ton of anti-LGBT content for ages after, which could have a devastating impact on already too-often marginalised communities.

In September, Mozilla advised YouTube it could improve its service – and trust in it – by taking three key steps:

  • Provide independent researchers with access to meaningful data.
  • Build simulation tools for researchers.
  • Empower researchers by not implementing restrictive API rate limits and provide access to a historical archive of videos.

We’ll have to wait and see whether YouTube takes on Mozilla’s advice, but we hope changes are made sooner rather than later. Around 250 million hours of YouTube content per day is watched via recommendations, and we can never be certain for how long those videos will last in the minds of those viewing them.

Pinterest uses AI to reduce self-harm content by 88% over the past year
https://news.deepgeniusai.com/2019/10/11/pinterest-ai-reduce-self-harm-content-past-year/
Fri, 11 Oct 2019 14:10:59 +0000

Pinterest announced on World Mental Health Day that it’s reduced self-harm content by 88 percent over the past year using AI.

In a blog post titled Getting better at helping people feel better, the social media platform says it’s using machine learning techniques to identify content which displays, encourages, or rationalises self-harm.

Anxiety and depression are at all-time highs while many countries are failing to properly fund mental health services. In the UK, someone commits suicide every 90 minutes.

Experts believe social media plays a large part in the record levels of anxiety and depression. Aside from issues like online bullying, people generally upload only the best parts of their lives – under societal pressure to almost build their own ‘brand’ online – leading others to compare themselves unfairly.

Writing from a rainy England, I bet that if I opened Facebook right now I’d find multiple posts of people on a sunny beach with a cold drink within a few minutes. The unreasonable part of my brain could easily convince me that everyone else is having a good time when, in reality, that’s far from the case.

While two-thirds of global suicides affect males, there’s a worrying increase in mental health issues affecting women. A study by the National Centre for Social Research found that over the past 14 years, the number of girls and young women self-harming has tripled. Social media and its pressures likely play a large role in that increase.

Aside from its AI smarts, Pinterest is also removing over 4,600 terms and phrases related to self-harm. If someone searches for one of the terms then links to free support from experts will be prominently displayed. Pinterest received guidance from expert groups like the Samaritans, National Suicide Prevention Lifeline, and Vibrant Emotional Health, on the approach it should take.
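A search intercept along these lines might look roughly like the sketch below. This is illustrative only, not Pinterest’s actual code: the blocklist terms and the helpline URL are made up for the example.

```python
# Illustrative sketch (not Pinterest's actual code): intercept searches that
# match a blocklist of self-harm terms and surface support resources instead
# of results. The terms and helpline URL here are made up for the example.

BLOCKED_TERMS = {"self-harm", "self harm", "cutting"}  # real list has 4,600+ entries

SUPPORT_MESSAGE = (
    "You're not alone. Free, confidential support is available - "
    "see https://example.org/helplines"
)

def handle_search(query: str) -> str:
    normalised = query.strip().lower()
    if any(term in normalised for term in BLOCKED_TERMS):
        return SUPPORT_MESSAGE  # show help prominently instead of results
    return "results for: " + normalised

print(handle_search("cat photos"))
```

Normalising the query before matching means simple case or spacing variations still trigger the intercept – one reason real blocklists grow into the thousands of entries.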

In the US, Pinterest added some wellbeing activities to its iOS app earlier this year. Previously, they would only be offered when someone searched for something indicating they were feeling down. Now, all users have to do is search for #pinterestwellbeing to find exercises to improve their mood before they get too down.

It’s great to see social media platforms like Pinterest exploring the use of technologies such as AI to help solve the world’s mental health crisis. There’s a long way to go, but it’s a welcome start nonetheless.

California introduces legislation to stop political and porn deepfakes
https://news.deepgeniusai.com/2019/10/07/california-introduces-legislation-stop-political-porn-deepfakes/
Mon, 07 Oct 2019 11:48:28 +0000

Deepfake videos have the potential to do unprecedented amounts of harm so California has introduced two bills designed to limit them.

For those unaware, deepfakes use machine learning technology in order to make a person appear like they’re convincingly doing or saying things which they’re not.

There are two main concerns about deepfake videos:

  • Personal defamation – An individual is made to appear in a sexual and/or humiliating scene either for blackmail purposes or to stain that person’s image.
  • Manipulation – An influential person, typically a politician, is made to appear to have said something in order to sway public opinion and perhaps even change votes.

Many celebrities have become victims of deepfake porn. One of the bills signed into law by the state of California last week allows victims to sue anyone who puts their image into a pornographic video without consent.

Earlier this year, Facebook CEO Mark Zuckerberg became the victim of a deepfake. Zuckerberg was portrayed to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Clearly, to most of us, the Zuckerberg video is a fake. The video was actually created by Israeli startup Canny AI as part of a commissioned art installation called Spectre that was on display at the Sheffield Doc/Fest in the UK.

A month prior to the Zuckerberg video, Facebook refused to remove a deepfake video of House Speaker Nancy Pelosi which aimed to portray her as intoxicated. If deepfakes are allowed to go viral on huge social media platforms like Facebook, it will pose huge societal problems.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

California’s second bill legislates against posting any manipulated video of a political candidate, albeit only within 60 days of an election.

California Assembly representative Marc Berman said:

“Voters have a right to know when video, audio, and images that they are being shown, to try to influence their vote in an upcoming election, have been manipulated and do not represent reality.

[That] makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters.”

While many people now know not to trust everything they read, most of us are still accustomed to believing what we see with our eyes. That’s what poses the biggest threat with deepfake videos.

Ofcom: AI is not ready to effectively moderate online content ‘for the foreseeable future’
https://news.deepgeniusai.com/2019/07/22/ofcom-ai-moderate-online-content-future/
Mon, 22 Jul 2019 14:05:15 +0000


Ofcom and Cambridge Consultants have teamed up on a report examining the effectiveness of AI-powered online content moderation.

Governments around the world have put increasing amounts of pressure on social networks and communication services to take responsibility for content posted on them. Society itself is becoming more aware of the dangers following live-streamed terror attacks, cyberbullying, political manipulation, and more.

With some platforms having billions of users, manual content moderation of everything posted is not feasible. When illegal content is uploaded, it often requires someone to report it and wait for a human moderator to make a decision (those moderators sometimes require therapy after being exposed to such content).

Ofcom and Cambridge Consultants’ report suggests that AI could help to reduce the psychological impact on human moderators in a few key ways:

  • Varying the level and type of harmful content they are exposed to.
  • Automatically blurring out parts of the content which the moderator can optionally choose to view if required for a decision.
  • Humans can ‘ask’ the AI questions about the content to prepare themselves or know whether it will be particularly difficult for them, perhaps due to past individual experiences.
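The second idea – blurring content by default and revealing it only when the moderator opts in – can be sketched as follows. This is illustrative only; the class and method names are invented, not from the report:

```python
# Illustrative only: content shown to a human moderator starts blurred and is
# revealed only if the moderator opts in to view it for a decision.

class ModerationItem:
    def __init__(self, content):
        self._content = content
        self.revealed = False

    def preview(self):
        # Default view is a blurred placeholder, sparing the moderator exposure.
        if self.revealed:
            return self._content
        return "[blurred - click to reveal]"

    def reveal(self):
        # Explicit opt-in: the moderator chooses to see the raw content.
        self.revealed = True

item = ModerationItem("potentially distressing content")
print(item.preview())
item.reveal()
print(item.preview())
```

Making exposure an explicit, logged choice (rather than the default) is what reduces the psychological load the report describes.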

The slow process of manual content moderation often means harmful content is seen by millions before it’s taken down. While most AI moderation implementations today still require human oversight, some advancements in content detection are helping to speed up content being flagged and removed.

Earlier this month, Facebook-owned Instagram unveiled improvements to an AI-powered moderation system it uses in a bid to prevent troublesome content from ever being posted. While previously restricted to comments, Instagram will now ask users “Are you sure you want to post this?” for any posts it deems may cause distress to others.

As the UK’s telecoms regulator, Ofcom’s report should help to form workable policies rather than generic demands from politicians without a real understanding of how these things work (can anyone remember the calls to ban encryption and/or knowingly create backdoors?)

The report essentially determines that, for the foreseeable future, effective fully automated content moderation is not possible.

Among the chief reasons fully automated content moderation is problematic is that – while some harmful posts can be identified by analysing the content alone – other content requires a full understanding of context. For example, the researchers note how regional and cultural differences in national laws and in what’s socially acceptable are difficult for today’s AI moderation solutions to account for, but trivial for local human moderators.

Some content is also easier to analyse than others. Photos and pre-recorded videos could be analysed before they’re posted, whereas live-streams pose a particular difficulty because what appears to be an innocent scene could become harmful very quickly.

“Human moderators will continue to be required to review highly contextual, nuanced content,” says Cambridge Consultants’ report. “However, AI-based content moderation systems can reduce the need for human moderation and reduce the impact on them of viewing harmful content.”

You can find a copy of the full report here (PDF).


Instagram uses ‘AI intervention’ to help counter bullying
https://news.deepgeniusai.com/2019/07/10/instagram-ai-intervention-counter-bullying/
Wed, 10 Jul 2019 17:00:17 +0000

Media sharing platform Instagram is launching an ‘AI intervention’ system designed to help counter the scourge of cyberbullying.

With 95 million photos and videos uploaded to Instagram every day, manual content moderation is impossible. Inappropriate content must be reported for moderation today, and the psychological damage it can do to human moderators is well-documented.

AI has long been heralded as a means to automate the moderation process, but today it still requires a human to make a final decision. Instagram has decided to follow the old adage of ‘prevention is better than cure’ with its new AI system.

Instagram already uses such a system for negative comments, giving users the opportunity to undo a message. The company claims earlier tests found “it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”

Now, prior to any potentially harmful content being shared, the platform will intervene to ask “are you sure you want to post this?” to really make the user think about what they’re doing.
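A minimal sketch of such a pre-posting nudge is below. It assumes some upstream toxicity scorer exists; the keyword heuristic here is a crude stand-in for illustration, not Instagram’s model.

```python
# Minimal sketch of a pre-posting nudge. The keyword heuristic below is a
# stand-in for a real toxicity model - it is NOT Instagram's classifier.

HURTFUL_WORDS = {"ugly", "stupid", "loser"}

def toxicity_score(text):
    """Fraction of words that match the hurtful-word list (toy heuristic)."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(w.strip(".,!?") in HURTFUL_WORDS for w in words)
    return hits / len(words)

def pre_post_check(text, threshold=0.2):
    # Intervene BEFORE publishing, giving the user a chance to reconsider.
    if toxicity_score(text) >= threshold:
        return "Are you sure you want to post this?"
    return "posted"

print(pre_post_check("you are so stupid"))
```

The point of the design is timing: the check runs before anything is published, so the author – not a moderator after the fact – gets the chance to reconsider.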

Instagram is also helping to stop bullies getting the attention they seek from their victims.

“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” says Instagram head Adam Mosseri.

Users flagged as bullies will be made invisible to the victim. Furthermore, the victim’s current online status will be made unknown to the bully.

“Restricted people won’t be able to see when you’re active on Instagram or when you’ve read their direct messages,” adds Mosseri.

A study last year found that around 59 percent of US teens have suffered from bullying and harassment online. Over half of the respondents said that social media sites, police, teachers, and others are not doing enough to counter the issue.

The new Instagram AI intervention system will begin rolling out to English speakers first before it’s made available globally.

Facebook outage gave a glimpse at how its AI analyses images
https://news.deepgeniusai.com/2019/07/04/facebook-outage-ai-analyses-images/
Thu, 04 Jul 2019 15:47:03 +0000

Facebook’s issue displaying images yesterday gave users an interesting look at how the social media giant’s AI analyses their uploads.

An outage yesterday meant Facebook users were unable to see uploaded images, providing a welcome respite from the usual mix of food and baby photos. In their place, however, was some interesting text.

Text in the placeholder where the image should have been displayed showed how Facebook’s AI automatically tagged the images.

Some of the aforementioned tags were understandable, like “one person, beard”. Other tags – such as a group of women standing together being tagged as “hoes” – were more questionable.

Facebook says it uses machine learning to tag images and to provide descriptions to blind users via screen readers. It’s unclear whether “hoes” referred to the farming tool in the image – though it’s hard to imagine even one appearing in a group photo, let alone several.
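As a rough illustration of how such placeholder text could be assembled from classifier output (the threshold, labels, and function below are assumptions for the sketch, not Facebook's actual code):

```python
# Illustrative sketch, not Facebook's implementation: given classifier
# labels with confidence scores, build the "Image may contain: ..."
# placeholder text of the kind that surfaced during the outage.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for including a tag

def placeholder_text(predictions):
    """predictions: list of (label, confidence) pairs from an image classifier."""
    tags = [label for label, conf in predictions if conf >= CONFIDENCE_THRESHOLD]
    if not tags:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(tags)

preds = [("one person", 0.97), ("beard", 0.91), ("outdoor", 0.42)]
print(placeholder_text(preds))  # Image may contain: one person, beard
```

In normal operation this text lives in the image's alt attribute for screen readers; the outage simply exposed it in place of the image itself.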

Technically-minded readers will already know Facebook analyses photos to further understand each user, primarily for advertising purposes. Yesterday’s outage, however, will have opened more of the public’s eyes to how each of their uploads is being analysed and its data extracted.

Shout-out to the person who uploaded a photo of their baby only for Facebook to categorise it as “Image may contain: dog”.

Twitter’s latest acquisition tackles fake news using AI https://news.deepgeniusai.com/2019/06/04/twitter-acquisition-fake-news-ai/ Tue, 04 Jun 2019 15:46:03 +0000

Twitter has acquired Fabula AI, a UK-based startup employing artificial intelligence for tackling fake news.

Fake news is among the most difficult challenges of our time. Aside from genuine stories often being dismissed as fake by certain politicians, actual fake news is used to coerce people into making decisions.

Governments have been putting increasing pressure on sites like Twitter and Facebook to take more responsibility for the content shared on them.

With billions of users, each uploading content, manual moderation of it all isn’t feasible. Automation is increasingly being used to flag problem content before a human moderator checks it.
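A minimal sketch of this flag-then-review pattern, assuming a crude keyword-based scorer standing in for a trained classifier (all names and thresholds below are illustrative, not any platform's real pipeline):

```python
# Illustrative sketch of automated triage: content scoring above a
# threshold is queued for a human moderator rather than auto-removed.

from collections import deque

FLAG_THRESHOLD = 0.7  # assumed score above which a post is held for review

def spam_score(text):
    # Stand-in for a trained classifier: crude keyword heuristic
    spammy = ("free money", "click here", "guaranteed")
    hits = sum(phrase in text.lower() for phrase in spammy)
    return min(1.0, hits / 2)

review_queue = deque()

def ingest(post):
    if spam_score(post) >= FLAG_THRESHOLD:
        review_queue.append(post)  # held for human review

ingest("Totally normal holiday photo")
ingest("FREE MONEY click here guaranteed!!!")
print(len(review_queue))  # 1
```

The point of the design is scale: automation narrows billions of uploads down to a reviewable queue, while humans still make the final call.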

Twitter CTO Parag Agrawal says its acquisition of Fabula is “to improve the health of the conversation, with expanding applications to stop spam and abuse and other strategic priorities in the future.”

Fabula has developed the ability to analyse “very large and complex data sets” for signs of network manipulation and can identify patterns that other machine-learning techniques can’t, according to Agrawal.

In addition, Fabula has created a truth-risk score to identify misinformation. The score is generated using data from trusted fact-checking sources like PolitiFact and Snopes. Armed with the score, Twitter can determine how trustworthy a claim is and perhaps even surface that score to users.
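Exactly how the score is computed hasn't been disclosed, but a toy sketch of the idea – averaging verdicts from fact-checking sources into a single risk value – might look like this (the sources, claim IDs, and formula are illustrative assumptions, not Fabula's method):

```python
# Toy sketch of a "truth-risk" style score -- not Fabula AI's model.
# Combines verdicts from fact-checking sources into a single value
# in [0, 1], where higher means more likely misinformation.

# Hypothetical verdicts: 1.0 = rated false, 0.0 = rated true
FACT_CHECK_VERDICTS = {
    "politifact": {"claim-123": 1.0},
    "snopes": {"claim-123": 0.8},
}

def truth_risk(claim_id, sources=FACT_CHECK_VERDICTS):
    verdicts = [v[claim_id] for v in sources.values() if claim_id in v]
    if not verdicts:
        return None  # no fact-check coverage -> no score
    return sum(verdicts) / len(verdicts)

print(truth_risk("claim-123"))  # 0.9
```

Fabula's actual contribution was reportedly graph-based (analysing how a claim spreads, not just what it says), so a real system would feed propagation features in alongside fact-check verdicts.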

A post on Twitter’s blog yesterday hints at the possible direction: “Context on Tweets and our enforcement is important in understanding our rules, so we’ll add more notices within Twitter for clarity, such as if a Tweet breaks our rules but remains on the service because the content is in the public interest.”

Often fake news is used for political gain or to cause turmoil. Russia is regularly linked with modern disinformation campaigns, but even Western democracies have used them to influence both national and international affairs.

The US presidential elections were influenced by fake news. Last year, Congress released more than 3,000 Facebook ads purchased by Russian-linked agents ahead of the 2016 presidential contest.

In Fabula AI’s home country, some allege fake news swayed the UK’s decision to leave the EU in the 2016 referendum. There’s less conclusive data behind that allegation, but we do know powerful targeted advertising was used to promote so-called ‘alternative facts’.

Fabula’s team will be joining the Twitter Cortex machine-learning team. Exact terms of the deal or how Fabula’s technology will be used have not been disclosed.
