social media – AI News (https://news.deepgeniusai.com) – Artificial Intelligence News

Facebook uses AI to help people support each other
https://news.deepgeniusai.com/2020/10/02/facebook-ai-help-people-support-each-other/
Fri, 02 Oct 2020 11:59:30 +0000

Facebook has deployed an AI system which matches people needing support with local heroes offering it.

“United we stand, divided we fall” is a clichéd saying—but tackling a pandemic is a collective effort. While we’ve all seen people taking selfish actions, they’ve been more than balanced out by those helping to support their communities.

Facebook has been its usual blessing and curse during the pandemic. On the one hand, it’s helped people to stay connected and organise community efforts. On the other, it’s allowed dangerous misinformation to spread like wildfire, fuelling the growth of anti-vaccine and anti-mask movements.

The social media giant is hoping that AI can help tip the balance towards Facebook being a net benefit to our communities.

If a person has posted asking for help because they’re unable to leave the house, Facebook’s AI may automatically match that person with someone local who has recently said they’re willing to get things like groceries or prescriptions for people.

In a blog post, Facebook explains how it built its matching algorithm:

We built and deployed this matching algorithm using XLM-R, our open-source, cross-lingual understanding model that extends our work on XLM and RoBERTa, to produce a relevance score that ranks how closely a request for help matches the current offers for help in that community.

The system then integrates the posts’ ranking score into a set of models trained on PyText, our open-source framework for natural language processing.
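Facebook hasn’t published code for this pipeline, but the core mechanic described above, encoding a request for help and all local offers into a shared representation and ranking offers by a relevance score, can be sketched in miniature. The sketch below substitutes a toy bag-of-words encoder and cosine similarity for XLM-R and the PyText models, so everything beyond the request/offer framing is an illustrative assumption:

```python
import math
from collections import Counter

def encode(text):
    """Toy bag-of-words encoder. A real system would use a
    cross-lingual model such as XLM-R to produce dense embeddings."""
    return Counter(text.lower().split())

def relevance(request_vec, offer_vec):
    """Cosine similarity between two sparse count vectors,
    standing in for the learned relevance score."""
    dot = sum(request_vec[t] * offer_vec.get(t, 0) for t in request_vec)
    norm_r = math.sqrt(sum(v * v for v in request_vec.values()))
    norm_o = math.sqrt(sum(v * v for v in offer_vec.values()))
    return dot / (norm_r * norm_o) if norm_r and norm_o else 0.0

def rank_offers(request, offers):
    """Rank a community's current offers of help against a request."""
    req = encode(request)
    scored = [(relevance(req, encode(o)), o) for o in offers]
    return [o for score, o in sorted(scored, reverse=True)]

offers = [
    "Happy to pick up groceries for anyone nearby",
    "Offering free guitar lessons over video call",
    "Can collect prescriptions from the pharmacy",
]
# The groceries offer ranks first for a groceries request
print(rank_offers("Unable to leave the house and need groceries", offers))
```

The same shape, encode both sides, score, sort, carries over to the real system; only the encoder and scoring model would differ.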

It’s a great idea which could go a long way to making a real positive impact on people in difficult times. Hopefully, we’ll see more of such efforts from Facebook to improve our communities.

(Photo by Bohdan Pyryn on Unsplash)

Facebook pledges crackdown on deepfakes ahead of the US presidential election
https://news.deepgeniusai.com/2020/01/08/facebook-crackdown-deepfakes-us-presidential-election/
Wed, 08 Jan 2020 18:04:20 +0000

Facebook has pledged to crack down on misleading deepfakes ahead of the US presidential election later this year.

Voter manipulation is a concern for any functioning democracy and deepfakes provide a whole new challenge for social media platforms.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake) could be used to cause reputational damage and swing votes.

Facebook refused to remove the video of Nancy Pelosi and instead said it would display an article from a third-party fact-checking website highlighting that it’s been edited and take measures to limit its reach. The approach, of course, was heavily criticised.

Facebook’s new rules state that deepfake videos designed to mislead will be banned. The problem is that the rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” an exemption that will not sound encouraging to those wanting a firm stance against manipulation.

In the age of “fake news,” many people have learned not to believe everything they read. Likewise, an increasing number of people know how easily images can be manipulated. Deepfake videos pose such a concern because the wider public is not yet aware enough of their existence, or of how to spot them.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections.

One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”. Another prediction is that Iran and China may join Russia as sources of disinformation, the former perhaps now being even more likely given recent escalations between the US and Iran and the desire for non-military retaliation.

Legislation is being introduced to criminalise the production of deepfakes without disclosing that they’ve been modified, but the best approach is to limit them from being widely shared in the first place.

“A better approach, and one that avoids the danger of overreaching government censorship, would be for the social media platforms to improve their AI-screening technology, enhance human review, and remove deepfakes before they can do much damage,” the report suggests.

The month after Facebook refused to remove the edited video of Pelosi, a deepfake created by Israeli startup Canny AI aimed to raise awareness of the issue by making it appear like Facebook CEO Mark Zuckerberg said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Canny AI’s deepfake was designed to be clearly fake but it shows how easy it’s becoming to manipulate people’s views. In a tense world, it’s not hard to imagine what devastation could be caused simply by releasing a deepfake of a political leader declaring war or planning to launch a missile.


Mozilla shares YouTube horror tales in campaign for responsible algorithms
https://news.deepgeniusai.com/2019/10/15/mozilla-shares-youtube-horror-campaign-responsible-algorithms/
Tue, 15 Oct 2019 12:02:41 +0000

Mozilla has launched a campaign for more responsible algorithms by sharing YouTube horror tales crowdsourced from social media.

We’ve all scratched our heads at some recommendations when using online platforms. Just yesterday, Verge reporter Megan Farokhmanesh shared how her Instagram recommendations have been plagued by some rather bizarre CGI images of teeth.

Farokhmanesh’s account is of a recommendation algorithm going rogue in a relatively harmless and amusing way, but that’s not always the case.

Algorithms need to be unbiased. It’s easy to imagine how, without due scrutiny, algorithms could recommend content which influences a person to think or vote a certain way. The bias may not even be intentional, but it doesn’t make it any less dangerous.

YouTube’s algorithms, in particular, have been called out for promoting some awful content – including paedophilia and radicalisation. To really put that danger in perspective, around 70 percent of YouTube’s viewing time comes from recommendations.

Mozilla’s newly-launched site features 28 horror stories caused by YouTube’s algorithms. The site was launched following a Mozilla-led social media campaign where users shared their stories using the #YouTubeRegrets hashtag.

In one story, an individual provided an account of their preschool son who – like many his age – liked watching Thomas the Tank Engine videos. YouTube’s recommendations led him into watching graphic compilations of train wrecks.

At that early stage in a person’s life, what they see can be detrimental to their long-term development. However, that doesn’t mean adults aren’t also affected.

Another person said: “I started by watching a boxing match, then street boxing matches, and then I saw videos of street fights, then accidents and urban violence… I ended up with a horrible vision of the world and feeling bad, without really wanting to.”

Yet another person said they’d often watch a drag queen who did a lot of positive-affirmation and confidence-building videos. YouTube’s recommendations allegedly served up a ton of anti-LGBT content for ages after, which could have a devastating impact on already too-often marginalised communities.

In September, Mozilla advised YouTube it could improve its service – and trust in it – by taking three key steps:

  • Provide independent researchers with access to meaningful data.
  • Build simulation tools for researchers.
  • Empower researchers by not implementing restrictive API rate limits and provide access to a historical archive of videos.

We’ll have to wait and see whether YouTube takes on Mozilla’s advice, but we hope changes are made sooner rather than later. Around 250 million hours of YouTube content per day is watched via recommendations, and we can never be certain for how long those videos will last in the minds of those viewing them.


Ofcom: AI is not ready to effectively moderate online content ‘for the foreseeable future’
https://news.deepgeniusai.com/2019/07/22/ofcom-ai-moderate-online-content-future/
Mon, 22 Jul 2019 14:05:15 +0000


Ofcom and Cambridge Consultants have teamed up on a report examining the effectiveness of AI-powered online content moderation.

Governments around the world have put increasing amounts of pressure on social networks and communication services to take responsibility for content posted on them. Society itself is becoming more aware of the dangers following live-streamed terror attacks, cyberbullying, political manipulation, and more.

With some platforms having billions of users, manual content moderation of everything posted is not feasible. When illegal content is uploaded, it often requires someone to report it and wait for a human moderator to make a decision (those moderators sometimes require therapy after being exposed to such content).

Ofcom and Cambridge Consultants’ report suggests that AI could help to reduce the psychological impact on human moderators in a few key ways:

  • Varying the level and type of harmful content they are exposed to.
  • Automatically blurring out parts of the content which the moderator can optionally choose to view if required for a decision.
  • Humans can ‘ask’ the AI questions about the content to prepare themselves or know whether it will be particularly difficult for them, perhaps due to past individual experiences.
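As a rough illustration of the first point, varying exposure can be as simple as reordering the moderation queue so that no moderator sees too many high-severity items in a row. The severity labels, cap, and scheduling policy below are assumptions made for illustration; the report doesn’t prescribe a specific algorithm:

```python
from collections import deque

def assign_queue(items, max_consecutive_severe=2):
    """Reorder a moderation queue so no more than
    `max_consecutive_severe` high-severity items appear in a row,
    one simple way to vary a moderator's exposure to harmful
    content. `items` is a list of (content_id, severity) pairs
    with severity 'high' or 'low'."""
    severe = deque(i for i in items if i[1] == "high")
    mild = deque(i for i in items if i[1] == "low")
    ordered, streak = [], 0
    while severe or mild:
        if severe and (streak < max_consecutive_severe or not mild):
            ordered.append(severe.popleft())
            streak += 1
        else:
            ordered.append(mild.popleft())
            streak = 0
    return ordered

queue = [("a", "high"), ("b", "high"), ("c", "high"), ("d", "low"), ("e", "low")]
# No more than two 'high' items appear consecutively in the result
print(assign_queue(queue))
```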

The slow process of manual content moderation often means harmful content is seen by millions before it’s taken down. While most AI moderation implementations today still require human oversight, some advancements in content detection are helping to speed up content being flagged and removed.

Earlier this month, Facebook-owned Instagram unveiled improvements to an AI-powered moderation system it uses in a bid to prevent troublesome content from ever being posted. While previously restricted to comments, Instagram will now ask users “Are you sure you want to post this?” for any posts it deems may cause distress to others.

As the UK’s telecoms regulator, Ofcom’s report should help to form workable policies rather than generic demands from politicians without a real understanding of how these things work (can anyone remember the calls to ban encryption and/or knowingly create backdoors?)

The report essentially determines that, for the foreseeable future, effective fully automated content moderation is not possible.

Among the chief reasons fully automated content moderation is problematic is that, while some harmful posts can be identified by analysing the content alone, other content requires a full understanding of context. For example, the researchers note how regional and cultural differences in national laws and what’s socially acceptable are difficult for today’s AI moderation solutions to account for but trivial for local human moderators.

Some content is also easier to analyse than others. Photos and pre-recorded videos could be analysed before they’re posted, whereas live-streams pose a particular difficulty because what appears to be an innocent scene could become harmful very quickly.
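To make the live-stream difficulty concrete, a moderation loop has to score frames as they arrive and decide when an initially innocent scene has turned harmful. The sketch below is a minimal illustration, assuming a pluggable frame classifier and a consecutive-flag tolerance; neither detail comes from the report:

```python
def moderate_stream(frames, classify, tolerance=3):
    """Sketch of live-stream moderation: score frames as they
    arrive and cut the stream once several consecutive frames are
    flagged. `classify` is any frame -> harm-probability function;
    the tolerance window reflects that a single frame rarely gives
    enough context on its own."""
    flagged_run = 0
    for i, frame in enumerate(frames):
        if classify(frame) >= 0.9:
            flagged_run += 1
        else:
            flagged_run = 0
        if flagged_run >= tolerance:
            return f"stream cut at frame {i}"
    return "stream completed"

# Stand-in classifier: frames are already harm probabilities here
scores = [0.1, 0.95, 0.95, 0.95, 0.2]
print(moderate_stream(scores, classify=lambda f: f))  # stream cut at frame 3
```

A pre-recorded video can be scanned in full before publishing; a live stream only ever gets this kind of rolling, incomplete view.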

“Human moderators will continue to be required to review highly contextual, nuanced content,” says Cambridge Consultants’ report. “However, AI-based content moderation systems can reduce the need for human moderation and reduce the impact on them of viewing harmful content.”

You can find a copy of the full report here (PDF).


Instagram uses ‘AI intervention’ to help counter bullying
https://news.deepgeniusai.com/2019/07/10/instagram-ai-intervention-counter-bullying/
Wed, 10 Jul 2019 17:00:17 +0000

Media sharing platform Instagram is launching an ‘AI intervention’ system designed to help counter the scourge of cyberbullying.

With 95 million photos and videos uploaded to Instagram every day, manual content moderation is impossible. Inappropriate content must be reported for moderation today, and the psychological damage it can do to human moderators is well-documented.

AI has long been heralded as a means to automate the moderation process, but today it still requires a human to make a final decision. Instagram has decided to follow the old adage that prevention is better than cure with its new AI system.

Instagram already uses such a system for negative comments, giving users the opportunity to undo the message. The company claims that, in earlier tests, it found “it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”

Now, before any potentially harmful content is shared, the platform will intervene to ask “Are you sure you want to post this?”, prompting the user to think twice about what they’re posting.
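Instagram hasn’t published how its classifier works, but the intervention flow itself is easy to sketch. The keyword list and threshold below are placeholder assumptions standing in for a real toxicity model:

```python
# Placeholder vocabulary standing in for a trained toxicity model;
# Instagram's actual classifier and thresholds are not public.
HURTFUL_TERMS = {"idiot", "loser", "ugly"}

def harm_score(comment):
    """Fraction of words flagged as potentially hurtful."""
    words = comment.lower().split()
    return sum(w.strip(".,!?") in HURTFUL_TERMS for w in words) / max(len(words), 1)

def intervene(comment, threshold=0.2):
    """Return a reflection prompt instead of posting when the
    harm score meets the threshold."""
    if harm_score(comment) >= threshold:
        return "Are you sure you want to post this?"
    return "posted"

print(intervene("You are such a loser"))   # prompts for reflection
print(intervene("Great photo, love it!"))  # posts normally
```

The key design point is that the check runs before publication, giving the user a chance to reconsider rather than removing content after the fact.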

Instagram is also helping to stop bullies getting the attention they seek from their victims.

“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” says Instagram head Adam Mosseri.

Users flagged as bullies will be made invisible to the victim. Furthermore, the victim’s current online status will be made unknown to the bully.

“Restricted people won’t be able to see when you’re active on Instagram or when you’ve read their direct messages,” adds Mosseri.
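At its core, the restriction behaviour described above is a visibility check: a restricted account simply stops receiving the other user’s presence signals. A minimal sketch, assuming a per-user map of restricted accounts (the data model is an illustrative assumption, not Instagram’s):

```python
def presence_visible(viewer, target, restrictions):
    """Sketch of Instagram-style 'Restrict': `viewer` can see
    `target`'s activity status and read receipts only if `target`
    has not restricted them. `restrictions` maps each user to the
    set of accounts they have restricted."""
    return viewer not in restrictions.get(target, set())

restrictions = {"victim": {"bully"}}
print(presence_visible("bully", "victim", restrictions))   # False
print(presence_visible("friend", "victim", restrictions))  # True
```

Because nothing changes from the restricted user’s perspective, the victim avoids the escalation risk that blocking or unfollowing might trigger.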

A study last year found that around 59 percent of US teens have suffered from bullying and harassment online. Over half of the respondents said that social media sites, police, teachers, and others are not doing enough to counter the issue.

The new Instagram AI intervention system will begin rolling out to English speakers first before it’s made available globally.


Facebook outage gave a glimpse at how its AI analyses images
https://news.deepgeniusai.com/2019/07/04/facebook-outage-ai-analyses-images/
Thu, 04 Jul 2019 15:47:03 +0000


Facebook’s issue displaying images yesterday gave users an interesting look at how the social media giant’s AI analyses their uploads.

An outage yesterday meant Facebook users were unable to see uploaded images, providing a welcome respite from the usual mix of food and baby photos. In their place, however, was some interesting text.

Text in the placeholder where the image should have been displayed showed how Facebook’s AI automatically tagged the images.

Some of the aforementioned tags were understandable, like “one person, beard”. Other tags – such as a group of women standing together being tagged as “hoes” – were more questionable.

Facebook says it uses machine learning to tag images and to generate descriptions for blind users. It’s unclear whether there were ‘hoes’ in the farming-tool sense in the image – but I don’t know how often even one appears in a group photo, let alone several.
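The placeholder text users saw is essentially the classifier’s confident labels joined into a template. A sketch of that final formatting step, with the confidence threshold and wording assumed rather than known:

```python
def alt_text(labels, threshold=0.8):
    """Build a Facebook-style placeholder string from
    (label, confidence) classifier outputs. The threshold and
    exact format are assumptions; Facebook's real pipeline and
    cutoffs are not public."""
    kept = [label for label, conf in labels if conf >= threshold]
    if not kept:
        return "No photo description available"
    return "Image may contain: " + ", ".join(kept)

predictions = [("one person", 0.97), ("beard", 0.91), ("outdoor", 0.42)]
print(alt_text(predictions))  # Image may contain: one person, beard
```

Seen this way, the outage exposed nothing exotic: the placeholder is just the tagging model’s output surfaced where the image would normally render.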

Tech-savvy readers will know Facebook analyses photos to further understand each user, primarily for advertising purposes. Yesterday’s outage, however, will have opened more of the public’s eyes to how each of their uploads is analysed and its data extracted.

Shout-out to the person who uploaded a photo of their baby only for Facebook to categorise it as “Image may contain: dog”.


Twitter’s latest acquisition tackles fake news using AI
https://news.deepgeniusai.com/2019/06/04/twitter-acquisition-fake-news-ai/
Tue, 04 Jun 2019 15:46:03 +0000

Twitter has acquired Fabula AI, a UK-based startup employing artificial intelligence for tackling fake news.

Fake news is among the most difficult challenges of our time. Aside from real stories often being dismissed as fake by certain politicians, actual fake news is used to coerce people into making decisions.

Governments have been putting increasing pressure on sites like Twitter and Facebook to take more responsibility for the content shared on them.

With billions of users, each uploading content, manual moderation of it all isn’t feasible. Automation is increasingly being used to flag problem content before a human moderator checks it.

Twitter CTO Parag Agrawal says its acquisition of Fabula is “to improve the health of the conversation, with expanding applications to stop spam and abuse and other strategic priorities in the future.”

Fabula has developed the ability to analyse “very large and complex data sets” for signs of network manipulation and can identify patterns that other machine-learning techniques can’t, according to Agrawal.

In addition, Fabula has created a truth-risk score to identify misinformation. The score is generated using data from trusted fact-checking sources like PolitiFact and Snopes. Armed with the score, Twitter can determine how trustworthy a claim is and perhaps even make that assessment visible to users.
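Fabula hasn’t published its model, but a truth-risk score of this kind is typically some calibrated function of propagation and fact-checking signals. The feature names, weights, and logistic form below are purely illustrative assumptions:

```python
import math

def truth_risk(features, weights, bias=0.0):
    """Toy truth-risk score in [0, 1] computed from propagation
    features (e.g. share velocity, fraction of bot-like spreaders,
    whether trusted fact-checkers have disputed the claim). The
    features, weights, and logistic form are illustrative only;
    Fabula's actual model is not public."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

weights = {"share_velocity": 1.5, "bot_fraction": 2.0, "fact_check_disputed": 3.0}
claim = {"share_velocity": 0.9, "bot_fraction": 0.6, "fact_check_disputed": 1.0}
print(round(truth_risk(claim, weights, bias=-2.0), 3))  # close to 1: high risk
```

The interesting part of Fabula’s approach is reportedly the features themselves, derived from how a claim spreads through the network rather than from the claim’s text alone.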

A post on Twitter’s blog yesterday hints at the possible direction: “Context on Tweets and our enforcement is important in understanding our rules, so we’ll add more notices within Twitter for clarity, such as if a Tweet breaks our rules but remains on the service because the content is in the public interest.”

Often fake news is used for political gain or to cause turmoil. Russia is regularly linked with modern disinformation campaigns, but even Western democracies have used it to influence both national and international affairs.

The US presidential elections were influenced by fake news. Last year, Congress released more than 3,000 Facebook ads purchased by Russian-linked agents ahead of the 2016 presidential contest.

In Fabula AI’s home country, some allege fake news was behind the UK’s vote to leave the EU in the 2016 referendum. There’s less conclusive data behind that allegation, but we do know powerful targeted advertising was used to promote so-called ‘alternative facts’.

Fabula’s team will be joining the Twitter Cortex machine-learning team. Exact terms of the deal or how Fabula’s technology will be used have not been disclosed.


Trump speech ‘DeepFake’ shows a present AI threat
https://news.deepgeniusai.com/2019/01/14/trump-speech-deepfake-ai-threat/
Mon, 14 Jan 2019 12:19:09 +0000

A so-called ‘DeepFake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network, showing a very present AI threat.

The station, Q13, broadcast a doctored Trump speech in which he somehow appeared even more orange and pulled amusing faces.

You can see a side-by-side comparison with the original below:

https://www.youtube.com/watch?v=UZLs11uSg-A&feature=youtu.be

Following the broadcast, a Q13 employee was sacked. It’s unclear if the worker created the clip or whether it was just allowed to air.

The video could be the first DeepFake to be televised, but it won’t be the last. Social media provides even less filtration and enables fake clips to spread with ease.

We’ve heard much about sophisticated disinformation campaigns. At one point, the US was arguably the most prominent creator of such campaigns to influence foreign decisions.

Russia, in particular, has been linked to vast disinformation campaigns. These have primarily targeted social media with things such as their infamous Twitter bots.

According to Pew Research, just five percent of Americans have ‘a lot of trust’ in the information they get from social media, far lower than the trust they place in national and local news organisations.

It’s not difficult to imagine an explosion in doctored videos that appear like they’re coming from trusted outlets. Combining the reach of social media with the increased trust Americans have in traditional news organisations is a dangerous concept.

While the Trump video appears to be a bit of fun, the next could be used to influence an election or big policy decision. It’s a clear example of how AI is already creating new threats.

