Ofcom: AI is not ready to effectively moderate online content ‘for the foreseeable future’

Ofcom and Cambridge Consultants have teamed up on a report examining the effectiveness of AI-powered online content moderation.

Governments around the world have put increasing amounts of pressure on social networks and communication services to take responsibility for content posted on them. Society itself is becoming more aware of the dangers following live-streamed terror attacks, cyberbullying, political manipulation, and more.

With some platforms having billions of users, manually moderating everything posted is not feasible. Illegal content often stays online until someone reports it and a human moderator reaches a decision (and those moderators sometimes require therapy after prolonged exposure to such content).

Ofcom and Cambridge Consultants’ report suggests that AI could help to reduce the psychological impact on human moderators in a few key ways:

  • Varying the level and type of harmful content they are exposed to.
  • Automatically blurring out parts of the content, which the moderator can optionally choose to view if required for a decision (see the sketch after this list).
  • Letting moderators ‘ask’ the AI questions about the content beforehand, so they can prepare themselves or judge whether it will be particularly difficult for them, perhaps due to past individual experiences.
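As a concrete illustration of the second idea, here is a minimal sketch of blur-by-default review. This is not Ofcom's or any platform's actual system: the function names and the box format are illustrative assumptions, and the flagged regions are assumed to come from some upstream detection model.

```python
from PIL import Image, ImageFilter

def blur_flagged_regions(image_path, flagged_boxes, radius=25):
    """Return a copy of the image with each flagged box Gaussian-blurred."""
    img = Image.open(image_path).convert("RGB")
    for box in flagged_boxes:  # box = (left, top, right, bottom) pixel coords
        region = img.crop(box)
        img.paste(region.filter(ImageFilter.GaussianBlur(radius)), box)
    return img

def reveal_region(image_path, box):
    """Moderator explicitly opted in: return the original, un-blurred region."""
    return Image.open(image_path).convert("RGB").crop(box)

# Usage: show the blurred version by default; reveal a region only on request.
# safe_view = blur_flagged_regions("upload.jpg", [(100, 80, 300, 260)])
# safe_view.save("upload_blurred.jpg")
```

The point of the design is that the moderator, not the system, decides whether viewing the raw content is necessary for the decision.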

The slow pace of manual moderation often means harmful content is seen by millions before it is taken down. While most AI moderation systems deployed today still require human oversight, advances in content detection are helping platforms flag and remove material faster.

Earlier this month, Facebook-owned Instagram unveiled improvements to the AI-powered moderation system it uses to discourage troublesome content from ever being posted. While the feature was previously restricted to comments, Instagram will now ask users “Are you sure you want to post this?” for any post it deems likely to cause distress to others.
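The pattern behind such pre-post nudges is simple, even though Instagram's actual models are not public. A toy sketch, with a keyword-counting stand-in where a real trained classifier would sit:

```python
def score_toxicity(text: str) -> float:
    """Hypothetical stand-in: a real system would call a trained classifier."""
    hostile_terms = ("idiot", "loser", "hate you")
    hits = sum(term in text.lower() for term in hostile_terms)
    return min(1.0, hits / 2)

def submit_post(text: str, threshold: float = 0.5) -> bool:
    """Return True if the post is published, False if the author withdraws it."""
    if score_toxicity(text) >= threshold:
        answer = input("Are you sure you want to post this? (y/n) ")
        if answer.strip().lower() != "y":
            return False  # author reconsidered; nothing is published
    print(f"Posted: {text}")  # stand-in for the real publish step
    return True
```

Nothing is blocked outright: the classifier only triggers a moment of friction, leaving the final decision with the author.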

Ofcom is the UK’s telecoms regulator, so its report should help to shape workable policies rather than the generic demands of politicians who lack a real understanding of how these technologies work (can anyone remember the calls to ban encryption and/or knowingly create backdoors?).

The report essentially concludes that effective, fully automated content moderation will not be possible for the foreseeable future.

Chief among the reasons fully automated moderation is problematic is that, while some harmful content can be identified by analysing it in isolation, other content requires a full understanding of context. For example, the researchers note that regional and cultural differences in national laws, and in what is socially acceptable, are difficult for today’s AI moderation solutions to account for but trivial for local human moderators.

Some types of content are also easier to analyse than others. Photos and pre-recorded videos can be analysed before they are posted, whereas live-streams pose a particular difficulty: what appears to be an innocent scene can turn harmful very quickly.
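A minimal sketch of why live-streams are harder: frames must be scored as they arrive, and a stream that was benign for an hour can turn harmful in seconds. Everything here is an assumption for illustration, not any platform's actual pipeline, with `classify_frame` standing in for a real model:

```python
def monitor_stream(frames, classify_frame, threshold=0.9, sample_every=30):
    """Score every Nth frame; escalate the moment one crosses the threshold."""
    for i, frame in enumerate(frames):
        if i % sample_every:
            continue  # sampling keeps per-frame model cost manageable
        if classify_frame(frame) >= threshold:
            return ("escalate", i)  # hand off to a human moderator immediately
    return ("clean", None)

# Usage with dummy inputs; in practice frames would come from a video decoder.
# verdict = monitor_stream(range(3600), lambda f: 0.95 if f >= 1800 else 0.1)
# -> ("escalate", 1800): benign for the first half, then harmful mid-stream.
```

Unlike a pre-upload check, there is no complete file to analyse in advance, which is why a human hand-off path matters more here than anywhere else.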

“Human moderators will continue to be required to review highly contextual, nuanced content,” says Cambridge Consultants’ report. “However, AI-based content moderation systems can reduce the need for human moderation and reduce the impact on them of viewing harmful content.”

You can find a copy of the full report here (PDF).
