Home Office unveils AI tool for tackling extremist content

Impatient with technology leaders' progress on measures to fight the spread of extremist content, the Home Office has unveiled its own AI tool, which it claims can detect such material with impressive accuracy.

The UK government has long called on tech giants to ensure any extremist content is removed from their platforms quickly. A report from the Home Affairs Committee slammed the likes of Facebook, Twitter, and YouTube for “consciously failing” to remove terrorist propaganda.

Most law-abiding citizens want to see extremist ideology pulled from the internet to prevent it being spread, especially, as we have seen, to manipulate young minds. However, some question the feasibility of automatic tools, others are concerned about legitimate content being removed, and a few believe such policing runs counter to internet freedom.

In a blog post, the Open Rights Group set out eight thoughts:

  1. What really is the accuracy of this system? Small error rates translate into very large numbers of errors at scale. We see this with more general internet filters in the UK, where our blocked.org.uk project regularly uncovers and reports errors.
  2. Machines are not determining meaning. Machines can only attempt to pattern match, with the assumption that content and form imply purpose and meaning. This explains how errors can occur, particularly in missing new material.
  3. Context is everything. The same content can, in different circumstances, be legal or illegal.
  4. The learning is only as good as the underlying material. The underlying database is used to train machines to pattern match. Therefore the quality of the initial database is very important.
  5. Machines are making no legal judgment. The machines are not making a legal determination. This means a company’s decision to act on what the machine says is absent of clear knowledge.
  6. Humans and courts need to be able to review complaints. Anyone who posts material must be able to get human review, and recourse to courts if necessary.
  7. Whose decision is this exactly? The government wants small companies to use the database to identify and remove material. If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake?
  8. How do we know about errors? Any takedown system tends towards overzealous takedowns. We hope the identification system is built for accuracy and prefers to miss material rather than remove the wrong things.

The Home Office claims its tool is able to identify extremist content with a 99.99 percent success rate. That is an impressive figure, but the department says one in every 50 videos would still require human moderation to be certain. With the number of videos uploaded across the various networks every day, that will still add up.
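As a rough illustration of how quickly that adds up, here is a minimal back-of-the-envelope sketch; the one-million-uploads-per-day volume is an assumption chosen purely for round numbers, not a figure from the Home Office or the platforms.

```python
# Illustrative arithmetic only: the daily upload volume below is a hypothetical
# assumption, not a statistic quoted by the Home Office or the platforms.
DAILY_UPLOADS = 1_000_000        # assumed videos uploaded per day across platforms
CLAIMED_ACCURACY = 0.9999        # the Home Office's stated 99.99 percent success rate
HUMAN_REVIEW_FRACTION = 1 / 50   # one in every 50 videos needing human moderation

needs_human_review = DAILY_UPLOADS * HUMAN_REVIEW_FRACTION
misclassified = DAILY_UPLOADS * (1 - CLAIMED_ACCURACY)

print(f"Videos flagged for human review per day: {needs_human_review:,.0f}")  # ~20,000
print(f"Videos misclassified per day:            {misclassified:,.0f}")       # ~100
```

Even at a 99.99 percent success rate, under those assumed volumes the material still needing human eyes would keep moderation teams busy.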

On average, tech firms take around 36 hours to remove extremist content. Human moderators have to watch some of the most disturbing content, which can range from beheadings to child sex abuse, and some have understandably needed therapy. Technology like the Home Office’s could help to speed up removal and reduce the mental impact on moderators.

If the tech giants do not take a tougher stance against extremist content, the UK government has not ruled out legislation to force them to act.

Speaking to the BBC, Home Secretary Amber Rudd said: “We’re not going to rule out taking legislative action if we need to do it, but I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got. This has to be in conjunction, though, of larger companies working with smaller companies.”

What are your thoughts on the use of AI to tackle extremist content?
