Google is returning to using humans for YouTube moderation after repeated errors with its AI system.
Moderating a large network like YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are subjected to the worst of humanity and often end up requiring therapy. They’re the unsung heroes.
AI has been hailed as a way to deal with some of these issues, either by automating the moderation process entirely or by offering humans a helping hand.
Google was left with little choice but to give more power to its AI moderators as the COVID-19 pandemic took hold… but it hasn’t been smooth sailing.
In late August, YouTube said that it had removed 11.4 million videos over the preceding three months, the most since the site launched in 2005.

That figure alone should raise a few eyebrows. If a team of humans were removing that many videos, they'd probably deserve quite the pay rise.
Of course, most of the video removals weren’t done by humans. Many of the videos didn’t even violate the guidelines.
Neal Mohan, chief product officer at YouTube, told the Financial Times:
“One of the decisions we made [at the beginning of the COVID-19 pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”
Some of the removals left content creators bewildered, angry, and, in some cases, out of pocket.

Around 320,000 of the videos taken down were appealed, and around half of those were reinstated.
Deciding which content to ultimately remove feels like one of the many tasks that needs human involvement; humans are far better at detecting nuance and things like sarcasm.

However, the sheer scale of content needing moderation means AI is also required to automate part of the process.
“Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” Mohan said. “That’s the power of machines.”
AI can also help protect humans from the worst of the content. Content detection systems are being built to automatically blur imagery such as child abuse just enough that human moderators can identify what needs removing, while limiting the psychological impact on them.
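To make the idea concrete, here is a minimal, hypothetical sketch of how such a review tool might work; it is not YouTube's actual system, and the `blur_for_review` helper and file names are assumptions for illustration, using the Pillow imaging library:

```python
from PIL import Image, ImageFilter

def blur_for_review(frame_path: str, radius: int = 30) -> Image.Image:
    """Apply a heavy Gaussian blur to a flagged video frame so a
    human moderator can confirm the removal category without being
    exposed to the content in full detail. Purely illustrative."""
    frame = Image.open(frame_path)
    return frame.filter(ImageFilter.GaussianBlur(radius))

# Hypothetical usage: blur a frame a classifier has flagged,
# then save it for the human review queue (queueing omitted).
blurred = blur_for_review("flagged_frame.jpg")
blurred.save("flagged_frame_blurred.jpg")
```

The design choice here is that the blur radius stays large by default, so reviewers see only enough to categorise the content rather than its full detail.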
Some believe AI is better suited to deciding which content should be removed because it applies consistent rules rather than a human's natural biases, such as political leaning, but we know human biases seep into algorithms anyway.
In May, YouTube admitted to deleting comments critical of the Chinese Communist Party (CCP), later blaming an “error with our enforcement systems” for the mistakes. Senator Josh Hawley even wrote to Google CEO Sundar Pichai seeking answers to “troubling reports that your company has resumed its long pattern of censorship at the behest of the Chinese Communist Party.”
Google appears to have quickly realised that replacing humans entirely with AI is rarely a good idea. The company says many of the human moderators who were “put offline” during the pandemic are now coming back.
(Photo by Rachit Tank on Unsplash)