What is content moderation?
One of the defining features of the modern world is that we spend our digital lives wading through a sea of words and images, much of it generated by ordinary people rather than professionals. There are literally trillions of Facebook and Reddit posts, tweets, Instagram images, TikTok and YouTube videos and website comments.
Inevitably, not all this content is appropriate, safe or even legal. That’s where moderators come in.
What content moderators do
Content moderators are paid by tech companies to sift through the text, images and videos posted on the platforms they run, such as social media sites. They determine whether that content should be deleted, restricted or left up. They also often decide whether an account or profile should face action, such as suspension or a ban, for posting inappropriate content or harassing other users. Their decisions are meant to be based on both the laws of the country where the content is viewed and the rules set by the platform.
Working to strict targets, content moderators often spend their shifts reviewing a constant feed of potentially harmful content flagged by automated systems or reported by individual users.
How many people work in content moderation?
Although it is difficult to say exactly, it may be in the hundreds of thousands. Meta, which owns Facebook and Instagram, has previously said it employs about 15,000 moderators. TikTok says it has more than 40,000 people working on moderation. They are likely some of the largest employers, but virtually any service that wants to keep the worst content off its platform has to employ at least some humans to do it.
Many of these jobs are based in middle- and low-income countries, where wages are lower, often via outsourcing companies. Users in rich regions such as the US and Europe are therefore shielded from harmful content by people working thousands of miles away.
What challenges do moderators face?
The work can often be traumatic, as our reporting on moderators for TikTok and the world’s biggest dating apps shows. Workers can be exposed to extreme violence, images of sexual abuse and other disturbing content. This can take a huge toll on their mental health, and they often receive inadequate support.
They are under pressure to make the right call every time, and the job is hard. Harmful content is often missed, while important or innocuous content is mistakenly taken down. The problem is particularly acute in languages other than English, which tech companies typically spend less money policing.
The future of content moderation
Most big tech companies are looking at ways to cut the costs of moderation and handle the growing volume of content, either through further outsourcing or through new artificial intelligence-driven systems. These AI systems are trained on the decisions made by the very humans they are ultimately meant to replace.
However, with ever more people using social media and huge amounts of harmful material still slipping through, there is arguably a need for more content moderators, not fewer, to do the difficult work of keeping the internet safe for everyone else.
Tech editor: Jasper Jackson
Deputy editors: Katie Mark and Chrissie Giles
Editor: Franz Wild
Production editors: Frankie Goodway and Emily Goddard
Fact checker: Ero Partsakoulaki
Our reporting on Big Tech is funded by Open Society Foundations. None of our funders have any influence over our editorial decisions or output.