Sing it with me, folks: content moderation is impossible to do well at scale. Over the last few weeks, all of the big social media platforms have talked up their intense efforts to block misinformation about Covid-19. It appears to be something of an all-hands-on-deck situation for employees (mostly working from home) at these companies. Indeed, earlier this week, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube all released a joint statement about how they're working together to fight Covid-19 misinformation, expressing hope that other platforms would join in.
However, battling misinformation is not always so easy, as Facebook discovered yesterday afternoon, when a bunch of folks started noticing that Facebook was blocking all sorts of perfectly normal content, including NY Times stories about Covid-19. Now, we can joke all we want about some of the NY Times' poorer reporting, but to argue that its reporting on Covid-19 is misinformation would be, well, misinformation itself. There was some speculation, along the lines of YouTube's earlier warning, that this could be due to content moderators being sent home and not being allowed to do their content moderation duties remotely over privacy concerns. The company, however, said that it was "a bug in an anti-spam system" and was "unrelated to any changes in our content moderation workforce." Whether you buy that or not is your choice.
Still, it's a reminder that any effort to block misinformation is going to be fraught with problems and mistakes. Trying to adapt rapidly, especially on a big (the biggest) news story, with rapidly changing facts and new information (and misinformation) arriving all the time, is going to run into problems sooner or later.