By Mike Masnick
If you’ve been around the content moderation/trust and safety debates for many years, you may remember that in the early 2000s, Yahoo was understandably slammed for providing data to the Chinese government that allowed it to track down and jail a journalist who had been critical of that government. This was a wake-up call for many about the international nature of the internet, and about the fact that not every “government request” is equal. This, in fact, is a key point that often comes up in discussions about new laws requiring certain content moderation rules to be followed, because not all governments look at content moderation the same way. Efforts by, say, the US government to force internet companies to “block copyright infringement” can and will be used by other countries to justify censorship.
The video conferencing software Zoom is going through what appears to be an accelerated bout of historical catch-up, as its popularity has rocketed thanks to global pandemic lockdowns. It keeps running into problems that plenty of other companies have gone through before it, the latest being, as noted above, that requests from different governments are not equal. It started when Zoom closed the account of a US-based Chinese activist who had used Zoom to hold an event commemorating the Tiananmen Square massacre. Zoom initially admitted that it shut down the account to “comply” with a request from the Chinese government:
“Just like any global company, we must comply with applicable laws in the jurisdictions where we operate. When a meeting is held across different countries, the participants within those countries are required to comply with their respective local laws. We aim to limit the actions we take to those necessary to comply with local law and continuously review and improve our process on these matters. We have reactivated the US-based account.”
That response did not satisfy anyone, and as more and more complaints came in, Zoom put out a much better response, which more or less showed that the company is coming up to speed on a ton of lessons already learned by others (at the very least, it suggests Zoom should hire some experienced trust and safety staffers…). To its credit, though, Zoom admits that in taking the Chinese government’s requests at face value, it “made two mistakes”:
We strive to limit actions taken to only those necessary to comply with local laws. Our response should not have impacted users outside of mainland China. We made two mistakes:
We suspended or terminated the host accounts, one in Hong Kong SAR and two in the U.S. We have reinstated these three host accounts.
We shut down the meetings instead of blocking the participants by country. We currently do not have the capability to block participants by country. We could have anticipated this need. While there would have been significant repercussions, we also could have kept the meetings running.
There are reasonable arguments that a company like Zoom should have anticipated issues like this, but you can at least give the company credit for admitting its mistakes directly, and for coming up with plans and policies to avoid repeating them in the future.
But there is a larger point here that often gets lost in these discussions about trust and safety and content moderation. So much of the debate rests on the assumptions that (1) requests to block or take down content or accounts are made in good faith, and (2) those making the requests share similar values. That’s frequently not the case at all. We’ve shown this over and over again here on Techdirt, where laws against “fake news” are used to silence critics of a ruling class.
So for anyone pushing for laws that would require internet companies to somehow ban or block “bad behavior” and “bad actors”: you need to be able to come up with a definition of those things that won’t be horribly abused by authoritarian governments around the globe.