When the Supreme Court hears a landmark case on Section 230 later in February, all eyes will be on the biggest players in tech—Meta, Google, Twitter, YouTube.
A legal provision tucked into the Communications Decency Act, Section 230 has provided the foundation for Big Tech’s explosive growth, protecting social platforms from lawsuits over harmful user-generated content while giving them leeway to remove posts at their discretion (though they are still required to take down illegal content, such as child pornography, if they become aware of its existence). The case could end in a range of outcomes; if Section 230 is weakened or reinterpreted, these companies may be forced to transform their approach to content moderation and overhaul their platform architectures in the process.
But another big issue at stake has received much less attention: depending on the outcome of the case, individual users of sites may suddenly be liable for run-of-the-mill content moderation. Many sites rely on community moderation by users to edit, shape, remove, and promote other users’ content online: think of Reddit upvotes, or edits to a Wikipedia page. What might happen if those users were forced to take on legal risk every time they made a content decision?