Google is facing a lawsuit that could fundamentally alter the future of online speech and content moderation. The case revolves around Section 230 of the Communications Decency Act, which protects internet companies from being sued over content posted by third parties. Google is being sued by the family of a woman who was killed in a 2015 ISIS terror attack in Paris. The family claims that Google-owned YouTube is liable for damages because it failed to remove ISIS terrorist videos and even boosted their prominence by recommending them to users.
The case is significant because it could narrow the scope of Section 230, which is sometimes referred to as “the Magna Carta of the internet.” If the Supreme Court rules in favor of the family, it could set a precedent allowing tech companies to be held liable for harmful content that their recommendation systems promote. This could have far-reaching implications for the tech industry and for online speech and content moderation more broadly.
Google has defended itself by arguing that eroding the protections offered by Section 230 could fundamentally change how the internet works. The company has argued that the law has been instrumental in allowing internet companies to organize and distribute content, and that weakening it could lead to harmful or even illegal content spreading online. Google has also argued that the family has not provided evidence that the Paris attackers were inspired by YouTube videos.
There is bipartisan support in Congress for rewriting Section 230, with both liberals and conservatives claiming that the law has been abused by tech companies. Liberals allege that the law has allowed disinformation and harmful content to spread online, while conservatives claim that it has been used to curtail right-leaning speech. Some legal scholars warn that weakening Section 230’s protections could chill free speech online by pushing platforms to preemptively censor or remove lawful content.
The Supreme Court’s decision, expected in late June or early July, is being closely watched by tech companies, legal scholars, and policymakers alike. Regardless of the outcome, the case is likely to have far-reaching consequences for the tech industry and for the future of online speech and content moderation.