
These 26 words ‘created the internet.’ Now the Supreme Court may be coming for them




Washington CNN — Congress, the White House and now the US Supreme Court are all focusing their attention on a federal law that’s long served as a legal shield for online platforms.

This week, the Supreme Court is set to hear oral arguments on two pivotal cases dealing with online speech and content moderation. Central to the arguments is “Section 230,” a federal law that’s been roundly criticized by both Republicans and Democrats for different reasons but that tech companies and digital rights groups have defended as vital to a functioning internet.

Tech companies involved in the litigation have cited the 27-year-old statute as part of an argument for why they shouldn’t have to face lawsuits alleging they gave knowing, substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.

A set of rulings against the tech industry could significantly narrow Section 230 and its legal protections for websites and social media companies. If that happens, the Court’s decisions could expose online platforms to an array of new lawsuits over how they present content to users. Such a result would represent the most consequential limitations ever placed on a legal shield that predates today’s biggest social media platforms and has allowed them to nip many content-related lawsuits in the bud.

And more could be coming: the Supreme Court is still mulling whether to hear several additional cases with implications for Section 230, while members of Congress have expressed renewed enthusiasm for rolling back the law’s protections for websites, and President Joe Biden has called for the same in a recent op-ed.

Here’s everything you need to know about Section 230, the law that’s been called “the 26 words that created the internet.”

Passed in 1996 in the early days of the World Wide Web, Section 230 of the Communications Decency Act was meant to nurture startups and entrepreneurs. The legislation’s text recognized that the internet was in its infancy and risked being choked out of existence if website owners could be sued for things that other people posted.

One of the law’s architects, Oregon Democratic Sen. Ron Wyden, has said that without Section 230, “all online media would face an onslaught of bad-faith lawsuits and pressure campaigns from the powerful” seeking to silence them.

He’s also said Section 230 directly empowers websites to remove content they believe is objectionable by creating a “good Samaritan” safe harbor: websites enjoy immunity for moderating content in the ways they see fit, not according to others’ preferences, although the federal government can still sue platforms for violating criminal or intellectual property laws.

Contrary to what some politicians have claimed, Section 230’s protections do not hinge on a platform being politically or ideologically neutral. The law also does not require that a website be classified as a publisher in order to “qualify” for liability protection. Apart from meeting the definition of an “interactive computer service,” websites need not do anything to gain Section 230’s benefits; they apply automatically.

The law’s central provision holds that websites (and their users) cannot be treated legally as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.

The seemingly simple language of Section 230 belies its sweeping impact. Courts have repeatedly accepted Section 230 as a defense against claims of defamation, negligence and other allegations. In the past, it’s protected AOL, Craigslist, Google and Yahoo, building up a body of law so broad and influential as to be considered a pillar of today’s internet.

“The free and open internet as we know it couldn’t exist without Section 230,” the Electronic Frontier Foundation, a digital rights group, has written. “Important court rulings on Section 230 have held that users and services cannot be sued for forwarding email, hosting online reviews, or sharing photos or videos that others find objectionable. It also helps to quickly resolve lawsuits that have no legal basis.”

In recent years, however, critics of Section 230 have increasingly questioned the law’s scope and proposed restrictions on the circumstances in which websites may invoke the legal shield.

For years, much of the criticism of Section 230 has come from conservatives who say that the law lets social media platforms suppress right-leaning views for political reasons.

By safeguarding platforms’ freedom to moderate content as they see fit, Section 230 does shield websites from lawsuits that might arise from that type of viewpoint-based content moderation, though social media companies have said they do not make content decisions based on ideology but rather on violations of their policies.

The Trump administration tried to turn some of those criticisms into concrete policy that would have had significant consequences had it succeeded. For example, in 2020, the Justice Department released a legislative proposal for changes to Section 230 that would have created an eligibility test for websites seeking the law’s protections. That same year, the White House issued an executive order calling on the Federal Communications Commission to interpret Section 230 more narrowly.

The executive order faced a number of legal and procedural problems, not least that the FCC is not part of the judicial branch, does not regulate social media or content moderation decisions, and is an independent agency that, by law, does not take direction from the White House.

Even though the Trump-era efforts to curtail Section 230 never bore fruit, conservatives are still looking for opportunities to do so. And they aren’t alone. Since 2016, when social media platforms’ role in spreading Russian election disinformation broke open a national dialogue about the companies’ handling of toxic content, Democrats have increasingly railed against Section 230.

By safeguarding platforms’ freedom to moderate content as they see fit, Democrats have said, Section 230 has allowed websites to escape accountability for hosting hate speech and misinformation that others have recognized as objectionable but that social media companies themselves can’t or won’t remove.

The result is a bipartisan hatred for Section 230, even if the two parties cannot agree on why Section 230 is flawed or what policies might appropriately take its place.

“I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” said Rhode Island Democratic Sen. Sheldon Whitehouse at a hearing last week of the Senate Judiciary Committee. “The problem, where we bog down, is that we want 230-plus. We want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what the ‘XYZ’ are.”

The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.

Tech critics have called for added legal exposure and accountability. “The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law. It is highly irregular for a global industry that wields staggering influence to be protected from judicial inquiry,” wrote the Anti-Defamation League in a Supreme Court brief.

For the tech giants, and even for many of Big Tech’s fiercest competitors, narrowing Section 230 would be a bad thing, they say, because it would undermine what has allowed the internet to flourish. It would potentially put many websites and users into unwitting and abrupt legal jeopardy, and it would dramatically change how some websites operate in order to avoid liability.

The social media platform Reddit has argued in a Supreme Court brief that if Section 230 is narrowed so that its protections do not cover a site’s recommendations of content a user might enjoy, that would “dramatically expand Internet users’ potential to be sued for their online interactions.”

“‘Recommendations’ are the very thing that make Reddit a vibrant place,” wrote the company and several volunteer Reddit moderators. “It is users who upvote and downvote content, and thereby determine which posts gain prominence and which fade into obscurity.”

People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise tortious post that was created by someone else.”

While this week’s oral arguments won’t be the end of the debate over Section 230, the outcome of the cases could bring changes unlike any the internet has seen before — for better or for worse.


