| Long title | Protection For 'Good Samaritan' Blocking and Screening of Offensive Material |
|---|---|
| Nicknames | Section 230 |
| Enacted by | the 104th United States Congress |
| Effective | February 8, 1996 |
| Acts amended | Communications Act of 1934; Telecommunications Act of 1996 |
| U.S.C. sections created | 47 U.S.C. § 230 |
In the United States, Section 230 is a section of the Communications Act of 1934 that was enacted as part of the Communications Decency Act of 1996, which is Title V of the Telecommunications Act of 1996, and generally provides immunity for online computer services with respect to third-party content generated by their users. At its core, Section 230(c)(1) provides immunity from liability for providers and users of an "interactive computer service" who publish information provided by third-party users:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
Section 230 was developed in response to a pair of lawsuits against online discussion platforms in the early 1990s that resulted in different interpretations of whether the service providers should be treated as publishers or, alternatively, as distributors of content created by their users. Its authors, Representatives Christopher Cox and Ron Wyden, believed interactive computer services should be treated as distributors, not liable for the content they distributed, as a means to protect the growing Internet at the time.
Section 230 was enacted as part of the Communications Decency Act (CDA) of 1996 (a common name for Title V of the Telecommunications Act of 1996), formally codified as part of the Communications Act of 1934 at 47 U.S.C. § 230. [a] After passage of the Telecommunications Act, the CDA was challenged in the courts, and its anti-indecency provisions were ruled unconstitutional by the Supreme Court in Reno v. American Civil Liberties Union (1997), though Section 230 was determined to be severable from the rest of the legislation and remained in place. Since then, courts have repeatedly upheld the constitutionality of Section 230 against legal challenges.
Section 230 protections are not limitless: providers are still required to remove material that is illegal on a federal level, such as in copyright infringement cases. In 2018, Section 230 was amended by the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) to require the removal of material violating federal and state sex trafficking laws. In the following years, Section 230's protections came under greater scrutiny on issues related to hate speech and ideological bias, in relation to the power that technology companies hold over political discussions, and became a major issue during the 2020 United States presidential election, especially with regard to alleged censorship of conservative viewpoints on social media.
Passed when Internet use was just starting to expand in both breadth of services and range of consumers in the United States, [2] Section 230 has frequently been referred to as a key law that allowed the Internet to develop. [3]
Section 230 has two primary parts, both listed under §230(c) as the "Good Samaritan" portion of the law. Under §230(c)(1), as identified above, an information service provider shall not be treated as a "publisher or speaker" of information from another provider. Section 230(c)(2) provides immunity from civil liability for information service providers that remove or restrict content from their services they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected", as long as they act "in good faith" in doing so. [4]
In analyzing the availability of the immunity offered by Section 230, courts generally apply a three-prong test. A defendant must satisfy each of the three prongs to gain the benefit of the immunity: [5]
1. The defendant must be a "provider or user" of an "interactive computer service."
2. The cause of action asserted by the plaintiff must treat the defendant as the "publisher or speaker" of the harmful information at issue.
3. The information must be "provided by another information content provider," i.e., the defendant must not be the party that created or developed the information at issue.
Section 230 immunity is not unlimited. The statute specifically excepts federal criminal liability (§230(e)(1)), electronic privacy violations (§230(e)(4)) and intellectual property claims (§230(e)(2)). [6] There is also no immunity from state laws that are consistent with Section 230, though inconsistent state criminal laws have been held preempted in cases such as Backpage.com, LLC v. McKenna [7] and Voicenet Communications, Inc. v. Corbett [8] (agreeing that "the plain language of the CDA provides ... immunity from inconsistent state criminal laws").

What constitutes "publishing" under the CDA is somewhat narrowly defined by the courts. The Ninth Circuit held that "Publication involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content." [9] Thus, the CDA does not provide immunity with respect to content that an interactive service provider creates or develops entirely by itself. [10] [11] CDA immunity also does not bar an action based on promissory estoppel. [12] [13]

As of mid-2016, courts had issued conflicting decisions regarding the scope of the intellectual property exclusion set forth in §230(e)(2). For example, in Perfect 10, Inc. v. CCBill, LLC, [14] the Ninth Circuit Court of Appeals ruled that the exception for intellectual property law applies only to federal intellectual property claims such as copyright infringement, trademark infringement, and patents, reversing a district court ruling that the exception applies to state-law right of publicity claims. [15] The Ninth Circuit's decision in Perfect 10 conflicts with conclusions from other courts, including Doe v. Friendfinder. The Friendfinder court specifically discussed and rejected the lower court's reading of "intellectual property law" in CCBill and held that the immunity does not reach state right of publicity claims. [16]
Two bills passed since the passage of Section 230 have added further limits to its protections. Under the Digital Millennium Copyright Act of 1998, service providers must comply with additional requirements regarding copyright infringement to maintain safe harbor protections from liability, as defined in the DMCA's Title II, the Online Copyright Infringement Liability Limitation Act. [17] The Stop Enabling Sex Traffickers Act (the FOSTA-SESTA act) of 2018 eliminated the safe harbor for service providers in relation to federal and state sex trafficking laws.
Prior to the Internet, case law was clear that a liability line was drawn between publishers of content and distributors of content; a publisher would be expected to have awareness of material it was publishing and thus should be held liable for any illegal content it published, while a distributor would likely not be aware and thus would be immune. This was established in the 1959 case, Smith v. California , [18] where the Supreme Court ruled that putting liability on the provider (a book store in this case) would have "a collateral effect of inhibiting the freedom of expression, by making the individual the more reluctant to exercise it." [19]
In the early 1990s, the Internet became more widely adopted and created means for users to engage in forums and other user-generated content. While this helped to expand the use of the Internet, it also resulted in a number of legal cases putting service providers at fault for the content generated by their users. This concern was raised by legal challenges against CompuServe and Prodigy, early service providers at that time. [20] CompuServe stated it would not attempt to regulate what users posted on its services, while Prodigy employed a team of moderators to validate content. Both companies faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault: by allowing all content to go unmoderated, it acted as a distributor and thus was not liable for libelous content posted by users. However, in Stratton Oakmont, Inc. v. Prodigy Services Co., the court concluded that because Prodigy had taken an editorial role with regard to customer content, it was a publisher and was legally responsible for libel committed by its customers. [21] [b]
Service providers made their Congresspersons aware of these cases, believing that if followed by other courts across the nation, the cases would stifle the growth of the Internet. [22] United States Representative Christopher Cox (R-CA) had read an article about the two cases and felt the decisions were backwards. "It struck me that if that rule was going to take hold then the internet would become the Wild West and nobody would have any incentive to keep the internet civil," Cox stated. [23]
At the time, Congress was preparing the Communications Decency Act (CDA), part of the omnibus Telecommunications Act of 1996, which was designed to make knowingly sending indecent or obscene material to minors a criminal offense. A version of the CDA had passed the Senate, pushed by Senator J. James Exon (D-NE). [24] A grassroots effort in the tech industry sought to convince the House of Representatives to challenge Exon's bill. Based on the Stratton Oakmont decision, Congress recognized that requiring service providers to block indecent content would cause them to be treated as publishers in the context of the First Amendment, and thus liable for other content, such as libel, not addressed in the existing CDA. [20] Cox and fellow Representative Ron Wyden (D-OR) wrote the House bill's section 509, titled the Internet Freedom and Family Empowerment Act, designed to override the decision in Stratton Oakmont so that a service provider could moderate content as necessary and would not have to act as a wholly neutral conduit. The new provision was added to the text of the proposed statute while the CDA was in conference within the House.
The overall Telecommunications Act, with both Exon's CDA and Cox and Wyden's provision, passed both Houses by near-unanimous votes and was signed into law by President Bill Clinton in February 1996. [25] Cox and Wyden's provision became Section 509 of the Telecommunications Act of 1996 and was codified as a new Section 230 of the Communications Act of 1934. The anti-indecency portion of the CDA was challenged immediately upon passage, resulting in the 1997 Supreme Court case Reno v. American Civil Liberties Union, which ruled all of the anti-indecency sections of the CDA unconstitutional but left Section 230, among other provisions of the Act, as law. [26]
Section 230 has often been called "the 26 words that made the Internet". [2] Its passage and the subsequent legal history supporting its constitutionality have been considered essential to the growth of the Internet through the early part of the 21st century. Coupled with the Digital Millennium Copyright Act (DMCA) of 1998, Section 230 gives Internet service providers safe harbors to operate as intermediaries of content without fear of being liable for that content, as long as they take reasonable steps to delete or prevent access to that content. These protections allowed experimental and novel applications on the Internet without fear of legal ramifications, creating the foundations of modern Internet services such as advanced search engines, social media, video streaming, and cloud computing. NERA Economic Consulting estimated in 2017 that Section 230 and the DMCA combined contributed about 425,000 jobs in the U.S. and represented total annual revenue of US$44 billion. [27]
The first major challenge to Section 230 itself was Zeran v. AOL, a 1997 case decided by the Fourth Circuit. [28] The case involved a person who sued America Online (AOL) for failing to remove, in a timely manner, libelous ads posted by AOL users that inappropriately connected his home phone number to the Oklahoma City bombing. The court found for AOL and upheld the constitutionality of Section 230, stating that Section 230 "creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service." [29] The court asserted in its ruling that Congress's rationale for Section 230 was to give Internet service providers broad immunity "to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material." [28] In addition, Zeran notes that "the amount of information communicated via interactive computer services is ... staggering. The specter of tort liability in an area of such prolific speech would have an obviously chilling effect. It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect." [28]
This rule, cementing Section 230's liability protections, has been considered one of the most important judicial precedents affecting the growth of the Internet, allowing websites to incorporate user-generated content without fear of litigation. [30] However, at the same time, this has led to Section 230 being used as a shield by some website owners, as courts have ruled that Section 230 provides complete immunity for ISPs with regard to torts committed by their users over their systems. [31] [32] Through the next decade, most cases involving Section 230 challenges generally fell in favor of service providers, upholding their immunity from liability for third-party content on their sites. [32]
While Section 230 seemed to grant near-complete immunity to service providers in its first decade, new case law around 2008 began to find circumstances in which providers could be liable for user content as a "publisher or speaker" of that content under §230(c)(1). One of the first such cases was Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008). [33] The case centered on the services of Roommates.com, which helped to match renters based on profiles they created on its website; the profile was generated by a mandatory questionnaire that included information about the user's gender and race and preferred roommates' race. The Fair Housing Council of San Fernando Valley argued this created discrimination in violation of the Fair Housing Act and asserted that Roommates.com was liable for it. In 2008, the Ninth Circuit, in an en banc decision, ruled against Roommates.com, agreeing that its required profile system made it an information content provider and thus ineligible for the protections of §230(c)(1). [32]
The decision from Roommates.com was considered to be the most significant deviation from Zeran in how Section 230 was handled in case law. [32] [34] Eric Goldman of the Santa Clara University School of Law wrote that while the Ninth Circuit's decision in Roommates.com was tailored to apply to a limited number of websites, he was "fairly confident that lots of duck-biting plaintiffs will try to capitalize on this opinion and they will find some judges who ignore the philosophical statements and instead turn a decision on the opinion's myriad of ambiguities". [32] [35] Over the next several years, a number of cases cited the Ninth Circuit's decision in Roommates.com to limit some of the Section 230 immunity to websites. Law professor Jeff Kosseff of the United States Naval Academy reviewed 27 cases in the 2015–2016 year involving Section 230 immunity concerns and found that more than half of them had denied the service provider immunity, in contrast to a similar study he had performed covering 2001 to 2002, in which a majority of cases granted the website immunity; Kosseff asserted that the Roommates.com decision was the key factor leading to this change. [32]
Around 2001, a University of Pennsylvania paper warned that "online sexual victimization of American children appears to have reached epidemic proportions" due to the allowances granted by Section 230. [36] Over the next decade, advocates against such exploitation, such as the National Center for Missing and Exploited Children and Cook County Sheriff Tom Dart, pressured major websites to block or remove content related to sex trafficking, leading sites like Facebook, MySpace, and Craigslist to pull such content. Because mainstream sites were blocking this content, those that engaged in or profited from trafficking moved to more obscure sites, leading to the creation of sites like Backpage. In addition to removing trafficking from the public eye, these new sites worked to obscure what trafficking was going on and who was behind it, limiting the ability of law enforcement to take action. [36] Backpage and similar sites quickly came under numerous lawsuits from victims of sex traffickers and exploiters for enabling these crimes, but courts continually found in favor of Backpage due to Section 230. [37] Attempts to block Backpage from using credit card services, so as to deny it revenue, were likewise defeated in the courts in January 2017, as Section 230 allowed Backpage's actions to stand. [38]
Due to numerous complaints from constituents, Congress began an investigation into Backpage and similar sites in January 2017, finding Backpage complicit in aiding and profiting from illegal sex trafficking. [39] Subsequently, Congress introduced the FOSTA-SESTA bills: the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), introduced in the House of Representatives by Ann Wagner (R-MO) in April 2017, and the Stop Enabling Sex Traffickers Act (SESTA), introduced in the Senate by Rob Portman (R-OH) in August 2017. Combined, the FOSTA-SESTA bills modified Section 230 to exempt service providers from Section 230 immunity with respect to civil claims and criminal charges related to sex trafficking, [40] removing Section 230 immunity for services that knowingly facilitate or support sex trafficking. [41] The bill passed both Houses and was signed into law by President Donald Trump on April 11, 2018. [42] [43]
The bills were criticized by pro-free speech and pro-Internet groups as a "disguised internet censorship bill" that weakens Section 230 immunity, places unnecessary burdens on Internet companies and intermediaries that handle user-generated content or communications (with service providers required to proactively take action against sex trafficking activities), and requires a "team of lawyers" to evaluate all possible scenarios under state and federal law (which may be financially unfeasible for smaller companies). [44] [45] [46] [47] [48] Critics also argued that FOSTA-SESTA did not distinguish consensual, legal sex offerings from non-consensual ones, and that it would threaten websites otherwise engaged in legal offerings of sex work with liability. [39] Online sex workers argued that the bill would harm their safety, as the platforms they used for offering and discussing sexual services in a legal manner (as an alternative to street prostitution) began to reduce their services or shut down entirely due to the threat of liability under the bill. [49] [50]
Many social media sites, notably the Big Tech companies of Facebook, Google, and Apple, as well as Twitter, have come under scrutiny as a result of the alleged Russian interference in the 2016 United States elections, where it was alleged that Russian agents used the sites to spread propaganda and fake news to swing the election in favor of Donald Trump. These platforms also were criticized for not taking action against users that used the social media outlets for harassment and hate speech against others. Shortly after the passage of FOSTA-SESTA acts, some in Congress recognized that additional changes should be made to Section 230 to require service providers to deal with these bad actors, beyond what Section 230 already provided to them. [51]
In 2020, Supreme Court Justice Clarence Thomas issued a statement respecting the denial of certiorari in Malwarebytes, Inc. v. Enigma Software Group USA, LLC, which referenced Robert Katzmann's dissent in Force v. Facebook. He opined that Section 230 had been interpreted too broadly and could be narrowed or eliminated in a future case, which he urged his colleagues to hear.
Courts have…departed from the most natural reading of the text by giving Internet companies immunity for their own content ... Section 230(c)(1) protects a company from publisher liability only when content is ‘provided by another information content provider.’ Nowhere does this provision protect a company that is itself the information content provider. [52] [53]
Consequently, in 2023 the Supreme Court agreed to hear two cases considering whether social media companies can be held liable for "aiding and abetting" acts of international terrorism when their recommender systems promote such content.
Numerous experts have suggested that changing Section 230 without repealing it entirely would be the optimal way to improve it. [54] Google's former fraud czar Shuman Ghosemajumder proposed in 2021 that full protections should apply only to unmonetized content, to align platforms' content moderation efforts with their financial incentives and to encourage the use of better technology to achieve the necessary scale. [55] Researchers Marshall Van Alstyne and Michael D. Smith supported this idea of an additional duty-of-care requirement. [56] However, journalist Martin Baron has argued that most of Section 230 is essential for social media companies to exist at all. [57]
Some politicians, including Republican senators Ted Cruz (TX) and Josh Hawley (MO), have accused major social networks of displaying a bias against conservative perspectives when moderating content (such as Twitter suspensions). [58] [59] [60] In a Fox News op-ed, Cruz argued that section 230 should only apply to providers that are politically "neutral", suggesting that a provider "should be considered to be a liable 'publisher or speaker' of user content if they pick and choose what gets published or spoke." [61] Section 230 does not contain any requirements that moderation decisions be neutral. [61] Hawley alleged that section 230 immunity was a "sweetheart deal between big tech and big government". [62] [63]
In December 2018, Republican representative Louie Gohmert introduced the Biased Algorithm Deterrence Act (H.R.492), which would remove all section 230 protections for any provider that used filters or any other type of algorithms to display user content when otherwise not directed by a user. [64] [65]
In June 2019, Hawley introduced the Ending Support for Internet Censorship Act (S. 1914), which would remove Section 230 protections from companies whose services have more than 30 million active monthly users in the U.S. and more than 300 million worldwide, or that have over $500 million in annual global revenue, unless they receive a certification from a majority of the Federal Trade Commission that they do not moderate against any political viewpoint and have not done so in the past two years. [66] [67]
There has been criticism—and support—of the proposed bill from various points on the political spectrum. A poll of more than 1,000 voters gave Senator Hawley's bill a net favorability rating of 29 points among Republicans (53% favor, 24% oppose) and 26 points among Democrats (46% favor, 20% oppose). [68] Some Republicans feared that by adding FTC oversight, the bill would continue to fuel fears of a big government with excessive oversight powers. [69] Nancy Pelosi, the Democratic Speaker of the House, has indicated support for the same approach Hawley has taken. [70] The chairman of the Senate Judiciary Committee, Senator Graham, has also indicated support for the same approach Hawley has taken, saying "he is considering legislation that would require companies to uphold 'best business practices' to maintain their liability shield, subject to periodic review by federal regulators." [71]
Legal experts have criticized the Republicans' push to make Section 230 encompass platform neutrality. Wyden stated in response to potential law changes that "Section 230 is not about neutrality. Period. Full stop. 230 is all about letting private companies make their own decisions to leave up some content and take other content down." [72] Kosseff has stated that the Republican intentions are based on a "fundamental misunderstanding" of Section 230's purpose, as platform neutrality was not one of the considerations made at the time of passage. [73] According to Kosseff, the framers' intent was not political neutrality but rather ensuring that providers could make content-removal judgments without fear of liability. [20] There have been concerns that any attempt to weaken Section 230 could actually cause an increase in censorship when services lose their exemption from liability. [63] [74]
Attempts to seek damages from tech companies in court for apparent anti-conservative bias, arguing against Section 230 protections, have generally failed. A lawsuit brought by the non-profit Freedom Watch in 2018 against Google, Facebook, Twitter, and Apple, alleging antitrust violations for using their positions to engage in anti-conservative censorship, was dismissed by the D.C. Circuit Court of Appeals in May 2020, with the judges ruling that First Amendment censorship claims apply only to speech blocked by the government and not by private entities. [75]
In the wake of the 2019 shootings in Christchurch, New Zealand; El Paso, Texas; and Dayton, Ohio, questions were raised about Section 230 and liability for online hate speech. In both the Christchurch and El Paso shootings, the perpetrators posted hate speech manifestos to 8chan, a loosely moderated imageboard known to be favorable to the posting of extreme views. Concerned politicians and citizens called on large tech companies to remove hate speech from the Internet; however, hate speech is generally protected speech under the First Amendment, and Section 230 removes any liability for these tech companies if they decline to moderate such content, as long as it is not illegal. This has given the appearance that tech companies do not need to be proactive against hateful content, allowing such content to proliferate online and contribute to such incidents. [76] [24]
Notable articles on these concerns were published after the El Paso shooting by The New York Times, [76] The Wall Street Journal, [77] and Bloomberg Businessweek, [24] among other outlets, but these were criticized by legal experts including Mike Godwin, Mark Lemley, and David Kaye, as the articles implied that hate speech was protected by Section 230 when it is in fact protected by the First Amendment. The New York Times issued a correction to affirm that the First Amendment, not Section 230, protected hate speech. [78] [79] [80]
Members of Congress have indicated they may pass a law changing how Section 230 applies to hate speech, so as to make tech companies liable for it. Wyden, now a Senator, stated that he intended Section 230 to be both "a sword and a shield" for Internet companies: the "sword" allowing them to remove content they deem inappropriate for their service, and the "shield" allowing them to keep offensive content off their sites without incurring liability. However, Wyden warned that because tech companies have not been willing to use the sword to remove content, they could be at risk of losing the shield. [76] [24] Some have compared Section 230 to the Protection of Lawful Commerce in Arms Act, a law that grants gun manufacturers immunity from certain types of lawsuits when their weapons are used in criminal acts. According to law professor Mary Anne Franks, "They have not only let a lot of bad stuff happen on their platforms, but they've actually decided to profit off of people's bad behavior." [24]
Representative Beto O'Rourke stated that, as part of his 2020 presidential campaign, he intended to introduce sweeping changes to Section 230 to make Internet companies liable for not proactively taking down hate speech. [81] O'Rourke later dropped out of the race. Fellow candidate and former vice president Joe Biden similarly called for Section 230 protections to be weakened or otherwise "revoked" for "big tech" companies—particularly Facebook—stating in a January 2020 interview with The New York Times that "[Facebook] is not merely an internet company. It is propagating falsehoods they know to be false", and that the U.S. needed to "[set] standards" in the same way that the European Union's General Data Protection Regulation (GDPR) set standards for online privacy. [82] [83]
In the aftermath of the Backpage trial and subsequent passage of FOSTA-SESTA, others have noted that Section 230 appears to protect tech companies even from content that is otherwise illegal under United States law. Professor Danielle Citron and journalist Benjamin Wittes found that as late as 2018, several groups deemed terrorist organizations by the United States had been able to maintain social media accounts on services run by American companies, despite federal laws that make providing material support to terrorist groups subject to civil and criminal charges. [84] However, case law from the Second Circuit has held that under Section 230, technology companies are generally not liable for civil claims based on terrorism-related content. [85] U.S. Supreme Court Justice Clarence Thomas has stated that Section 230 gives companies too much immunity in these areas, as expressed in several of his statements accompanying court orders denying certiorari in cases related to Section 230. Thomas believed that the Supreme Court needed to review the limits granted by Section 230. [86]
The Supreme Court heard the cases of Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh in the 2022 term. Gonzalez involved Google's liability for YouTube recommendation features that appeared to promote recruitment videos for ISIS, which led to the death of a U.S. citizen in a 2015 Paris terrorist attack; Google claimed it was not liable under Section 230's protections. [87] In Taamneh, Twitter had been found potentially liable for hosting terrorism-related content from third-party users under the Antiterrorism and Effective Death Penalty Act of 1996, notwithstanding Section 230's protections. [88] The Supreme Court ruled for both Google and Twitter, holding that neither company aided or abetted terrorism under existing laws, but it did not address the Section 230 question. [89]
Many social media sites use in-house algorithmic curation through recommender systems to provide a feed of content to their users based on what each user has previously seen and similar content. Such algorithms have been criticized for pushing violent, racist, and misogynist content to users [90] and for influencing minors to become addicted to social media, affecting their mental health. [91]
Whether Section 230 protects social media firms from liability for what their algorithms produce remains an open question in case law. The Supreme Court considered this question with regard to terrorism content in the aforementioned Gonzalez and Taamneh cases, but neither decision addressed whether Section 230 protects social media firms for the products of their algorithms. [89] In August 2024, the Third Circuit ruled that a lawsuit against TikTok, filed by the parents of a minor who died attempting the "blackout challenge" and who argued that TikTok's algorithmic promotion of the challenge led to their child's death, could proceed, reasoning that because TikTok curated its algorithm, it is not protected by Section 230. [92] Separately, Ethan Zuckerman and the Knight First Amendment Institute at Columbia filed a lawsuit against Facebook, arguing that if the company claims its algorithm for showing content on the Facebook feed is protected by Section 230, then users have the right to use third-party tools to customize what the algorithm shows them and block unwanted content. [93]
In February 2020, the United States Department of Justice held a workshop related to Section 230 as part of an ongoing antitrust probe into "big tech" companies. Attorney General William Barr said that while Section 230 was needed to protect the Internet's growth when most Internet companies were not yet established, "No longer are technology companies the underdog upstarts...They have become titans of U.S. industry", and he questioned the need for Section 230's broad protections. [94] Barr said that the workshop was not meant to make policy decisions on Section 230, but was part of a "holistic review" of Big Tech since "not all of the concerns raised about online platforms squarely fall within antitrust", and that the Department of Justice would rather see reform and better incentives for tech companies to improve online content within the scope of Section 230 than change the law directly. [94] Observers of the sessions stated that the talks focused only on Big Tech and on small sites engaged in areas of revenge porn, harassment, and child sexual abuse, and did not consider many of the intermediate uses of the Internet. [95]
In June 2020, the DOJ issued four major recommendations to Congress for modifying Section 230. [96] [97]
In 2020, several bills were introduced through Congress to limit the liability protections that Internet platforms had from Section 230 as a result of events in the preceding years.
President Donald Trump, during his first administration, was a major proponent of limiting the protections of technology and media companies under Section 230 due to claims of an anti-conservative bias. In July 2019, Trump held a "Social Media Summit" that he used to criticize how Twitter, Facebook, and Google handled conservative voices on their platforms. During the summit, Trump warned that he would seek "all regulatory and legislative solutions to protect free speech". [122]
In late May 2020, President Trump asserted that mail-in voting would lead to massive fraud, in a pushback against the use of mail-in voting (adopted due to the COVID-19 pandemic) for the upcoming 2020 primary elections, both in his public speeches and on his social media accounts. In a Twitter message on May 26, 2020, he stated that "There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent." Shortly after its posting, Twitter moderators marked the message with a "potentially misleading" warning (a process the company had introduced earlier that month, primarily in response to misinformation about the COVID-19 pandemic), [123] linking readers to a special page on its site that provided analysis and fact-checks of Trump's statement from media sources like CNN and The Washington Post; this was the first time Twitter had applied the process to Trump's messages. [124] Jack Dorsey, Twitter's then-CEO, defended the moderation, stating that the company was not acting as an "arbiter of truth" but that "Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves." [125] Trump was angered by this and shortly afterwards threatened to take action to "strongly regulate" technology companies, asserting that these companies were suppressing conservative voices. [126]
On May 28, 2020, Trump signed the "Executive Order on Preventing Online Censorship" (EO 13925), an executive order directing regulatory action at Section 230. [127] In a press conference before signing, Trump stated his rationale: "A small handful of social media monopolies controls a vast portion of all public and private communications in the United States. They've had unchecked power to censor, restrict, edit, shape, hide, alter, virtually any form of communication between private citizens and large public audiences." [128] The EO asserts that media companies that edit content, apart from restricting posts that are violent, obscene, or harassing as outlined in the "Good Samaritan" clause §230(c)(2), are "engaged in editorial conduct" and may forfeit the safe-harbor protection granted in §230(c)(1). [129] The EO thus specifically targets the "Good Samaritan" clause covering media companies' decisions to remove offensive material "in good faith". Courts have interpreted the "in good faith" portion of the statute based on its plain language; the EO purports to establish conditions under which that good faith may be deemed absent, such as when media companies have shown bias in how they remove material from their platforms. The goal of the EO is to remove Section 230 protections from such platforms, leaving them liable for content. [130] Whether a media platform has shown bias would be determined by a rulemaking process set by the Federal Communications Commission in consultation with the Commerce Department, the National Telecommunications and Information Administration (NTIA), and the Attorney General, while the Justice Department and state attorneys general would handle disputes related to bias and report them to the Federal Trade Commission, which would determine whether a federal lawsuit should be filed. Additional provisions would prevent government agencies from advertising on media platforms demonstrated to have such bias. [128]
The EO came under intense criticism and legal analysis after its announcement. [131] Senator Wyden stated that the EO was a "mugging of the First Amendment" and that while there does need to be a thoughtful debate about modern considerations for Section 230, the political spat between Trump and Twitter is not such a consideration. [132] Professor Kate Klonick of St. John's University School of Law in New York considered the EO "political theater" without any weight of authority. [130] The Electronic Frontier Foundation's Aaron Mackey stated that the EO starts from a flawed reading that links §230(c)(1) and §230(c)(2), which were not written to be linked and have been treated in case law as independent statements in the statute, and thus "has no legal merit". [129]
By happenstance, the EO was signed on the same day that riots erupted in Minneapolis, Minnesota, in the wake of the murder of George Floyd, an African-American man who died during an incident involving four officers of the Minneapolis Police Department. Trump tweeted about his conversation with Minnesota's governor Tim Walz about bringing in the National Guard to stop the riots, but concluded with the statement, "Any difficulty and we will assume control but, when the looting starts, the shooting starts", a phrase attributed to Miami Police Chief Walter E. Headley, who used it in 1967 in reference to violent riots. [133] [134] After internal review, Twitter marked the message with a "public interest notice", deeming that it "glorified violence", content it would normally remove for violating the site's terms, but stated to journalists that they "have kept the Tweet on Twitter because it is important that the public still be able to see the Tweet given its relevance to ongoing matters of public importance." [135] Following Twitter's marking of his May 28 tweet, Trump said in another tweet that due to Twitter's actions, "Section 230 should be revoked by Congress. Until then, it will be regulated!" [136]
On June 2, 2020, the Center for Democracy & Technology filed a lawsuit in the United States District Court for the District of Columbia seeking preliminary and permanent injunctions against enforcement of the EO, asserting that the EO created a chilling effect on free speech since it puts all hosts of third-party content "on notice that content moderation decisions with which the government disagrees could produce penalties and retributive actions, including stripping them of Section 230's protections". [137]
The Secretary of Commerce, via the NTIA, sent a petition with a proposed rule to the FCC on July 27, 2020, as the first stage of executing the EO. [138] [139] FCC chair Ajit Pai announced on October 15, 2020, that after reviewing the Commission's authority over Section 230, the FCC would proceed with putting forth proposed rules to clarify Section 230. [140] Pai's announcement, which came shortly after Trump again called for Section 230 revisions after asserting that Big Tech was purposely suppressing reporting on leaked documents concerning Hunter Biden, Joe Biden's son, was criticized by the Democratic FCC commissioners Geoffrey Starks and Jessica Rosenworcel and by the tech industry, with Rosenworcel stating "The FCC has no business being the president's speech police." [141] [142]
A second lawsuit against the EO was filed by activist groups including Rock the Vote and Free Press on August 27, 2020, after Twitter had flagged another of Trump's tweets for misinformation related to mail-in voting fraud. The lawsuit stated that should the EO be enforced, Twitter would not have been able to fact-check tweets like Trump's as misleading, thus allowing the President or other government officials to intentionally distribute misinformation to citizens. [143]
President Biden rescinded the EO on May 14, 2021, along with several of Trump's other orders. [144]
Following the November election, Trump made numerous claims on his social media accounts contesting the results, including claims of fraud. Twitter and other social media companies marked these posts as potentially misleading, as they had previous posts of his. As a result, Trump threatened to veto the defense spending bill for 2021 if it did not contain language repealing Section 230. [145] Trump made good on his threat, vetoing the spending bill on December 23, 2020, in part for not containing a repeal of Section 230. [146] The House voted to override the veto on December 28 by a vote of 322–87, sending the measure to the Senate, which similarly voted to override the veto on January 1, 2021, without adding any Section 230 provisions. [147]
During this period, Trump urged Congress to expand the COVID-19 relief payments in the Consolidated Appropriations Act, 2021, which he had signed into law on December 27, 2020, but also stated that Congress should address the Section 230 repeal and other matters not addressed in the defense bill. Senate majority leader Mitch McConnell stated on December 28 that he would bring legislation later that week combining the expanded COVID-19 relief with legislation to deal with Section 230, as outlined by Trump. [148] Ultimately, no additional legislation was introduced.
In the wake of the United States Capitol attack on January 6, 2021, Pai stated that he would not seek any Section 230 reform before his planned departure from office on January 20, 2021. Pai stated that this was mostly due to the lack of time to implement such rulemaking before his resignation, but also said that he would not "second-guess those decisions" of social media networks under Section 230 to block some of Trump's messages from January 6 that contributed to the violence. [149] In the days that followed, Twitter, Facebook, and other social media services blocked or banned Trump's accounts, claiming his speech during and after the riot was inciting further violence. These actions were supported by some politicians, but led to renewed calls by Democratic leaders to reconsider Section 230, as these politicians believed that Section 230 had led the companies to fail to take any preemptive action against the people who had planned and executed the Capitol riots. [150] [151] Separately, Trump filed class-action lawsuits against Twitter, Facebook, and YouTube in July 2021 related to his bans from the January 2021 period, claiming their actions were unjustifiable and that Section 230 was unconstitutional. [152] Trump's lawsuit against Twitter was dismissed by a federal judge in May 2022, who stated that the suit's First Amendment claims were not actionable against non-government entities like Twitter, though the judge allowed an amended complaint to be filed. [153]
In March 2021, Facebook's Mark Zuckerberg, Alphabet's Sundar Pichai, and Twitter's Jack Dorsey were asked to testify before the House Committee on Energy and Commerce about the role of social media in promoting extremism and misinformation following the 2020 election, with Section 230 expected to be a topic. Prior to the hearing, Zuckerberg proposed an alternative change to Section 230 compared to previously proposed bills. Zuckerberg stated that it would be costly and impractical for social media companies to track down all problematic material, and that it would instead be better to tie Section 230 liability protection to companies demonstrating that they have mechanisms in place to remove such material once it is identified. [154]
In July 2021, Democratic senators Amy Klobuchar and Ben Ray Luján introduced the Health Misinformation Act, which is intended primarily to combat COVID-19 misinformation. It would add a carveout to Section 230 to make companies liable for the publication of "health misinformation" during a "public health emergency" — as established by the Department of Health and Human Services — if the content is promoted to users via algorithmic decisions. [155]
Following Frances Haugen's testimony to Congress that related to her whistleblowing on Facebook's internal handling of content, House Democrats Anna Eshoo, Frank Pallone Jr., Mike Doyle, and Jan Schakowsky introduced the "Justice Against Malicious Algorithms Act" in October 2021, which is in committee as H.R.5596. The bill would remove Section 230 protections for service providers related to personalized recommendation algorithms that present content to users if those algorithms knowingly or recklessly deliver content that contributes to physical or severe emotional injury. [156] "The last few years have proven that the more outrageous and extremist content social media platforms promote, the more engagement and advertising dollars they rake in," said Representative Frank Pallone Jr., the chairman of the Energy and Commerce Committee. "By now it's painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it's a question of how best to do it," he added. [157]
The state of Florida (predominantly Republican after the 2020 election) passed its "deplatforming" Senate Bill 7072 in May 2021, which had been proposed in February 2021 after Trump had been banned from several social media sites. SB 7072 prevents social media companies from knowingly blocking or banning politicians, and grants the Florida Elections Commission the ability to fine these companies for knowing violations, with fines as high as $250,000 per day for state-level politicians.
The bill exempts companies that own theme parks or other large venues within the state, and thus would exempt companies such as Disney, whose parks provide significant tax revenue to the state. [158] The Computer & Communications Industry Association (CCIA) and NetChoice filed suit against the state to block enforcement of the law, in NetChoice v. Moody, asserting that the law violated the First Amendment rights of private companies. [159] Judge Robert Lewis Hinkle of the United States District Court for the Northern District of Florida issued a preliminary injunction against the law on June 30, 2021, stating that "The legislation now at issue was an effort to rein in social-media providers deemed too large and too liberal. Balancing the exchange of ideas among private speakers is not a legitimate governmental interest", and further that the law "discriminates on its face among otherwise identical speakers". [160]
Texas H.B. 20, enacted in September 2021, intended to prevent large social media providers from banning or demonetizing their users based on the user's viewpoint, including for views expressed outside of the social media platform, as well as to increase transparency in how these providers moderate content. [161] The CCIA and NetChoice filed suit to prevent enforcement of the law in NetChoice v. Paxton . A federal district judge placed an injunction on this law in December 2021, stating that the law's "prohibitions on 'censorship' and constraints on how social media platforms disseminate content violate the First Amendment". [162] However, the Fifth Circuit reversed the injunction on a 2–1 order without yet ruling on the merits of the case in May 2022, effectively allowing the Texas law to come into effect. [163] The CCIA and NetChoice appealed the Fifth Circuit decision directly to the U.S. Supreme Court seeking an emergency injunction to block the law. They argued that regulations on how social media platforms moderate users' content may prevent them from moderating at all in certain situations and thus force them to publish material they find objectionable, an outcome that would violate the social media platforms' First Amendment rights. [164]
On May 31, 2022, the Supreme Court restored the injunction by court order (with four Justices, Samuel Alito, Clarence Thomas, Elena Kagan, and Neil Gorsuch, dissenting) while lower court litigation continued. [165] The Fifth Circuit reversed the district court ruling in September 2022, with Judge Andy Oldham stating in the majority opinion, "Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say." [166] The decision creates a circuit split with the potential to be heard by the Supreme Court. [166] The Fifth Circuit agreed to re-enjoin enforcement of the law in October 2022 while several tech companies petitioned the case to the Supreme Court. [167]
Both the Florida and the Texas law cases were heard by the Supreme Court, which ruled in July 2024 to vacate and remand both circuit court decisions because the lower courts had failed to evaluate the laws across all aspects of the social media sites rather than only the specific functions targeted by the laws. [168]
Numerous cases involving Section 230 have been heard in the judicial system since its introduction, many of which are rote applications of Section 230.
The following is a partial list of cases that have influenced the interpretation of Section 230 in subsequent cases or have led to new legislation around Section 230.
Directive 2000/31/EC, [218] the e-Commerce Directive, establishes a safe harbor regime for hosting providers.
The updated Directive on Copyright in the Digital Single Market (Directive 2019/790) Article 17 makes providers liable if they fail to take "effective and proportionate measures" to prevent users from uploading certain copyright violations and do not respond immediately to takedown requests. [219]
In Dow Jones & Company Inc v Gutnick, [220] the High Court of Australia treated defamatory material on a server outside Australia as having been published in Australia when it is downloaded or read by someone in Australia.
Gorton v Australian Broadcasting Commission & Anor (1973) 1 ACTR 6
Under the Defamation Act 2005 (NSW), [221] s 32, a defence to defamation is that the defendant neither knew, nor ought reasonably to have known of the defamation, and the lack of knowledge was not due to the defendant's negligence.
The Electronic Commerce Directive 2000 [218] (e-Commerce Directive) has been implemented in Italy by means of Legislative Decree no. 70 of 2003. The provisions adopted by Italy are substantially in line with those provided at the EU level. Initially, however, Italian case law drew a line between so-called "active" hosting providers and "passive" hosting providers, arguing that "active" providers would not benefit from the liability exception provided by Legislative Decree no. 70. According to that case law, a provider is deemed to be active whenever it carries out operations on the content provided by the user, for example by modifying or enriching it. In certain cases, courts have held providers liable for users' content on the mere ground that such content was somehow organised or enriched by the provider (e.g. by organising the content into libraries or categories) or monetised by showing ads.
Failing to investigate the material or to make inquiries of the user concerned may amount to negligence in this context: Jensen v Clark [1982] 2 NZLR 268.
Directive 2000/31/EC was transposed into French law by the LCEN law. Article 6 of the law establishes a safe harbor for hosting providers as long as they follow certain rules.
In LICRA vs. Yahoo! , the High Court ordered Yahoo! to take affirmative steps to filter out Nazi memorabilia from its auction site. Yahoo!, Inc. and its then president Timothy Koogle were also criminally charged, but acquitted.
In 1997, Felix Somm, the former managing director for CompuServe Germany, was charged with violating German child pornography laws because of the material CompuServe's network was carrying into Germany. He was convicted and sentenced to two years probation on May 28, 1998. [222] [223] He was cleared on appeal on November 17, 1999. [224] [225]
The Oberlandesgericht (OLG) Cologne, an appellate court, found that an online auctioneer does not have an active duty to check for counterfeit goods (Az 6 U 12/01). [226]
In one example, the first-instance district court of Hamburg issued a temporary restraining order requiring message board operator Universal Boards to review all comments before they could be posted, in order to prevent the publication of messages inciting others to download harmful files. The court reasoned that "the publishing house must be held liable for spreading such material in the forum, regardless of whether it was aware of the content." [227]
The laws of libel and defamation will treat a disseminator of information as having "published" material posted by a user, and the onus will then be on a defendant to prove that it did not know the publication was defamatory and was not negligent in failing to know: Goldsmith v Sperrings Ltd (1977) 2 All ER 566; Vizetelly v Mudie's Select Library Ltd (1900) 2 QB 170; Emmens v Pottle & Ors (1885) 16 QBD 354.
In an action against a website operator, on a statement posted on the website, it is a defence to show that it was not the operator who posted the statement on the website. The defence is defeated if it was not possible for the claimant to identify the person who posted the statement, or the claimant gave the operator a notice of complaint and the operator failed to respond in accordance with regulations.
The Communications Decency Act of 1996 (CDA) was the United States Congress's first notable attempt to regulate pornographic material on the Internet. In the 1997 landmark case Reno v. ACLU, the United States Supreme Court unanimously struck down the act's anti-indecency provisions.
Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, is an American legal case dealing with the protection provided an internet service provider under the Communications Decency Act (CDA) United States Code Title 47 section 230(c)(1). It is also known as the Star Trek actress case as the plaintiff, Chase Masterson – whose legal name is Christianne Carafano – is well known for having appeared on Star Trek: Deep Space Nine. The case demonstrated that the use of an online form with some multiple choice selections does not override the protections against liability for the actions of users or anonymous members of a Web-based service.
Online service provider law is a summary and case law tracking page for laws, legal decisions and issues relating to online service providers (OSPs), like Wikipedia and Internet service providers, from the viewpoint of an OSP considering its liability and customer service issues. See Cyber law for broader coverage of the law of cyberspace.
Stratton Oakmont, Inc. v. Prodigy Services Co., 23 Media L. Rep. 1794, is a decision of the New York Supreme Court holding that online service providers can be liable for the speech of their users. The ruling caused controversy among early supporters of the Internet, including some lawmakers, leading to the passage of Section 230 of the Communications Decency Act in 1996.
Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), is a case in which the United States Court of Appeals for the Fourth Circuit determined the scope of Internet service providers' immunity from liability for wrongs committed by their users under Section 230 of the Communications Decency Act. The statute states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Barrett v. Rosenthal, 40 Cal.4th 33 (2006), was a California Supreme Court case concerning online defamation. The case resolved a defamation claim brought by Stephen Barrett, Terry Polevoy, and attorney Christopher Grell against Ilena Rosenthal and several others. Barrett and others alleged that the defendants had republished libelous information about them on the internet. In a unanimous decision, the court held that Rosenthal was a "user of interactive computer services" and therefore immune from liability under Section 230 of the Communications Decency Act.
The Online Copyright Infringement Liability Limitation Act (OCILLA) is a United States federal law that creates a conditional "safe harbor" for online service providers (OSPs), a group which includes Internet service providers (ISPs) and other Internet intermediaries, by shielding them from liability both for their own acts of direct copyright infringement and for potential secondary liability arising from the infringing acts of others. OCILLA was passed as part of the 1998 Digital Millennium Copyright Act (DMCA) and is sometimes referred to as the "Safe Harbor" provision or as "DMCA 512" because it added Section 512 to Title 17 of the United States Code. By exempting Internet intermediaries from copyright infringement liability provided they follow certain rules, OCILLA attempts to strike a balance between the competing interests of copyright owners and digital users.
Doe v. MySpace, Inc., 528 F.3d 413 (2008), is a Fifth Circuit Court of Appeals ruling that MySpace was immune under Section 230 of the Communications Decency Act of 1996 from liability for a sexual assault of a minor that arose from posts on the MySpace platform.
Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102, is a U.S. court case between a publisher of an adult entertainment magazine and webhosting, connectivity, and payment service companies. The plaintiff, Perfect 10, asserted that defendants CCBill and CWIE had committed copyright and trademark infringement, state-law violations of the right of publicity, unfair competition, and false and misleading advertising by providing services to websites that posted images stolen from Perfect 10's magazine and website. The defendants sought to invoke statutory safe harbor exemptions from copyright infringement liability under the Digital Millennium Copyright Act, 17 U.S.C. § 512, and immunity from the state-law unfair competition, false advertising, and right of publicity claims under Section 230 of the Communications Decency Act, 47 U.S.C. § 230(c)(1).
Thomas Dart, Sheriff of Cook County v. Craigslist, Inc., 665 F. Supp. 2d 961, is a decision by the United States District Court for the Northern District of Illinois holding that Craigslist, as an Internet service provider, was immune from liability for wrongs committed by its users under Section 230 of the Communications Decency Act (CDA). Sheriff Thomas Dart had sought to hold Craigslist responsible for allegedly illegal content posted by users in Craigslist's erotic services section, but Section 230(c)(1) of the CDA provides that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
In Milgram v. Orbitz Worldwide, LLC, the New Jersey Superior Court held that online ticket resellers qualified for immunity under Section 230 of the Communications Decency Act (CDA), and that such immunity preempted a state law consumer fraud statute. The opinion clarified the court's test for determining whether a defendant is acting as a publisher, the applicability of the CDA to e-commerce sites, and the extent of control that an online intermediary may exercise over user content without becoming an "information content provider" under the CDA. The opinion was hailed by one observer as a "rare defeat for a consumer protection agency" and the "biggest defense win of the year" in CDA § 230 litigation.
Barnes v. Yahoo!, Inc., 570 F.3d 1096, is a United States Court of Appeals for the Ninth Circuit case holding that, under Section 230 of the Communications Decency Act (CDA), Yahoo!, Inc., as an Internet service provider, could not be held responsible for failing to remove objectionable content posted to its website by a third party. Plaintiff Cecilia Barnes made claims arising out of Yahoo!'s alleged failure to honor promises to remove offensive content about her posted by a third party. The content consisted of a personal profile with nude photos of the plaintiff and her contact information. The United States District Court for the District of Oregon had dismissed Barnes' complaint.
Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, is a case in which the United States Court of Appeals for the Ninth Circuit, sitting en banc, held that immunity under Section 230 of the Communications Decency Act (CDA) did not apply to an interactive online operator whose questionnaire violated the Fair Housing Act. However, the court found that Roommates.com was immune under Section 230 of the CDA for the "additional comments" portion of the website. This case was the first to place a limit on the broad immunity for service providers under Section 230(c) that had been established in Zeran v. AOL (1997).
Backpage was a classified advertising website founded in 2004 by the alternative newspaper chain New Times Inc./New Times Media as a rival to Craigslist.
Jane Doe No. 14 v. Internet Brands, Inc., 767 F.3d 894 (2014), is a ruling by the United States Court of Appeals for the Ninth Circuit on the legal liability of an Internet service provider for criminal offenses committed by its users. The ruling has caused confusion over the extent of the liability that service providers face in such incidents.
Contributory copyright infringement is a way of imposing secondary liability for infringement of a copyright: a person may be held liable even though he or she did not directly engage in the infringing activity. It is one of two forms of secondary liability, the other being vicarious liability. A party commits contributory infringement by inducing or authorizing another person to directly infringe the copyright, rather than by directly violating the copyright itself.
FOSTA and SESTA are U.S. Senate and House bills which became law on April 11, 2018. They clarify the country's sex trafficking law to make it illegal to knowingly assist, facilitate, or support sex trafficking, and amend the Section 230 safe harbors of the Communications Decency Act to exclude enforcement of federal or state sex trafficking laws from its immunity. Senate sponsor Rob Portman had previously led an investigation into the online classifieds service Backpage, and argued that Section 230 was protecting its "unscrupulous business practices" and was not designed to provide immunity to websites that facilitate sex trafficking.
The EARN IT Act is proposed legislation first introduced in the United States Congress in 2020. It aims to amend Section 230 of the Communications Act of 1934, which allows website operators to remove user-posted content they deem inappropriate and provides them with immunity from civil lawsuits related to such postings. Section 230 is the only surviving portion of the Communications Decency Act of 1996.
Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton, 603 U.S. ___ (2024), were United States Supreme Court cases concerning protected speech under the First Amendment and content moderation by interactive service providers on the Internet under Section 230 of the Communications Decency Act. Moody and Paxton were challenges to two state statutes – enacted in Florida and Texas, respectively – that sought to limit this moderation. In July 2024, the justices vacated the lower-court decisions in both cases because neither lower court had performed a full First Amendment assessment of the laws, and remanded them for further consideration.
Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023), was a case of the Supreme Court of the United States. The case considered whether Internet service providers are liable for "aiding and abetting" a designated foreign terrorist organization in an "act of international terrorism" by recommending such content posted by users, under Section 2333 of the Antiterrorism and Effective Death Penalty Act of 1996. Along with Gonzalez v. Google LLC, Taamneh is one of two cases in which social media companies were accused of aiding and abetting terrorism in violation of the law. The cases had been decided together by the United States Court of Appeals for the Ninth Circuit, which ruled that Taamneh's case could proceed. The cases challenged the broad immunity from liability for hosting and recommending terrorist content that websites had enjoyed; in May 2023, the Supreme Court unanimously held that the plaintiffs' allegations were insufficient to establish aiding-and-abetting liability.