Internet censorship in the United States is the suppression of information published or viewed on the Internet in the United States. The First Amendment of the United States Constitution protects freedom of speech and expression against federal, state, and local government censorship.
Free speech protections permit few government-mandated restrictions on Internet content. However, the Internet is highly regulated through a complex set of legally binding and privately mediated mechanisms. [1]
Gambling, cyber security, and the dangers to children who frequent social media are the subjects of important ongoing debates. Significant public resistance to proposed content-restriction policies has prevented measures used in some other countries from taking hold in the US. [1]
Many government-mandated attempts to regulate content have been barred, often after lengthy legal battles. [2] However, the government has exerted pressure indirectly. With the exception of child pornography, content restrictions tend to rely on platforms to remove/suppress content, following state encouragement or the threat of legal action. [3] [1]
Intellectual property protections have yielded a system that predictably removes infringing materials. [1] [4] The US also seizes domains and computers, at times without notification. [5] [6] [7] [8]
The first wave of regulatory actions came about in the 1990s in response to the profusion of sexually explicit material on the Internet within easy reach of minors. Since that time, several legislative attempts at creating a mandatory system of content controls have failed to produce a comprehensive solution. Legislative attempts to control the distribution of socially objectionable material have given way to a system that limits liability over content for Internet intermediaries such as Internet service providers (ISPs) and content hosting companies. [1]
Websites shut down by the US for violating intellectual property rights include Napster, [9] [10] [11] WikiLeaks, [12] [13] The Pirate Bay, [14] and MegaUpload. [15]
In 2014, the United States was added to Reporters Without Borders (RWB)'s list of "Enemies of the Internet", a group of countries with the highest level of Internet censorship and surveillance. RWB stated that the US had "undermined confidence in the Internet and its own standards of security" and that "US surveillance practices and decryption activities are a direct threat to investigative journalists, especially those who work with sensitive sources for whom confidentiality is paramount and who are already under pressure". [16]
With limited exceptions, the free speech provisions of the First Amendment bar federal, state, and local governments from directly censoring the Internet. The primary exception has to do with obscenity, including child pornography, which is not given First Amendment protection. [17]
The Computer Fraud and Abuse Act (CFAA) was enacted in 1986 as an amendment to an existing computer fraud law (18 U.S.C. § 1030), which was part of the Comprehensive Crime Control Act of 1984. The CFAA prohibits accessing a computer without authorization, or in excess of authorization. [18] Since 1986, the Act has been amended in 1989, 1994, 1996, 2001 (USA PATRIOT Act), 2002, and 2008 (Identity Theft Enforcement and Restitution Act). The CFAA is a criminal law and also creates a private right of action, allowing individuals and companies to sue for damages.
Provisions of the CFAA effectively make it a federal crime to violate the terms of service of Internet sites, allowing companies to forbid legitimate activities such as research, or limit or remove protections found elsewhere in law. Terms of service can be changed at any time without notifying users. Tim Wu called the CFAA "the worst law in technology". [19]
Aggressive prosecution under the Computer Fraud and Abuse Act (CFAA) has fueled growing criticism of the law's scope and application. [20]
In 1996, the United States enacted the Communications Decency Act (CDA), which attempted to regulate both indecency (when available to children) and obscenity in cyberspace. [21] In 1997, in the case of Reno v. ACLU, the United States Supreme Court found the anti-indecency provisions of the Act unconstitutional. [22] Writing for the Court, Justice John Paul Stevens held that "the CDA places an unacceptably heavy burden on protected speech". [23]
Section 230 [24] is a separate portion of the CDA that remains in effect. Section 230 says that operators of Internet services are not legally liable for the words of third parties who use their services and also protects ISPs from liability for good faith voluntary actions taken to restrict access to certain offensive materials [25] or giving others the technical means to restrict access to that material.
In 1998, the United States enacted the Child Online Protection Act [26] (COPA) to restrict access by minors to any material defined as harmful to minors on the Internet. The law was found to be unconstitutional because it would hinder protected speech among adults. It never took effect, as three separate rounds of litigation led to a permanent injunction against the law in 2009. Had the law taken effect, it would have made it illegal to post commercial material on the Internet that is knowingly harmful to minors without some form of age-verification program. [27] [28] [29] [30]
Enacted in 1998, the Digital Millennium Copyright Act (DMCA, 17 U.S.C. § 1201) criminalized the production and dissemination of technology that could be used to circumvent copyright protection mechanisms [4] and made it easier to act against alleged copyright infringement on the Internet. [31] The Online Copyright Infringement Liability Limitation Act (OCILLA) is included as Title II of the DMCA [32] and limits the liability of the online service providers for copyright infringement by their users. [33]
The Children's Online Privacy Protection Act (COPPA) went into effect on 21 April 2000. [34] It applies to the online collection of personal information by persons or entities under US jurisdiction from children under 13 and details what a website operator must include in a privacy policy, when and how to seek verifiable consent from a parent or guardian, and what responsibilities an operator has to protect children's privacy and safety including restrictions on the marketing to those under 13. [35] While children under 13 can legally offer personal information with their parents' permission, many websites prohibit underage children from using their services altogether, due to the cost and amount of paperwork necessary for compliance.
In 2000 the Children's Internet Protection Act (CIPA) [36] was signed into law.
CIPA requires K-12 schools and libraries receiving federal Universal Service Fund (E-rate) discounts or LSTA grants for Internet access or internal connections to adopt an Internet safety policy that includes a technology protection measure blocking or filtering access to material that is obscene, child pornography, or harmful to minors. [37]
CIPA does not require the tracking of Internet use by minors or adults. [37]
In March 2008, the New York Times reported that a blocklist published by the Office of Foreign Assets Control (OFAC), an agency established under the Trading with the Enemy Act 1917 and other federal legislation, included websites, so that US companies are prohibited from doing business with those websites and must freeze their assets. The blocklist had the effect that US-based domain name registrars must block those websites. According to the article, eNom, a private domain name registrar and Web hosting company operating in the US, disables domain names that appear on the blocklist. [38] It described eNom's disabling of a European travel agent's web sites advertising travel to Cuba, which appeared on the list. [39] According to the report, the US government claimed that eNom was "legally required" to block the websites under US law, even though the websites were not hosted in the US, were not targeted at US persons, and were legal under foreign law.
The Cybersecurity Information Sharing Act (CISA) is intended to "improve cybersecurity in the United States through enhanced sharing of information about cybersecurity threats and for other purposes". [40] The law allows the sharing of Internet traffic information between the US government and technology and manufacturing companies. The bill's text was incorporated by amendment into a consolidated spending bill and enacted in December 2015. [41] [42] [43] [44]
Opponents questioned the CISA's value, believing it would move responsibility from private business to the government, thereby increasing the vulnerability of personal private information, as well as dispersing personal private information across seven government agencies, including the National Security Agency and local police. Some felt that the act was more conducive to surveillance than security after many of the privacy protections from the original bill were removed. [45]
The Stop Advertising Victims of Exploitation Act of 2015 (SAVE) is part of the larger Justice for Victims of Trafficking Act of 2015. [46] The SAVE Act makes it illegal to knowingly advertise content related to sex trafficking, including online advertising. The law established federal criminal liability for third-party content. One concern was that this would lead companies to over-censor, or to limit the practice of monitoring content altogether to avoid "knowledge" of illegal content. [47]
In 2016, complainants from Gallaudet University brought a lawsuit against UC Berkeley for not adding closed captioning to the recorded lectures it made freely available to the public. In an unintended consequence of the Americans with Disabilities Act of 1990, the resulting Department of Justice ruling led Berkeley to delete 20,000 freely licensed videos rather than add captions, which Berkeley described as cost-prohibitive. [48]
Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) was introduced in the US House of Representatives in 2017. Stop Enabling Sex Traffickers Act (SESTA) was a similar US Senate bill. The combined FOSTA-SESTA package was enacted in 2018. [49] [50]
The bill amended Section 230 of the Communications Decency Act to exclude the enforcement of federal and state sex trafficking laws from immunity and clarified SESTA to define participation in a venture as knowingly assisting, facilitating, or supporting sex trafficking. [51]
The bills were criticized as a "disguised internet censorship bill" that weakened Section 230 safe harbors, placed unnecessary burdens on Internet companies and intermediaries handling user-generated content or communications, required service providers to proactively take action against sex trafficking activities, and required a "team of lawyers" to evaluate all possible scenarios. [52] [53] [54] [55] Sex workers argued that the bill would harm their safety, as the platforms they use for offering and discussing sexual services (as an alternative to street prostitution) had reduced their services or shut down entirely due to the threat of liability. [56] [57]
The Deleting Online Predators Act of 2006 would have required schools, some businesses, and libraries to block minors' access to social networking websites. The bill was controversial because, according to its critics, it would limit access to a wide range of websites, including many with harmless and educational material. [58] Two similar bills were introduced in 2007, but neither became law. [59] [60]
The 2010 Protecting Cyberspace as a National Asset Act [61] generated controversy for what critics perceived as its authorization for the US president to shut down the Internet in the US. [62]
The Executive Cyberspace Coordination Act of 2011 [63] took a different approach.
The 2010 Combating Online Infringement and Counterfeits Act [64] would have allowed the US Attorney General to bring an in rem action against an infringing domain name in the United States District Court and seek an order granting injunctive relief. If granted, such an order would compel the registrar of the domain name in question to suspend, and possibly lock, the domain name. [64]
The US Justice Department would maintain two publicly available lists of domain names. [64] The first list would contain domain names against which the Attorney General has obtained injunctions. The second list would contain domains alleged to be infringing, but against which no action had been taken. Any service provider who willingly took steps to block access to sites on this second list would be immune from prosecution.
The 2011 Stop Online Piracy Act (SOPA) would have allowed the US Department of Justice, as well as copyright holders, to seek court orders against websites accused of enabling or facilitating copyright infringement. Depending on who requested the court orders, the actions could include barring online advertising networks and payment facilitators such as PayPal from doing business with the allegedly infringing website, barring search engines from linking to such sites, and requiring Internet service providers to block access to such sites. Many argued that requiring ISPs to block access to certain websites constituted censorship. On January 18, 2012, the English Wikipedia shut down for 24 hours to protest SOPA and PIPA. In the wake of many protests, consideration of the legislation was put on hold. [65]
The 2011 Protect Intellectual Property Act (Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act, or PIPA) attempted to give the US government and copyright holders additional tools to curb "rogue websites dedicated to infringing or counterfeit goods", especially those registered outside the US. [66] PIPA was a re-write of the Combating Online Infringement and Counterfeits Act (COICA), [67] which failed to pass in 2010. In the wake of protests the bill was put on hold. [65] [68]
The 2011, 2013, and 2015 Cyber Intelligence Sharing and Protection Act (CISPA) attempted to give the government additional options and resources to ensure network security. [69] CISA was a similar Senate bill that was enacted. [70]
CISPA was supported by trade groups representing more than eight hundred private companies, including the Business Software Alliance, CTIA – The Wireless Association, Information Technology Industry Council, Internet Security Alliance, National Cable & Telecommunications Association, National Defense Industrial Association, TechAmerica and United States Chamber of Commerce, in addition to major telecommunications and information technology companies including AT&T, Facebook, IBM, Intel, Oracle Corporation, Symantec, and Verizon. [71] [72]
Reporters Without Borders expressed concern that in the name of fighting cybercrime, it would allow the government and private companies to monitor, even censor, the Web. [73] Other organizations that oppose the bill include the Constitution Project, American Civil Liberties Union, Electronic Frontier Foundation, Center for Democracy and Technology, Fight for the Future, Free Press, Sunlight Foundation, and TechFreedom. Google lobbied for it. [74]
In 2020 and 2023, the United States government tried to ban the social media app TikTok. The DATA Act would have banned the selling of non-public personal data to third-party buyers. [75] The RESTRICT Act would allow the United States Secretary of Commerce to review any attempt by a tech company to "sabotage" the United States. If a review by the Secretary and other relevant departments found "security risks", the government could restrict the company, service, or product. [76] This would let the government investigate and possibly ban any site it deems a threat to national security. Violation by a US citizen would carry a fine of up to $1,000,000 or up to 20 years in prison. While the RESTRICT Act does not mention TikTok by name, it was widely understood to target the app, as the bill paralleled calls to ban TikTok. [77]
In November 2019 the National Conference of State Legislatures listed twenty-seven states with laws that apply to Internet use at publicly funded schools or libraries. [78]
The majority of these states simply require school boards/districts or public libraries to adopt Internet use policies to prevent minors from gaining access to sexually explicit, obscene or harmful materials. However, some states also require publicly funded institutions to install filtering software on library terminals or school computers.
The states that require schools and libraries to adopt policies to protect minors include: California, Delaware, Georgia, Indiana, Iowa, Kentucky, Louisiana, Maryland, Massachusetts, New Hampshire, New York, Rhode Island, South Carolina, and Tennessee. Florida law "encourages public libraries to adopt an Internet safety education program, including the implementation of a computer-based educational program". [78]
The states that require Internet filtering in schools and libraries to protect minors are: Arizona, Arkansas, Colorado, Idaho, Kansas, Michigan, Minnesota, Missouri, Ohio, Pennsylvania, South Dakota, Utah, and Virginia. Five states require Internet service providers to make a product or service available to subscribers to control use of the Internet. They are: Louisiana, Maryland, Nevada, Texas, and Utah. [78]
In July 2011 Missouri lawmakers passed the Amy Hestir Student Protection Act which included a provision that barred K-12 teachers from using websites that allow "exclusive access" in communications with current students or former students who are 18 or younger, such as occurs with private messages. [79] A circuit court order issued before the law went into effect blocked the provision because "the breadth of the prohibition is staggering" and the law "would have a chilling effect" on free-speech rights. [80] In September the legislature replaced the controversial provision with a requirement that local school districts develop their own policies on the use of electronic communication between employees and students. [81] [82]
In May 2023, Montana enacted a ban on TikTok from operating within or offering its services to anyone within the state's borders. [83] In December 2023, the ban was blocked by a federal judge who ruled that the statute was an unconstitutional restriction on free speech. [84]
In April 2022, District Judge Katherine Polk Failla issued a site-blocking order against three piracy websites, which were cited in lawsuits brought by a group of Israeli media companies, but whose operators failed to appear in court. The order mandated that the three websites, as well as any "newly-discovered websites" found to be operated by the defendants, be blocked by all US ISPs. It prohibited any third-party service operator from doing business with or offering services to the defendants, and ordered that their domain names be seized and transferred to the plaintiffs. This order is similar to, but goes beyond, what was proposed in SOPA. [85] [86]
The constitutional and other legal protections that prohibit or limit government censorship of the Internet do not generally apply to corporations. Corporations may choose to limit the content they make available or allow others to make available on the Internet. [3] Corporations may be encouraged by government pressure, or required by law or court order, to remove or limit access to content judged to be obscene (including child pornography), harmful to children, or defamatory, or that poses a threat to national security, promotes illegal activities such as gambling, prostitution, or theft of intellectual property, constitutes hate speech, or incites violence. [1] [2]
Public and private institutions that provide Internet access for their employees, customers, students, or members will sometimes limit this access in an attempt to ensure it is used only for the organization's purposes. This can include content-control software to limit access to entertainment content in business and educational settings and limits to high-bandwidth services. Some institutions also block outside e-mail services as a precaution, usually initiated out of concerns for network security or concerns that e-mail might be used intentionally or unintentionally to expose trade secrets or other confidential information.
K-12 schools and libraries that accept funds from the federal E-rate program or Library Services and Technology Act grants for Internet access or internal connections are required by CIPA to have an "Internet safety policy and technology protection measures". [37]
Many K-12 school districts use Internet filters to block material deemed inappropriate for a school setting. [87] [88] The federal government leaves decisions about what to filter or block to local authorities. However, critics assert that such decisions should be made by a student's parents or guardian. Concerns include: the risk of supporting a predominant ideology, that filter manufacturers' views are imposed on students, overblocking of useful information, and underblocking of harmful information. [89] A 2003 study reported that "blocking software overblocked state-mandated curriculum topics–for every web page correctly blocked, one or more was inappropriately blocked". [90]
Some libraries may block access to certain web pages, including pornography, advertising, chat, gaming, social networking, and online forum sites. [91] The use of filtering and blocking software in libraries remains controversial. [92]
In 2007, Verizon attempted to block the abortion rights group NARAL Pro-Choice America from using its text messaging services to speak to its supporters. Verizon said it was enforcing a policy that does not allow its customers to use the service to communicate "controversial" or "unsavory" messages. [93] Comcast, AT&T and other ISPs have been accused of regulating internet traffic and bandwidth.
eNom, a private domain name registrar and Web hosting company operating in the US, disables domain names that appear on a US Treasury blocklist. [38] [39]
The Department of Defense prohibits its personnel from accessing certain IP addresses from DoD computers. [94] The US military's filtering policy is laid out in a report to Congress entitled "Department of Defense Personnel Access to the Internet". [95]
In October 2009, military blogger C.J. Grisham was pressured by his superiors at Redstone Arsenal to close his blog, A Soldier's Perspective, after complaining about local public school officials pushing a mandatory school uniform program without parental consent. [96]
The Monterey Herald reported on June 27, 2013, that the United States Army barred its personnel from accessing parts of The Guardian's website after whistleblower Edward Snowden's revelations about the PRISM global surveillance program and the NSA were published there. [97] [98] The entire Guardian website was blocked for personnel stationed throughout Afghanistan, the Middle East, and South Asia, as well as personnel stationed at US Central Command headquarters in Florida. [99]
In 2019, social media app TikTok was banned on all military devices for what the Pentagon said was "potential security risks". [100]
In February 2008, the Bank Julius Baer vs. WikiLeaks lawsuit prompted the United States District Court for the Northern District of California to issue a permanent injunction against the website WikiLeaks' domain name registrar. The result was that WikiLeaks could not be accessed through its web address. This elicited accusations of censorship and resulted in the Electronic Frontier Foundation defending WikiLeaks. After a later hearing, the injunction was lifted. [101]
On December 1, 2010, Amazon.com cut off WikiLeaks 24 hours after it was contacted by the staff of Senator Joe Lieberman, Chairman of the US Senate Committee on Homeland Security. [104] In a statement Lieberman said it was "the right decision and should set the standard for other companies". [105] Assange said that WikiLeaks chose Amazon knowing it would probably be kicked off, "in order to separate rhetoric from reality". [102] [103] Constitutional lawyers say that this is not a First Amendment issue because Amazon, as a private company, is free to make its own decisions. Kevin Bankston, a lawyer with the Electronic Frontier Foundation, agreed that this was not a violation of the First Amendment. [106]
Some websites that allow user-contributed content practice self-censorship by adopting policies on how the web site may be used and by banning or requiring pre-approval of editorial contributions from users that violate the site's policies. For example, a social media platform may restrict speech that it considers to be hate speech more broadly than is required by US law, [107] and may restrict speech that it considers to be harassment and verbal abuse.
Restriction of hate speech and harassment on social media is the subject of debate. For example, two perspectives include that online hate speech should be removed because it causes serious intimidation and harm, [108] and that it should not be removed because it is "better to know that there are bigots among us" than to have an inaccurate picture of the world. [109]
US corporations including Google, Yahoo!, Microsoft, and MySpace practice greater levels of self-censorship in some international versions of their online services to comply with local laws/regulations. [110] [111] This is most notably the case in these corporations' dealings in China.
In October 2011 US-based Blue Coat Systems of Sunnyvale, California acknowledged that Syria was using its devices to censor Web activity, a possible violation of US trade embargoes. [112]
A January 4, 2007 restraining order issued by US District Court Judge Jack B. Weinstein forbade activists in the psychiatric survivors movement from posting links to ostensibly leaked documents that purportedly show that Eli Lilly and Company intentionally withheld information as to the lethal side-effects of Zyprexa. The Electronic Frontier Foundation appealed this as prior restraint, saying that citizen-journalists should have the same First Amendment rights as major media outlets. [113] It was later held that the judgment was unenforceable, though First Amendment claims were rejected. [114]
In May 2011 and January 2012 the US seized the domains of the non-US websites of the non-US citizens Richard O'Dwyer and Kim Dotcom, and sought to extradite them to the US, accusing them of copyright infringement. [5] [6] [7] [8]
In January 2015 details from the Sony Pictures Entertainment hack revealed the Motion Picture Association of America's lobbying of the United States International Trade Commission to mandate that US ISPs, either at the internet transit or internet service provider level, implement IP address blocking of unauthorized file sharing as well as linking websites. [115]
On July 3, 2011, two officers of the Bay Area Rapid Transit (BART) Police shot and killed Charles Hill at Civic Center Station in San Francisco. [116] On August 12, 2011, BART shut down phone services, including mobile Internet access, for three hours in an effort to limit possible protests against the shooting [117] [118] and to limit communications from protesters at the station. [119] The shutdown drew the attention of international media, along with comparisons to former Egyptian president Hosni Mubarak. [120]
On August 29, 2011, a coalition of nine public interest groups led by Public Knowledge filed an Emergency Petition asking the US Federal Communications Commission (FCC) to declare these actions illegal. [121] [122]
In December 2011 BART adopted a new "Cell Service Interruption Policy" that allows shutdowns of phone services within BART facilities only "in the most extraordinary circumstances that threaten the safety of District passengers, employees and other members of public, the destruction of District property, or the substantial disruption of public transit service". [123] According to a spokesperson, under the new policy the phone system would not be disabled under circumstances similar to those in August 2011. Instead police officers would arrest individuals who break the law. [124]
In 2014 the FCC issued an Enforcement Advisory warning the public that "it is illegal to use a cell phone jammer or any other type of device that blocks, jams or interferes with authorized communications" and that "this prohibition extends to every entity that does not hold a federal authorization, including state and local law enforcement agencies". [125]
In 2016 the California Law Revision Commission issued a recommendation on "Government Interruption of Communication Service". [126] The Commission concluded that government action to interrupt communications can be constitutional if the government acts pursuant to procedures that are designed to protect constitutional free expression and due process rights. To be constitutional the action usually needs to be approved by a judicial officer who has found: (i) probable cause that the communication service is or will be used for an unlawful purpose, (ii) that immediate action is required to protect public health, safety, or welfare, and (iii) the affected customer must have a prompt opportunity for adjudication of the government's contentions. For a general interruption of communication service that affects a large number of people or a large geographic area, judicial approval also requires that the action (iv) is necessary to avoid a serious threat of violence that is both imminent and likely to occur or (v) that the effect on expression is incidental to some other valid government purpose, and (vi) is reasonable, (vii) is content-neutral, (viii) would impair no more speech than is necessary, and (ix) leaves open other ample means of communication. Prior judicial approval is not required in extreme emergencies involving immediate danger of death or great bodily injury where there is insufficient time to obtain a court order. [126]
Beyond constitutional law, a state or local government's ability to effect a general interruption of wireless communications is also subject to the federal "Emergency Wireless Protocol (EWP)" or "Standard Operating Procedure 303" which established a process for interrupting and restoring wireless communication service during times of national emergency. The effect of this protocol is that state and local government officials can initiate an interruption of communication service, but cannot directly order wireless communication service providers to take action. Such orders to private providers must come from the National Coordinating Center for Communications (NCC) within the Department of Homeland Security (DHS), as designated by the EWP. If an order authorizing an interruption does not fall within the EWP, it is served directly on the relevant communication service provider. [126]
An Internet filter is software that restricts or controls the content an Internet user is able to access, especially when used to restrict material delivered over the Internet via the Web, email, or other means. Such restrictions can be applied at various levels: a government can attempt to apply them nationwide, or they can, for example, be applied by an Internet service provider to its clients, by an employer to its personnel, by a school to its students, by a library to its visitors, by a parent to a child's computer, or by an individual user to their own computer. The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterized as a form of Internet censorship. Some filter software includes time-control functions that let parents set the amount of time a child may spend accessing the Internet, playing games, or performing other computer activities.
The Communications Decency Act of 1996 (CDA) was the United States Congress's first notable attempt to regulate pornographic material on the Internet. In the 1997 landmark case Reno v. ACLU, the United States Supreme Court unanimously struck the act's anti-indecency provisions.
Internet censorship in Australia is enforced by the country's criminal law as well as voluntarily by Internet service providers. The Australian Communications and Media Authority (ACMA) has the power to enforce content restrictions on Internet content hosted within Australia, and maintains a blocklist of overseas websites which is then provided for use in filtering software. The restrictions focus primarily on child pornography, sexual violence, and other illegal activities, and are compiled as a result of a consumer complaints process.
The Children's Internet Protection Act (CIPA) is one of a number of bills that the United States Congress proposed to limit children's exposure to pornography and explicit content online.
Internet censorship is the legal control or suppression of what can be accessed, published, or viewed on the Internet. Censorship is most often applied to specific internet domains but exceptionally may extend to all Internet resources located outside the jurisdiction of the censoring state. Internet censorship may also put restrictions on what information can be made internet accessible. Organizations providing internet access – such as schools and libraries – may choose to preclude access to material that they consider undesirable, offensive, age-inappropriate or even illegal, and regard this as ethical behavior rather than censorship. Individuals and organizations may engage in self-censorship of material they publish, for moral, religious, or business reasons, to conform to societal norms, political views, due to intimidation, or out of fear of legal or other consequences.
Internet censorship in the United Kingdom is conducted under a variety of laws, judicial processes, administrative regulations and voluntary arrangements. It is achieved by blocking access to sites as well as the use of laws that criminalise publication or possession of certain types of material. These include English defamation law, the Copyright law of the United Kingdom, regulations against incitement to terrorism and child pornography.
Copyright infringement is the use of works protected by copyright without permission for a usage where such permission is required, thereby infringing certain exclusive rights granted to the copyright holder, such as the right to reproduce, distribute, display or perform the protected work, or to produce derivative works. The copyright holder is usually the work's creator, or a publisher or other business to whom copyright has been assigned. Copyright holders routinely invoke legal and technological measures to prevent and penalize copyright infringement.
The Digital Millennium Copyright Act (DMCA) is a 1998 United States copyright law that implements two 1996 treaties of the World Intellectual Property Organization (WIPO). It criminalizes production and dissemination of technology, devices, or services intended to circumvent measures that control access to copyrighted works. It also criminalizes the act of circumventing an access control, whether or not there is actual infringement of copyright itself. In addition, the DMCA heightens the penalties for copyright infringement on the Internet. Passed on October 12, 1998, by a unanimous vote in the United States Senate and signed into law by President Bill Clinton on October 28, 1998, the DMCA amended Title 17 of the United States Code to extend the reach of copyright, while limiting the liability of the providers of online services for copyright infringement by their users.
Although Internet censorship in Germany has traditionally been rated as low, it is practised directly and indirectly through various laws and court decisions. German law provides for freedom of speech and press with several exceptions, including what The Guardian has called "some of the world's toughest laws around hate speech". An example of content censored by law is the removal from Google search results of websites that deny the Holocaust, which is a felony under German law. According to the Google Transparency Report, the German government is frequently one of the most active, after the United States, in requesting user data. However, in Freedom House's Freedom on the Net 2022 report, Germany was rated the eighth most free of the 70 countries rated.
Notice and take down is a process operated by online hosts in response to court orders or allegations that content is illegal. Content is removed by the host following notice. Notice and take down is widely operated in relation to copyright infringement, as well as for libel and other illegal content. In United States and European Union law, notice and takedown is mandated as part of limited liability, or safe harbour, provisions for online hosts. As a condition for limited liability online hosts must expeditiously remove or disable access to content they host when they are notified of the alleged illegality.
United States Senate Bill S.3804, known as the Combating Online Infringement and Counterfeits Act (COICA) was a bill introduced by Senator Patrick Leahy (D-VT) on September 20, 2010. It proposed amendments to Chapter 113 of Title 18 of the United States Code that would authorize the Attorney General to bring an in rem action against any domain name found "dedicated to infringing activities," as defined within the text of the bill. Upon bringing such an action, and obtaining an order for relief, the registrar of, or registry affiliated with, the infringing domain would be compelled to "suspend operation of and lock the domain name."
Internet censorship in South Korea is prevalent, and contains some unique elements such as the blocking of pro-North Korea websites, and to a lesser extent, Japanese websites, which led to it being categorized as "pervasive" in the conflict/security area by the OpenNet Initiative. South Korea is also one of the few developed countries where pornography is largely illegal, with the exception of social media websites, which are a common source of legal pornography in the country. Any and all material deemed "harmful" or subversive by the state is censored. The country also has a "cyber defamation law", which allows the police to crack down on comments deemed "hateful" without any reports from victims, with citizens being sentenced for such offenses.
Internet censorship in France is moderate, encompassing limited filtering of child pornography, laws against websites that promote terrorism or racial hatred, and attempts to protect copyright. The "Freedom on the Net" report by Freedom House has consistently listed France as a country with Internet freedom; its global ranking was 6 in 2013 and 12 in 2017. A sharp decline in its score, second only to Libya's, was noted in 2015 and attributed to "problematic policies adopted in the aftermath of the Charlie Hebdo terrorist attack, such as restrictions on content that could be seen as 'apology for terrorism,' prosecutions of users, and significantly increased surveillance."
The PROTECT IP Act was a proposed law with the stated goal of giving the US government and copyright holders additional tools to curb access to "rogue websites dedicated to the sale of infringing or counterfeit goods", especially those registered outside the U.S. The bill was introduced on May 12, 2011, by Senator Patrick Leahy (D-VT) and 11 bipartisan co-sponsors. The Congressional Budget Office estimated that implementation of the bill would cost the federal government $47 million through 2016, to cover enforcement costs and the hiring and training of 22 new special agents and 26 support staff. The Senate Judiciary Committee passed the bill, but Senator Ron Wyden (D-OR) placed a hold on it.
The Stop Online Piracy Act (SOPA) was a proposed United States congressional bill to expand the ability of U.S. law enforcement to combat online copyright infringement and online trafficking in counterfeit goods. Introduced on October 26, 2011, by Representative Lamar Smith (R-TX), the bill included provisions for requesting court orders to bar advertising networks and payment facilities from conducting business with infringing websites and to bar search engines from linking to such websites, as well as court orders requiring Internet service providers to block access to the websites. The proposed law would have expanded existing criminal laws to include unauthorized streaming of copyrighted content, imposing a maximum penalty of five years in prison.
There were different but similar copyright bills in the 112th United States Congress: The Stop Online Piracy Act (SOPA) in the House of Representatives and the PROTECT IP Act (PIPA) in the Senate. A typical route for legislation like this is to pass some version in both houses, then refer the two bills to a conference committee, which would produce a single bill likely to pass both houses.
Florence v. Shurtleff, Civil No. 2:05CV000485, was a case in which the U.S. District Court for the District of Utah issued an order stating that individuals could not be prosecuted for posting adult content that was constitutionally protected on general access websites, nor could they be civilly liable for failing to prevent access to adult content, so long as the material is identifiable by filtering software. The order was the result of a 2005 lawsuit, The King's English v. Shurtleff, brought by Utah bookstores, artists, Internet Service Providers and the other organizations challenging the constitutionality of certain portions of a Utah law intended to protect minors from adult content.
The precise number of websites blocked in the United Kingdom is unknown. Blocking techniques vary from one Internet service provider (ISP) to another, with some sites or specific URLs blocked by some ISPs and not others. Websites and services are blocked using a combination of data feeds from private content-control technology companies, government agencies, NGOs, and court orders, in conjunction with the service administrators, who may or may not have the power to unblock, additionally block, appeal, or recategorise blocked content.
Internet censorship in Switzerland is regulated by the Federal Supreme Court of Switzerland on a case-by-case basis. Internet services provided by Internet service providers (ISPs) registered with BAKOM are subject to a "voluntary recommendation" by the Federal Supreme Court of Switzerland, which has called for the blocking of certain websites since 18 December 2007. As of October 2015, this may soon change, with additional topics such as online gambling now under consideration.
This list of Internet censorship and surveillance in Europe provides information on the types and levels of Internet censorship and surveillance occurring in European countries.
This article incorporates licensed material from the Regional Overviews and other sections of the OpenNet Initiative web site. [127]