Internet censorship in New Zealand

Internet censorship in New Zealand refers to the New Zealand Government's system for filtering website traffic to prevent Internet users from accessing certain sites and material. While there are many types of objectionable content under New Zealand law, the filter specifically targets content depicting the sexual abuse or exploitation of children and young persons. The Department of Internal Affairs runs the filtering system, dubbed the Digital Child Exploitation Filtering System (DCEFS). Participation is voluntary for Internet Service Providers (ISPs). [1]

History

In August 1993, the New Zealand Parliament passed the Films, Videos, and Publications Classification Act 1993, which made it the responsibility of the Department of Internal Affairs (DIA) to restrict objectionable content in the country. [2] This act did not include any provisions for Internet content. [2] In February 2005, the New Zealand Parliament amended the act to explicitly prevent ISPs from being prosecuted for their users transmitting objectionable content. [3] [4]

In March 2009, the Minister for Communications and IT, Steven Joyce, stated that the government had been following the controversy surrounding Internet censorship in Australia and had no plans to introduce a similar scheme in New Zealand. He acknowledged that filtering can cause delays for all Internet users, and that those determined to get around any filter will find a way to do so. [5] In July of the same year, it was reported that the Department of Internal Affairs planned to introduce Internet filtering in New Zealand. [6] [7] The project, which used Swedish software, cost $150,000. [8] The Independent Reference Group, which is tasked with overseeing the responsible implementation of the DCEFS, met for the first time in February 2010. [9] In March 2010, a year after Joyce stated that there were no plans to do so, the Department of Internal Affairs announced that the filter was operational and in use. [10] Tech Liberty NZ objected to the launch of the filter, but the DIA defended the system, noting that trials over two years showed that the filter did not affect the speed or stability of the internet. [11]

In March 2019, several websites disseminating footage of the Christchurch mosque shooting, including 4chan, 8chan, and LiveLeak, were blocked by major ISPs in Australia and New Zealand. [12] [13]

In June 2024, the DCEFS web filter was upgraded to implement the Internet Watch Foundation filter. [14]

Technical details

A diagram of the request life cycle for a user of an ISP that has implemented the DCEFS

The Department of Internal Affairs maintains a hidden list of banned URLs and their internet addresses on a NetClean WhiteBox server; as of 2009 the list contained over 7,000 websites. [15] The DIA then uses the Border Gateway Protocol (BGP) to advertise to participating ISPs that it has the best connection to those internet addresses. [16]

When a user tries to access a website, the ISP automatically sends their traffic along what it considers the best available route. If the user is trying to access a website hosted at an internet address that the DIA claims to have the best connection to, the ISP therefore diverts the traffic to the DIA. [16]
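The effect of these BGP announcements can be pictured as adding very specific host routes to an ISP's routing table, so that only traffic bound for the advertised addresses is diverted while everything else follows the normal path. Below is a minimal sketch of that longest-prefix-match behaviour; the addresses and next-hop labels are invented for illustration and do not represent the DIA's or any ISP's actual configuration.

```python
# Illustrative only: models how advertising /32 host routes diverts just the
# traffic for filtered addresses. Addresses and labels are hypothetical.
import ipaddress

# Routes the ISP already has: a default route towards its upstream transit.
routing_table = {
    ipaddress.ip_network("0.0.0.0/0"): "upstream-transit",
}

# The filter operator advertises host routes (/32) for addresses hosting
# blocked URLs, naming its filter server as the next hop.
filtered_hosts = ["203.0.113.10", "198.51.100.7"]  # example addresses only
for host in filtered_hosts:
    routing_table[ipaddress.ip_network(f"{host}/32")] = "dcefs-filter-server"

def next_hop(destination: str) -> str:
    """Pick the most specific matching route, as a router would."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    return routing_table[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("203.0.113.10"))   # dcefs-filter-server (diverted for checking)
print(next_hop("93.184.216.34"))  # upstream-transit (normal path, unaffected)
```

Because only host routes for the listed addresses are advertised, traffic to all other destinations never reaches the filter.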

If the website the user is trying to access is on the DIA's list of banned URLs, then the connection is blocked by the WhiteBox server. [16] The user instead sees a filter notice page and has the option of getting counselling or anonymously appealing the ban. [17]

If the website is not on the list of banned URLs, then the DIA transparently passes on the data to the actual website and the user is left unaware that the request was checked. [16]
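The decision the filter server makes for each diverted request can be summarised in a few lines. The sketch below is purely illustrative and is not NetClean WhiteBox code; the banned URL and the notice-page address are placeholders, not entries from the real list.

```python
# Illustrative only: the per-request decision described above.
BANNED_URLS = {
    "http://blocked.example/abuse-page",   # placeholder entry, not a real URL
}
NOTICE_PAGE = "https://filter-notice.example/stop"  # hypothetical notice page

def handle_diverted_request(requested_url: str) -> str:
    """Return what a diverted request should receive."""
    if requested_url in BANNED_URLS:
        # Blocked: the user is shown the filter notice page instead of the site.
        return f"redirect to {NOTICE_PAGE}"
    # Not on the list: pass the request through to the real site unchanged,
    # so the user never notices that the check took place.
    return f"proxy transparently to {requested_url}"

print(handle_diverted_request("http://blocked.example/abuse-page"))
print(handle_diverted_request("http://blocked.example/other-page"))
```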

ISPs using the system

Some of the largest ISPs in New Zealand, including Spark New Zealand, One NZ, 2degrees, Compass, Kordia, Maxnet, Now, and Xtreme Networks, use the DCEFS; as of 2017 these providers made up over 75% of the domestic market and included all of the country's cellular carriers. [3] [18] [19]

Legal details

The Films, Videos, and Publications Classification Act 1993 (FVPC Act) makes the Department of Internal Affairs (DIA) responsible for administering the restriction of objectionable content in the country. [2] This includes the power to seize offending publications, a power later interpreted to cover images and video posted online, since the original FVPC Act gave no guidelines for Internet content. [3] [2]

However, while the FVPC Act was interpreted to allow the search and seizure of Internet content hosted in New Zealand, it was not possible for the DIA to directly take down a website hosted in another jurisdiction. [20] Furthermore, the FVPC Act does not give the DIA the right to mandate the blocking of objectionable content hosted in other jurisdictions, meaning that it cannot create a compulsory filter. [21]

The FVPC Act defines many forms of objectionable content, such as depictions of torture, degrading sexual acts, bestiality, sexual violence, abuse of children, and necrophilia, especially in conjunction with the promotion of discrimination, crime, terrorism, or dehumanization. Because the DIA could not make the filter compulsory for ISPs, it chose to limit the filter to blocking the exploitation of children rather than targeting all objectionable content, as it is easier to garner public support for fighting child abuse. [3] [15]

Positions

Support

The DIA implemented the DCEFS with the stated intent of preventing child predators from accessing child abuse images, thereby limiting their spread as much as possible. [21] Proponents of the system tout its more than one million blocks per month as evidence of its necessity as part of a multifaceted approach to combating child exploitation. [22]

The DIA claims the system is helpful in educating users about this type of child abuse. [20] The system also prevents innocent users from accidentally accessing images of child abuse, which the DIA claims is a public expectation of the government and ISPs. [20]

In addition, supporters of the system argue that there is nothing inherently bad in ISPs offering internet filtering, as many ISPs offered it before the DCEFS was even built. [20]

Against

Critics of the DCEFS have cited numerous problems, including performance, transparency, and security concerns. While the DIA claims that the filter will not cause issues, opponents of the system point to major missteps, such as a Google-owned internet address being caught in the filter and causing significant slowdowns. [11] [23] There are also concerns that the filter simply will not work, as it can be bypassed by commonly available technologies such as encryption or non-HTTP file-sharing methods. [24]

Civil rights groups such as Tech Liberty NZ have criticized the system for its lack of transparency, citing the DIA's refusal to release the list of banned sites as well as what they view as a purposefully hidden launch of the system. [11] Tech Liberty NZ claims that the government could secretly add other sites it wants to restrict to the hidden list. [24]

Finally, there are concerns over the security of such a system, mainly due to its use of the trust-based Border Gateway Protocol. [25] If an attacker gained access to the system, they could redirect any internet traffic passing between New Zealand ISPs. [25] The DIA argues that this is not a vulnerability unique to the DCEFS and that its security is industry standard. [20]

See also

- Internet filter
- Australian Communications and Media Authority
- Internet censorship in Australia
- Classification Office (New Zealand)
- Pornography laws by region
- Internet censorship in Pakistan
- Internet censorship in India
- Cleanfeed (content blocking system)
- Internet censorship
- Internet censorship in the United States
- Internet censorship in Singapore
- Internet censorship in the United Kingdom
- Censorship in New Zealand
- Internet Watch Foundation
- Internet Watch Foundation and Wikipedia
- Internet censorship in France
- Internet censorship in Indonesia
- List of websites blocked in the United Kingdom
- Child abuse image content list
- Internet censorship and surveillance in Oceania

References

  1. "Internet and website filter". www.dia.govt.nz. Retrieved 9 November 2018.
  2. 1 2 3 4 "Films, Videos, and Publications Classification Act 1993 No 94 (as at 01 October 2018), Public Act Contents – New Zealand Legislation". www.legislation.govt.nz. Retrieved 15 November 2018.
  3. 1 2 3 4 Ctrl + Alt + Delete? Challenges to New Zealand censorship law in the internet age (Thesis). Hastings, Bill. Victoria University of Wellington. 2016.{{cite thesis}}: CS1 maint: others (link)
  4. "Films, Videos, and Publications Classification Amendment Act 2005 No 2, Public Act 25 New sections 122 and 122A substituted – New Zealand Legislation". www.legislation.govt.nz. Retrieved 15 November 2018.
  5. Keall, Chris (20 March 2009). "Joyce: Internet filtering off the agenda in NZ". NBR. Archived from the original on 10 March 2018. Retrieved 12 July 2009.
  6. Beagle, Thomas (10 May 2009). "The Response from Internal Affairs" . Retrieved 12 July 2009.
  7. Freitas, Mauricio Freitas (11 July 2009). "Government plans to filter New Zealand Internet" . Retrieved 12 July 2009.
  8. Hendery, Simon (16 July 2009). "Internet filter sparks outrage" . Retrieved 15 December 2014.
  9. "Independent Reference Group Meeting Minutes February 2010". www.dia.govt.nz. Retrieved 8 December 2018.
  10. "NZ government now filtering internet". Tech Liberty NZ. 11 March 2010. Retrieved 12 March 2010.
  11. 1 2 3 "New Zealand's internet filter goes live". Stuff. Retrieved 20 September 2018.
  12. Kelly, Makena (18 March 2019). "New Zealand ISPs are blocking sites that do not remove Christchurch shooting video". The Verge. Retrieved 18 March 2019.
  13. Brodkin, Jon (20 March 2019). "4chan, 8chan blocked by Australian and NZ ISPs for hosting shooting video". Ars Technica. Retrieved 20 March 2019.
  14. "Improvements to stopping Digital Child Exploitation | Beehive.govt.nz". www.beehive.govt.nz. Retrieved 7 December 2024.
  15. 1 2 "Web filter will focus solely on child sex abuse images". www.dia.govt.nz. Retrieved 9 November 2018.
  16. 1 2 3 4 "Technical FAQ". Tech Liberty NZ. 16 March 2010. Retrieved 9 November 2018.
  17. "Stop". www.dce.net.nz. Retrieved 9 November 2018.
  18. "Internet Service Providers using the filter". www.dia.govt.nz. Retrieved 9 November 2018.
  19. "2017 Annual Telecommunications Monitoring Report" (PDF). Commerce Commission New Zealand. 20 December 2017. Retrieved 9 November 2018.
  20. 1 2 3 4 5 "Common questions and answers". www.dia.govt.nz. Retrieved 15 November 2018.
  21. 1 2 "Explanatory statement". www.dia.govt.nz. Retrieved 15 November 2018.
  22. "Banned website visits shock experts". Stuff. Retrieved 8 December 2018.
  23. "DIA now filtering .. Google?". Tech Liberty NZ. 28 May 2013. Retrieved 8 December 2018.
  24. 1 2 "Why we oppose internet filtering". Tech Liberty NZ. 10 March 2010. Retrieved 8 December 2018.
  25. 1 2 "Guest article: Security risks of centralised filtering". Tech Liberty NZ. 14 March 2010. Retrieved 8 December 2018.