| Act of Parliament | |
|---|---|
| Long title | An Act to make provision for and in connection with the regulation by Ofcom of certain internet services; for and in connection with communications offences; and for connected purposes. |
| Citation | 2023 c. 50 |
| Introduced by | Michelle Donelan, Secretary of State for Science, Innovation and Technology (Commons); Lord Parkinson of Whitley Bay, Parliamentary Under-Secretary of State for Arts and Heritage (Lords) |
| Territorial extent | United Kingdom (most parts); Great Britain (certain parts) |
| Royal assent | 26 October 2023 |
| Commencement | On royal assent and by regulations |
| Status | Current legislation |
The Online Safety Act 2023 (c. 50) [1] [2] [3] is an Act of the Parliament of the United Kingdom to regulate online content. It received royal assent on 26 October 2023 and gives the relevant Secretary of State the power to designate, suppress, and record a wide range of online content that the United Kingdom deems illegal or harmful to children. [4] [5]
The Act creates a new duty of care for online platforms, requiring them to take action against illegal content, and against legal content that could be harmful to children where children are likely to access the service. Platforms failing this duty are liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher. It also empowers Ofcom to block access to particular websites. However, it obliges large social media platforms to preserve, rather than remove, access to journalistic or "democratically important" content, such as user comments on political parties and issues.
The Act also requires platforms, including end-to-end encrypted messengers, to scan for child pornography, which experts say is not possible to implement without undermining users' privacy. [6] The government has said it does not intend to enforce this provision of the Act until it becomes "technically feasible" to do so. [7] The Act also obliges technology platforms to introduce systems that will allow users to better filter out the harmful content they do not want to see. [8] [9]
The legislation has drawn criticism both within the UK and overseas from politicians, academics, journalists and human rights organisations, who say that it poses a threat to the right to privacy and freedom of speech and expression. [10] [11] [12] Supporters of the Act say it is necessary for child protection. [13] The Wikimedia Foundation and Wikimedia UK have said they will not implement age verification or identity checks, and in 2023 requested that lawmakers exempt public interest platforms from the Act's scope. [14] [15] In August 2025, the Wikimedia Foundation lost a challenge to aspects of the Act in the High Court. [16]
Within the scope of the Act is any "user-to-user service". This is defined as an Internet service by means of which content that is generated by a user of the service, or uploaded to or shared on the service by a user of the service, may be read, viewed, heard or otherwise experienced ("encountered") by another user, or other users. Content includes written material or messages, oral communications, photographs, videos, visual images, music and data of any description. [17]
The duty of care applies globally to services that have a significant number of United Kingdom users, that target UK users, or that are capable of being used in the United Kingdom where there are reasonable grounds to believe there is a material risk of significant harm. [17] The idea of a duty of care for Internet intermediaries was first proposed by Thompson (2016) [18] and popularised in the UK by the work of Woods and Perrin (2019). [19]
The duty of care in the Act comprises a number of specific duties that apply to all services within scope. [17]
For services "likely to be accessed by children", a test which adopts the same scope as the Age Appropriate Design Code, two additional duties are imposed. [17]
For "category 1" services, which will be defined in secondary legislation but are limited to the largest global platforms, four further duties apply. [17]
The Act empowers Ofcom, the national communications regulator, to block access to particular user-to-user services or search engines from the United Kingdom, including through interventions by internet access providers and app stores. The regulator can also impose, through "service restriction orders", requirements on ancillary services which facilitate the provision of the regulated services. [20] [21] [22]
Section 92 of the Act gives as examples (i) services which enable funds to be transferred, (ii) search engines which generate search results displaying or promoting content, and (iii) services which facilitate the display of advertising on a regulated service (for example, an ad server or an ad network). Ofcom must apply to a court for both access restriction orders and service restriction orders. [17]
Section 44 of the Act also gives the Secretary of State the power to direct Ofcom to modify a draft code of practice for online safety if deemed necessary for reasons of public policy, national security, or public safety. Ofcom must comply with the direction and submit a revised draft to the Secretary of State. The Secretary of State may give Ofcom further directions to modify the draft, and once satisfied, must lay the modified draft before Parliament. Additionally, the Secretary of State can remove or obscure information before laying the review statement before Parliament. [23]
The Act contains provisions allowing eligible entities to bring super-complaints on behalf of consumers. [24] The process for doing so was set out in regulations in July 2025. [25]
The Act imposes legal requirements to ensure that content moderation does not arbitrarily remove, or infringe access to, what it defines as journalistic content. [20] Large social networks are required to protect "democratically important" content, such as user-submitted posts supporting or opposing particular political parties or policies. [26] The government stated that news publishers' own websites, as well as reader comments on such websites, are not within the intended scope of the law. [20] [22]
Section 12 of the Act states that service providers have a duty to prevent children from seeing "primary priority content that is harmful to children". This includes pornographic images and content that encourages, promotes, or provides instructions for eating disorders, self-harm, or suicide. The Act requires service providers to use age verification or age estimation technology to prevent users from accessing primary priority content unless they are appropriately aged: the provision applies to all services that allow categories of primary priority content to be made available, including social networks and internet pornography services. [1] [27]
The Act adds two new offences to the Sexual Offences Act 2003: sending images of a person's genitals (cyberflashing), [28] or sharing or threatening to share intimate images. [29] The first conviction for cyberflashing under the new law occurred in March 2024 following a guilty plea. [30] [31]
The Act also updates and extends a number of existing communication offences. The false communications offence contained in section 179 replaces the offence previously found in s127(2)(a) and (b) of the Communications Act 2003. [32] [33] (The existing s127(1) offence remains in force.) [34] To be prosecuted under section 179, a defendant must be shown to have known that the information they transmitted was false, and to have intended to cause non-trivial psychological or physical harm to those receiving the message. Peter Coe of Birmingham Law School has suggested that these requirements might make it hard to prosecute offenders successfully, as "falsity is often difficult to ascertain", and "the offence's two-pronged mens rea, combined with the general difficulty in ascertaining falsity and non-trivial harm, and the current lack of clarity over the threshold, will make the offence difficult to prove, particularly in respect of borderline cases, which could further limit the offence's scope". [35] After the 2024 Southport stabbings and the ensuing riots, several individuals were prosecuted for knowingly spreading "fake news". For example, Dimitrie Stoica was jailed for three months for falsely claiming in a TikTok livestream that he was "running for his life" from rioters in Derby. [36]
Section 181 creates an offence of sending a message (via electronic or non-electronic means) that "conveys a threat of death or serious harm". [37] This can be tried summarily or on indictment. [38]
Section 183 creates an offence of "sending or showing flashing images electronically" if it is "reasonably foreseeable that an individual with epilepsy would be among the individuals who would view it", the sender intends to cause that person harm, and they have "no reasonable excuse". [39] [40] This is intended to prevent "epilepsy trolling". [41]
Section 184 makes "encouraging or assisting serious self-harm" a criminal offence. This is similar to the offence of encouraging or assisting suicide contained in the Suicide Act 1961. [42] The first conviction under this section occurred in July 2025. Tyler Webb used the messaging app Telegram to encourage a woman he had met on a mental health support forum to harm herself and send him pictures of the resulting injuries, and to attempt suicide while he watched on camera. [43] [44]
In 2021, the draft bill was given pre-legislative scrutiny by a joint committee of Members of the House of Commons and peers from the House of Lords. [45] The 2021 draft brought within scope any pornographic site with user-to-user functionality, but sites without this functionality, or which chose to remove it, were not in scope of the draft published by the government. [17] Oliver Dowden, the Secretary of State for Digital, Culture, Media and Sport, told the House of Commons DCMS Select Committee that he would consider a proposal made during pre-legislative scrutiny to extend the bill's scope to all commercial pornographic websites. [46]
Section 212 of the final Act repeals part 3 of the Digital Economy Act 2017, which required mandatory age verification to access online pornography but was never enforced by the government. [47] According to the then government, the Act addresses the major concern expressed by campaigners such as the Open Rights Group about the risk to user privacy posed by the Digital Economy Act's age verification requirement, by creating, for services within scope of the legislation, "A duty to have regard to the importance of... protecting users from unwarranted infringements of privacy, when deciding on, and implementing, safety policies and procedures." [48] [49] [17] In February 2022, the Digital Economy Minister, Chris Philp, announced that the bill would be amended to bring commercial pornographic websites within its scope. [50]
Civil liberties organisations such as Big Brother Watch and the Open Rights Group criticised the bill for its proposals to restrain the publication of lawful speech that was otherwise deemed harmful, which may amount to censorship or the "silencing of marginalised voices and unpopular views". [21] [26] [51] As a result, in November 2022, measures intended to require big technology platforms to take down "legal but harmful" materials were replaced with the requirement to provide systems to avoid viewing such content. [8]
In September 2023, during the third reading in the Lords, Lord Parkinson presented a ministerial statement from the government saying that the controversial powers allowing Ofcom to break end-to-end encryption would not be used immediately. This followed statements from several tech firms, including Signal, suggesting they would withdraw from the UK market rather than weaken their encryption. Nevertheless, the provisions were not removed from the Act, and Ofcom may at any time issue notices requiring providers to weaken end-to-end encryption. [6]
Supporters of the Act have said it is necessary for online child protection. [13] Critics have said the Act grants the Government of the United Kingdom extensive powers to regulate speech, set enforcement priorities, and pressure platforms into removing content without judicial oversight. [12] [52] [11]
Alan Woodward, a cybersecurity expert at the University of Surrey, described the Act's surveillance provisions as "technically dangerous and ethically questionable", stating that the government's approach could make the internet less safe, not more. He added that the Act makes mass surveillance "almost an inevitability" as security forces would be liable to mission creep, using the justification of "exceptional circumstances" to extend searches beyond their original remit. [7] Elena Abrusci, a scholar at Brunel Law School, suggests that the OSA provides adequate legal basis for service providers to remove illegal content, but does not adequately protect users from disinformation and online harassment, which is necessary for ensuring political participation. [53]
The National Society for the Prevention of Cruelty to Children (NSPCC) said the Act's passage was "a momentous day for children" and that it would help prevent abuse. [54] The Samaritans, which had lobbied to expand the Bill's reach to protect vulnerable users, also lent the final Act its cautious support. The organisation said it was a step forward, but criticised the government for not fulfilling its ambition to make the United Kingdom the "safest place to be online". [55] [56]
British human rights organisation Article 19 warned that the Act is "an extremely complex and incoherent piece of legislation" that "fails in effectively addressing the threat to human rights" such as freedom of expression and access to information. [52] Mark Johnson, Legal and Policy Officer at Big Brother Watch, said it was a "censor's charter" that undermines the rights to freedom of expression and privacy. [12] In February 2024, the European Court of Human Rights ruled in a separate case that requiring degraded end-to-end encryption "cannot be regarded as necessary in a democratic society" and was incompatible with Article 8 of the European Convention on Human Rights. [11]
A number of websites have stated that they would close as a result of the Act. London Fixed Gear and Single Speed, a forum for bicycle enthusiasts, announced its closure citing the high cost of legal compliance, as did Microcosm, a provider of forum hosting for non-commercial, non-profit communities. [57] [58] Other sites have instead blocked UK users, including the alt-tech social networking service Gab [59] [60] and the generative AI platform Civitai. [61]
Major technology firms have raised concerns over the Act's implications for user privacy and encryption. Apple Inc. called the legislation a "serious threat" to end-to-end encryption, warning that it could force the company to weaken security features designed to protect users from surveillance. [10] Meta Platforms similarly stated that it would rather have its WhatsApp and Facebook Messenger services blocked in the UK than weaken encryption standards. [62]
Some websites and apps stated that they would introduce age verification systems for users in response to a 25 July 2025 deadline set by Ofcom. [63] These include pornographic websites, as well as other services such as Bluesky (verification via Kids Web Services (KWS)), Discord, Tinder, Bumble, Feeld, Grindr, Hinge, Reddit (verification via Persona), X, and Spotify. [64] [65] [66]
In response to criticisms of the Act, Prime Minister Sir Keir Starmer has said it is necessary to protect children from online content such as "suicide sites". [67] The National Crime Agency, an agency under the Home Office, also insisted the legislation is necessary to safeguard children from online harms. [13]
Reform UK leader Nigel Farage, one of the bill's most vocal opponents, has called the Act "borderline dystopian" and said he would repeal it if elected to government. [68] His Reform UK colleague Zia Yusuf described the legislation as "an assault on freedom". [69]
The government of Jersey will not enforce the law in the territory, citing "inadequacies" in the legislation. The Jersey government initially objected that the law did not include a permissive extent clause that would have enabled it to join in enforcement, but later acknowledged that the omission of such a clause "in hindsight, might be a good thing". The territory will instead develop its own online safety legislation. [70]
Following the enactment of the law, there was a significant rise in downloads of VPN services by users in the UK, since these can circumvent age verification requirements by routing traffic through another country without such regulations. [71] Some users also successfully bypassed photo-based age verification services, such as Persona, using images of characters from the video game Death Stranding. [72] A petition calling for the repeal of the law attracted over 500,000 signatures on the UK Parliament petitions website. [73]
Public interest platforms such as Wikipedia have also raised strong objections, suggesting that the legislation risks undermining non-profit and community-governed websites. Rebecca MacKinnon of the Wikimedia Foundation said the Act was "harsh" and failed to distinguish between commercial tech giants and public knowledge projects. [74] Both the Foundation and Wikimedia UK have rejected calls to implement age verification or identity checks, citing concerns about data minimisation, privacy, and editorial independence. [14] [75] In June 2023, they issued an open letter urging lawmakers to exempt public interest platforms from the Act's scope. [15] [76]
In May 2025, the Wikimedia Foundation launched a judicial review against the potential designation of Wikipedia as a "category one" service under the Act, which would subject the site to the most stringent requirements. [77] The foundation warned that complying with the law would compromise Wikipedia's open editing model and invite state-driven censorship or manipulation. In particular, it was concerned that the extra duties attached to category one status would require it to verify the identity of its contributors, undermining their privacy and safety, and that the only way to avoid the designation would be to cut the number of people in the UK able to access the online encyclopaedia by about three-quarters, or to disable key functions of the site. The Daily Telegraph reported in July 2025 that Wikipedia may restrict access for UK users if the government insists on full compliance. [78] The High Court of Justice granted permission for judicial review but rejected the challenge in August 2025, [16] noting that Wikipedia had not yet been designated a category one service and that the foundation could bring a fresh challenge if it were so designated. [79] The following month, the foundation said it would not appeal the court's decision, and that it would monitor how the court's guidance is followed and how Wikipedia's status is protected. [80]
The U.S. Department of State Human Rights Practices report for 2024, published in August 2025, criticised the Online Safety Act as a hazard to the freedom of the press because it pressed U.S. social media platforms to "censor speech deemed misinformation or 'hate speech'". [81] [82]