Incident Log · 47 Events Recorded

Censorship Incidents

Laws passed, enforcement actions, platform mandates, and policy changes restricting speech — sourced from public records.

2025-12
European Union
CRITICAL · Enforcement

EU Issues First-Ever DSA Fine — Against X

The European Commission issued the first fine under the Digital Services Act in December 2025, targeting X (formerly Twitter). According to a February 2026 report by the US House Judiciary Committee — based on nonpublic documents — the DSA fine follows a decade-long effort beginning in 2015–2016 by the Commission to pressure platforms into censoring lawful political speech through informal forums, codes of practice, and ultimately binding law. The Committee's report alleges the Commission used disinformation frameworks to suppress political speech rather than merely illegal content.

DSA · x-twitter · platform-fine · censorship
2025-03
United Kingdom
HIGH · Policy Change

UK Bill Proposes Ministerial Powers to Ban Children from Social Media by Order

Labour MP Olivia Bailey introduced a bill that would grant the Science Secretary (Liz Kendall) sweeping executive powers to ban children of specified ages from entire social media platforms and chatbots, restrict "addictive" features, limit children's VPN use to circumvent content blocks, and raise the age of digital consent — all by ministerial order rather than primary legislation. Critics warned the VPN restriction and minister-by-order model created a template for broader internet access controls without parliamentary scrutiny.

children-online · social-media-ban · age-verification · VPN-restriction
2025-03
United Kingdom
CRITICAL · Policy Change

UK Reviews Expanding Ofcom Powers Over Encrypted Messaging

The UK government signalled intent to revisit Online Safety Act provisions targeting end-to-end encrypted messaging apps. Ofcom would gain powers to mandate "accredited technology" scanning of encrypted messages — a position cryptographers and privacy advocates described as technically equivalent to a backdoor. Signal and WhatsApp both publicly threatened to withdraw from the UK market rather than comply. The government later paused this enforcement track but did not remove the power from the Act.

encryption · backdoor · surveillance · messaging
2025-01
United Kingdom
CRITICAL · Policy Change

UK Mandates Age Verification for Social Media

Ofcom finalised requirements compelling major platforms to implement age verification for users under 18. Critics warned age verification inherently requires identity documentation, creating mass surveillance infrastructure and chilling anonymous speech. Implementation effectively requires platforms to collect and verify government-issued ID, creating a database of identity-linked internet usage for every UK user.

age-verification · surveillance · anonymity · children
2025-01
Virginia, United States
CRITICAL · Law Passed

Virginia HB61 — SWaM Mandate Bars Non-Certified Businesses from Contracts Under $100K

Virginia House Bill 61, championed by Governor Abigail Spanberger, mandated that 42% of state government contracts be awarded to certified SWaM (Small, Women-owned, and Minority-owned) businesses. Contracts under $100,000 were set aside exclusively for SWaM-certified vendors, structurally excluding white male-owned small businesses from bidding on a major tranche of state procurement. Critics argued the scheme constituted explicit race and sex discrimination in government contracting and may face constitutional challenge under Equal Protection principles established in Adarand Constructors v. Peña (1995) and reaffirmed by SFFA v. Harvard (2023).

SWaM · procurement-quota · government-contracting · race-discrimination
2024-12
European Union
HIGH · Enforcement

EU Threatens X with DSA Ban Over Content Moderation

The European Commission signalled it may pursue a temporary ban on X in the EU under DSA powers if the platform did not comply with moderation demands. Commissioner Jourová stated platforms must "act responsibly." Critics noted the EU was effectively coercing a private company to suppress lawful speech through regulatory threat, with no judicial oversight of the underlying content determinations.

DSA · x-twitter · platform-ban · censorship
2024-11
Australia
CRITICAL · Law Passed

Australia Passes World-First Social Media Ban for Under-16s

Australia passed the Online Safety Amendment (Social Media Minimum Age) Act 2024 — the world's first legislation banning children under 16 from social media platforms. Platforms face fines of AU$50 million for systemic failures. Implementation requires age verification infrastructure — critics warned this creates nationwide identity-linked internet access records and sets a global precedent for government-controlled social media access based on identity verification.

age-verification · social-media-ban · surveillance · australia
2024-10
United Kingdom / United States (Florida)
CRITICAL · Enforcement

BBC Panorama Splices Trump Jan. 6 Speech Pre-Election — $10B Lawsuit, Leadership Resignations

BBC Panorama's documentary "Trump: A Second Chance?" (aired 28 October 2024 — one week before the US presidential election) spliced together two sections of Trump's January 6, 2021 Capitol speech taken 54 minutes apart, removing Trump's explicit call for peaceful protest and manufacturing the false impression he directly incited the riot. The edit was exposed by a leaked internal memo from BBC standards adviser Michael Prescott, published by The Telegraph in November 2025, which described "serious and systemic bias" across BBC reporting. BBC Director-General Tim Davie and BBC News CEO Deborah Turness both resigned on 9 November 2025 — the most significant leadership collapse in the BBC's modern history. The BBC issued a formal apology acknowledging the edit created "the mistaken impression that President Trump had made a direct call for violent action" but rejected compensation. Trump filed a $10 billion lawsuit in Florida federal court in December 2025 ($5B defamation, $5B Florida Deceptive Trade Practices), calling the documentary "a brazen attempt to interfere in and influence the 2024 US Presidential Election." The 33-page complaint argues it was "impossible" to splice sections nearly an hour apart accidentally. On 16 March 2026, the BBC filed a motion to dismiss, arguing the Florida court lacks jurisdiction because the documentary never aired in the US. A trial date of February 2027 has been provisionally set.

BBC · Panorama · Trump · Jan6
2024-09
Global / United Nations
HIGH · Policy Change

UN Global Digital Compact — International Framework for Platform Content Governance Adopted

The United Nations adopted the Global Digital Compact at the Summit of the Future in September 2024 — a multilateral framework committing signatory governments and platforms to cooperate on "information integrity" and suppress "mis- and disinformation." The Compact, championed by Secretary-General António Guterres, established principles for platform governance that critics warned would export the speech standards of the UN's least free member states into democratic regulatory frameworks. It called for "multi-stakeholder" content governance — a structure in which governments, platforms, and UN-adjacent civil society organisations jointly determine what speech is permissible online. Freedom House, the Electronic Frontier Foundation, and Article 19 all raised concerns that the Compact's disinformation provisions lacked free expression safeguards and could be used to legitimise authoritarian content controls internationally.

UN · global-digital-compact · disinformation · platform-governance
2024-09
United Kingdom
CRITICAL · Law Passed

UK Online Safety Act — First Enforcement Phase Begins

Ofcom began the first wave of enforcement under the Online Safety Act 2023, requiring platforms to complete illegal content risk assessments. Failure carries fines of up to 10% of global annual turnover or £18 million, whichever is greater. Critics noted that the Act's broad duties — including those covering content that is legal but harmful to children — could compel platforms to over-censor lawful speech, and that the risk assessment framework created permanent surveillance obligations over platform content.

online-safety-act · platform-liability · ofcom · content-moderation
2024-08
France
CRITICAL · Enforcement

France Arrests Telegram CEO Pavel Durov — Prosecution of Platform for User Content

Pavel Durov, founder and CEO of Telegram, was arrested at Le Bourget airport near Paris on 24 August 2024 under a broad investigation into crimes allegedly facilitated through Telegram. Charges included complicity in drug trafficking, CSAM distribution, and fraud — all based on content posted by third-party users on the platform, rather than actions by Durov personally. He was held for several days before being released under judicial supervision. Free speech advocates including the ACLU and EFF warned the arrest represented an alarming attempt to hold a platform operator personally criminally liable for user speech — a precedent that could chill encrypted communication platforms globally. Durov later reached a deal with French prosecutors.

france · Telegram · Durov · platform-liability
2024-08
United Kingdom
CRITICAL · Enforcement

UK Prosecutes Citizens for Memes and Reposts — Multi-Year Sentences for Single Social Media Posts

In the weeks following the August 2024 riots, UK courts handed down custodial sentences of up to three years for individual social media posts, memes, and reposts — many made by people with no connection to any physical disorder. Cases included a woman sentenced to 15 months for a Facebook post she later deleted, a man jailed for sharing an anti-immigration meme, and multiple individuals prosecuted under the Malicious Communications Act 1988 and the Communications Act 2003 for posts made years before the riots. The Crown Prosecution Service confirmed over 480 individuals had been charged with communications offences by October 2024. Civil liberties groups including Liberty and Big Brother Watch described the prosecutions as disproportionate and politically selective, noting that comparable content from different political perspectives was not prosecuted.

UK · riots · malicious-communications · memes
2024-08
European Union / United States
CRITICAL · Enforcement

Thierry Breton Sends Public Warning Letter to Elon Musk Before Trump Interview

EU Commissioner Thierry Breton sent a highly publicised warning letter to Elon Musk ahead of a scheduled interview between Musk and Donald Trump on X, warning of "specific DSA obligations" regarding content that could "incite violence." Critics described the letter as naked political intimidation: a government official threatening regulatory consequences against a media platform over an interview with a political candidate in the run-up to the US presidential election. The letter drew condemnation from US politicians and free speech advocates as an extraterritorial attempt to interfere in American political discourse.

DSA · Breton · Musk · Trump
2024-08
United Kingdom
CRITICAL · Enforcement

UK Post-Riot Social Media Arrests — Hundreds Jailed for Online Posts

Following riots in several English towns in late July and early August 2024 (sparked by the Southport stabbings), UK authorities mounted one of the largest peacetime crackdowns on social media speech in a Western democracy. Over 800 people were arrested; many were charged with offences including "stirring up racial hatred" and communications offences for posts, memes, and reposts — some made years before the riots. Sentences of several years were handed down for individual social media posts. Prime Minister Keir Starmer personally warned that "the full force of the law" would be applied to online speech. Critics, including US politicians and free speech advocates, described the prosecutions as disproportionate and politically selective.

riots · Southport · social-media · hate-speech
2024-07
European Union
HIGH · Enforcement

European Commission Opens DSA Proceedings Against X, TikTok, Meta

The European Commission formally opened Digital Services Act enforcement proceedings against X, TikTok, and Meta. The DSA requires platforms to proactively mitigate "systemic risks" — a vague standard critics say mandates over-removal of political and minority speech — with potential fines of up to 6% of global annual revenue. The proceedings followed intensive lobbying by EU officials and were criticised as a regulatory weapon against politically disfavoured platforms.

DSA · platform-regulation · enforcement · disinformation
2024-06
European Union
CRITICAL · Policy Change

EU Chat Control — Proposal for Mass Scanning of Private Messages Advances

The EU's proposed "Chat Control" regulation (formally, the proposed Child Sexual Abuse Regulation) advanced through committee, proposing mandatory client-side scanning of private encrypted messages to detect CSAM. Cryptographers, privacy organisations, and digital rights groups warned the proposal was technically equivalent to abolishing end-to-end encryption, creating mass surveillance infrastructure. Multiple member states opposed it; the proposal was paused in June 2024 after failing to secure a qualified majority, but it was not withdrawn and continued to circulate in revised forms.

chat-control · encryption · surveillance · CSAM
2024-06
European Union
HIGH · Policy Change

EU Code of Practice on Disinformation — Platforms Report Removal Metrics

Platforms published transparency reports under the EU Code of Practice on Disinformation showing removal rates for "coordinated inauthentic behaviour." Critics argued the Code incentivised platforms to over-remove political content to avoid regulatory scrutiny. State-funded EDMO fact-checkers flagged content for removal, effectively giving government-adjacent bodies veto power over online political speech.

disinformation · fact-checking · censorship · political-speech
2024-04
United States
CRITICAL · Law Passed

US Congress Passes TikTok Divest-or-Ban Law — Signed by Biden

The US Congress passed, and President Biden signed, legislation requiring ByteDance to divest TikTok's US operations within 270 days or face a US ban. TikTok challenged the law in federal court arguing it violated First Amendment free speech rights by discriminating against a specific speaker based on its content. The DC Circuit upheld the law in December 2024; the Supreme Court unanimously upheld it in January 2025. The law represented the most significant government action to restrict a major speech platform in US history — critics noted the First Amendment implications of government banning a platform used by 170 million Americans for expressive activity.

TikTok · ByteDance · platform-ban · first-amendment
2024-04
Australia
CRITICAL · Enforcement

Australia Orders Global Takedown of X Content — Courts Push Back

eSafety Commissioner Julie Inman Grant issued orders requiring X (Twitter) to remove footage of the Wakeley church stabbing globally — not just for Australian users. X complied with geoblocking within Australia but contested the global scope. The Federal Court initially granted an interim injunction but later declined to extend it, reasoning that an order governing a US company's global content decisions would exceed what was reasonable and would likely prove unenforceable abroad; the Commissioner subsequently dropped the case. The episode raised worldwide alarm about a single national regulator claiming global censorship jurisdiction.

takedown · global-censorship · extraterritorial · x-twitter
2024-04
Ireland
CRITICAL · Policy Change

Ireland's Criminal Justice (Incitement to Violence or Hatred) Bill — Withdrawn After Public Backlash

Ireland's government attempted to pass sweeping new hate speech legislation that would have criminalised possession of material "likely to incite hatred" — including material on private devices — and created broadly defined offences for "condoning, denying or grossly trivialising" designated historical events. The bill passed the Dáil but faced intense public opposition, and in September 2024 Minister for Justice Helen McEntee announced the incitement-to-hatred provisions would be dropped before Seanad passage, leaving only the hate crime sentencing elements to proceed. Critics had warned the provisions on "possession" of hateful material were totalitarian in scope; commentators including Dave Rubin highlighted the bill as an example of speech criminalisation in a Western democracy.

ireland · hate-speech · thought-crime · possession-offence
2024-03
United Kingdom
HIGH · Policy Change

UK Ofcom Publishes Online Safety Act Codes of Practice — Platform Liability for User Speech

Ofcom published its first Codes of Practice under the Online Safety Act, setting out in detail the measures platforms must take to manage "illegal content" risk assessments and implement safety systems. The Codes place specific obligations on platforms regarding detection, reporting, and removal of content — creating mandatory infrastructure for content surveillance. Platforms that do not comply risk fines of up to 10% of global revenue. Critics warned the Codes' prescriptive requirements would cause platforms to implement automated content scanning as the only economically viable compliance approach.

ofcom · online-safety-act · codes-of-practice · platform-liability
2024-03
European Union
HIGH · Policy Change

EU DSA "Trusted Flagger" System — Governments and NGOs Gain Priority Content Removal Access

The Digital Services Act's "trusted flagger" provisions came into operational effect, requiring very large platforms to treat notices from EU-designated trusted flaggers as priority removal requests — with platforms obligated to process them with "priority" over ordinary user reports. Trusted flagger status was granted to a range of government bodies and NGOs across EU member states. Critics warned the system created a privatised censorship pipeline giving government-adjacent organisations fast-track access to remove content without judicial oversight, and that the designation criteria lacked transparency.

DSA · trusted-flaggers · government-access · priority-removal
2024-02
Canada
CRITICAL · Policy Change

Canada's Online Harms Act (Bill C-63) — Retroactive Hate Speech Liability

The Canadian government introduced Bill C-63, the Online Harms Act, which would create a new Digital Safety Commission with powers to order content removal, reinstate a civil hate speech provision in the Canadian Human Rights Act, and allow peace bonds (court-imposed conditions that could include house arrest) based on fear that a person might commit a future hate offence. Most controversially, the bill would allow complaints about speech made years or decades before its passage: Section 23 would permit individuals to file Canadian Human Rights Commission complaints about online content posted at any time in the past. Critics including the Canadian Civil Liberties Association described the retroactivity provision as a fundamental violation of the rule of law. The bill stalled in Parliament and lapsed when Parliament was prorogued in January 2025.

canada · online-harms · hate-speech · retroactive-liability
2023-07
United States
HIGH · Court Ruling

X Corp Sues CCDH — Alleges NGO Manipulated Platforms Using Data Access

X Corp (Twitter) filed a lawsuit in federal court against the Center for Countering Digital Hate, alleging the CCDH had misused data access to manufacture pressure campaigns against the platform and had tortiously interfered with X's business. The complaint alleged CCDH used its "trusted researcher" data access to cherry-pick content and produce misleading reports designed to drive advertisers away from X. A federal judge dismissed the case in March 2024, granting CCDH's anti-SLAPP motion and finding X had not adequately pleaded its claims — but not before the suit generated significant discovery disputes over CCDH's internal communications and methodology. Critics of the dismissal noted the judge did not rule that CCDH's methodology was sound, only that X's claims failed as pleaded.

CCDH · x-twitter · lawsuit · advertiser-pressure
2023-06
Australia
CRITICAL · Policy Change

Australia's ACMA Misinformation Bill — Government Power to Define "False" Content

The Australian Government introduced the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023, granting the Australian Communications and Media Authority (ACMA) powers to enforce codes and standards against online "misinformation and disinformation." Critics warned the bill's definition of misinformation — content that is "false, misleading or deceptive" and causes or is likely to cause "serious harm" — was so broad it would capture legitimate political debate. Notably, government and authorised electoral content was explicitly exempted from the misinformation definition — critics described this as the government licensing itself to spread disinformation while criminalising public dissent. The bill was withdrawn in November 2024 after Senate opposition, but the regulatory intent was maintained through ongoing ACMA activity.

misinformation · ACMA · disinformation · government-exemption
2023-06
Canada
HIGH · Law Passed

Canada's Online News Act (Bill C-18) — Platforms Block News; Meta Withdraws

Canada passed Bill C-18 (Online News Act), requiring digital platforms to pay news publishers for linking to or displaying news content. Meta responded by blocking all Canadian news content from Facebook and Instagram. Google threatened to do the same before reaching a deal. Critics argued the law distorted the news media market by subsidising established legacy outlets, created government-adjacent pressure on which journalism receives financial support, and demonstrated how regulation ostensibly protecting speech can produce mass censorship by causing platforms to withdraw news access entirely.

canada · online-news-act · bill-c18 · meta
2023-05
Germany
HIGH · Court Ruling

HateAid Secures German Court Orders Compelling Platforms to Reveal Anonymous Users

HateAid, a German NGO, secured court orders under NetzDG and related German law compelling Meta and X to reveal the real identities of anonymous users who had posted content HateAid classified as hate speech. The rulings set a precedent for court-ordered de-anonymisation in Germany, allowing private litigants to unmask anonymous speakers without criminal proceedings. Free speech and privacy advocates warned the precedent chilled anonymous online political speech and could be used against whistleblowers, activists, and dissidents.

HateAid · de-anonymisation · NetzDG · anonymous-speech
2023-02
United States / United Kingdom
HIGH · Enforcement

Global Disinformation Index — US Taxpayer-Funded NGO Systematically Defunds Conservative Outlets

Investigative reporting by the Washington Examiner exposed that the Global Disinformation Index (GDI), a UK-based NGO, had received funding from the US State Department and USAGM's Open Technology Fund while operating a "dynamic exclusion list" — a blocklist advising programmatic advertisers to defund specific news outlets. GDI's ratings systematically scored conservative and heterodox outlets (including the New York Post, Washington Examiner, Reason, and The Federalist) as high-risk for disinformation while rating mainstream liberal-leaning outlets as low-risk. The methodology was not peer-reviewed and GDI refused to disclose its rating criteria. The report prompted congressional investigations and ultimately the withdrawal of US government funding.

GDI · disinformation · government-funded · advertiser-boycott
2023-01
United States
HIGH · Policy Change

US Congress Section 230 Reform Push — Threat to Platform Speech Immunity

Multiple bills in the 118th Congress proposed curtailing or repealing Section 230 of the Communications Decency Act — the foundational US law providing platforms immunity from liability for user-generated content. Bipartisan proposals ranged from removing immunity for algorithmically amplified content (SAFE TECH Act) to conditioning immunity on platform content moderation practices (PACT Act). Free speech advocates warned that removing or conditioning Section 230 protections would cause platforms to either massively over-censor (to reduce liability exposure) or refuse to host user content altogether. The Supreme Court heard two cases — Gonzalez v. Google and Twitter v. Taamneh — declining to narrow Section 230 in both.

section-230 · platform-immunity · US-congress · content-moderation
2022-12
Ireland
CRITICAL · Law Passed

Ireland Passes Online Safety & Media Regulation Act — Creates EU's De Facto Global Regulator

Ireland passed the Online Safety and Media Regulation (OSMR) Act 2022, creating Coimisiún na Meán (the Media Commission) and the Online Safety Commissioner role. Because most major US tech platforms — Meta, Google, TikTok, Apple — have European headquarters in Ireland, the Irish regulator became the designated Digital Services Coordinator for the EU's DSA, giving it primary jurisdiction over content moderation decisions affecting hundreds of millions of users globally. Critics noted that a single national regulator in a country of 5 million people effectively became a global content moderation authority.

ireland · OSMR · DSA · Coimisiún-na-Meán
2022-12
United States
CRITICAL · Enforcement

Twitter Files Released — Documents Show Government-Coordinated Speech Suppression

Following Elon Musk's acquisition of Twitter in October 2022, journalists Matt Taibbi, Michael Shellenberger, and Bari Weiss, among others, were given access to internal Twitter documents. The "Twitter Files" revealed that the FBI and DHS routinely submitted account suspension and content removal requests; that the Hunter Biden laptop story was suppressed before the 2020 election amid takedown requests from Biden campaign and Democratic Party officials; that blacklists ("trends blacklist," "search blacklist") were applied to hundreds of accounts without user notification; and that a "Site Integrity Policy" gave staff broad discretion to suppress content without transparent public standards. The files also showed Twitter employees expressing political animus in internal discussions about conservative accounts.

twitter-files · FBI · DHS · government-censorship
2022-11
United States
CRITICAL · Enforcement

CISA "Switchboarding" — US Government Uses Agency to Route Takedown Requests to Platforms

Documents released through the Twitter Files (2022–2023) and subsequent congressional investigations revealed that the Cybersecurity and Infrastructure Security Agency (CISA) operated a "switchboarding" system routing federal government content flagging requests to social media platforms. DHS-affiliated bodies including the Stanford Internet Observatory and the Election Integrity Partnership (EIP) participated in a coordinated infrastructure — later called the "Censorship Industrial Complex" by the House Judiciary Committee — that collated government, NGO, and platform moderation requests. A federal judge in Missouri v. Biden (2023) found this likely violated the First Amendment, calling it "the most massive attack against free speech in United States' history." The Fifth Circuit upheld a narrowed version of the injunction; the Supreme Court reversed on standing grounds in Murthy v. Missouri (2024).

CISA · government-censorship · switchboarding · twitter-files
2022-09
Global / United Nations
HIGH · Policy Change

Ardern's UN General Assembly Speech — Calls "Disinformation" a Weapon of War Requiring Global Regulation

In her address to the 77th UN General Assembly, New Zealand Prime Minister Jacinda Ardern delivered one of the most explicit calls by a Western democratic leader for international coordination against online speech. Ardern described "disinformation" and "hateful ideologies" as "weapons of war" and called on the international community to develop coordinated regulatory responses to online content. She explicitly framed free speech as insufficient justification for platform non-intervention, stating: "But what definition of 'free speech' says that freedom comes with no accountability?" Critics including Reason, the Foundation for Individual Rights and Expression (FIRE), and numerous civil liberties commentators described the speech as a world leader openly calling for government control of online political speech under humanitarian framing.

ardern · UN · disinformation · global-censorship
2022-06
Global
HIGH · Platform Action

Meta's "Trusted Partner" and "XCheck" Programs — VIP Exemptions Reveal Dual-Standard Moderation

Leaked internal documents (published by the Wall Street Journal in 2021 as the "Facebook Files" and confirmed in subsequent congressional testimony) revealed Meta's "XCheck" (cross-check) program gave millions of high-profile accounts — celebrities, politicians, news publishers — near-blanket exemptions from standard content moderation rules. Content that would be removed or penalised for ordinary users was allowed to remain for "whitelisted" accounts, creating a two-tier speech system. Internal documents showed Facebook was aware the XCheck program allowed harmful content from verified users to circulate unchecked. Separately, Meta's "Trusted Partner" designation gave governments and approved NGOs priority reporting access — allowing them to flag content for expedited review and removal.

meta · xcheck · trusted-partner · two-tier-moderation
2022-02
Canada
CRITICAL · Enforcement

Canada Invokes Emergencies Act Against Freedom Convoy — Bank Accounts Frozen Without Court Order

Prime Minister Justin Trudeau invoked the Emergencies Act on 14 February 2022 — the first use of the Act since its passage in 1988 — in response to the Freedom Convoy protests against COVID vaccine mandates in Ottawa and at border crossings. The Act granted sweeping executive powers including the authority to freeze the bank accounts of donors and protesters without judicial process. Over 200 accounts were frozen. The RCMP and financial institutions were directed to act on government-provided lists of individuals. The Public Order Emergency Commission (the Rouleau inquiry) subsequently found the invocation was legally justified — but a Federal Court judge ruled in January 2024 that the invocation was unconstitutional as the legal threshold had not been met. Critics described the bank account freezing as the weaponisation of the financial system against political protest — a tactic associated with authoritarian regimes rather than liberal democracies.

canada · emergencies-act · freedom-convoy · bank-account-freezing
2021-11
Canada
HIGH · Law Passed

Canada's Bill C-10 (Online Streaming Act) — CRTC Regulation of User-Generated Content

Bill C-10, reintroduced after the 2021 election as Bill C-11 and ultimately passed as the Online Streaming Act, sought to extend Canadian Radio-television and Telecommunications Commission (CRTC) broadcast regulation to user-generated content on platforms including YouTube, TikTok, and Spotify. An amendment removed Section 4.1, a clause that had exempted user-generated content, alarming free speech advocates who noted ordinary Canadians' posts would become subject to state broadcast regulation. The final Act passed in 2023, granting the CRTC powers to require platforms to promote Canadian content; critics argued this created a content-prioritisation regime with government-dictated speech hierarchies.

canada · bill-c10 · CRTC · user-generated-content
2021-07
China / Global
CRITICAL · Platform Action

Apple Removes Thousands of Apps from China App Store Under Government Pressure

Apple removed more than 1,000 apps from its Chinese App Store between 2017 and 2021 following demands from Chinese regulators, including VPN apps, news applications, and apps supporting Taiwanese and Tibetan content. A 2021 investigation by the New York Times found Apple had removed apps at a rate far higher than publicly disclosed and had stored Chinese user data on servers controlled by a Chinese state-owned enterprise. Critics argued Apple's compliance with Chinese censorship demands — which removed tools enabling free expression for hundreds of millions of users — demonstrated how corporate platform governance can become a vector for state censorship at global scale.

apple · china · app-store · VPN-removal
2021-03
United States / United Kingdom (Global)
HIGH · Platform Action

CCDH "Disinformation Dozen" Report — Directly Triggers Mass Deplatforming

The Center for Countering Digital Hate published its "Disinformation Dozen" report, claiming 12 individuals were responsible for 65% of anti-vaccine misinformation on social media. The figure — subsequently disputed by independent researchers and by Meta itself — was used as the basis for demands that platforms permanently ban the named individuals. Several were deplatformed or demonetised following the report's publication. The Biden White House publicly cited the CCDH report when pressuring platforms to remove vaccine-sceptic content, with Press Secretary Jen Psaki stating that "12 individuals are driving 65% of the anti-vaccine misinformation on social media platforms." X Corp later sued the CCDH alleging it misused data access to manufacture political pressure campaigns; the lawsuit was dismissed in March 2024.

CCDHdisinformation-dozendeplatformingvaccine-speech
2021-02
India
CRITICALLaw Passed

India IT Rules 2021 — Government Takedown Powers and a Traceability Mandate Undermining Anonymity

India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 required platforms to appoint India-resident grievance and compliance officers, comply with government and court takedown orders within 36 hours, and, most controversially, enable traceability of the "first originator" of messages on messaging apps. WhatsApp challenged the traceability requirement in Indian courts, arguing it could not be implemented without breaking end-to-end encryption for all users. The rules also created a government oversight body for online news and OTT content, which critics said gave authorities sweeping censorship powers over digital journalism.

indiaIT-rulestraceabilityencryption
2021-01
United States / Global
CRITICALPlatform Action

Twitter Permanently Bans Sitting US President Donald Trump

Twitter permanently suspended the account of sitting US President Donald Trump on 8 January 2021, two days after the Capitol events, citing the risk of "further incitement of violence." The decision was made by legal and policy chief Vijaya Gadde and CEO Jack Dorsey; no external judicial or regulatory process was involved. The ban was arguably the most consequential single act of platform-level political deplatforming to date, removing a sitting head of state from the world's primary political discourse platform. Gadde later acknowledged in congressional testimony that the decision was made under intense external pressure and in an atmosphere of internal political consensus at the company. The Twitter Files subsequently showed internal discussions in which Twitter employees expressed satisfaction at the ban and framed it in explicitly political terms.

twittertrump-bandeplatforminggadde
2021-01
United States / Global
CRITICALPlatform Action

Meta Bans Trump — Oversight Board Upholds Suspension, Declines to Rule on Permanence

Following the January 6, 2021 Capitol events, Facebook and Instagram suspended President Donald Trump indefinitely. Meta referred the case to its Oversight Board, the company's private quasi-judicial content moderation body. In May 2021 the Board upheld the suspension but ruled that its indefinite, open-ended nature was inconsistent with Meta's own policies, requiring Meta to re-evaluate within six months. Meta subsequently reinstated Trump's accounts in February 2023, following a "guard rails" assessment. Critics on both sides noted the episode illustrated the lack of due process in platform bans affecting major political figures, and the Oversight Board's inability to constrain Meta's ultimate discretion.

metafacebooktrump-banoversight-board
2020-10
United States
CRITICALPlatform Action

Twitter Suppresses New York Post Hunter Biden Story Days Before 2020 Election

Twitter blocked users from sharing a New York Post story about Hunter Biden's laptop and the business dealings it documented, locking the Post's account and preventing users from posting the article's URL, including via direct message, days before the 2020 US presidential election. Vijaya Gadde personally approved the suppression, and Twitter's then head of Site Integrity, Yoel Roth, later acknowledged the action had no clear policy basis. The story's authenticity was subsequently corroborated by The New York Times and The Washington Post. The Twitter Files (2022) revealed the suppression followed informal outreach from Democratic Party operatives and an FBI briefing warning platforms to be alert to "hack and leak" operations, which had pre-emptively framed the authentic story as potential disinformation.

twitterhunter-bidenlaptopelection-interference
2020-06
France
HIGHCourt Ruling

France's Avia Law — Struck Down by Constitutional Council After Imposing 24-Hour Takedown Mandate

France's "Loi Avia" (named for its sponsor, MP Laetitia Avia) required platforms to remove content reported as "manifestly illicit" hate speech within 24 hours, and terrorist or child sexual abuse content within one hour, or face fines of up to €1.25 million per violation. France's Constitutional Council struck down the key provisions in June 2020, finding the takedown deadlines would force platforms to remove content without adequate review, a disproportionate restriction on freedom of expression. The ruling was a landmark in European constitutional law, recognising that excessive takedown-speed mandates structurally produce censorship.

franceAvia-lawtakedown-mandateconstitutional-court
2019-10
Singapore
CRITICALLaw Passed

Singapore POFMA — Ministers Gain Direct Power to Order Content "Corrections" or Removal

Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA) came into force, granting individual government ministers, not courts, direct power to issue correction directions or removal orders against online content they deem false. POFMA has since been used repeatedly against opposition politicians, critics of the government, and independent media: the government has issued POFMA directions against the Workers' Party, academics, and outlets including The Online Citizen. Critics described POFMA as a targeted tool against political dissent dressed up as anti-misinformation law.

singaporePOFMAdisinformationministerial-censorship
2019-05
New Zealand / France / Global
CRITICALPolicy Change

Christchurch Call — Governments and Platforms Commit to Remove "Extremist" Content Without Judicial Oversight

New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron launched the Christchurch Call in Paris — a joint government-platform commitment to eliminate "terrorist and violent extremist content" online. Signatories included major platforms (Facebook, Google, Twitter, Microsoft) and 17 governments. The Call established no independent judicial oversight mechanism, no appeals process, and left content classification to a combination of government pressure and platform discretion. Critics including the Electronic Frontier Foundation warned the Call's vague definitions — "violent extremist content" was never precisely defined — created a framework governments could use to pressure platforms into removing legitimate political speech, journalism, and historical documentation under counter-terrorism framing. The US government notably declined to sign, with the Trump administration citing First Amendment concerns.

christchurch-callnew-zealandfranceextremist-content
2019-03
New Zealand
CRITICALEnforcement

New Zealand Blocks Websites and Criminalises Possession of Christchurch Footage

In the days following the Christchurch mosque attacks, New Zealand's Chief Censor used powers under the Films, Videos, and Publications Classification Act to classify the attacker's manifesto and livestream footage as "objectionable," making knowing possession a criminal offence punishable by up to 10 years' imprisonment and distribution by up to 14. The Department of Internal Affairs activated website blocking, and ISPs were instructed to block access to sites hosting the footage. Several New Zealanders were subsequently investigated or prosecuted for sharing the footage. Free speech advocates warned that criminalising possession rather than just distribution, combined with extrajudicial website blocking, set a dangerous precedent for emergency executive content control without judicial process.

new-zealandchristchurchcontent-blockingpossession-offence
2017-10
Germany
CRITICALLaw Passed

Germany's NetzDG — The Global Template for Platform Speech Laws

Germany's Netzwerkdurchsetzungsgesetz (NetzDG) came into force, requiring social media platforms with over 2 million registered users in Germany to remove "obviously illegal" content within 24 hours (and other illegal content within 7 days) or face fines of up to €50 million. Critics warned the short compliance window and severe fines would push platforms to over-remove content rather than risk regulatory penalties, creating structural censorship through liability. NetzDG became the global template for platform speech regulation, directly influencing the EU's DSA and similar laws in Australia, India, Singapore, Brazil, and the UK. Legal scholars documented over-removal of lawful speech in the years following its introduction.

NetzDGgermanyplatform-liabilitytakedown