Saturday, April 11, 2026

Digital ID Fight Escalates and Grand Jury Targets Anonymous Reddit User



Plus: Congress Mandated the Backdoors That Got Hacked and Is Trying to Demand More
Reclaim The Net is funded by the community. If you support free speech and restoring privacy and civil liberties, please become a supporter here. Thank you.
April 11, 2026

SUPPORTERS
1

Congress Mandated the Backdoors That Got Hacked and Is Trying to Demand More

Thirty years ago, Congress forced every phone company in America to build a surveillance backdoor into its network. 

Last year, a foreign government walked right through it, and what they accessed is worse than anything that's been publicly reported. 

But the real story isn't the hack. It's what happened next: who lobbied whom, which rules got quietly killed, and how the government's supposed "fix" has nothing to do with security and everything to do with protecting the companies whose negligence made the breach possible. 

Today, we follow the money, pull the timelines apart, and find a pattern that keeps repeating, one that's about to repeat again with your private messages...

Become a supporter here.

Get the post here.
BECOME A SUPPORTER

PUSHING BACK
1

xAI Sues Colorado to Block AI Speech Regulation on First Amendment Grounds

Elon Musk's AI company filed a federal lawsuit on Thursday asking a judge to block Colorado from enforcing a law that would let the state dictate what Grok can and cannot say.

The complaint, lodged in the US District Court for the District of Colorado against Attorney General Philip Weiser, calls Senate Bill 24-205 unconstitutional on First Amendment, Dormant Commerce Clause, and Equal Protection grounds. xAI wants the whole thing thrown out before it takes effect on June 30.

We obtained a copy of the lawsuit for you here.

SB 24-205 defines "algorithmic discrimination" as any AI output that results in "unlawful differential treatment or impact that disfavors an individual or group" based on protected characteristics.

But the law then carves out an exemption for discrimination designed to "increase diversity or redress historical discrimination." The state, in other words, built a law that bans one kind of differential treatment while explicitly blessing another. The distinction rests entirely on whether Colorado approves of the reason.

That's a content and viewpoint distinction written into statute. xAI's complaint argues the law "compels Plaintiff xAI to alter Grok, forcing Grok's output on certain State-selected subjects to conform to a controversial, highly politicized viewpoint."

The company calls the measure "an effort to embed the State's preferred views into the very fabric of AI systems" and says it would force developers "to distort their AI models to seek and output progressive ideology instead of the truth."

The constitutional argument about the bill itself has real weight. A law that regulates AI outputs based on whether the resulting discrimination serves the state's preferred goals is the kind of viewpoint-based speech regulation that courts subject to the highest level of scrutiny.

The law's scope is enormous. A "high-risk" AI system is defined as one that "makes, or is a substantial factor in making, a consequential decision" in areas including employment, housing, education, healthcare, and financial services.

The definition of "substantial factor" is breathtakingly broad, covering "any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision."

If someone uses Grok to draft interview questions or summarize a stack of CVs, that's enough. The AI doesn't have to make the hiring call itself. It just has to touch the process.

And the law applies wherever a single Colorado resident might be affected. xAI is incorporated in Nevada, headquartered in California, and has no offices in Colorado. The complaint argues that SB 24-205 regulates development and deployment activities that happen entirely outside the state, in violation of the Dormant Commerce Clause.

Colorado's own political leadership hasn't been able to settle on whether this law is a good idea. Governor Jared Polis signed it in May 2024 "with reservations," warning that "Government regulation that is applied at the state level in a patchwork across the country can have the effect to hamper innovation and deter competition in an open market."

Polis flagged something specific for the speech analysis. He noted that "[l]aws that seek to prevent discrimination generally focus on prohibiting intentional discriminatory conduct," but SB 24-205 "deviates from that practice by regulating the results of AI system use, regardless of intent."

That's a significant admission from the person who signed the bill into law. The measure creates liability not for intentional bias but for statistical outcomes, and only the outcomes Colorado doesn't like.

Attorney General Weiser himself called the bill "really problematic" in August 2025 and said it "needs to be fixed."

A joint letter in May 2025, signed by Polis, Weiser, two US representatives, a US senator, and Denver's mayor, asked the state legislature to delay the law until January 2027 so they could fix the problems.

The legislature instead just pushed the effective date to June 30, 2026, and left everything else untouched.

A March 2026 working group proposal to strip the algorithmic discrimination requirement hasn't been introduced as a bill by any legislator. So the law stands as written.

The First Amendment argument has multiple layers. xAI argues that every choice a developer makes when building an AI model is an expressive activity, from selecting training data to writing system prompts to calibrating guardrails.

The complaint cites the Supreme Court's 2024 decision in Moody v. NetChoice, which held that social media platforms engage in speech when curating content, and quotes the Court's observation that on "the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana."

The complaint also contends SB 24-205 burdens users' right to receive information.

The argument is straightforward enough. If the law forces developers to alter model outputs so they don't produce disfavored statistical patterns, then users get sanitized answers instead of whatever the model would have generated without state interference.

You don't have to agree with everything xAI does to recognize that a state compelling a specific ideological adjustment to AI training data is a genuinely alarming precedent for speech.

There's a vagueness problem, too. SB 24-205 prohibits "algorithmic discrimination" and exempts discrimination that redresses "historical discrimination," but never defines "historical discrimination."

It leaves that to the Attorney General to figure out through rulemaking. The law creates a $20,000-per-violation penalty for noncompliance, gives the AG exclusive enforcement authority, and has no private right of action. That concentration of definitional and enforcement power in one office, combined with terms that nobody can pin down, is the kind of arrangement that chills speech even before a single enforcement action.
If this coverage matters to you, please become a paid supporter today. The threats to privacy and free speech are only growing, and so is the work required to oppose them. Your support is what makes that possible.
BECOME A SUPPORTER
DIGITAL ID
2

Massachusetts House Passes Social Media Age Verification Digital ID Bill

Massachusetts just voted to force every social media user in the state to prove their age to a tech company. 

The bill passed the House 129-25 on Wednesday, banning children under 14 from social media entirely, requiring parental consent for 14- and 15-year-olds, and mandating that platforms build age verification systems to enforce all of it. If it becomes law, the policy takes effect on October 1.

We obtained a copy of the bill for you here.

House Speaker Ron Mariano and Ways and Means Chair Aaron Michlewitz framed the legislation as protection. "This ban would be among the most restrictive in the entire country, helping to protect young people from harmful content and addictive algorithms that have a proven negative impact on their mental health," they said in a joint statement. 

They also described the broader goal: "The simple reality is that Massachusetts must do more to ensure that our laws keep pace with modern challenges – especially when it comes to protecting our children, and to setting students up for success in the classroom and beyond."

The bill doesn't say how companies should verify ages. It leaves that to Attorney General Andrea Campbell, who would have until September 1 to write the implementing regulations. 

That vagueness is deliberate, according to Michlewitz, who said it gives the AG flexibility in a changing industry. 

But the practical reality of age verification is that someone has to prove who they are. 

That means government IDs, facial scans, or behavioral tracking, and those requirements don't just apply to kids. Every user on the platform has to go through the system, because you can't filter minors without checking adults, too.

We already know how this plays out. When Discord rolled out age verification for UK and Australian users, a third-party vendor handling the ID checks was breached within months. Approximately 70,000 users had their government-ID photos exposed, according to Discord's own disclosure. Hackers posted photos of people holding government ID cards to a Telegram channel, alongside names, email addresses, and partial financial data. Discord's response to that breach was to announce global mandatory age verification.

Massachusetts lawmakers seem unbothered. "We know that there could be some potential legal challenges," Michlewitz said. "We think it's the right thing to do, we think we're on solid ground."

Asked directly whether data privacy came up during the drafting process, Mariano gave an answer that says a lot about how seriously the legislature weighed the surveillance costs: "Well, I'm sure we have, but the issue is that we're doing it to protect kids, and a lot of it is aimed at an age group that we think is well worth the investment of time in getting the right ages and making sure that only kids who are maturing are involved in this."

That response sidesteps the central problem. Age verification doesn't only collect data about kids. It collects identity data from everyone, and that data has to go somewhere. 

It gets stored by third-party vendors, processed by facial recognition algorithms, and retained for periods that companies define in their own privacy policies. 
GET YOURS
2

Get Yours: Shop Now

Getting merchandise for yourself or as a gift helps support the mission to defend free speech and digital privacy.

It also helps raise awareness every time you wear or use it.

Your merch purchase goes directly toward sustaining our work and growing our reach. 

It's a simple, effective way to support. Get yours now.

SHOP NOW
TAKING A STAND
3

Idaho Bans Mandatory Digital ID With New Privacy Law

Idaho just became one of the few states to draw a line against mandatory digital identification. Governor Brad Little signed Senate Bill 1299 on April 1, 2026, and the new law does something genuinely unusual in American state politics right now: it pushes back against digital ID rather than pushing it forward.

We obtained a copy of the bill for you here.

The bill creates Section 67-2364 of the Idaho Code, prohibiting government entities from requiring "any person to obtain, maintain, present, or use digital identification."

Approximately three-quarters of US states are currently offering or developing electronic driver's licenses. The national momentum is clearly toward digital ID systems, with states like Arkansas, Texas, Georgia, and Utah all advancing their own versions in 2025 alone. Idaho is swimming against that current.

The bill, introduced by Senator Tammy Nichols, goes further than a simple opt-out. It prohibits public entities from denying, delaying, conditioning, or reducing "any service, benefit, license, employment, education, or access based on a person's refusal or inability to use digital identification."

That second clause, "or inability," protects people who can't use digital ID, not just those who won't. Anyone without a smartphone, without reliable internet, without the technical literacy to navigate a digital wallet, keeps full access to government services. Physical, non-digital identification remains "valid for all governmental purposes" under the law.

The bill also addresses what happens when someone voluntarily shows a digital ID during a government interaction. A government entity cannot "require a person to surrender, unlock, or relinquish control of a personal electronic device for identity verification." Handing your phone to a police officer or a clerk at the DMV is not the same as handing them a laminated card.

A phone contains your messages, your photos, your browsing history, and your location data. Presenting a digital ID "shall not constitute consent to search or access any other contents of a device."

That's a Fourth Amendment protection written directly into a state statute.

Government agencies are also barred from using digital ID as a surveillance tool. The law prohibits agencies from tracking individuals, retaining identity data beyond a single transaction, or using digital identification "as a universal or shared credential across agencies."

That last restriction is particularly significant. It blocks the creation of a de facto digital identity system where a single credential follows you from the tax office to the library to the health department, linking every interaction into a unified government profile.

The bill didn't survive the legislative process unscathed. A Senate amendment removed the original provision stating that "information incidentally observed on a device shall not be used to establish probable cause or justification for further search or seizure." That was a strong protection against the kind of casual surveillance that happens when a government employee glances at your phone screen while checking your ID. Its removal is a real loss.

The amendment also weakened enforcement. The original bill provided statutory damages of $500 to $2,500 and civil penalties up to $5,000 against government entities that violated the law. The amended version strips those out.

Enforcement now rests with the attorney general, who must give a public entity 15 days' written notice to fix a violation before taking any action.

Citizens can still seek declaratory or injunctive relief, and prevailing plaintiffs get attorney's fees, but the direct financial penalties that would have made agencies think twice are gone.

The amendment also added a provision stating that "no public employee shall be personally liable for actions taken within the employee's scope of employment," which removes individual accountability almost entirely.

Across the country, lawmakers are introducing "child safety," "age-verification," and "digital modernization" bills that expand digital-identity systems.

Idaho's own legislature considered HB 542 this same session, a bill that would have compelled platforms to continuously track, estimate, and verify the identities of all users, including minors. The fact that both bills moved through the same legislature in the same session tells you something about the competing pressures these lawmakers face.

The broader context is hard to ignore. Digital ID systems are convenient, and convenience is how surveillance expands.

You start with a voluntary app, then agencies quietly stop supporting the physical alternative, then you can't renew your vehicle registration or pick up a prescription without pulling out your phone and authenticating through a system that logs when you were there, what you needed, and which device you used. Idaho's law is designed to prevent exactly that slow erosion.

Whether the weakened enforcement provisions give it enough teeth to actually do so is another question.
UNMASKING
4

Secret Grand Jury Convened to Unmask Anonymous Government Critic on Reddit

Federal prosecutors have ordered Reddit to appear before a grand jury in Washington, D.C., and hand over the personal data of an anonymous user who posted criticism of Immigration and Customs Enforcement. The company has until April 14 to comply. Reddit has declined to say whether it plans to fight the order.

The user, identified in court filings as John Doe, is a US citizen in the Pacific Northwest. Doe’s attorneys reviewed the account's post history and found nothing resembling criminal activity.

The most aggressive posts they could locate: sharing already-public biographical details about Jonathan Ross, the ICE agent who killed Renee Good in Minneapolis in January; suggesting "Urine speaks louder than words" as an anti-ICE protest sign (a reference to a song); and writing "TSA sucks and we all know it."

The First Attempt

It started on March 4, when an ICE agent in Fairfax, Virginia, sent Reddit an administrative summons demanding the user's name, address, phone number, banking and credit card information, IP addresses, phone model numbers, and the names of any other accounts tied to their Reddit profile.

The legal basis cited for this demand was a provision of the Smoot-Hawley Tariff Act of 1930, a statute that governs customs duties, boat show sales, wild animal imports, and forfeited wines and spirits.

The summons included a threat and a gag order. "Failure to comply with this summons will render you liable to proceedings in a U.S. District Court to enforce compliance with this summons as well as other sanctions," it read.

"You are requested not to disclose the existence of this summons for an indefinite period of time. Any such disclosure will impede the investigation and thereby interfere with the enforcement of federal law."

Reddit notified John Doe two days later. Doe retained lawyers, and on March 12 they filed a motion to quash the summons in the Northern California federal court.

We obtained a copy of the motion for you here.

The motion pointed out the obvious. John Doe is a US citizen who has never traveled outside the country, has no business dealings overseas, has not imported or exported anything, and primarily uses their Reddit account to talk about local politics in Oregon. Nothing about the account, the user, or any of their posts has the faintest connection to customs duties or international trade.

Faced with a legal challenge, the government withdrew its request.

Four Days Later

The withdrawal came on March 27. By March 31, Reddit received a new order. This time the demand came not from a field agent in Virginia but from a Special Assistant US Attorney in Washington, D.C., where the US Attorney's office is led by Jeanine Pirro, the former judge and Fox News host who was confirmed to the role in August 2025. The new subpoena ordered Reddit itself to appear before a grand jury, and it sought roughly three times more data than the original request.

The change from an administrative summons to a grand jury subpoena is significant. An administrative summons can be challenged in open court, as John Doe's lawyers demonstrated. A grand jury operates in secret. The proceedings are not adversarial. There is no lawyer for the other side. There is no public record. The purpose of the proceeding is to let a prosecutor build a case toward criminal charges, and the person being investigated has almost no ability to contest what happens behind closed doors.

There is no known precedent in the current wave of immigration-related social media investigations for summoning a major tech company before a grand jury. The move represents a significant escalation, and it is worth understanding why. In a grand jury proceeding, First Amendment protections are at their weakest. The target of the investigation has almost no ability to assert their rights before the damage is done. By the time the secrecy lifts, if it ever does, the government already has what it wants.

Why Washington

The reason the government moved the case to DC after losing in California is not hard to figure out. Courts in the Northern District of California had repeatedly blocked ICE's attempts to unmask anonymous social media users.

Last fall, a federal magistrate judge ordered Meta not to hand over the information ICE sought about an anonymous Instagram user. The same legal team representing John Doe had intervened on the user's behalf and won.

The pattern held across multiple cases. The government would issue a subpoena, a challenge would be filed, and the government would fold. The grand jury route sidesteps that pattern entirely. It takes the question out of an open courtroom and puts it behind closed doors, in a jurisdiction of the government's choosing, under rules that overwhelmingly favor the prosecution.

None of the records associated with this grand jury will be accessible to the public. The government lost when it had to make its case in the open. So it stopped making its case in the open.

The Broader Campaign

Reddit's own transparency data reflects the pressure. The first half of 2025 marked the highest volume of law enforcement data requests the company has ever received in a single reporting period: 1,179 requests, including 423 subpoenas and 27 court orders. Sixty-six percent came from US agencies. Reddit disclosed user data in 82 percent of those cases.

Washington, D.C., is the district from which Reddit receives the most federal law enforcement requests.

Reddit's public statement on the John Doe case says the right things.

"Privacy is central to how Reddit operates, and we take our commitment to protecting that seriously," the company said. "We do not voluntarily share information with any government, especially not on users exercising their rights to criticize the government or plan a protest."

The company says it reviews requests for "legal sufficiency," objects to overbroad demands, notifies users "whenever possible," and provides only the "minimum" data required.

An 82 percent compliance rate with law enforcement requests is worth keeping in mind while reading those assurances.

What This Means

The legal question here is narrow. The practical question is not.

A US citizen posted criticism of a federal agency on the internet using a pseudonymous account. They shared biographical details about an ICE agent that were already public. They suggested a crude joke for a protest sign. They said TSA is bad.

For this, the federal government issued a summons backed by a 1930 tariff law that has nothing to do with Reddit posts.

When that was challenged in court, the government withdrew. Four days later, it came back with a grand jury subpoena, moved the proceedings to a different jurisdiction, expanded the scope of the data request, and wrapped the entire thing in secrecy.

The point is not just to identify one Reddit user. Every person who reads about this case and decides not to post something critical, not to share information about federal enforcement, not to make a joke at the government's expense, every one of those decisions is the policy working as designed.

The Digital ID Agenda

The push to mandate online age verification is building the infrastructure that will make cases like John Doe's unnecessary. A dozen "child online safety" bills are advancing through Congress with bipartisan support, and half of US states have already enacted laws requiring government ID submission, biometric facial scans, or third-party verification before users can access certain websites.

The justification is protecting children. The consequence is eliminating anonymity for everyone. There is no way to reliably verify that a user is 16 without verifying who they are. Every age-check system that actually works requires collecting identifying information, whether that means scanning a passport, submitting a credit card, or handing biometric data to a third-party vendor. Once a platform has linked a user's legal identity to their account, that identity can be subpoenaed, hacked, or handed over to law enforcement.

The anonymous Reddit user who posts about local politics in Oregon ceases to exist. Meta's Mark Zuckerberg has told a court that Apple and Google should verify the identity of every smartphone user at the operating system level, a proposal that would end anonymous internet access at the root.

Consider what that world would mean in the context of the John Doe case. Right now, the government has to convene a secret grand jury and drag a tech company to Washington just to find out who posted "TSA sucks and we all know it." That process is slow, legally fraught, and publicly embarrassing when it leaks. Digital ID systems would eliminate the need for any of it. The identity would already be on file, pre-collected, waiting for the next subpoena. The surveillance would be baked into the platform before the user ever typed a word.
If this coverage matters to you, please become a paid supporter today. The threats to privacy and free speech are only growing, and so is the work required to oppose them. Your support is what makes that possible.
BECOME A SUPPORTER
Thanks for reading,

Reclaim The Net

Friday, March 27, 2026

The Age Verification Con



How Big Tech and politicians built a digital ID system for everyone while pretending to fight each other.
March 27, 2026

POLITICAL THEATER
1

The Age Verification Con: How Big Tech and Politicians Built a Digital ID System For Everyone While Pretending to Fight Each Other

Politicians on both sides of the Atlantic are competing to look tough on Silicon Valley. They hold hearings, write bills, and pose for photographs with parents who say their kids' lives were ruined by social media algorithms they somehow couldn't pull their children away from.

The cause is protecting children from social media, and it supposedly polls so well that it has achieved something almost unheard of in modern politics: genuine bipartisan consensus. Republicans and Democrats in Washington. Labour and Conservatives in Westminster. The Australian parliament voted the whole thing through with barely a whisper of dissent.

There is just one problem with the narrative. The tech giants these politicians claim to be fighting are spending record sums to help them do it. And the tool they have all converged on, age verification, is not really about checking whether someone is 15 or 16. It is the architecture for a verified internet, one where anonymous access is replaced by identity checkpoints, and where using a social media account, downloading an app, or browsing a website requires you to show your papers first.

The campaign is presented as protecting children. The infrastructure being built will apply to everyone.

The Political Performance

Keir Starmer set the tone in February when he announced plans to push through age restrictions for social media far faster than the eight years it took to grind the Online Safety Act through Parliament. "Technology is moving really fast, and the law has got to keep up," the British Prime Minister said. He followed up with a direct challenge to the platforms: "And if that means a fight with the big social media companies, then bring it on."

He went further this week, casting the issue as a moral confrontation: "Some of this will require a fight. If we're going to do more to protect children, we're going to have to fight some of the platforms that are putting the material up there because they're putting this addictive stuff up there for a reason. They want more children to spend more time online and we've got to fight them and be clear whose side we're on here."

The rhetoric plays well to some. Starmer is a father of two teenagers, a fact he mentions regularly, and he has positioned himself as the parent-in-chief who understands what families are dealing with.

Technology Secretary Liz Kendall has talked about wanting to announce a social media ban for under-16s by summer, and she has floated the threat of fines or outright blocking for platforms that break the law in the UK.

The posture is this: government versus Big Tech, parents versus algorithms, democracy versus corporate greed.

Keep that framing in mind, because the money tells a different story.

Australia got there first. Its Social Media Minimum Age Act, which took effect on December 10, 2025, bans under-16s from holding accounts on platforms including Facebook, Instagram, TikTok, Snapchat, X, YouTube, and Reddit. By mid-January 2026, more than 4.7 million accounts had been deactivated, removed, or restricted. Platforms that fail to take "reasonable steps" to keep minors off face fines of up to 49.5 million Australian dollars.

The eSafety Commissioner, Julie Inman Grant, has become the international face of this movement. Recognized by Time Magazine's Global Health 100 for 2026, she has described the age restrictions as part of a "holistic approach to protecting children online."

She has registered a series of industry codes expanding platform obligations and is overseeing the rollout of age assurance technologies across the country. She has also attracted attention from US Congressman Jim Jordan, who summoned her to testify before the House Judiciary Committee on allegations of global censorship demands.

Inman Grant, who spent 17 years at Microsoft and later worked at Twitter, has pushed back, calling it "a very unprecedented request for another legislative body to try and compel a senior bureaucrat from another government doing the job that the government set out for her to do."

France followed quickly. In January 2026, the National Assembly voted 130-21 to ban social media for children under 15, with enforcement planned for the start of the school year in September 2026.

President Emmanuel Macron fast-tracked the legislation and framed it in characteristically grand terms: "Because our children's brains are not for sale — neither to American platforms nor to Chinese networks. Because their dreams must not be dictated by algorithms."

France had tried this once before, in 2023, with a law establishing a "digital age of consent" at 15. That version never took effect because it clashed with EU regulations. The new text is designed to align with the Digital Services Act, and Macron has signaled he wants harmonized rules across the entire bloc.

That push is already underway. In November 2025, the European Parliament voted to recommend an EU-wide minimum age of 16 for social media access. The European Commission has built what it calls an age verification "mini wallet," a prototype app aligned with the European Digital Identity Wallets that every EU member state is expected to roll out by the end of 2026. Denmark, France, Greece, Italy, and Spain are piloting the system.

In June 2025, 21 ministers from 13 member states signed a joint declaration calling the existing framework "insufficient" and demanding mandatory age verification on all social networks. European Commission President Ursula von der Leyen set the tone at a September 2025 summit, declaring that "parents, not algorithms, should be raising children."

The EU's age verification blueprint is built on the same technical specifications as its forthcoming digital identity wallets, ensuring that what begins as a child safety tool becomes part of a permanent identity infrastructure across the continent.

From Canberra to Brussels, the pattern is identical. Politicians frame themselves as taking on powerful tech companies. They use the language of confrontation, of fighting, of whose side we're on. What none of them mention is that the world's largest social media company is lobbying harder and spending more money than anyone to make sure these exact laws get passed.

The American version of this push has multiple fronts. Senator Ted Cruz, the Republican chair of the Senate Commerce Committee, teamed up with Democrat Brian Schatz to introduce the Kids Off Social Media Act, which would set a minimum age of 13 for social media accounts and ban platforms from serving algorithmically targeted content to anyone under 17.

"Kids need time to be kids to experience the real world, not to get lost in the virtual one," Cruz said at a committee markup. The bill passed the Commerce Committee with overwhelming bipartisan support.

It is far from the only proposal. Senators Marsha Blackburn and Richard Blumenthal reintroduced the Kids Online Safety Act (KOSA) with the backing of Senate Majority Leader John Thune and Minority Leader Chuck Schumer.

The bill would create a "duty of care" requiring platforms to proactively prevent a list of harms, including eating disorders, depression, anxiety, and "patterns of compulsive use."

"Big Tech platforms have shown time and time again they will always prioritize their bottom line over the safety of our children," Blackburn said.

Senator Chris Murphy added: "As a parent, I've seen firsthand how these platforms use intentionally addictive algorithms to spoon-feed young people horrifying content glorifying everything from suicide to eating disorders."

The cosponsor list is bipartisan: Katie Britt, John Fetterman, Peter Welch, Ted Budd, Angus King, and Mark Warner. Senator Schatz captured the mood: "When you've got Ted Cruz and myself in agreement on something, you've pretty much captured the ideological spectrum of the whole Congress."

Everyone agrees the children must be protected. The question nobody seems to want to answer is what the protection actually looks like, who benefits from the particular form it's taking when the state gets involved, and why the companies supposedly being punished are spending billions to make it happen.

What Age Verification Actually Means

Every one of these proposals requires the same thing: knowing how old the person behind the screen is. That sounds simple enough. But the mechanism for knowing someone's age online is the mechanism for knowing their identity. And once you build the system that verifies identity, you have built the system that can track, restrict, and control what people access.

Age verification is identity verification, repackaged with a child safety label. The practical consequence of every proposal now moving through legislatures in Washington, Westminster, Canberra, Paris, and Brussels is the same: the end of anonymous access to the internet. You will need to prove who you are before you post, before you browse, before you download an app.

The question of whether a 14-year-old can use Instagram becomes the mechanism by which every adult is required to show a government-issued ID to use their own phone.

Australia's eSafety Commissioner has said platforms can no longer rely on users simply entering a birthdate at sign-up. They are expected to stop people from faking their age using false documents, AI tools, deepfakes, and even VPNs.

The methods under consideration include facial age estimation, where AI scans a selfie to guess how old someone looks, credit card verification, and government-issued ID checks. The legislation technically prohibits platforms from requiring government ID as the only option, but the alternatives all involve some form of biometric or financial identity data.

The UK consultation, launched in March 2026 under the title "Growing up in the online world," is considering an under-16 ban and measures to stop children using VPNs to circumvent restrictions. The countries whose governments currently restrict VPN usage include China, Russia, Iran, North Korea, and Turkey.

KOSA, the American bill, would direct federal agencies to develop age verification at the device or operating system level. That is the endpoint every version of this legislation points toward: your phone verifying your identity before you can use it.

Apple Goes Further Than the Law Requires

This week, Apple demonstrated what that future looks like.

With the release of iOS 26.4 on 24 March 2026, UK iPhone users were confronted with a mandatory prompt: "Confirm You Are 18+." The options are to scan a credit card or a government-issued ID. Debit cards are not accepted. Passports are reportedly failing for many users.

Those who cannot or will not verify their age get locked into a restricted version of their own device, with content filters turned on across Safari and third-party browsers, communication safety features activated in Messages and FaceTime, and access to age-restricted apps blocked.

Here is the detail to note: the Online Safety Act does not require Apple to do this. The law applies to websites and platforms, not to operating systems or app stores. Apple chose to go beyond the legislation. Ofcom, the UK regulator, welcomed the move, calling it "a real win for children and families." Apple has been "working closely" with the regulator, Ofcom said.

Users reported problems immediately. People in their 50s, 60s, 70s, and 80s with decades-old Apple accounts found themselves locked into child-restricted modes because their credit card scan failed or their driving license would not register. A 57-year-old user on Apple's support forums wrote that they have no credit card and that the scanner would not read their driving license: "Guess I'll be forever under 18!" Some users have described the rollout as "regulatory ransomware."

Apple says the process is handled on-device and that scanned information is not stored, but the company has not documented exactly which signals trigger the verification flow. It has built a five-tier age rating system for the App Store (4+, 9+, 13+, 16+, 18+) and created a Declared Age Range API that lets developers request a user's age bracket without receiving a birthdate.
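
The privacy-relevant design point is that only a coarse bracket ever crosses the app boundary, never the birthdate itself. Here is a minimal sketch of that pattern in Python (all names are hypothetical; this is an illustration of the concept, not Apple's actual API):

```python
# Hypothetical sketch of a device-level "declared age range" service.
# The OS holds the verified birthdate; apps only ever receive a bracket.

from datetime import date

# Age-rating tiers mirroring the 4+/9+/13+/16+/18+ system described above.
TIERS = [4, 9, 13, 16, 18]

def age_bracket(birthdate: date, today: date) -> str:
    """Return the coarsest tier label the user qualifies for."""
    # Compute completed years, accounting for whether the birthday
    # has occurred yet this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    qualified = [t for t in TIERS if age >= t]
    return f"{max(qualified)}+" if qualified else "under 4"

# An app asks the "OS" for a bracket; the birthdate never leaves this module.
print(age_bracket(date(1990, 5, 1), date(2026, 4, 11)))   # -> 18+
print(age_bracket(date(2012, 6, 15), date(2026, 4, 11)))  # -> 13+
```

The design choice worth noticing is data minimization in one direction only: the app learns less, but the operating system must hold a verified identity attribute for every user, which is exactly the infrastructure question the rest of this piece is about.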

What Apple has built is a prototype for the verified internet. Once the device knows who you are, every app and every website you access through that device can be filtered according to that identity. The infrastructure for it is now installed on every iPhone updated to iOS 26.4 in the UK.

Proton, the encrypted email and VPN provider, published an analysis this week noting that a system designed to confirm age can be adapted to confirm any attribute tied to identity. "When identity becomes part of the access layer," Proton wrote, "restrictions can be applied with greater consistency and less reliance on individual platforms."

The conditions travel with the system.
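
Proton's point can be made concrete: an attestation gate that checks "over 18" is structurally identical to one that checks any other identity-linked attribute. A toy sketch (all names hypothetical) of why the mechanism generalizes:

```python
# Hypothetical sketch: the same attestation gate that enforces an age
# condition can enforce any identity-linked condition, because the
# checking mechanism is generic over attributes.

def make_gate(required: dict):
    """Build an access check from an arbitrary set of attribute requirements."""
    def gate(attested: dict) -> bool:
        # Access is granted only if every required attribute matches.
        return all(attested.get(k) == v for k, v in required.items())
    return gate

# Today the condition is age...
age_gate = make_gate({"over_18": True})
# ...but nothing in the mechanism limits it to age.
combined_gate = make_gate({"over_18": True, "resident": "UK"})

user = {"over_18": True, "resident": "UK"}
print(age_gate(user), combined_gate(user))  # -> True True
```

Swapping in a new condition requires no new infrastructure, only a new `required` dictionary, which is what "the conditions travel with the system" means in practice.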

So Apple is volunteering to build identity infrastructure that the law does not require, and the regulator is cheering. That alone should complicate the story politicians are telling about brave governments standing up to reluctant tech companies. But it gets worse.

Zuckerberg Hands Over the Blueprint

Meta CEO Mark Zuckerberg spent more than five hours on the witness stand in Los Angeles Superior Court in early March, testifying in a child safety lawsuit where a jury eventually found Meta and YouTube negligent in the design of their platforms and awarded $3 million in damages.

Under cross-examination, plaintiffs' lawyers showed internal emails, including a 2015 estimate that 4 million users under 13 were on Instagram, roughly 30 percent of all American children aged 10 to 12. An old email from former public policy head Nick Clegg, himself a former UK Deputy Prime Minister, was read into the record: "The fact that we say we don't allow under-13s on our platform, yet have no way of enforcing it, is just indefensible."

Zuckerberg's response, repeated multiple times from the witness stand, was to call for age verification at the operating system level, handled by Apple and Google rather than by individual apps. He told jurors that operating system providers "were better positioned to implement age verification tools, since they control the software that runs most smartphones." He added: "Doing it at the level of the phone is just a lot cleaner than having every single app out there have to do this separately."

Think about what the CEO of the world's largest social media company proposed while under oath. Not that Instagram would verify the ages of its users. That Apple and Google should verify the identity of every smartphone user, for every app, at the operating system level. Every app installed on the device, every website accessed through the phone's browser, every message sent through any app on the phone.

The proposal solves Zuckerberg's immediate legal problem. If Apple and Google own age enforcement, Meta is no longer responsible for enforcement failures or the costs of implementation.

It also solves something much bigger for him…

The Business Case for Killing Anonymity

To understand why Meta is not resisting age verification but actively pushing for it, you have to understand what identity verification does for a social media company's bottom line.

Social media platforms have a bot problem, and it is getting worse. A 2024 report from data security firm Imperva found that over half of all internet traffic was non-human, with 37 percent consisting of malicious bots, a five percent increase from the previous year.

Cybersecurity reports estimate that 8 to 12 percent of all social media profiles across major platforms are fake, automated, or impersonation accounts. On networks with billions of users, that translates to hundreds of millions of questionable profiles operating at any given time.

AI has made it dramatically worse: bots in 2026 can hold conversations, generate realistic replies, and mimic human behavior well enough to fool most users. The FTC has reported to Congress on the use of social media bots in online advertising, highlighting how fake engagement may constitute a deceptive practice.

This is a problem for advertisers. They are paying to reach real people, and they are getting bots. Advertisers have been pressing platforms to guarantee that their ads are reaching verified human beings, not automated accounts inflating engagement numbers.

Identity verification at the device or platform level would solve this problem overnight. If every user has to prove they are a real person with a real ID, the bot problem disappears, and the advertising inventory becomes dramatically more valuable.

Every impression is suddenly verifiable. Every click comes from a confirmed identity. For a company that made $201 billion in revenue in 2025, almost entirely from advertising, the commercial incentive to support mandatory identity verification is enormous.

There is another commercial benefit that nobody in these legislative hearings is talking about. A verified, identity-linked internet is an internet where controversial speech is easier to suppress.

Advertisers have spent years pressuring platforms to keep their ads away from content that might generate negative brand associations. "Brand safety" is the industry term. It means ensuring that an advertisement for a family car does not appear next to a heated political argument, a conspiracy theory, or a piece of journalism that names a powerful company.

Platforms that can demonstrate a sanitized, identity-verified user base with robust content controls can charge premium rates for advertising. A less anonymous internet is a more commercially predictable internet, and that is worth a fortune.

None of this is new for Meta. Facebook first launched with a real name policy and enforced it aggressively for years. The policy required users to register their "authentic identity," and the company suspended accounts that used pseudonyms, stage names, or anything it deemed not a real name.

In a 2015 Q&A, Zuckerberg defended the policy by asserting that it "helps keep people safe" because people are "much less likely to try to act abusively towards other members of the community if they have to stand behind everything they say."

The EFF, the ACLU, and other advocacy groups pushed back hard, documenting how the policy harmed domestic abuse survivors, political dissidents, and journalists working under pseudonyms for their own safety. The backlash eventually forced Facebook to make modest concessions.

Meanwhile, Meta acquired Instagram in 2012, a platform that allowed pseudonymous handles and had no real name requirement, absorbing a user base that had grown precisely because it offered the flexibility Facebook did not.

The real name policy remained a point of friction on Facebook itself, and Meta gradually softened its enforcement as the political cost of maintaining it grew.

What age verification legislation offers Meta is something the company could not achieve on its own: the real name policy it always wanted, imposed by law, applied universally, and with the compliance cost shifted to someone else.

Meta does not have to be the bad guy demanding your ID. The government does it. Apple and Google do it. Meta just receives the verified signal and reaps the commercial benefits. Zuckerberg tried to build a verified-identity platform through corporate policy and faced a public revolt. Now governments are building it for him and calling it child safety.

Follow the Money

Remember the story: politicians are fighting Big Tech. Starmer says, "Bring it on." Cruz and Schatz say they're holding companies accountable. Macron says children's brains are not for sale. The framing depends on the idea that these laws are being imposed on resistant corporations.

An open-source investigation published in March by the TBOTE Project traced the money behind age verification lobbying and found the opposite. Meta is not fighting these laws. Meta is the largest corporate force pushing for them.

The investigation, which used IRS filings, Senate lobbying disclosures, state lobbying registrations, and campaign finance databases, documented that Meta spent a record $26.3 million on federal lobbying in 2025, more than Lockheed Martin or Boeing.

The company deployed 86 lobbyists across 45 states. 85 percent of those lobbyists had prior government service.

The centerpiece of Meta's lobbying is the App Store Accountability Act, which would require Apple and Google to verify user ages before anyone can download any app from their stores. Meta's own Senate filings list the bill as a lobbied priority.

The filing narrative includes "protecting children, bullying prevention and online safety; youth safety and federal parental approval; youth restrictions on social media."

The catch: the App Store Accountability Act imposes requirements on app stores and operating systems. It imposes no new requirements on social media platforms. If it becomes law, Apple and Google absorb the compliance cost, the infrastructure burden, and the regulatory liability. Meta's apps face zero new mandates.

The investigation also uncovered that Meta covertly funded a group called the Digital Childhood Alliance (DCA) to advocate for the legislation.

Bloomberg exposed the funding relationship in July 2025. The DCA's executive director, Casey Stefanski, admitted receiving tech company funding under oath at a Louisiana Senate committee hearing but refused to name donors.

The DCA is registered as a 501(c)(4) in Delaware with a minimum-disclosure IRS filing showing gross receipts under $25,000 for its first tax year, despite coordinating legislative campaigns across more than 20 states.

Its domain was registered on 18 December 2024. The website was live and fully operational the next day, 77 days before Utah's SB-142 (the first App Store Accountability Act to become law) was signed.

Almost every post on the DCA website targets Apple and Google. Meta is never criticized.

Meta is not the only social media company backing this approach. Snap, X, and Pinterest have all confirmed support for App Store Accountability Act bills. Every confirmed supporter is a social media platform that benefits from moving age verification to the app store layer. Every confirmed opponent operates an app store that would bear the compliance burden.

In Louisiana, a Meta lobbyist brought the legislative language for HB-570 directly to the bill's sponsor, who confirmed this publicly. The bill passed 99-0. In California, Meta spent more than $1 million on direct lobbying in the first three quarters of 2025 alone. The company committed over $70 million to four state-level super PACs, including one in Texas whose stated policy priority uses language that mirrors the App Store Accountability Act exactly.

The Heritage Foundation, which funds three of the six named DCA coalition organizations, staffs the pipeline from Capitol Hill to state legislatures and has merged leadership with another coalition member, Moms for Liberty, at the executive level. A former Senate staffer from Senator Mike Lee's office (who introduced the federal version of the Act) moved to Heritage and then endorsed the DCA on its launch day. Meta hired a Heritage fellow in May 2024.

The TBOTE investigation found the lobbying operation extends internationally. Meta spends €10 million annually on EU lobbying, the largest single company spend, and retains 18 or more consulting firms across jurisdictions, with at least three operating in both Brussels and Washington.

The Real Alignment

So here is the picture, once you strip away the posturing.

Keir Starmer says he will fight the big social media companies. Meta spent a record $26.3 million on lobbying in 2025 to pass the very type of legislation Starmer is championing, and it covertly funded an advocacy group to do the grassroots work.

Ted Cruz says he is holding Big Tech accountable. Meta's lobbyists are in 45 states pushing bills that exempt social media platforms from the age verification requirements they impose on everyone else.

Macron says children's brains are not for sale. Meta spends €10 million a year on EU lobbying, the largest single company spend on the continent, working the same legislative channels Macron's government is using.

The politicians get a cause that supposedly polls above 90 percent approval. The tech companies get to move the cost and liability of age verification onto their competitors while exempting their own platforms.

Everyone gets to say they're protecting children. The only thing anyone actually has to give up is the ability to use the internet without showing ID.

It's the political equivalent of a boxing match where both fighters split the purse, only the prize is a national identity database sold as child safety.

Consider what these bills collectively create. Australia's law is already in force, with eSafety overseeing compliance across ten platforms and pushing industry codes that extend to internet service providers, hosting services, and search engines.

The UK is launching trials with 300 teenagers and running a consultation that closes in May, with legislation expected to follow quickly. Apple has pre-emptively installed device-level identity verification on every UK iPhone.

Starmer wants powers to restrict VPN use by children. California's Digital Age Assurance Act will require users to enter their date of birth when setting up a new phone or computer, effective in 2027. Colorado is advancing a bill to require operating systems to collect and store user ages at device setup and expose that data to third-party apps via API.

The Kids Online Safety Act carries broad definitions of content that is "harmful" to minors, a category the bill leaves for the government to define and influence. It also directs agencies to develop verification at the device or operating system level. New York's SAFE for Kids Act permits facial analysis as an alternative to government ID submission, meaning biometric data collected just to scroll a social media feed.

These identity databases will be breached. A Discord-related breach last year exposed approximately 70,000 government-issued IDs submitted through a third-party system. Every ID check creates a future breach waiting to happen.

Over 400 computer scientists signed an open letter arguing that these laws build surveillance architecture without meaningfully protecting children. The ACLU, the Center for Democracy and Technology, Fight for the Future, and the EFF wrote jointly to Congress that the legislation "would actively undermine child safety, harm marginalized youth, erode privacy, and impose unconstitutional restrictions on young people's ability to engage online."

GrapheneOS, the privacy-focused Android fork, announced it will refuse to implement age data collection entirely. "GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account," the project stated. "If GrapheneOS devices can't be sold in a region due to their regulations, so be it."

That is what it costs to refuse.

Who Loses

Anonymous and pseudonymous speech online protects real people. Whistleblowers. Abuse survivors. Political dissidents. People exploring medical questions or ideas they are not ready to attach their legal names to. Journalists protecting sources.

The stated goal of every age verification law is to protect 9-year-olds from Instagram. The mechanism is a national digital identity system baked into the operating systems that run the overwhelming majority of the world's smartphones.

The chilling effect is already visible. In the UK, image-hosting site Imgur blocked access for all UK users last year after tighter age verification rules took effect, showing blank images instead.

Some websites blocked UK users entirely rather than verify their age. The choice for smaller platforms, independent developers, and open-source projects is even starker: build verification systems they cannot afford, geoblock entire countries, or shut down, giving their Big Tech rivals more power.

In Louisiana, 12 Meta lobbyists worked a single bill that passed 99-0. In the UK, Apple built verification infrastructure that the law does not even require, and the regulator applauded. In Los Angeles, the CEO of the company whose platform had 4 million underage users told a jury that the solution was to hand identity gatekeeping to two private companies already facing antitrust scrutiny.

The politicians say they are fighting Big Tech. The lobbying disclosures say Big Tech is paying for the fight. The bills say everyone needs to show ID.

And the age verification infrastructure, once installed, does not care whether you are nine or ninety. It just needs to know who you are.

If this coverage matters to you, please become a paid supporter today. The threats to privacy and free speech are only growing, and so is the work required to oppose them. Your support is what makes that possible.
BECOME A SUPPORTER
Thanks for reading,

Reclaim The Net