Senate Panel Backs Digital ID For AI Access



Plus: The Supreme Court, Location History, and the Reach of the Fourth Amendment
Reclaim The Net is funded by the community. If you support free speech and restoring privacy and civil liberties, please become a supporter here. Thank you.
May 1, 2026

SUPPORTERS
1

Close the Shades: The Supreme Court, Google Location History, and the Reach of the Fourth Amendment

Eight years ago, the Chief Justice of the United States wrote an opinion warning that smartphones had handed the government "near perfect surveillance" capabilities, and that the Fourth Amendment had to adapt or become meaningless. 

This week, the same Chief Justice spent two hours at oral argument appearing to forget he had ever written it. 

What happened inside the Supreme Court this week was, depending on how you count, either the quiet unwinding of the most important privacy ruling of the last decade or the moment a majority of the justices decided the architecture of your phone is now a constitutional argument the government gets to make against you. 

Become a supporter here.

Get the post here.
BECOME A SUPPORTER

DIGITAL ID
2

Senate Panel Backs GUARD Act, AI Age Verification Bill

The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use them.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee, and its author celebrated the outcome in a post on X.

"My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY," Hawley wrote on X. "No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor."

As usual, the framing is about children, but the result is age verification, and a de facto digital ID, for everyone.

Under the bill's text, a "reasonable age verification measure" cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an "artificial intelligence chatbot" as any service that "produces new expressive content or responses not fully predetermined by the developer or operator" and "accepts open-ended natural-language or multimodal user input."

That language reaches well beyond the companion apps the press conference focused on. It covers customer service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

Hawley described the legislation as a "targeted, tailored effort," telling the committee, "We're often told that this new dawning age of artificial intelligence is going to be a great age that will strengthen families and workers. I would just say that's a choice, not an inevitability."

Senator Richard Blumenthal of Connecticut, the lead Democratic co-sponsor, signed onto the bill alongside Senators Mark Warner, Chris Murphy, Katie Britt, and Mark Kelly. The bipartisan support means the bill arrives on the Senate floor with momentum that age-verification proposals usually lack.

What that floor vote would authorize is a national identity system for AI services.

The bill includes data-minimization language. It also requires periodic re-verification, which means the sensitive identity documents collected at signup either sit in a company's database waiting for a breach or get re-uploaded on a schedule.

Both options are surveillance infrastructure.

Trade group NetChoice, opposing the bill before the committee vote, framed the data-collection problem in security terms. "NetChoice implores the Senate Judiciary Committee to safeguard Americans' most secure documents and reject the GUARD Act," said the group's Patrick Bos.

"If implemented, such a broad and vague provision would force AI companies to collect and store highly sensitive personal data into honeypots ripe for cybercriminals to exploit through breaches, identity theft and fraud."

Age-verification vendors have been breached repeatedly, exposing the government IDs and biometric scans of millions of users who handed them over to access entirely legal content. The GUARD Act would multiply those targets by routing every AI interaction in the country through similar collection systems.

The bill's reach is what makes the privacy cost so steep. A teenager asking a chatbot for algebra help would need to be cleared through age verification, and so would the adult sitting next to them. A customer trying to fix a billing problem through a company's automated assistant would face the same identity check.

Faced with the cost of building those systems and the threat of $100,000 per-offense penalties, smaller developers will plausibly block younger users entirely or strip their tools down until they no longer trigger the bill's definitions. The compliance burden lands on everyone who uses these services, and the largest companies, the ones that can absorb verification infrastructure as a cost of doing business, end up consolidating the market.

The bill isn't promoting parental supervision. Instead, it's going for a flat ban. The legislation contains no parental consent mechanism that would let a parent decide their fifteen-year-old can use a homework chatbot.

There is no appeals process for users wrongly flagged as underage by an algorithmic age-estimation system. A user judged by a verification service to be under 18 is locked out, period, regardless of what their parents think.

The criminal provisions are where the bill's child-safety framing has the firmest grip. Companies that knowingly design or distribute chatbots that solicit sexually explicit content from minors, or that encourage suicide, self-injury, or imminent violence, would face fines of up to $100,000 per offense.

Those provisions respond directly to the cases that drove the bill, including testimony from parents whose children harmed themselves or died after extended interactions with AI companions. Several of those parents sat in the committee room during Thursday's markup.

The question is whether a national ID-verification regime is what addresses them, or whether the bill uses the worst chatbot interactions as leverage to build identity infrastructure that reaches every chatbot, including the ones nobody is alleging caused harm.

The bill also arrives inside a larger legislative vehicle. Senator Marsha Blackburn intends to fold the GUARD Act into her TRUMP AI Act, which would carry President Trump's National Framework on AI through Congress and preempt conflicting state AI laws.

The GUARD Act itself contains a similar preemption clause, displacing state laws that conflict with it while carving out room for states to legislate separately for children under 13. Federal preemption of state AI rules has been controversial.

The Senate rejected a previous attempt to fold broad preemption into a different bill earlier this year. The GUARD Act offers a narrower vehicle for the same outcome, packaged inside child-safety language that makes opposition politically expensive.

Blumenthal acknowledged that the unanimous committee vote is not the end of the process.

The bill faces the full Senate next, then the House. The pattern of recent age-verification legislation suggests the substantive privacy questions will keep being asked, and keep being answered with the argument that any cost is acceptable if children are invoked.

The infrastructure being authorized here, though, will not check whether a user is a child before it asks for their ID. It will ask everyone. That's what the bill requires. It is also what the bill is likely for.
If this coverage matters to you, please become a paid supporter today. The threats to privacy and free speech are only growing, and so is the work required to oppose them. Your support is what makes that possible.
BECOME A SUPPORTER
KICKING THE CAN
2

Congress Extends Section 702 Spy Program 45 Days

The surveillance program that scoops up Americans' communications without warrants got another 45 days of life on Thursday, after Congress reauthorized a clean version of FISA Section 702 hours before it was set to expire.

The House voted 261-111 to push the program's expiration to June 21, sending the legislation to President Trump's desk before the midnight deadline.

Senate Majority Leader John Thune said, "This will allow additional time to do that," referring to ongoing work on a longer-term reauthorization that the upper chamber has been drafting separately.

What the procedural language obscures is what Section 702 actually does. 

The statute lets the NSA harvest communications from foreign targets without warrants, then stores those communications in a database that intelligence agencies can later search for information about Americans.

The agency calls this incidental collection but it functions as a workaround for the Fourth Amendment, allowing the government to access Americans' messages, calls, and emails by claiming the foreigner on the other end of the conversation was the real target.

The renewal arrived only after a messy week of legislative whiplash. The House had originally passed a three-year extension on April 29, attaching an unrelated provision to ban the Federal Reserve from issuing a central bank digital currency.

Senate leadership killed that version on arrival, then jammed the lower chamber with a stripped-down 45-day extension that contained no privacy reforms, no warrant requirement, and no concession to the lawmakers who have spent years documenting how the program gets misused.

The Foreign Intelligence Surveillance Court opinion at the heart of Thursday's fight is the closest thing to a smoking gun the public has seen on Section 702 in years.

The ruling addresses searches of Americans' communications inside the NSA's foreign intelligence database, the same backdoor query practice that has been flagged repeatedly by oversight bodies.

The court found problems with how the government has been running these searches.

What problems, specifically, remain classified.

That is the document Senator Ron Wyden, the Oregon Democrat who has spent over a decade trying to force daylight onto NSA programs, wanted Americans to read before Congress voted on a multi-year extension.

Wyden initially refused consent for the 45-day deal, holding out until Senate Intelligence Committee Chair Tom Cotton and ranking Democrat Mark Warner agreed to send a letter asking the executive branch to declassify the opinion within 15 days.

On the floor, Wyden made the case for why the secrecy is the problem. "That ruling found serious violations of Americans' constitutional rights and how the Trump administration has used Section 702," he said. "Congress should not vote — should not vote — to renew Section 702 when Americans are left in the dark about these troubling abuses."

Cotton, an unwavering supporter of the program, took the framing personally. "I am ducking nothing. I am pointing out the senator from Oregon's long-standing practice of distorting highly classified material in public," Cotton said. "One of these days there are going to be some consequences, and it may be while I'm the chairman of this committee."

Cotton runs the committee that controls intelligence community oversight, and the speech or debate clause of the Constitution is the only thing protecting senators from prosecution for what they say on the floor.

Stripped of theatrics, the message from the chairman of the body that supposedly checks the surveillance state was that pointing out documented abuses is itself a punishable act.

The result of all this is also that a surveillance program with documented constitutional problems gets six additional weeks of operation while the ruling describing those problems stays buried.

Current law already requires the FISC opinion to be released to the public eventually. Wyden wants that timeline accelerated to before Congress votes on a multi-year reauthorization, on the reasonable theory that lawmakers should know what they are voting to renew.

"Congress must use a short-term extension to openly debate the critical issues in front of the American people. I am disappointed that, instead, it sure feels like the other side of the aisle is covering the abuses up," Wyden said.

What happens next depends on whether the executive branch honors the declassification request, and whether the Senate's three-year reauthorization includes anything resembling meaningful reform.

The version that has been moving through committee does not require warrants for searches of Americans' communications. It does not narrow the categories of foreign intelligence that can justify surveillance or impose meaningful limits on how long the NSA can retain the communications it collects.

The program scheduled for renewal on June 21 is not the program Congress originally approved.
GET YOURS
2

Get Yours: Shop Now

Getting merchandise for yourself or as a gift helps support the mission to defend free speech and digital privacy.

It also helps raise awareness every time you wear or use it.

Your merch purchase goes directly toward sustaining our work and growing our reach. 

It's a simple, effective way to support. Get yours now.

SHOP NOW
THE RESULT
3

Roblox Loses 12M Daily Users After Age ID Check Rollout

Roblox is paying a price for its surveillance push. The platform shed 12 million daily active users between Q4 2025 and Q1 2026, dropping from 144 million globally to 132 million, with the company pinning a meaningful share of the decline on its mandatory age-verification rollout.

Revenue still climbed to $1.4 billion, and year-over-year DAU growth came in at 35 percent, but the sequential numbers tell the story Roblox tried to bury under positive financial framing.

The fall is steeper when measured from the peak. Roblox hit 152 million daily active users in Q3 2025, meaning roughly 20 million people have stopped showing up daily since the company began demanding facial scans and identity checks to access basic chat features. The trajectory inverted almost exactly when the age checks rolled out globally in January.

Roblox's own language gives the game away. The company says Q1 growth was "tempered by greater-than-expected headwinds" from the age-check rollout, which "slowed new user acquisition."

Translated out of investor-speak, fewer people want to hand over biometric data or government ID to a gaming platform than Roblox's models predicted, and existing users who haven't verified are pulling back from a service that now treats them as second-class accounts.

The verification mechanism deserves a closer look than corporate filings tend to give it. Roblox runs facial age estimation, a system that scans users' faces to guess how old they are and supplements that with identity verification documents.

Facial scanning of a user base that skews young, with a substantial portion under 13, means the company is processing biometric data from millions of children. Roblox says this is for safety. The system being constructed is a database of face scans tied to platform identities, retained on terms the company has not publicly defined.

Earlier this month, Roblox widened the restrictions to gate game access by age bracket and it has signaled more changes ahead. The company plans to "implement additional improvements designed to facilitate age-appropriate access to content and product features" over coming quarters, and has openly said its safety push will lower Roblox's "expectations for topline growth in 2026."

Full-year revenue guidance dropped to 20 to 25 percent growth, down from 22 to 26 percent. Bookings guidance was cut by nearly $1 billion. Wall Street responded by knocking the stock down a whopping 20 percent.

The verification numbers themselves point to a two-tier platform taking shape. Through the end of Q1, 51 percent of global daily active users had completed age checks, with US adoption running at 65 percent.

The other half of the user base is interacting with a degraded version of Roblox where communication is restricted, certain games are off-limits, and the path back to full functionality runs through a face scan or an ID upload. It's a tollgate, and the toll is biometric data.

Russia's December 2025 ban on Roblox accounts for some of the user drop but the company itself credits the age-check rollout as the larger factor in slowed acquisition.

The geographic pattern bears this out. Adoption of the verification system is lower in markets where parental consent is required to complete facial age estimation, suggesting that when users or parents face an actual decision point about handing over a child's biometric data, many simply decline.

The deeper shift is what Roblox is normalizing. A platform that once let users sign up with a username and start playing now requires either a facial scan or government identification to access core features, with the company building toward what it calls "continuous age estimation" using play patterns, social connections and economic activity to infer user ages on an ongoing basis.

Roblox is treating the user losses as a temporary cost of building this infrastructure. The numbers suggest a different reading. Millions of people are voting with their absence on what level of monitoring they're willing to accept to play video games and Roblox is treating that signal as friction to engineer around rather than feedback to listen to.
ALL FOR NOTHING
4

Australia's Under-16 Social Media Ban Fails: 73% Ignore It

Australia's under-16 social media ban has been in force for four months and the headline finding from a new working paper out of the University of Chicago's Becker Friedman Institute is that around three-quarters of the teenagers it targets are ignoring it.

The paper, "Why Bans Fail: Tipping Points and Australia's Social Media Ban," surveyed 746 Australian teenagers between March and April 2026. Among 14- and 15-year-olds covered by the ban, only about 27% are complying. The other 73% are still using Facebook, Instagram, Snapchat, TikTok, X, YouTube, Reddit, Twitch, Threads, or Kick, the ten platforms the law designates off-limits to anyone under 16.


The Online Safety Amendment (Social Media Minimum Age) Act 2024 took effect on 10 December 2025, making Australia the first country to outlaw teenage social media accounts at the federal level.

More than a dozen other countries and numerous US states are now considering versions of the same approach. The Australian model places enforcement entirely on the platforms, which face penalties of up to A$49.5 million for failing to take "reasonable steps" to keep under-16s off their services. Teenagers themselves face no legal sanction.

The teenagers know this. According to the survey, only 22% of banned teens believe they personally face any consequence for using a banned platform.

47% correctly understand that the consequences fall on the companies. Awareness of the ban is near-universal at 86%. The teens aren't confused about what the law says. They've simply concluded, accurately, that the law isn't aimed at them.

Getting around the restrictions takes minimal effort. 75% of banned teens describe circumvention as easy or very easy.

The most common workarounds are the obvious ones: lying about age on verification prompts (57%), entering false birthdates at sign-up (44%), borrowing a parent's or older sibling's account (42%), and routing through a VPN (30%). 64% of 14- and 15-year-olds in the survey have not had their accounts removed at all. The platforms haven't found them. A quarter of non-compliers report that a parent, older sibling, or other adult helped them sign up for a new account after a previous one was deactivated.

The researchers also asked teenagers a more interesting question. What share of your peers would need to stop using social media before you stopped? The average answer was 69%. Some teens placed the threshold even higher. The result holds across every way the question was framed, whether the reference group was age peers, classmates, the wider school, or "a typical person your age." The numbers came out between 62% and 69% in every variant.

That gap, between 27% actual compliance and a 69% threshold, is the paper's central finding. The model the researchers build from the data suggests that the only stable equilibrium under current conditions is around 18% compliance, lower than what's already observed. Compliance is more likely to erode than to grow.
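The equilibrium claim can be illustrated with a minimal threshold-model simulation (a hypothetical sketch, not the paper's actual model; the threshold distribution below is invented to loosely echo the survey numbers). Each teen complies only if the current compliance share meets their personal peer threshold, and the share is iterated to a fixed point:

```python
def simulate_compliance(thresholds, start, steps=100):
    """Best-response dynamics for a simple threshold model.

    Each teen complies only if the current compliance share is at
    least their personal peer threshold; iterate until the share
    stops changing, then return the fixed point.
    """
    share = start
    for _ in range(steps):
        new_share = sum(t <= share for t in thresholds) / len(thresholds)
        if abs(new_share - share) < 1e-9:
            break
        share = new_share
    return share

# Invented population of 100 teens: 18 unconditional compliers
# (threshold 0), the rest needing roughly two-thirds of peers to
# quit first, echoing the survey's ~69% average threshold.
population = [0.0] * 18 + [0.69] * 82

# Starting from the observed 27% compliance, the dynamics fall
# back to the unconditional compliers alone.
print(simulate_compliance(population, start=0.27))  # 0.18
```

Only when the starting share already exceeds the typical peer threshold (try `start=0.75`) does compliance lock in, which is the tipping point the authors argue the ban's current architecture never reaches.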

Then there is the social composition of who complies. 47% of surveyed teenagers said the kids who comply with the ban are less popular than the kids who don't. Only 5% said compliers are more popular. Among current users of banned platforms, 52% rated compliers as less popular. The teens still on the platforms have, on average, around twice the Instagram follower count of those who have left, 470 versus 200.

The authors point to cigarette smoking as the inverse precedent. Higher-status smokers quit first, connected friend groups quit together, and over time continued smokers became peripheral in their social networks.

The Australian ban is producing the opposite pattern. The popular kids are staying, the less popular kids are leaving, and being on social media remains the cool thing to do.

The justification for the ban rests on adolescent mental health concerns and the government's framing presents the law as protective.

The data shows what happens when a state assumes the authority to wall off entire categories of speech and association from a class of citizens, then leaves the actual decision-making to companies operating under threat of nine-figure fines.

The companies decide, by means they don't fully disclose, who is and isn't allowed to participate. Detection methods used so far include facial age estimation, identity verification, behavioral inference from language and login patterns and signals from peer networks. Algorithms parse user behavior to guess at age. Errors fall on individual users with no recourse worth speaking of.

The paper's authors are careful not to dismiss the law's longer-term prospects entirely. They note that norms can shift over decades, and the cigarette precedent above is a reminder that change on that timescale is possible.

The current architecture, which places enforcement on platforms and makes individual non-compliance invisible, doesn't activate the channel through which laws change behavior by changing what people see their peers doing. When visible peer behavior continues to signal widespread use, the descriptive norm works against the legal message rather than reinforcing it.

The law tells teenagers they cannot have these accounts. The teenagers can see, on the same platforms the law says they can't use, that everyone they know still has them. Among the 14- and 15-year-olds who believe all five of their closest friends are still on banned platforms, 86% report using those platforms themselves in the past week. Among those who believe none of their five closest friends are still on them, the figure drops to 15%.

What Australia has produced, four months in, is a law that almost no one under 16 obeys, that targets the least popular kids most successfully, that the targeted kids consider trivial to evade, and that has not changed the social environment it was designed to change.

The government got a press cycle, the platforms got a compliance theatre to perform, and the kids got a lesson in how laws work when they're written about them rather than for them.
Thanks for reading,

Reclaim The Net

86-90 Paul Street
London
EC2A 4NE
United Kingdom