Senate Panel Backs GUARD Act, AI Age Verification Bill - Every American who wants to ask a chatbot for help would need to upload a government ID, scan their face, or hand over a financial record first.

Reclaim The Net (archive.ph)
By Ken Macon
2026-05-01 19:50:00GMT


The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use them.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee, with its author celebrating the outcome in a post on X.

“My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X. “No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor.”

As usual, the framing is about children but the result is age verification/digital ID for everyone.


Under the bill’s text, a “reasonable age verification measure” cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an “artificial intelligence chatbot” as any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That language reaches well beyond the companion apps the press conference focused on. It covers customer service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

Hawley described the legislation as a “targeted, tailored effort,” telling the committee, “We’re often told that this new dawning age of artificial intelligence is going to be a great age that will strengthen families and workers. I would just say that’s a choice, not an inevitability.”

Senator Richard Blumenthal of Connecticut, the lead Democratic co-sponsor, signed onto the bill alongside Senators Mark Warner, Chris Murphy, Katie Britt, and Mark Kelly. The bipartisan support means the bill arrives on the Senate floor with momentum that age-verification proposals usually lack.

What that floor vote would authorize is a national identity system for AI services.

The bill includes data-minimization language. It also requires periodic re-verification, which means the sensitive identity documents collected at signup either sit in a company’s database waiting for a breach or get re-uploaded on a schedule.

Both options are surveillance infrastructure.

Trade group NetChoice, opposing the bill before the committee vote, framed the data-collection problem in security terms. “NetChoice implores the Senate Judiciary Committee to safeguard Americans’ most secure documents and reject the GUARD Act,” said the group’s Patrick Bos.

“If implemented, such a broad and vague provision would force AI companies to collect and store highly sensitive personal data into honeypots ripe for cybercriminals to exploit through breaches, identity theft and fraud.”

Age-verification vendors have been breached repeatedly, exposing the government IDs and biometric scans of millions of users who handed them over to access entirely legal content. The GUARD Act would multiply those targets by routing every AI interaction in the country through similar collection systems.

The bill’s reach is what makes the privacy cost so steep. A teenager asking a chatbot for algebra help would need to be cleared through age verification, and so would the adult sitting next to them. A customer trying to fix a billing problem through a company’s automated assistant would face the same identity check.

Faced with the cost of building those systems and the threat of $100,000 per-offense penalties, smaller developers will plausibly block younger users entirely or strip their tools down until they no longer trigger the bill’s definitions. The compliance burden lands on everyone who uses these services, and the largest companies, the ones that can absorb verification infrastructure as a cost of doing business, end up consolidating the market.

The bill isn’t promoting parental supervision. Instead, it’s going for a flat ban. The legislation contains no parental consent mechanism that would let a parent decide their fifteen-year-old can use a homework chatbot.

There is no appeals process for users wrongly flagged as underage by an algorithmic age-estimation system. A user judged by a verification service to be under 18 is locked out, period, regardless of what their parents think.

The criminal provisions are where the bill’s child-safety framing has the firmest grip. Companies that knowingly design or distribute chatbots that solicit sexually explicit content from minors, or that encourage suicide, self-injury, or imminent violence, would face fines of up to $100,000 per offense.

Those provisions respond directly to the cases that drove the bill, including testimony from parents whose children harmed themselves or died after extended interactions with AI companions. Several of those parents sat in the committee room during Thursday’s markup.

The question is whether a national ID-verification regime is what addresses them, or whether the bill uses the worst chatbot interactions as leverage to build identity infrastructure that reaches every chatbot, including the ones nobody is alleging caused harm.

The bill also arrives inside a larger legislative vehicle. Senator Marsha Blackburn intends to fold the GUARD Act into her TRUMP AI Act, which would carry President Trump’s National Framework on AI through Congress and preempt conflicting state AI laws.

The GUARD Act itself contains a similar preemption clause, displacing state laws that conflict with it while carving out room for states to legislate separately for children under 13. Federal preemption of state AI rules has been controversial.

The Senate rejected a previous attempt to fold broad preemption into a different bill earlier this year. The GUARD Act offers a narrower vehicle for the same outcome, packaged inside child-safety language that makes opposition politically expensive.

Blumenthal acknowledged that the unanimous committee vote is not the end of the process.

The bill faces the full Senate next, then the House. The pattern of recent age-verification legislation suggests the substantive privacy questions will keep being asked, and keep being answered with the argument that any cost is acceptable if children are invoked.

The infrastructure being authorized here, though, will not check whether a user is a child before it asks for their ID. It will ask everyone. That’s what the bill requires. It is also what the bill is likely for.
 


That's the second time this year I've seen a self-doxing bill from someone named Josh. How about we ban people named Josh from holding office until about 2100? Maybe the evil Josh problem will be solved by then.
 
They tried to mandate that people can only fly with a Real ID; if you don't have one, you can't fly after May 2025. People like me said no, and in May 2026 I can still fly. Roblox enforces age restrictions to use features on its servers, and look at their stock price today, lmfao. Discord wants to age verify, and etc etc etc .......

Build your dystopian shit world where we all get Observed like an ant farm just expect a good chunk of your customers to nope the fuck out.
 
No SAVE Act to stop voter fraud, but they will mandate spyware. Classic neocons. I would like to say I hope Trump vetoes these if they pass, but I doubt it.
 
Everything else aside, why do these retards just not understand how computers work at all?

The AI is not telling children to kill themselves. In many of the cases they tried to fearmonger about, the lobotomized LLM told them NOT TO and they still did it; they were just suicidal. The dullard-boomer level of understanding the average person has around tech is a special kind of frustrating, and I don't even think a lot of them do this simply to be manipulative. I really do think most of them are that retarded and don't understand that the whacko sentence generator isn't causing a mass wave of child suicides.
 
I'll just repost what I said in another thread here
Funny how I predicted this exact thing happening. Republicans pushing for the most unpopular and dystopian fucking legislation in existence under the guise of "protecting the heckin children". Vance might as well just give up on 2028 because there is no fucking way anyone who isn't a boomer is voting Republican anymore.
 
It's almost Mother's Day, and while my mom isn't diabetic, she watches her intake. So I asked Microsoft Copilot if it could list zero-sugar, low-carbohydrate candy brands I could buy for her. It told me I had to log in because, in order to recommend candy brands, it needs to know whether I am a minor. The explanation was that to "protect minors," brand recommendations now require age verification.

Candy brands.

You don't hate politicians enough.
 
"No amount of profit justifies the destruction of our children!"

Universal support. Why can't we get universal support for anti-grooming and anti-sex change bills for kids then?
 
It's almost Mother's Day, and while my mom isn't diabetic, she watches her intake. So I asked Microsoft Copilot if it could list zero-sugar, low-carbohydrate candy brands I could buy for her. It told me I had to log in because, in order to recommend candy brands, it needs to know whether I am a minor. The explanation was that to "protect minors," brand recommendations now require age verification.

Candy brands.

You don't hate politicians enough.
I meeeeaaannnn... LLM responses aren't neutral objective truth; they're trained and biased by humans. I'm talking about OpenAI/ChatGPT, by the way, because it's the backend of Copilot. OpenAI doesn't even have to retrain or fine-tune the model to force biased outputs. They already have systems in place: activate a brand deal, then add a context-sensitive prompt injection to their website/Copilot channels (obviously, sponsored content only appears on the goycattle-facing frontend) that makes ChatGPT always promote the sponsored brand when it detects you're asking a question relevant to that brand. That would place OpenAI under "advertising to children" obligations, which aren't a bad thing.
 
Any politician who passes these digital ID age-verification laws should expect the Antifa and neo-Nazi mass-murder gangs coming their way to send them off to be publicly executed.

Because that is what Pol Pot did to everyone he knew was a scholar (except himself), and what the Rwandans did to people they knew were Tutsis in 1994 Rwanda.
 
Joke's on them, I don't use AI assistance for anything.
Any politician who passes these digital ID age-verification laws should expect the Antifa and neo-Nazi mass-murder gangs coming their way to send them off to be publicly executed.

Because that is what Pol Pot did to everyone he knew was a scholar (except himself), and what the Rwandans did to people they knew were Tutsis in 1994 Rwanda.
Most absurd of all, Pol Pot ordered the death of anyone who wore glasses. Roughly a fourth of the country's population was killed in less than half a decade.
 