More from Reclaim The Net:
1. Congress is trying to pass the SCREEN Act before the end of the year.

2. New drama from Congress:
AI spooky bill: the Deepfake Liability Act, which revises Section 230.
The Parents Over Platforms Act, which reads like a spy’s dream version of child safety. The idea is to require mobile app stores and developers to “assure” user ages through “commercially reasonable efforts.”
SIDENOTE (for the GOEY): this is the same thing padjet said last week.
Key Facts
In a directive issued on Monday, India’s Ministry of Communications mandated that the government’s “Sanchar Saathi” app be preinstalled on all new smartphones sold in India within 90 days.
The order also requires phonemakers to push a software update to install the app on “devices that have already been manufactured and are in sales channels in India.”
Phonemakers have been ordered to ensure that the government-run app is “readily visible and accessible” to users when they first set up their devices and ensure that its “functionalities are not disabled or restricted.”
According to the directive, phonemakers will need to submit compliance reports on this matter to India’s Department of Telecom within 120 days.
The Sanchar Saathi app is currently available on Google’s Play Store and Apple’s App Store, where it has been downloaded more than 10 million times on Android devices and more than 950,000 times on iPhones.
Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.
The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.
Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”
The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.
The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.
Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”
TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.
Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.
But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redraws the boundaries of the internet’s liability shield.
Next up is the Parents Over Platforms Act, which reads like a spy’s dream version of child safety. The idea is to require mobile app stores and developers to “assure” user ages through “commercially reasonable efforts.”
Developers must “determine whether a user is an Adult or a Minor with a reasonable level of certainty.” How they’re supposed to do that without collecting more personal data is unclear. Privacy advocates might want to sit down for this one.
The bill’s co-sponsor, Republican Erin Houchin of Indiana, says it comes from personal experience. Her daughter, age 13, “hacked around our parental controls” and started chatting with strangers.
“My goal is to put parents back in the driver’s seat,” she says. Fair enough, but that driver’s seat now comes with a dashboard full of federal switches and levers.
If passed, parents would input their children’s ages into the app store, which would then transmit the “Age Signal” to every app. Kids under 13 would be locked out of restricted platforms. The potential for data errors and cross-app confusion seems baked in, but Congress appears unbothered.
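The handoff the bill envisions (a parent-entered age held by the app store, an “Age Signal” pushed to each app, and an under-13 lockout) can be sketched roughly as follows. This is a minimal illustration of the mechanism as the article describes it; the `AgeSignal` type, the cutoffs, and the function names are all assumptions, since the article quotes no bill text specifying them.

```python
from dataclasses import dataclass

ADULT_AGE = 18       # assumed adult cutoff; the bill may define "Adult" differently
RESTRICTED_AGE = 13  # article: kids under 13 are locked out of restricted platforms

@dataclass
class AgeSignal:
    """Hypothetical payload the app store would transmit to every app."""
    user_id: str
    age: int  # parent-entered at the store; apps never see anything else

def classify(signal: AgeSignal) -> str:
    """Bucket a user the way the article describes: adult, minor, or blocked."""
    if signal.age >= ADULT_AGE:
        return "adult"
    if signal.age >= RESTRICTED_AGE:
        return "minor"
    return "blocked"

def may_install(signal: AgeSignal, app_is_restricted: bool) -> bool:
    """A restricted app must reject anyone classified as blocked."""
    return not (app_is_restricted and classify(signal) == "blocked")
```

Note that the whole scheme keys off a single parent-entered number held by the store, so one typo there would mislabel the same child in every app at once, which is the cross-app confusion the article flags.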
Rounding out the trio is the Education Not Endless Scrolling Act, which would slap a 50 percent tax on digital ad revenue over $2.5 billion. The money would fund tutoring programs, local journalism, and technical education. Auchincloss explains, “This is for the major social media corporations, not the recipe blogs.”
He adds, “These social media corporations have made hundreds of billions of dollars making us angrier, lonelier, and sadder, and they have no accountability to the American public.”
The proposal reads like a moral tax: the government will collect penance for every click.
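Taking the article’s figures at face value, the levy is simple arithmetic. The sketch below assumes the 50 percent rate applies only to ad revenue above the $2.5 billion threshold (a marginal tax); the bill could equally apply it to the full amount once the threshold is crossed, and the article does not say which.

```python
THRESHOLD = 2.5e9  # $2.5 billion in digital ad revenue, per the article
RATE = 0.50        # 50 percent

def endless_scrolling_tax(ad_revenue: float) -> float:
    """Tax owed, assuming the rate applies only to revenue above the threshold."""
    return max(0.0, ad_revenue - THRESHOLD) * RATE
```

On that reading, a recipe blog with $1 million in ad revenue owes nothing, while a platform with $10 billion would owe half of the $7.5 billion excess, or $3.75 billion.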
Both Auchincloss and Houchin frame their effort as a bipartisan stand for the children, launching a “Kids Online Safety Caucus” to formalize their alliance. Houchin puts it simply: “Good policy supersedes politics.” It’s a line you usually hear right before an entire generation of digital policy disasters.
The timing is no accident. Congress is now flooded with “child safety” bills.
Auchincloss says he’s tired of waiting. “I don’t like to be passive or wait for the ground to shift,” he says. “I am trying to be an earthquake.”
It’s a fitting metaphor, though he might consider what happens after the shaking stops. Once the dust settles, the UnAnxious Generation may find that the cure for digital anxiety looks a lot like preemptive censorship and surveillance wrapped in a moral crusade.
The Deepfake Liability Act is a proposed bill that would amend Section 230 of the Communications Decency Act (CDA) by making platforms' legal immunity conditional on them taking active steps to address harmful deepfakes. This approach differs from the already-enacted TAKE IT DOWN Act, which addresses similar issues but does not explicitly amend Section 230.
The Proposed Deepfake Liability Act
The Deepfake Liability Act, introduced in the House by Representatives Maloy and Auchincloss in late 2025, proposes an amendment to Section 230 to address abusive and harmful deepfakes, including non-consensual intimate imagery (NCII).
The key changes it would make to Section 230 are:
Conditional Immunity: Instead of having broad immunity, platforms would have to implement a "duty of care" regarding certain types of harmful content to maintain their Section 230 protections.
Mandatory Procedures: Platforms would be required to establish clear, accessible, and responsive procedures for victims to report and seek the removal of deepfakes, cyberstalking content, and intimate privacy violations.
Removal Deadline: The bill mandates that platforms investigate reports and remove the offending material within a specific timeframe (e.g., 24 hours in some drafts) once they are aware of it.
Liability for Encouragement: The bill aims to hold platforms accountable not just for content they create, but also for content they solicit or encourage, which targets platforms' algorithms that might amplify harmful content.
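Operationally, the reporting and removal duties above reduce to a per-report deadline check. A minimal sketch, assuming the 24-hour window some drafts mention runs from the moment the platform becomes aware of the report (the bill may measure it differently):

```python
from datetime import datetime, timedelta

REMOVAL_WINDOW = timedelta(hours=24)  # "e.g., 24 hours in some drafts"

def removal_deadline(reported_at: datetime) -> datetime:
    """When reported material must be removed to preserve Section 230 immunity."""
    return reported_at + REMOVAL_WINDOW

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    """True if the platform has blown the takedown deadline."""
    return now > removal_deadline(reported_at)
```

Under the bill, missing that window on a valid report is what converts a moderation lapse into civil exposure.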
How This Differs from the TAKE IT DOWN Act
It is important to distinguish the proposed Deepfake Liability Act from the existing TAKE IT DOWN Act, which became law in May 2025.
TAKE IT DOWN Act: This law primarily creates new federal criminal penalties for individuals who publish NCII (including deepfakes). It also requires covered platforms to implement a notice-and-takedown system for NCII, enforced by the Federal Trade Commission (FTC). However, it does not explicitly amend or create an exception to Section 230's civil immunity shield, leaving some ambiguity about private lawsuits against platforms.
Deepfake Liability Act (Proposed): This bill directly targets Section 230's core civil immunity, making compliance with specific takedown and safety obligations a condition of maintaining that immunity in cases related to deepfakes and NCII. This creates a direct legal mechanism for victims to pursue platforms in civil court if they fail their "duty of care."