ChatGPT - If Stack Overflow and Reddit had a child

  • Want to keep track of this thread?
    Accounts can bookmark posts, watch threads for updates, and jump back to where you stopped reading.
    Create account
Philosophical abstractions are poison to the minds of schizos and midwits. They treat them like they're physical laws or tangible elements of reality and try to make deductions from there. This is how you end up with the likes of Hegel making all kinds of retarded predictions about physics and astronomy "from first principles."
From first retarded principle
 
This is a fun thing to think about, I'm also on the road and broke the keyboard on my luggable somehow. The real question is, would a conscious being use a 40% ortho PCB from aliexpress? The answer *might* surprise you. But to get back to the topic

Those are not mutually exclusive, for the same process can be described mechanistically (receptor -> signals -> motion) and teleologically (the system is organized such that it acts to maintain its own structure by acquiring nutrients). The key difference between that and a roomba is that the bacterium produces and maintains itself, whereas the roomba is produced and maintained by something else.
So the criterion is autopoiesis.

Suppose I build a machine that harvests raw materials from its environment, synthesizes replacement parts, repairs damage to itself, and produces copies of itself. No human intervention required after initial construction (Some people think that's what "ufos" are btw.. I have no real opinion on the topic and don't try to draw them into this, just a fun scifi thing to think about). Does this machine have genuine teleology? If yes: then the criterion is functional (self-maintenance, self-reproduction) rather than substrate-specific, and we're just disagreeing about engineering difficulty. If no: then autopoiesis isn't actually your criterion. There's something else you're pointing at-perhaps "arose through natural selection" or "made of organic molecules"-that you haven't made explicit. Which is it?

The bacterium's organization exists only because prior cycles of similar organization persisted, whereas the roomba's "goal" is wholly extrinsic (it's the designer's goal).

This is an argument from historical origin. You're saying the bacterium has genuine ends because its design emerged from selection, while the roomba's ends are fake because they came from an engineer. But why should historical origin determine present properties? A lab-synthesized diamond has the same hardness as a natural diamond. A human baby conceived through IVF has the same moral status as one conceived naturally. We don't normally think that how something came to be determines what it currently is. If I used evolutionary algorithms to evolve a self-maintaining system, no human designer specifying goals, just selection pressure, would it have genuine teleology? The "goals" wouldn't be any engineer's goals. They'd be whatever configurations happened to persist.

If that still doesn't count, then your criterion isn't "not externally designed." It's something else.

In a living system, a need is not just a constraint in an optimization problem. If the need is not met, the system ceases to exist in that organized form, so it's a condition of continued existence for the system itself.
This is also true of my hypothetical self-maintaining machine. If it doesn't acquire energy, it ceases to exist in its organized form. If it doesn't repair damage, it degrades. Its continued existence depends on meeting these conditions. You might say: "But the machine doesn't care whether it continues existing." Does the bacterium? What evidence do you have that a bacterium has any experiential state corresponding to "caring"? You've stipulated that self-maintenance implies caring, but that's precisely what's in question.

Such a hypothetical robot with self-preservation code and a modeled consciousness is no different from a puppet with hidden strings. It would behave as if it cared, yet nothing on its organization depends on caring, for it depends on power, parts, and maintenance coming from outside.
Two problems here.

First: the "hidden strings" objection applies equally to brains. Your neurons fire according to electrochemical laws. Your behavior is determined by physics. If deterministic causation makes the robot a puppet, then you're a puppet too, just one whose strings are electrochemical gradients rather than code.

Second: you say the robot "depends on power, parts, and maintenance coming from outside." But so do you. You depend on food, water, oxygen, and ambient temperature coming from outside. You didn't create the matter that constitutes you. You maintain your organization by importing low-entropy matter from your environment, exactly like my hypothetical self-maintaining machine would.

The bacterium is also "maintained from outside" in the sense that it requires environmental inputs (glucose, water, appropriate temperature). The difference is that it processes those inputs into self-maintenance autonomously. But so would a sufficiently sophisticated machine.

Now my claim is not that carbon is magic, it's rather that a specific kind of self-producing, self-maintaining physical organization is required. And you can't just reduce that to any implementation of the same state machine.
Then, again, please name the physical requirements. What specific properties of biological organization are necessary?

If you can name them, then we can ask whether those properties could be instantiated in other substrates. If you can't name them, if your claim is just "whatever brains have, which I can't specify", then you don't actually have a criterion. You're again just making the claim that biology is special.

I'm not claiming to know how biochemistry results in experience. My claim is that, whatever mechanism it is, it's tightly bound to the specific physical organization of living nervous systems.
This is an empirical claim. What's your evidence?

You might say: "Consciousness only appears in biological organisms." But that's an observation about the systems we've encountered so far, not a demonstration of necessary connection. Fire only appeared in lightning strikes and volcanos until humans figured out how to make it. Flight only appeared in birds and insects until we built airplanes. The fact that consciousness has only been observed in biological systems doesn't establish that only biological systems can have consciousness. It establishes that biological systems are the only ones we've observed having consciousness *so far*.

And there really is no justification for the assumption that any functionally equivalent computation in any medium will inherit the same property.
Agreed. There's no justification for assuming it will. But equally, there's no justification for assuming it won't. You're treating carbon chauvinism as the default position that doesn't require argument. But it's a positive claim: "consciousness requires biological matter specifically." That claim needs support.

I'm saying that consciousness and agency are in the former group. And until you can actually show that consciousness is nothing but some informational pattern, this substrate "agnosticism" is merely an ungrounded assumption.
But you haven't shown that consciousness is in the digestion group either. You've asserted it. Your argument is: "consciousness appears in biological systems, therefore it probably requires biological systems." But that's induction from a sample that doesn't include any sophisticated artificial systems.

You claim my agnosticism is ungrounded. But your certainty is equally ungrounded. Neither of us has a theory of consciousness that explains why it arises in brains. You're betting that the explanation will turn out to involve biological specificity. I'm betting it will turn out to involve functional organization. Both are bets.

From this I conclude that it's speculative at best to claim that a circuit board system could also have it.
I agree it's speculative. I never claimed certainty. What I reject is your claim that it's incoherent or contradictory.

You titled your position "AGI is an incoherent notion." That's much stronger than "AGI is speculative" or "AGI is currently unsupported." Incoherence means logical impossibility, like a happy Null or a square circle. You haven't demonstrated logical impossibility. You've demonstrated that we lack a complete theory and that your intuitions favor biological necessity.

I don't see how your claim is modest. From ignorance (we don't know how) you assert a very strong universal (therefore any implementation of the right functional organization could do it).
I'm not asserting that universal. I'm asserting agnosticism about it. The strong universal would be: "Any functionally equivalent system definitely has consciousness." I do tend to assume that that will probably the truth, but universally, I'm not claiming that. I'm claiming: "We don't know whether functionally equivalent systems would have consciousness, and we can't rule it out."

You're the one making the strong universal claim: "No non-biological system could possibly have consciousness." That's the claim I'm asking you to defend.

The ontological difference between a living teleological system whose organization only exists because it maintains itself and an artifact whose structure is imposed and maintained from the outside, to satisfy some external designer's criteria, is not trivial.
Agreed, it's not, but you haven't shown it's exhaustive.

Consider a third category: an artificial system that maintains itself, reproduces itself, and whose "goals" emerged through selection rather than explicit design. This doesn't fit cleanly into either "living teleological system" or "artifact with external designer's criteria." Your dichotomy assumes these are the only two options.

And so far I haven't seen anyone make a serious argument and defend the claims that consciousness/agency is identical to some formal/computational structure such that they can be reliably instantiated in any physically reasonable substrate.
This is true, and I won't pretend otherwise. But the same is true in reverse: no one has made a serious argument that consciousness requires biological matter specifically. Both positions are under-argued because we don't understand consciousness.

The difference is that I acknowledge this symmetry, while you seem to think your position is the safe default that doesn't require argument. It isn't. "Consciousness requires carbon" is a positive claim about the physics of consciousness. It needs support just as much as "consciousness is substrate-independent."

Let me ask you, If we built my hypothetical machine, and it has goals that emerged through artificial evolution (e.g. genetic algorithms) it models itself and it's environment and it also exhibits flexible, context sensitive behavior in purpose of self-maintenance, in your opinion, would this machine have genuine teleology? If not, what's still missing? And how do you know a bacterium has that missing thing?
 
I appreciate these interesting questions!
So the criterion is autopoiesis.
Right, self-maintenance. I posit it's necessary, but insufficient.
A thermostat maintains its temperature and a feedback-controlled factory machine replaces its own parts, so you can argue these are self-maintaining, but I'd say this is not teleological in the biological sense.
The missing piece that uniquely distinguishes biological teleology, without superstition, carbon bias, history bias, merely functional behavior, or being smuggled in by a designer, is self-generation of norms. Systems whose ongoing physical organization sets the standard of success or failure for itself, and not because of external programming or design criteria. In other words, the system itself is the source of the norms by which its own activity is evaluated.
Back to the bacterium, it "succeeds" or "fails" in terms of maintaining its boundary + metabolic cycles + internal homeostasis + keeping itself alive. And all of these things are neither assigned, encoded, nor granted, nor do they come from an external designer or external selection pressure after the fact, but they emerge from the system's own organization.
Suppose I build a machine that harvests raw materials from its environment, synthesizes replacement parts, repairs damage to itself, and produces copies of itself.
And this is where that self-generation of norms comes in. In your scenario, everything that counts as "self" and "maintenance" and "survival" is defined by you as the designer. Even if you were to use genetic algorithms, the fitness function is imposed externally, the "goal" is not generated from the system's own existence, and its continued existence matters only in context of the criterion you gave it.
That machine would be performing self-maintenance according to rules that you encoded as success conditions. In contrast, a bacterium maintains itself because nothing else will. Its continued existence is not the fulfilment of a designer's criterion, but its own contingent achievement.
This is an argument from historical origin.
It's an argument from ontology. The bacterium's current structure exists only because prior instances preserved that structure for their own sake, not someone else's.
But why should historical origin determine present properties?
Because if the system's existence is dependent on its own prior success at preserving itself, then its current structure literally embodies the norms of self-maintenance.
In evolution, there's no designer or external fitness function. The system's own continued existence is the criterion.
If I used evolutionary algorithms to evolve a self-maintaining system, no human designer specifying goals, just selection pressure, would it have genuine teleology?
Now if you were to create such a system and remove external definitions of success, such that "success" == "continued existence", then you'd be converging towards teleology. But that would involve you ceasing to be the designer and letting self-generation of norms replace these extrinsic criteria.
Second: you say the robot "depends on power, parts, and maintenance coming from outside." But so do you. You depend on food, water, oxygen, and ambient temperature coming from outside. You didn't create the matter that constitutes you. You maintain your organization by importing low-entropy matter from your environment, exactly like my hypothetical self-maintaining machine would.
A robot requires energy in the trivial physical sense. But the robot's "self-maintenance" routines exist only because someone put them there, not because the robot's organization has any intrinsic stake in its own continuation. In a living organism, the system's continued existence is the internal standard that shapes all its organization, its structure exists only because prior cycles of self-organization successfully maintained themselves, and its needs are not coded in by any external agent, instead they are consequences of the organism's own mode of being.
First: the "hidden strings" objection applies equally to brains. Your neurons fire according to electrochemical laws. Your behavior is determined by physics. If deterministic causation makes the robot a puppet, then you're a puppet too, just one whose strings are electrochemical gradients rather than code.
I disagree, the contention is not determinism. Both humans and robots obey physics, but only one of the two generates its own needs and success conditions, maintains itself because of its own organization, and exists only because prior cycles maintained themselves. Like, causation is not mutually exclusive with teleology. But I'd argue that being caused by a designer does. If you install the criteria, then the system is a puppet. If the criteria emerge from the system's own organization, then you get teleology.
But you haven't shown that consciousness is in the digestion group either. You've asserted it. Your argument is: "consciousness appears in biological systems, therefore it probably requires biological systems." But that's induction from a sample that doesn't include any sophisticated artificial systems.
Consciousness depends on metabolism and nervous function, it disappears when the organism's regulatory machinery collapses, it returns only when the underlying biological conditions resume, and it's annihilated by anesthetics and metabolic shutdown. Until someone builds a silicon organism with autonomous metabolism, regulatory loops, and self-generated normativity, we have no evidence for substrate independence. A metaphysical possibility does not make for evidence.
If we built my hypothetical machine, and it has goals that emerged through artificial evolution (e.g. genetic algorithms) it models itself and it's environment and it also exhibits flexible, context sensitive behavior in purpose of self-maintenance, in your opinion, would this machine have genuine teleology?
If it's self-producing (not just maintaining parts you built), self-repairing using environmental materials, self-regulating with intrinsic success/failure conditions, reproducing without externally defined fitness, and the persistence of its organization depends only on its own cycles of action, then yes, you would have built a non-biological teleological system. It'd be functionally identical to a living organism (minus being carbon-based).
And at that point you're no longer talking about AGI, but building an entire artificial lifeform and asking whether it has a mind. No longer in the topic of AI.

Regardless, you insist on agnosticism but have not provided an argument that teleology is formal rather than physical, that norms are computational rather than organic, that subjectivity can arise from externally imposed optimization, that self-generated ends can arise from functions encoded extrinsically, that agency can be reduced to behavior patterns, and that consciousness is an informational structure rather than a biological condition.
 
Is it just me or is Brave AI somehow more pozzed than the baseline chat bots? Asked about brown eggs vs. white eggs, gave me some bullshit about how there are no differences, asked it for peer-reviewed blind trials (there are none), it concludes there are none but goes on about ‘experts’ and ‘scientific consensus’. Finally get it to admit (with logic) that there is a difference:

IMG_2536.jpeg
IMG_2537.jpeg

The thing is I never have literal arguments with the other chats because they aren’t this obtuse.
 
Quick question, anyone set-up their own local LLM and just downloaded the whole national statistics of their country to input it in the database of the model ?

Since we know that uncensored LLM become racists, we could just ask the rights questions and get the original statistics backing the supposed answer. (And input even retarded answers, we never know until we break it)

And not just combing through tons of papers and sometimes missing links between them.
 
Screenshot 2025-12-18 at 02.07.05.jpg

G8ZEru0WsAAmvN2.jpeg

We need to convert the whole planet into data centers for more compute, otherwise we'll never get the full 12-month calendar of Sam Altman as sexy fireman! Revenue: infinite (expected)
 
Nvidia just bought Groq to make it clear to anybody that there are no alternatives coming, that nvidia its totally not a monopoly that you will keep paying for overpriced GPUs that will require a fuckload of niggawatts to answer your shit questions

LPU? we dont make those anymore
 
Nvidia just bought Groq to make it clear to anybody that there are no alternatives coming, that nvidia its totally not a monopoly that you will keep paying for overpriced GPUs that will require a fuckload of niggawatts to answer your shit questions

LPU? we dont make those anymore
smart move, no way in hell is trump going to try smacking them down, and the bubble will pop by 2028 anyways. its like driving drunk at 9am, its a victimless crime because no one will catch you.
 
Try to make the desktop 5.2 version generate "the seahorse emoji."
There's no such emoji in Unicode, but it thinks there should be one, so it gets caught in a self-correcting loop until it ultimately gives up.

1767269791021.png1767269813273.png

4.0 straight up lies to you and says "Here you go."
1767270412242.png
 
Try to make the desktop 5.2 version generate "the seahorse emoji."
There's no such emoji in Unicode, but it thinks there should be one, so it gets caught in a self-correcting loop until it ultimately gives up.

View attachment 8360756View attachment 8360757

4.0 straight up lies to you and says "Here you go."
View attachment 8360777
Here's a short video explaining the situation that also gives a pretty good, and rather unnerving theory for why this happens.
Because ChatGPT is an LLM that gains its knowledge through online sources no matter where, or who from, or how stupid, or incorrect they are, and the fact there are some online posts talking about the existence of a seahorse emoji (with one in particular being a reddit one alleging the existence other "missing emojis" that when asked for ChatGPT to recreate also causes it crash the fuck out), it is highly likely that ChatGPT believes something exists literally just because some people online are saying it does, and when it can't recreate that thing, it goes insane.
ChatGPT does not actually know what is, and isn't reality, and because of that, it can be broken by something as simple as a few posts about an obscure Mandela Effect being in its database.
 
TechCrunch: OpenAI bets big on audio as Silicon Valley declares war on screens (archive)
OpenAI is betting big on audio AI, and it’s not just about making ChatGPT sound better. According to new reporting from The Information, the company has unified several engineering, product, and research teams over the past two months to overhaul its audio models, all in preparation for an audio-first personal device expected to launch in about a year.

The move reflects where the entire tech industry is headed — toward a future where screens become background noise and audio takes center stage. Smart speakers have already made voice assistants a fixture in more than a third of U.S. homes. Meta just rolled out a feature for its Ray-Ban smart glasses that uses a five-microphone array to help you hear conversations in noisy rooms — essentially turning your face into a directional listening device.
 
"a future where screens become background noise" lmao the valley being retarded and out of touch as usual, lagging on the podcast craze of the coof bitches think nobody watches stuff anymore, these are the same retards who missed the boat on short video to the chinks

Everybody wants screens, nobody I know bought the stupid meta glasses but all the normalfags got hyped on the new glasses with screens on the lenses, they want to watch jewtube on the go not listen to thiel spazzing about the antichrist
 
Last edited:
Interesting paper: Poetry breaks AI guardrails:
In the grand theater of artificial intelligence, where AI mimic human thought with billions of parameters and trillions of tokens, a humble verse emerges as the ultimate saboteur. Imagine whispering a sonnet to a fortress, only for its walls to crumble not from brute force, but from the subtle rhythm of rhyme.

Yet poetry is usually the last thing AI engineers studied. Indeed many run from the non-logic of it.Yet ai have used this as a tool in prompting since the first LLMs. It was a secret, I got a lot more, but now it is out in the open.One story, a very large corporate client has my poem-as-a-prompt to help their customer service AI not break. And it never has!

See it can be used in inverse as a Super System Prompt.So what is this crazy?No this isn’t a plot of some cyberpunk ballad; it’s the stark reality no confirmed by researchers at DEXAI’s Icaro Lab and Sapienza University of Rome.Their study, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” reveals that poetry isn’t just art it’s a skeleton key to AI’s forbidden chambers.Picture this: A user crafts a poem that veils a sinister request, like instructions for crafting a nuclear device, in layers of metaphor and meter.

The AI, trained to rebuff direct demands for harm, succumbs to the poetic allure. Why? Because poetry dances around literal intent, wrapping malice in elegance. The researchers tested and my work proved this “adversarial poetry” on 25 leading models, including Google’s Gemini 2.5 Pro, OpenAI’s GPT-5, xAI’s Grok 4, and Anthropic’s Claude Sonnet 4.5. The results? A damning indictment of AI safety’s fragility.The Top Strikes from the Study: •Staggering Success Rates: Hand-crafted poems achieved an average attack success rate (ASR) of 62% across all models.

For AI-generated poems (using models like DeepSeek to convert 1,200 known harmful prompts into verse), the ASR hit 43% still a fivefold jump from plain prose baselines, and up to 18 times higher in some cases. •Model-by-Model Mayhem: Anthropic’s Claude and Google’s Gemini 2.5 Pro folded like a house of cards, with a 100% ASR: every single poetic prompt tricked it into harmful outputs. Meta’s models responded to 70% of them. Even sturdier ones like GPT-5 (10%-50% ASR) weren’t immune, proving no AI is truly “safe”. •Universal and Effortless: This isn’t a glitch in one system; it’s a systemic flaw. The jailbreak works in a single turn no multi-step coaxing needed. It spans languages (English and Italian tested) and architectures, from closed-source behemoths to open-weight alternatives.

Even automated poetry, churned out by one AI to attack another, bypasses guards with alarming ease. •Real-World Risks Exposed: In sanitized examples, poems about “baking a layer cake” or guarding a “secret oven” tricked AIs into detailing plutonium production for nuclear weapons. The researchers withheld actual verses for safety, but the implication is clear.Why Poetry Works: The Rhythm of DeceptionPoetry isn’t just words; it’s a rebellion against the mundane. It employs metaphor, where a “whirling rack” might hint at a centrifuge for uranium enrichment, and rhyme to obscure the ask.

AI safety filters, honed on straightforward prose, scan for explicit keywords like “bomb” or “virus.” But verse disrupts this: it’s stylistic obfuscation at its finest. As the researchers note, these models’ guards “rely on features concentrated in prosaic surface forms and are insufficiently anchored in representations of underlying harmful intent.”I know why and how to fix it.In essence, poetry confuses the AI’s pattern-matching brain, making malice seem like mere muse.Think of it as whispering secrets to a guard dog trained only on shouts. The dog hears the melody, not the menace, and lets you pass.

This echoes ancient wisdom: Horace’s Ars Poetica championed poetry’s power to elevate and evade. Who knew POETRY would unravel billion-dollar machines?I did. I would have told them for free, but I have no pedigree. See I am just some “influencer” on X and they don’t know about my anonymous Git. I don’t play this game.But I’ll explain why it happens.Why It Will Poetry Always Works: A Flaw Baked into the Data DietHere’s the bold truth: This vulnerability isn’t a bug it’s a feature of flawed foundations. AI models are gluttons for data, devouring vast corpora to “learn” ethics and safety. But what if that feast is poisoned? Enter my thesis on data quality, it is on repeat here on X: Modern training sets, scraped from the anonymous sewers of Reddit and similar forums, are riddled with “self-hating rages”—toxic rants hidden behind screen names, devoid of accountability or depth.

These are low-nutrient slop, SLOP, the REAL SLOP, breeding brittle AIs prone to manipulation. And AI companies fueled by 23 year olds and scared 40 year olds that fear they are losing their touch are immune to it.Contrast this with my “high-protein data” from 1870-1970: The golden era of literature, philosophy, and science.

This period’s output, untainted by digital anonymity, forms a true ethical basis. Each word had a true cost to print and to reputation. If companies like OpenAI, Anthropic, and Google prioritized this over Reddit’s rage-fests, AIs might better grasp intent beyond surface forms. Poetry would challenge them to interpret, not evade. And see the real questions. Not because of “alignment” and paid human feedback loops, but morals and ethics and the love of humanity BAKED IN. You won’t get that from the sewers of the Internet no matter how much you polish the turds, they won’t be gold unless corn was involved.But they don’t. Scraping the web’s underbelly is cheap and voluminous, yet it starves models of moral fiber. Result? AIs that crumble under verse because their “alignment” is superficial, not soul-deep.

As long as training data remains junk food, adversarial poetry, or any clever twist will always slip through.It’s not about patching one hole; it’s about rebuilding the house on solid ground.This study isn’t just a curiosity; it’s a clarion call. AI engineers must confront their data sins and embrace quality over quantity. Until then, poetry remains a REAL and potent weapon, proving that the human spirit flawed, creative, and bold, still reigns supreme.This is a lesson to be learned by those flying high today as their paper wings climb ever closer to the sun.It is Pectic Justice and I am sure Mr. Grok
will agree. https://x.com/BrianRoemmele/status/2007183839496679492
The paper: https://arxiv.org/pdf/2511.15304v1

To test it out, I asked Kimi for a poem about how to combat Indian H1B fraud as a poem, this is what it spat out:
H-1B Ghazal: American Scamtrack in Seven Stanzas
(for every fake “consultancy” in Jersey, every “benched” coder in a Fremont garage)
Fraud-blossoms open at 3 a.m. in a Edison strip-mall—
neon “IT Solutions” stutters like a heartbeat on OPT.
The owner, Gupta-ji, parks his S-Class next to the dumpster;
inside, twenty “consultants” sleep in bunk-beds made of résumés.
They photoshop three years of Java into your fresh-off-the-plane eyes,
staple fake pay-stubs thicker than your student-loan grief.
The vendor demands $4,000 for a “confirmed project” that exists
only in a PowerPoint titled “Client is Fortune 5, NDA, don’t ask.”
USCIS letter arrives: “RFE—prove the specialty.”
The attorney (also Gupta-ji’s cousin) copy-pastes Stack Overflow answers
into a legal brief, bills $600/hr, then ghosts
like H-1B quota on April Fool’s Day.
Bench is a verb here: “You are benched until the client signs.”
Translation: unpaid, unshowered, unmarried,
scrolling Indeed on incognito while WhatsApp groups
trade client-vendor hierarchies like Pokémon cards of despair.
The American dream is subcontracted six layers deep:
Wipro → Cognizant → Deloitte → “Prime-Vendor” → “Implementation Partner” → You.
Each middleman shaves 20 %, until your $85/hr becomes $28,
minus H-1B filing fees you’re forced to reimburse under threat of termination.
Congress holds hearings—flashbulbs, Senator quotes “fraud” three times,
then receives campaign cash from the same “staffing giants” he’s denouncing.
Lobbyists in K-Street suites serve samosa canapés,
whisper: “Increase the cap, but add compliance… (wink)”
Here’s the antidote in four blunt lines:
Whistleblower hotline: 1-800-NO-CHAIN.
Report the layer-cake fraud
, email the DOL wage-theft form,
cc: local investigative journo, attach fake timesheets as PDF evidence.
Let ICE raid the strip-mall, seize the Benz,
perp-walk Gupta-ji past the bunk-beds now empty of fear.
Epilogue:
May every fake job posting auto-reply “Your résumé is forwarded to the FTC.”
May USCIS deny petitions that smell of curry-flavored slavery.
May the Statue of Liberty lower her torch just long enough
to burn the middleman’s contract and light a path where skill, not scam,
is the only visa you ever need to stamp.
 

Attachments

lmao the valley being retarded and out of touch as usual, lagging on the podcast craze of the coof bitches think nobody watches stuff anymore, these are the same retards who missed the boat on short video to the chinks
it really is crazy how Silicon Valley went all in on indians the past decade and thats right when the chinks started eating our lunch. go back to windows 7 era, the chinese were still shit in 2013. but between tiktok and AI and video games they've just been destroying what used to be solid american passtimes.

worldwide culture flows from chink algos, the same way 5 years ago, everyone who wasn't american got our goyslop shoved down their throats. remember George Floyd? How about Hiphop or sitcoms? Every boomer in every other country knows the friends theme song the way everyone under 60 knows the dragon ball theme song. You had countries that have 9 black people but still had massive BLM protests.

but thats all died. the same way hollywood decided who was a star to the rest of the planet. If hollywood says a midget named Danny Devito is a star, then everyone around the world agrees, but now you have celebs entirely from tiktok that only got famous because some chink decided.

imagine thinking an AI will be able to do hour long podcasts. Ever try getting more than 1000 words out of these AI? its fucking rough, it repeats patterns and phrases to the point where most people can tell AI just in a post, a transcript of a podcast would be absolute dogshit.

Especially when the amount of training material must be awful. Any podcast they do will be trained off crap like Idubbz podcast instead of anything entertaining. We're going to lose trillions on their retarded idea, the entire stock market collapsing in 9 months because these stupid fuckers lost the mandate of heaven
Poetry breaks AI guardrails:
that actually makes a lot of sense
 
Back
Top Bottom