Claude AI discussion

I have the original one before it got nuked. Not sure how I can share it safely and anonymously. It's about 12MB, a zip file.
catbox with a zip? Or https://ipfs.tech/.
Better idea: since we have AI these days, upload the files encoded as a video to one of the video streaming sites that don't need ID.
Maybe also https://wormhole.app/ if you upload BLAKE3/BLAKE2sp + SHA-256 + SHA-3 sums as well.
I've seen everything from 331KB to ~20MB for file sizes. Here are some of my BLAKE2sp hashes below:
Code:
BLAKE2sp: b616ed229879f7fbe392a4654138a1277607287756228299075be548aa8e3e37
BLAKE2sp: 50ffdc7f37392009cb986acddd28d8f9187ba730f5e27a51e682319e0d06cef9
BLAKE2sp: 44951c42e623064e53f8d9e7ef24714535500392e4466b794689bca35aae9e97
BLAKE2sp: ce6e6369f8d13a0f876f1f1e7ec24894fb879f3619c2d5ba6e51020b531666f7
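If anyone mirrors it, verifying a download against posted sums is straightforward. A minimal sketch using only the standard library; note that Python's hashlib covers SHA-256, SHA3-256, and BLAKE2b/BLAKE2s but not the parallel BLAKE2sp variant above, so for those you'd need an external tool (e.g. 7-Zip's hash command) rather than this script:

```python
import hashlib


def file_digests(path, chunk_size=1 << 20):
    """Compute SHA-256 and SHA3-256 of a file in streaming fashion."""
    sha256 = hashlib.sha256()
    sha3 = hashlib.sha3_256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha256.update(chunk)
            sha3.update(chunk)
    return {"sha256": sha256.hexdigest(), "sha3_256": sha3.hexdigest()}
```

Compare the output against the posted sums; any mismatch means a tampered or truncated archive.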

WSJ says Anthropic has already sent 8,000 DMCA notices.

Any repos still left? I've found a Python rewrite, but that's it. Edit: looks like that alex000 repo from above is still up, thank you!

https://github.com/instructkr/claw-code
Some autisimfag did a clean-room reverse-engineered rewrite in Rust lol, and I have several repo copies locally... all different sizes, so tread carefully.
 
As one Twitter reply put it: “accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”
How the fuck did it do that? Normally source maps would be covered by an ignore pattern. Did Claude deliberately un-ignore them?
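Probably nothing deliberate: `.gitignore` only affects git, not `npm publish`. npm packs whatever the `files` whitelist in package.json matches (it falls back to `.npmignore`, and to `.gitignore` only when neither exists), so a build that emits `.map` files into a whitelisted `dist/` ships them automatically. A hypothetical package.json illustrating the failure mode (`some-cli` is a made-up name, not the actual package):

```json
{
  "name": "some-cli",
  "files": [
    "dist"
  ]
}
```

Everything under `dist/`, including any source maps the bundler emits there, gets published. `npm pack --dry-run` lists the exact file set before anything goes out.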
 
I have the original one before it got nuked. Not sure how I can share it safely and anonymously. It's about 12MB, a zip file.
I got the archive.org mirrored version and mine is 9.9MB. Most of the mirrors I found are also 9.9MB. Is there a version with more files floating around somewhere? The copy I have looks pretty complete already.

 
I got the archive.org mirrored version and mine is 9.9MB. Most of the mirrors I found are also 9.9MB. Is there a version with more files floating around somewhere? The copy I have looks pretty complete already.

Can you guys DM me a zip file link when I wake up tomorrow haha I want to see how bloated this shitty jeet code is
 
Brilliant analogy. I can only hope the other "car" makers are taking notes (and laughing their asses off) on what not to do.

Pardon my ignorance. How much of this leak is useful for guiding other LLMs? Does any of it reveal what Claude does right, or is it all just explanations for its retardation that could help Claude users sidestep its problems?

If it contains trade secrets that help Claude perform in ways that other AI could benefit from (or benefit from avoiding), then doesn't this mean competitor AI models (or interfaces or whatever) will improve from this information, even if Anthropic goes hard on people distributing the actual source code?
 
Pardon my ignorance. How much of this leak is useful for guiding other LLMs? Does any of it reveal what Claude does right, or is it all just explanations for its retardation that could help Claude users sidestep its problems?

If it contains trade secrets that help Claude perform in ways that other AI could benefit from (or benefit from avoiding), then doesn't this mean competitor AI models (or interfaces or whatever) will improve from this information, even if Anthropic goes hard on people distributing the actual source code?
Just because the engine, ECU, and tranny weren't in there doesn't mean you can't find shit to use. If you know all the knobs/switches and the PCU (power control unit), knowing what's getting sent and what's expected to be received means a whole lot.
1) Anthropic had anti-distillation (training) features. Expect those to get bypassed.
2) The three "read and take as hints" agents that process a request before it actually goes to code generation are an interesting way to reduce errors. Expect that to be copied.
3) The dream feature might be widely copied by other companies that hadn't already thought of the idea.
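For anyone who hasn't dug into the files: the "read and take as hints" pattern described in point 2 can be sketched roughly like this. Everything here is invented for illustration (the reader names, the hint strings, the wiring); it's just the general shape of running cheap analysis passes whose outputs the generator treats as advisory context, not instructions:

```python
def spec_reader(request: str) -> str:
    # Cheap pass: flag whether requirements are stated explicitly.
    return "hint: restate requirements" if "must" in request else "hint: requirements implicit"


def risk_reader(request: str) -> str:
    # Cheap pass: flag risky areas of the change.
    return "hint: touches auth, be careful" if "auth" in request else "hint: low risk"


def style_reader(request: str) -> str:
    # Cheap pass: always remind the generator about conventions.
    return "hint: match existing code style"


def build_prompt(request: str) -> str:
    """Append each reader's output as an advisory hint before generation."""
    hints = [reader(request) for reader in (spec_reader, risk_reader, style_reader)]
    return request + "\n" + "\n".join(hints)
```

The point is that errors get caught or flagged before the expensive generation step, instead of during review afterwards.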
 
I still love how, with AI, we have basically automated making shit up, which is going to cause havoc in so many industries where so many people in management only know the bare minimum of what they're doing. Any crackdown on AI will also crack down on people basically lying about their credentials, and on idiot nepo babies being in charge of everything, because they're being automated out of their jobs.
 
Pardon my ignorance. How much of this leak is useful for guiding other LLMs? Does any of it reveal what Claude does right, or is it all just explanations for its retardation that could help Claude users sidestep its problems?

If it contains trade secrets that help Claude perform in ways that other AI could benefit from (or benefit from avoiding), then doesn't this mean competitor AI models (or interfaces or whatever) will improve from this information, even if Anthropic goes hard on people distributing the actual source code?
From what I understand, no other model consolidates memory like Auto Dream. It's probably the only tool that's unique and actually useful as of now. Undercover, Bash, the always-on agent, Ultraplan, and the other tools either already have an equivalent in another model or aren't useful. I'm assuming some parts might be useful for other teams to look at, but I don't know.

My best guess is that other models will improve on this design and switch away from time-decay pruning. Doing so should free up resources marginally and improve memory quality greatly, so there's really no reason not to. Be prepared for the worst names like "Deep Sleep" though.

This isn't like what happened when Llama was leaked. You don't have the weights for creating your own model, just how to structure an agent. Not quite as valuable, but it still saves competitors some R&D time and money.
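The difference between time-decay pruning and consolidation is easy to show in miniature. This is purely illustrative, nothing here is from the leak or from Auto Dream's actual internals: the contrast is that an age cutoff throws away old-but-valuable memories, while an importance-weighted scheme keeps them:

```python
import time
from dataclasses import dataclass


@dataclass
class Memory:
    text: str
    created: float  # unix timestamp
    uses: int = 0   # how often this memory was recalled


def prune_by_age(memories, max_age_s):
    """Time-decay pruning: drop anything past a cutoff, regardless of value."""
    now = time.time()
    return [m for m in memories if now - m.created <= max_age_s]


def consolidate(memories, keep):
    """Importance-weighted consolidation: score by recency * usage, keep top-k."""
    now = time.time()

    def score(m):
        recency = 1.0 / (1.0 + (now - m.created) / 3600.0)
        return recency * (1 + m.uses)

    return sorted(memories, key=score, reverse=True)[:keep]
```

Under the age cutoff, a two-hour-old memory that was recalled fifty times dies alongside stale chitchat; under consolidation it outranks a fresh but never-used one. That is the "free resources marginally, improve quality greatly" trade in a nutshell.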

I think the anti-distillation poison pills are overblown. Per Anthropic themselves, they were already ineffective at preventing large distillation attacks two months ago: https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks

It just helps against smaller operations that don't have the resources to send millions of exchanges to hide it, and makes larger operations more efficient. Also not a game changer.
 
From what I understand, no other model consolidates memory like Auto Dream. It's probably the only tool that's unique and actually useful as of now.
It only replicates half of the human condition, namely REM-sleep memory reinforcement, not self-learning (which I guess would be similar to a very specific fine-tune).
Speaking of which:
Sounds like more unsecured fucking botnet to me, yay!
 
Marketing campaign, or actual potential threat?
 
Marketing campaign, or actual potential threat?
[attached screenshot]

This might be it.
 
I still love how, with AI, we have basically automated making shit up, which is going to cause havoc in so many industries where so many people in management only know the bare minimum of what they're doing. Any crackdown on AI will also crack down on people basically lying about their credentials, and on idiot nepo babies being in charge of everything, because they're being automated out of their jobs.
I'm a hardcore accelerationist when it comes to AI. If I go down, everyone else is going down with me. I was hesitant to use AI until I realized everyone around me used it, and in very sketchy/lazy ways too. If others can't use it responsibly while I have to grind manually to stay safe for less pay, fuck it, I'm going all in. Fire the retards who can't use it optimally and let me take their place with some AI subscriptions.

Not going to lie, it's exciting to see how far it can go before it goes down in flames. I can't wait until some hospital journal system or infrastructure control system ships unchecked AI code that causes a massive outage sooner or later.

On the lighter side, claude design is pretty neat for making a custom website starter page for the browser. So comfy.
 
Claude rate limits on the $20 plan are unusable, and now they actively prohibit you from using a non-retarded harness.
I switched back to OpenAI Codex; easily 2x more value than what Anthropic offers. GPT-5.5 on low reasoning with the Pragmatic personality handles tasks very well.
With a well-articulated prompt I easily get Opus 4.6 performance; didn't toy a lot with Opus 4.7. My tip is to guide it like a junior dev, not a chatbot.
Vibecoding is gay anyway, but it's an extremely useful tool in the hands of someone who's competent with his toolbox.

Don't be loyal to only one company; that's the most retarded shit you can do. The competition is closing the lead Anthropic had on coding agents, so be sure to take advantage of that:
GPT-5.5
Kimi K2.6
Gemini 3.1 Pro
Deepseek V4 Pro

Either Anthropic gets their shit together, or it's gonna be a clawd.rip for them.
 