[Chart: total monthly number of Stack Overflow questions over time]
I'm not sure if this is good or bad. We're replacing indian slop with ai slop trained on indian slop.
Subhuman Centipede said: I'm not sure if this is good or bad. We're replacing indian slop with ai slop trained on indian slop.

Obviously a hallucination. Only boomers like Metokur fall for this shit.
Link to the Grok convo: https://x.com/i/grok/share/tDBXP0GVDGuegwWY8smUO2dp2
Too late to the party, it's been patched up. But it really makes you think.


Wccftech: The “Famous” Claude Code Has Managed to Port NVIDIA’s CUDA Backend to ROCm in Just 30 Minutes, and Folks Are Calling It the End of the CUDA Moat (archive)
Coding Artilect kicks Jensen in the balls (probably not).
The weird looking jeet tweeting about it is the "Corporate VP of AI Software" at AMD:
It's not clear who has called Claude Code "famous".

Found it: https://github.com/LeelaChessZero/lc0/pull/2375
partner's work had an AI workshop and they were told to use it because it makes coding very efficient. the lead of one of the software dev teams even said that they don't write code at all anymore, they just double-check what the AI has spat out

This is a good use for it. If people remember it as a tool like a calculator, instead of outsourcing their thoughts, ideas, creativity and judgment to it, we might get back to innovating, instead of this AI good / AI bad retard war.
On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware, deploying the new GPT-5.3-Codex-Spark coding model on chips from Cerebras. The model delivers code at more than 1,000 tokens (chunks of data) per second, reportedly roughly 15 times faster than its predecessor. For comparison, Anthropic's Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, although it is a larger and more capable model than Spark.

Cerebras makes the "Wafer Scale Engine": a single chip that uses an entire silicon wafer.
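For what it's worth, the speed claims in that excerpt can be sanity-checked with basic arithmetic. A quick sketch using only the figures quoted above (variable names are mine, not from the article):

```python
# Throughput figures as quoted in the article excerpt above.
spark_tps = 1000.0           # GPT-5.3-Codex-Spark on Cerebras, tokens/sec (">1,000")
opus_standard_tps = 68.2     # Claude Opus 4.6 standard mode, tokens/sec
opus_fast_multiplier = 2.5   # fast mode is said to be ~2.5x standard speed

# Implied Claude Opus 4.6 fast-mode throughput.
opus_fast_tps = opus_standard_tps * opus_fast_multiplier

# How much faster Spark is than Opus fast mode, at the quoted numbers.
ratio = spark_tps / opus_fast_tps

# Implied throughput of Spark's predecessor, given the "~15x faster" claim.
predecessor_tps = spark_tps / 15

print(f"Opus fast mode: {opus_fast_tps:.1f} tok/s")
print(f"Spark vs Opus fast mode: ~{ratio:.1f}x")
print(f"Implied Spark predecessor: ~{predecessor_tps:.0f} tok/s")
```

So at the numbers quoted, Spark comes out roughly 5.9x faster than Opus fast mode, and its predecessor would have been running at roughly 67 tokens/sec.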