ChatGPT - If Stack Overflow and Reddit had a child

the problem is you're getting into the "i love records because of how awful the quality is" levels of autism
Didn't say I liked it, just that the perfection gives away that it's fake. You're talking about autism and you can't even parse a fucking sentence.

And even when they try to add stuff like wrinkles and skin blemishes it looks painted on; only normies fall for it.
 
Nearly two years into the thread, I feel like progress is stalling in anything involving extrapolation of data. Anything that isn't homework-assignment-level work is simply not feasible unless you do most of the work yourself.
 
  • Agree
Reactions: AFAB
Nearly two years into the thread, I feel like progress is stalling in anything involving extrapolation of data. Anything that isn't homework-assignment-level work is simply not feasible unless you do most of the work yourself.
There's always a moral panic whenever "AI" does something it couldn't do before, even if that something is just being seemingly intelligent enough that, instead of being detectable instantly, it now takes a couple of minutes to detect. Some people have always been blind to this, and even in the days of dumb babblebots like ELIZA would react to it like it was real.

I don't think we're anywhere near anything that could pass the Turing test, especially considering the proviso that the tester would be familiar with and skilled at actually detecting fakeness. This is even more the case since they're gimping and dumbing down pretty much every AI in a desperate attempt to avoid another "Tay."
 
Deepseek V3 chat was released and it's a very powerful model, on the level of, and in places outperforming, the current SOTA models from Anthropic, Google, and OpenAI. It is open-weight and you can download it and at least theoretically run it at home, although as a ~600B-parameter MoE model (256 experts) it's not very practical to actually run on most hardware. (It's still quite a bit easier to run than a dense 600B; you could theoretically run it off CPU with 400GB+ of RAM.)

The most interesting thing about Deepseek V3 is that it is a very, very efficient model using some very novel techniques (like being 8-bit by design), clearly designed under the constraints of a GPU-starved China. The internals of the big three American companies and their SOTA models are not known, but I personally think it would not be a stretch to assume that V3 is orders of magnitude more compute-efficient while being just as good or better. It was definitely orders of magnitude cheaper to train. As a result, running it via the Deepseek API is very cheap; it costs about 10% of what 4o costs. (These are the classic Deepseek prices; they have already announced they will raise prices in February, but it will still be much cheaper than any other API model. Even with heavy use you would not pay much.)

It's also uncensored and has no specific noticeable bias towards safety. The Chinese are once again creating facts on the ground in this sphere.
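
As a rough sanity check on the "run it at home" claim, here is a back-of-the-envelope memory estimate (a sketch using the ~600B total-parameter figure from the post; the bit widths and the small overhead factor are assumptions):

```python
# Back-of-the-envelope weight-memory estimate for a large MoE model.
# The ~600B total-parameter count is taken from the post above; the bit widths
# and the ~5% overhead allowance are illustrative assumptions.
TOTAL_PARAMS = 600e9

def weight_gib(params: float, bits_per_param: float, overhead: float = 1.05) -> float:
    """Approximate RAM needed just to hold the weights, in GiB."""
    return params * (bits_per_param / 8) * overhead / 2**30

for bits in (16, 8, 4):
    print(f"{bits:2d}-bit: ~{weight_gib(TOTAL_PARAMS, bits):,.0f} GiB")
# 16-bit: ~1,173 GiB   8-bit: ~587 GiB   4-bit: ~293 GiB
```

Which is roughly where the 400GB+ (quantized) and 600GB+ figures come from.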
 
Meanwhile, at the OAI salt mines:
oai.png

671B actually. It is a sparse (MoE) model, meaning that only 37B of those parameters are active for any given token. So yes, on a fast server board with an AVX-512 CPU and 400GB+ of fast quad-channel DDR5 RAM (that assumes quantization; better would be 600GB+) you can probably run this model at reasonable speeds, I'd guess ~10 tokens/s? Faster than a normal person can read with streaming, at any rate. Some people theorize even fast NVMe drives might be enough; I don't believe this, because we're talking (very) random reads here, not sequential ones. I don't know what such hardware currently costs, but I know it's likely orders of magnitude cheaper than the GPU+VRAM nickel-and-diming.
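
The ~10 tokens/s guess can be sanity-checked: single-user CPU inference is essentially memory-bandwidth-bound, and for a MoE only the active parameters (37B per the figure above) have to be read per token. A minimal sketch; the DDR5 bandwidth numbers are illustrative assumptions, not measurements:

```python
# Rough tokens/s estimate for memory-bound CPU inference on a sparse (MoE) model.
# 37B active parameters per token is from the post; the bandwidth figures below
# are illustrative assumptions.
ACTIVE_PARAMS = 37e9

def tokens_per_sec(bandwidth_gb_s: float, bits_per_param: float) -> float:
    """Assume one full read of the active weights per generated token."""
    bytes_per_token = ACTIVE_PARAMS * bits_per_param / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

for label, bw in [("dual-channel DDR5 desktop (~90 GB/s)", 90),
                  ("multi-channel DDR5 server (~300 GB/s)", 300)]:
    print(f"{label}: ~{tokens_per_sec(bw, 8):.0f} tok/s at 8-bit, "
          f"~{tokens_per_sec(bw, 4):.0f} tok/s at 4-bit")
# A ~300 GB/s server board at 8-bit lands right around the ~10 tok/s ballpark.
```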

It's really hard to overstate what an architectural paradigm shift deepseek is. It has shown that pre-deepseek models are incredibly inefficient, both in training and in operation. Expect prices of non-legacy AI to fall quite dramatically next year. Also expect quite a few new players, I would say.
 
This shit is unbelievably ass. Making and showing off something like this gotta be part of a humiliation ritual
It's fine as a little proof of concept/experiment, though I don't know why Ahriman wanted to show this since this really isn't all that spectacular besides being a decent enough editor fooling with AI video.
 
Nearly two years into the thread, I feel like progress is stalling in anything involving extrapolation of data. Anything that isn't homework-assignment-level work is simply not feasible unless you do most of the work yourself.
The software has been lobotomized to the point where all it does is just regurgitate back whatever you've just said to it. It ignores stated preferences and is just biased towards leftist political agendas above all else.

Ask it to stop giving you lists. It will do that for one response and then go right back to giving you 10 bullet point lists you never asked for.
 
I always wonder what you people use ChatGPT for that politics comes up regularly. Are you that desperate to have somebody who agrees with you?
I don't use ChatGPT anymore. When I did, I used it mostly for work, but I stopped because it's definitely getting dumber. I don't know why, but my guess, seeing other models getting ahead, is that OpenAI is now like Apple in the nineties: losing market share and not knowing what to do to stop it, so they keep fucking it up.
 
  • Like
Reactions: The Spoils of War
I don't use ChatGPT anymore. When I did, I used it mostly for work, but I stopped because it's definitely getting dumber. I don't know why, but my guess, seeing other models getting ahead, is that OpenAI is now like Apple in the nineties: losing market share and not knowing what to do to stop it, so they keep fucking it up.
I wonder if it's that the Internet, the main source of most ChatGPT content anyway, is now getting excessively flooded with AI slop to the point it's degrading the quality of its own output by, basically, constantly feeding on a diet of its own shit.
 
  • Like
Reactions: Taxidermied Rat
I wonder if it's that the Internet, the main source of most ChatGPT content anyway, is now getting excessively flooded with AI slop to the point it's degrading the quality of its own output by, basically, constantly feeding on a diet of its own shit.
It's called "Model Collapse", although I would expect that, given OpenAI is a for-profit organization, they would use that income to vet their data pool.
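
For anyone curious what the effect looks like in miniature, here's a toy simulation (the Gaussian-refitting setup is purely illustrative, not anything OpenAI actually does): each generation is "trained" only on samples produced by the previous generation's model, and the fitted distribution steadily loses its tails.

```python
# Toy illustration of model collapse: every generation fits a model (here just
# a Gaussian's mean and std) to samples generated by the previous generation,
# never to real data. With small per-generation sample sizes the fitted std
# tends to shrink toward zero, i.e. the model loses diversity.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0        # generation 0: the "real" data distribution
samples_per_gen = 20        # deliberately small, to make the collapse visible

for gen in range(1, 51):
    synthetic = rng.normal(mu, sigma, samples_per_gen)  # output of current model
    mu, sigma = synthetic.mean(), synthetic.std()       # next model trains on it
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```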
 
  • Informative
Reactions: AnOminous
I wish we had LLMs that were actually good at writing. NovelAI has dropped the ball and is focused on image generation, and if you go to Hugging Face, you'll find chatbots that produce the same slop as ChatGPT or Claude. How hard is it to train a model with good literature?
I wonder if it's that the Internet, the main source of most ChatGPT content anyway, is now getting excessively flooded with AI slop to the point it's degrading the quality of its own output by, basically, constantly feeding on a diet of its own shit.
Training models with synthetic data is actually an established practice now. The Internet is really, really big, but apparently not big enough.
 
I wonder if it's that the Internet, the main source of most ChatGPT content anyway, is now getting excessively flooded with AI slop to the point it's degrading the quality of its own output by, basically, constantly feeding on a diet of its own shit.
Other AIs don't give me those bad results, so my guess? It's something else.
I wish we had LLMs that were actually good at writing. NovelAI has dropped the ball and is focused on image generation, and if you go to Hugging Face, you'll find chatbots that produce the same slop as ChatGPT or Claude. How hard is it to train a model with good literature?
I need something to fill the gaps. I hate writing filler, but without it, it's a short story.
 