ChatGPT - If Stack Overflow and Reddit had a child


Per an OpenAI employee, o3-mini is coming out tomorrow.

Looking at this guy's profile, is it any wonder people respect the Chinese more?
 
If you're willing to paypig that much
Man, Poe is free. You get 3,000 points a day, and the message cost for the Chinese models is 300 points. I doubt the user wanted to go all in on AI; they just wanted a taste. Ten messages a day is enough if you don't really know or care all that much. The person was asking for a sample and you're going "well, only an idiot wouldn't make their own ice cream."

Also, Anthropic is obnoxious about their limits in the app, so you pretty much need your own API key.
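A quick sanity check on the Poe math above (the 3,000-point daily allowance and 300-point message cost are the figures quoted in the post, not official pricing):

```python
# Free daily message budget on Poe, using the figures quoted above:
# 3,000 free points per day, ~300 points per message for the big models.
daily_points = 3000
points_per_message = 300

messages_per_day = daily_points // points_per_message
print(messages_per_day)  # 10 -- enough for a casual taste of a model
```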
Looking at this guy's profile, is it any wonder people respect the Chinese more?
Exactly. In a competition between Indians and the Chinese, go with the Chinese, but this is like choosing between Arby's and Subway. It's also rather telling that it took about 48 hours for OpenAI to make something that beats what they released a month ago, and they only did so because the Chinese came out with something. Meaning they clearly had this in the can and were willing to sit on it until the competition caught up. o3 is probably ready and they're working on o4, but it would hurt profits to release it yet.
 
Man, Poe is free. You get 3,000 points a day, and the message cost for the Chinese models is 300 points. I doubt the user wanted to go all in on AI; they just wanted a taste. Ten messages a day is enough if you don't really know or care all that much. The person was asking for a sample and you're going "well, only an idiot wouldn't make their own ice cream."

Also, Anthropic is obnoxious about their limits in the app, so you pretty much need your own API key.

Exactly. In a competition between Indians and the Chinese, go with the Chinese, but this is like choosing between Arby's and Subway. It's also rather telling that it took about 48 hours for OpenAI to make something that beats what they released a month ago, and they only did so because the Chinese came out with something. Meaning they clearly had this in the can and were willing to sit on it until the competition caught up. o3 is probably ready and they're working on o4, but it would hurt profits to release it yet.
OAI takes a long time with their models because they neuter them. Officially it's because of 'liability', but we are all adults here... we know what's really going on. Now that Teflon Don is in charge, I don't know if things are going to change, but I think he has been very lenient toward China. Though he probably knows what's really going on. Dario Amodei laid it out clearly in his latest blog post... DeepSeek ain't got much of a future, it seems.
 
Dario Amodei
You can like the guy all you want; Anthropic over the last year or so has lobotomized their models to the point of being fucking useless. I used to be a Pro user, and even I gave up because of how quickly the messages would get used up, and how often they'd be wasted by the machine going "I'm sorry, I don't feel comfortable doing that, Dave."
 
Dario Amodei laid it out clearly in his latest blog post... DeepSeek ain't got much of a future, it seems.
I refuse to take anyone seriously who bases their argument on nonsense moralizing about "the existential threat of an evil authoritarian expansionist state," especially coming from someone balls deep in American intelligence services.
While it is true that future paradigm shifts will require more processing power, the entire article is just begging for more sanctions, as if trade restrictions on the largest exporter, and soon the largest market, on the planet are going to work.
 
and how often they'd be wasted by the machine going "I'm sorry, I don't feel comfortable doing that, Dave."

I've used Claude for a long time, and I haven't had any issues, probably because I don't ask it about politics. I'm not sure why everyone is so insistent on freeze peach in AI models when it leads to porn and a slew of other degeneracy.

also Anthropic is obnoxious about their limits using the app, so you pretty much need your own API,

I've hit the limits only a couple of times. You can switch from Claude 3.5 to less powerful models when your quota runs out, so I don't see the problem.
 
Dumping Nvidia stock because of Deepseek is kind of like getting rid of your freezer because they came out with a tastier type of ice cream.
Didn't DeepSeek just say they didn't use CUDA? That seems like a big problem, given that that's why Nvidia had the whole AI market by the balls.

I'm no expert, but if they're not using CUDA, does that mean DeepSeek could run just as well on other hardware (AMD, Intel) with the same specs?
It actually seems to be a malicious attack on their API services, according to DeepSeek themselves. Kingdom of Silicon Valley mad much?
Altman is diddling deepseek like he diddled his sister.
Kluster removed their sign-up freebies. Fucking hilarious, they got swarmed for free R1.
I never hear about this stuff until it's gone...
Dario Amodei laid it out clearly in his latest blog post... DeepSeek ain't got much of a future, it seems.
You're taking the word of a guy whose entire job is now on the line because of this. Of course he's gonna say "it's no biggie, please keep paying for my stuff."
 
Didn't DeepSeek just say they didn't use CUDA? That seems like a big problem, given that that's why Nvidia had the whole AI market by the balls.
I will look for the source, but I saw a video of a guy trying to run DeepSeek on a cluster full of iMacs, and it went badly because of memory (the guy had 64 GB of RAM and it still failed)... Another guy commented that he tried to run it on his 1060 and it FAILED.
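For context on why 64 GB wasn't enough: a rough back-of-the-envelope for the memory a model's weights alone require. The parameter counts below are approximate public figures for R1 and its distilled variants; the bytes-per-parameter widths are common quantization levels, not anything DeepSeek specifies.

```python
# Back-of-the-envelope RAM needed just to hold a model's weights
# (ignoring activations and KV cache, which need more on top).
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    # billions of parameters x bytes per parameter = gigabytes
    return params_billions * bytes_per_param

# Full R1 is roughly 671B parameters; even aggressive 4-bit quantization
# (0.5 bytes/param) needs ~335 GB, so a 64 GB machine can't hold it.
print(weight_memory_gb(671, 0.5))  # 335.5

# A distilled 7B variant at 4-bit fits easily in consumer RAM:
print(weight_memory_gb(7, 0.5))    # 3.5
```

So the iMac cluster and the 1060 weren't failing because of CUDA; the full model simply doesn't fit in that much memory.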

You're taking the word of a guy whose entire job is now on the line because of this. Of course he's gonna say "it's no biggie, please keep paying for my stuff."
I agree that Claude is a piece of shit, but his blog post was well argued enough that I have to side with him. Time will tell though; don't take my word for it!
 
Didn't DeepSeek just say they didn't use CUDA? That seems like a big problem, given that that's why Nvidia had the whole AI market by the balls.

I'm no expert, but if they're not using CUDA, does that mean DeepSeek could run just as well on other hardware (AMD, Intel) with the same specs?
For training they wrote some optimizations in PTX, which is Nvidia's intermediate representation between CUDA code and its compiled form. The goal was to alleviate bandwidth bottlenecks on the H800s and make training much more efficient (H800s are cards for the Chinese market with a halved chip-to-chip data transfer rate compared to H100s).
Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of computing; that’s because DeepSeek actually programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is actually impossible to do in CUDA. DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense if you are using H800s.
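To put the quoted figure in perspective, here's how much of each H800's raw compute DeepSeek reportedly traded away for communication scheduling (the 20-of-132 number comes straight from the quote above):

```python
# Per the quote: 20 of the 132 processing units (SMs) on each H800 were
# programmed, via PTX, to manage cross-chip communication instead of math.
total_sms = 132
comm_sms = 20

comm_fraction = comm_sms / total_sms
print(f"{comm_fraction:.1%} of each GPU's compute spent on communication")
```

That works out to about 15% of each chip sacrificed to paper over the halved interconnect bandwidth, which is only a sensible trade on bandwidth-starved H800s; it's also why none of this implies they abandoned Nvidia hardware, since PTX is Nvidia-only.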
 
I never hear about this stuff until its gone...
You can still trialscum, but for $5 instead of $100 (what the FUCK were they thinking, giving a hundred bucks as credit), and it should last you a few days; it's very cheap.
Honestly, good for them for accepting their monumental fuckup yet still honouring it. Rate limited of course, but they didn't revoke anyone.
 