> If you're willing to paypig that much
man, poe is free. you get 3000 points a day, the chinese models cost 300 points a message, and i doubt the user wanted to jump all-in on AI, they just wanted a taste. 10 messages a day is enough if you don't really know or care all that much. the person was asking for a sample and you're going "well, only an idiot wouldn't make their own ice cream."
> Looking at this guy's profile, is it any wonder people respect the Chinese more?
exactly. in a competition between the Indian and the Chinese labs, go with the Chinese. but this is like choosing between Arby's and Subway. also rather telling that it took about 48 hours before OpenAI could make something that beats what they came out with a month ago, and they only did so because the Chinese came out with something. meaning they clearly had this shit in the can and were willing to sit on it until the competition caught up. o3 is probably all ready and they're working on o4, but releasing it now would hurt profits.
> man, poe is free. you get 3000 points a day, the chinese models cost 300 points a message...
OAI takes a long time with their models because they neuter them. officially it's because of "liability", but we're all adults here... we know what's really going on. now that Teflon Don is in charge, idk if things are gonna change, but I think he was very lenient on China. though he probably knows what's really going on. Dario Amodei laid it out clearly in his latest blogpost... Deepseek ain't got much of a future, it seems.
also Anthropic is obnoxious about their limits when using the app, so you pretty much need your own API access.
> Dario Amodei laid it out clearly in his latest blogpost...
you can like the guy all you want, but Anthropic has spent the last year or so lobotomizing their models to the point of being fucking useless. i used to be a pro user and even i gave up because of how quickly the messages would get used up, and how often they'd be wasted on the fucking machine going "i'm sorry, i don't feel comfortable doing that, dave."
> Dario Amodei laid it out clearly in his latest blogpost... Deepseek ain't got much of a future, it seems.
I refuse to take anyone seriously that bases their argument on nonsense moralizing about "the existential threat of an evil authoritarian expansionist state" coming from someone balls deep in American intelligence services.
> Looking at this guy's profile
You mean Bindu Reddy's? She's a real woman. I remember her from the Candid app debacle.
> I've hit the limits only a couple of times. You can switch from Claude 3.5 to less powerful models when your quota runs out, so I don't see the problem.
i've never figured out how to do this
back when i was paying i knew that part, but i meant as a free user
> Dumping Nvidia stock because of Deepseek is kind of like getting rid of your freezer because they came out with a tastier type of ice cream.
didn't deepseek just say they didn't use CUDA? that seems like a big problem, given that that's why nvidia has had the whole AI market by the balls.
> but actually seems to be a malicious attack on the API services, according to deepseek themselves. Kingdom of Silicon Valley mad much?
Altman is diddling deepseek like he diddled his sister.
> Kluster removed their sign-up freebies, fucking hilarious, they got swarmed for free R1.
I never hear about this stuff until it's gone...
> Dario Amodei laid it out clearly in his latest blogpost... Deepseek ain't got much of a future, it seems.
You're taking the word of a guy whose entire job is now on the line because of this; of course he's gonna say "it's no biggie, please keep paying for my stuff."
> didn't deepseek just say they didn't use CUDA? that seems like a big problem, given that that's why nvidia has had the whole AI market by the balls.
I'll look for the source, but I saw a video of a guy trying to run Deepseek on a cluster full of iMacs, and it went badly because of memory (the guy had 64gb of RAM and it still failed)... another guy commented that he tried to run it on his 1060 and it FAILED.
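For scale, some back-of-envelope math (the ~671B parameter count is from the public R1 release; the rest is my own arithmetic): the full model's weights alone dwarf 64 GB of system RAM at any practical precision, let alone a 6 GB GTX 1060. A minimal host-side sketch:

```cuda
// Back-of-envelope sketch: approximate weight memory for the full
// DeepSeek-R1 checkpoint (~671B total parameters). Host-only code;
// compiles with nvcc or any ordinary C++ compiler.
#include <cstdio>

int main() {
    const double params = 671e9;                      // ~671B parameters
    const double bytes_per_param[] = {2.0, 1.0, 0.5}; // FP16, FP8, 4-bit
    const char* label[] = {"FP16 ", "FP8  ", "4-bit"};
    for (int i = 0; i < 3; ++i)
        std::printf("%s weights: ~%.0f GB\n",
                    label[i], params * bytes_per_param[i] / 1e9);
    return 0; // prints ~1342, ~671, ~336 GB -- all far beyond 64 GB of RAM
}
```

Which is why both attempts failed: anyone "running R1 locally" on a small machine is almost certainly running one of the much smaller distilled models instead.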
> You're taking the word of a guy whose entire job is now on the line because of this; of course he's gonna say "it's no biggie, please keep paying for my stuff."
I agree that Claude is a piece of shit, but his blog post was so well written that I have to agree with him. Time will tell though, don't take my word for it!
When they said it was "free and open source", they really meant FREE and OPEN source, alright.
> didn't deepseek just say they didn't use CUDA? that seems like a big problem, given that that's why nvidia has had the whole AI market by the balls.
For training they wrote some optimizations in PTX, which is Nvidia's intermediate representation between CUDA code and its compiled form. The goal was to alleviate bandwidth bottlenecks on the H800s and make training much more efficient (H800s are cards for the Chinese market with a halved chip-to-chip data transfer rate compared to H100s).
I'm no expert, but by not using CUDA, does that mean deepseek could run just as well on other hardware (AMD, Intel) with the same specs?
Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of computing; that’s because DeepSeek actually programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is actually impossible to do in CUDA. DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense if you are using H800s.
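To make "drop down to PTX" concrete, here is a toy sketch (my own illustration, not DeepSeek's actual code) of the general idea: inline PTX inside CUDA C++ can read the %smid special register, which plain CUDA C++ does not expose, and a kernel can branch on it so a fixed set of SMs handles communication-style work while the rest compute, the same idea as dedicating 20 of 132 SMs. In practice this is paired with a persistent-kernel launch of one resident block per SM, since the hardware scheduler, not the programmer, decides where blocks land.

```cuda
// Toy illustration only -- NOT DeepSeek's code. Demonstrates mixing
// inline PTX into CUDA C++ to partition work by SM id.
#include <cuda_runtime.h>

__device__ unsigned sm_id() {
    unsigned id;
    // Inline PTX: %smid (the SM this thread is running on) is a
    // special register with no plain-CUDA equivalent.
    asm volatile("mov.u32 %0, %%smid;" : "=r"(id));
    return id;
}

__global__ void partitioned_kernel(float* data, int n) {
    if (sm_id() < 20) {
        // "Communication" SMs: a real system would drive cross-chip
        // transfers here; this sketch just idles them.
        return;
    }
    // "Compute" SMs: ordinary math work.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    float* d = nullptr;
    cudaMalloc((void**)&d, 1024 * sizeof(float));
    // One block per SM, persistent-kernel style; 132 is an H800's SM count.
    partitioned_kernel<<<132, 256>>>(d, 1024);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```

The takeaway isn't this exact kernel; it's that PTX is even more Nvidia-specific than CUDA, which is why (per the excerpt above) the optimization only makes sense on H800s and doesn't move DeepSeek off Nvidia hardware at all.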
They already disclose that your prompts are stored for training when you use it. At best it's just like TikTok stealing your data for the CCP; at worst, you're giving it all of your credentials if you're that naive with your data.
At this point, it's safe to say that Chinese crap is either phoning home, or extremely unsafe to use.
> I never hear about this stuff until it's gone...
You can still trialscum, but for $5 instead of $100 (what the FUCK were they thinking, giving a hundred bucks as credit), and it should last you a few days; it's very cheap.