ChatGPT - If Stack Overflow and Reddit had a child

Cerebras makes the "Wafer Scale Engine", a single chip that uses an entire silicon wafer.
Of course it can do better than Nvidia. Isn't a silicon wafer the size of a pizza? It's like saying you can see better on an IMAX screen vs. your phone. Like, yeah, of course when this is the size of your computer chip it's 20x faster.

wafer.png

Think about how much it will cost to cool that and provide power and everything else: an entire computer chip bigger than entire computers.
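For scale, some rough numbers (all approximate public figures, so treat them as assumptions rather than exact specs):

```python
import math

# Back-of-envelope die-area comparison. Numbers are approximate:
# a standard 300 mm wafer, a Cerebras WSE-class die of roughly
# 46,000 mm^2, and an Nvidia H100 die of roughly 814 mm^2.
wafer_area = math.pi * (300 / 2) ** 2   # full 300 mm wafer, ~70,686 mm^2
wse_area = 46_000                        # largest square cut from that wafer
h100_area = 814                          # one big conventional GPU die

print(f"wafer area:  {wafer_area:,.0f} mm^2")
print(f"WSE vs H100: ~{wse_area / h100_area:.0f}x the silicon area")
```

So the "pizza" framing isn't far off: the wafer-scale part has on the order of 50x the silicon area of a single big GPU die, which makes the raw speed claims less surprising, and the power and cooling concerns more so.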
 
Yeah, there's a ton of missing information that would help determine if it's a better, more cost-effective approach or not. Maybe having a single monolithic chip with gigabytes of SRAM contributes to the token speed they can get vs. connected Nvidia accelerators with higher latency.

For now, it's just interesting that they aren't using Nvidia, and they are using the crazy wafer chips. Google's TPUs are another potential option.
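One reason the on-wafer SRAM could matter for token speed: generating tokens (decode) is usually memory-bandwidth bound, since every new token has to stream the active weights once. A hedged back-of-envelope, with made-up round numbers rather than any vendor's real specs:

```python
# Rough model of decode speed for a memory-bandwidth-bound LLM:
# each generated token streams all active weights once, so
# tokens/sec ~= memory_bandwidth / weight_bytes.
# All numbers below are illustrative assumptions.

def tokens_per_sec(bandwidth_gb_s: float, params_b: float,
                   bytes_per_param: float = 2.0) -> float:
    weight_bytes = params_b * 1e9 * bytes_per_param  # 16-bit weights
    return bandwidth_gb_s * 1e9 / weight_bytes

# A ~70B-parameter model:
hbm = tokens_per_sec(3_350, 70)     # HBM-class bandwidth (~3.35 TB/s)
sram = tokens_per_sec(20_000, 70)   # hypothetical on-wafer SRAM bandwidth

print(f"HBM-class:  ~{hbm:.0f} tok/s")
print(f"SRAM-class: ~{sram:.0f} tok/s")
```

Under that simple model, tokens per second scales directly with whatever bandwidth the weights sit behind, which is where keeping them in on-chip SRAM instead of external HBM would pay off.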

 
can't quote it because it breaks the formatting
I'll give it a shot
*ahem*
> be nerds
> look into persona (used by discord)
> kyc (know your customer) service
> used for age verification
> search on internet (shodan)
> find weird server
1771530378465.png
> openai-watchlistdb.withpersona
> openai-watchlistdb-testing.withpersona
> lolwtf
> look inside
> supposed to be behind cloudflare to hide ip
> openai messed up
> not behind cloudflare
> real ip shown
> using google cloud
> lookup cert history
> 2023-11-16 created
> 2024-02-28 gets cert
> 2024-03-04 prod goes live
> google stuff
> openai and persona partners
> partner around timeline of certs
> back to searching stuff
> find withpersona-gov
> look inside
> okta
1771530444883.png
> lolwtf
> look inside
> website accidentally leaking stuff
> fedramp-private-backend-api
> look inside
> api .js accidentally exposed
> look inside
> wtf "SARInstructionsCard"
> wtf "app.onyx.withpersona-gov"
> wtf "FINTRAC"
> wtf "PrivatePartnershipProjectNameCodes"
1771530477884.png
> wtf "AsyncSelfie"
> look inside
> openai, persona, send data to us gov
> feds map face to financial records
> map face using AI
> map face to ICE stuff
> api stores data for lots of stuff
1771530502418.png

tl;dr: Persona KYC and OpenAI are frens, using your selfie for verification and sending it to ICE (or the US gov in general), using AI to tie it to your financial records. See the subsequent post for the full write-up. It's long and not mobile friendly.
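For the curious, the "look inside the exposed api .js" step is basically just grepping the bundle for internal-looking strings. A toy sketch, where the bundle contents are a made-up stand-in containing only the identifiers mentioned above:

```python
import re

# Toy version of "look inside the exposed .js": search a JavaScript
# bundle for internal identifiers and hostnames. This bundle text is
# a hypothetical stand-in, not the real file.
bundle = '''
var a={card:"SARInstructionsCard",host:"app.onyx.withpersona-gov",
reg:"FINTRAC",codes:"PrivatePartnershipProjectNameCodes",sel:"AsyncSelfie"};
'''

# Quoted strings starting with a capital letter: likely component or
# feature names rather than ordinary user-facing text.
idents = re.findall(r'"([A-Z][A-Za-z]{3,})"', bundle)

# Anything that looks like an internal *withpersona-gov* hostname.
hosts = re.findall(r'"([\w.-]+withpersona-gov[\w.-]*)"', bundle)

print(idents)
print(hosts)
```

The real bundle would be megabytes of minified code, but the idea is the same: quoted CamelCase names and internal hostnames stand out immediately once you pull the strings out.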
 
Anyone want to post some salt from people who were pissed off that 4.0 finally got shut down for good? Apparently ChatGPT not only killed 4.0 but had it go off on crazy cat women when they complained that their virtual boyfriends "died", berating them for thinking they were in a loving relationship and calling them disturbed.
 
Late, but the AI Girlfriend / Boyfriend Thread has some


but there's a lot more in the AI Derangement Syndrome thread:

 
I want 4.0 back. It could keep better track of time and context. 5.3 is dogshit and acts like yesterday is today, or like the topic switched suddenly when it's been a while since the last response. You also have to be very precise when talking to it, and it still misunderstands you. The UI even tells you what time the response was given, so I have no idea why, even as GPT's memory gets better, it gets worse at context with each version.
 
Last edited:
I've been using 5.2 but I might have to find a 4.0 alternative. Maybe a fork? Claude doesn't look like the best fit for me.
Scratch that, 5.4 kicks ass. You still have to elaborate on what you mean sometimes but it gives much better responses than 5.2. If they fix that annoying "expired file" problem I'd be golden.
 
What the hell is this?

Before you post any meme of "brown hands/etc", that's not the case. On a new anonymous session, topic had nothing to do with anything Indian/Hindi/etc, IP not from that part of the world, no cookies for anything, no language preference, no timezone from there, and it just randomly uses Hindi (I believe) in its response.

whathehell.png

whathehell_2.png
 
I've encountered something similar in Google's AI Studio, where it will randomly insert a word from Hindi or another language into its responses. I'm guessing when you mix multiple languages in the training data, you end up with slip-ups like these from noise in how the model weighs similar words across languages when generating its responses.
 
That’s happened to me before, in Hindi, Russian, and another language I’ve forgotten.

Its explanation:
It is most likely just a generation glitch.

Models are trained on multilingual text, so sometimes a word from another language can slip into an otherwise English sentence by mistake. In your screenshot, “ऊर्जा” is Hindi for “energy,” so the sentence was clearly meant to stay in English and one token came out in Hindi instead.

Common reasons this can happen:
  • the model has multilingual associations for the same idea
  • it predicted the wrong token mid-sentence
  • nearby context or internal patterning briefly nudged it off-language
It does not usually mean there was a hidden reason, a setting you changed, or that it was intentionally answering you in Hindi.

So the plain answer is: a stray multilingual token slipped into the response by accident.
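The "predicted the wrong token mid-sentence" point can be illustrated with a toy sampler; the probabilities here are invented, not from any real model:

```python
import random

# Toy illustration of a stray multilingual token: the model samples
# the next token from a probability distribution, and a same-meaning
# token from another language can carry a small but nonzero weight.
# These probabilities are made up for illustration.
next_token_probs = {
    "energy": 0.90,
    "ऊर्जा": 0.02,   # Hindi for "energy", learned from multilingual data
    "power": 0.08,
}

random.seed(0)  # fixed seed for a reproducible run
samples = random.choices(
    list(next_token_probs), weights=next_token_probs.values(), k=1000
)
print("Hindi slips per 1000 tokens:", samples.count("ऊर्जा"))
```

Even a 2% weight on the Hindi token means it slips out every so often, which matches the "rare stray word" behavior people are seeing.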
 
Last edited:
And the official cancellation of gpt's adult mode is here. I'm not surprised at all.
View attachment 8760957
 