ChatGPT - If Stack Overflow and Reddit had a child

Question for you guys. How much has your workflow been changed by LLMs? And do you think it will help equalize, have no effect, or widen the gulf between intelligent, average, and dumb people?
It's like having a kind of crappy intern. Nice, but not revolutionary; does save me some time, but not enough to put me out of work. Also every software company we work with is trying to integrate it into something. Probably widen because it's just another tool and dumb people are historically inept at using them correctly. They'll just find ways to get addicted to it and ruin their life with it like with everything else.
 
The same model? How can a text model generate images?
Yes. How exactly OpenAI did it for GPT-4o, who knows. Imagine it like a brain that has different regions doing different, specific things, with different encoders and decoders depending on the type of in- and output. Then imagine layers routing the information to the right parts, then other layers specialized in paying attention to what is important about the specific input in question, and stitching the reply back together, considering relevant information they got from the specialized parts.

This is very complex at the architecture level, but the advantage is that you can use several specialized areas to solve one problem, for example ask a text question about a picture. Contrary to what I stated earlier, this does not necessarily mean that the text output on specific topics comes out more accurate just because that other information also exists.
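The "shared brain with modality-specific encoders" idea above can be sketched in a few lines. This is a toy illustration, not OpenAI's actual architecture (which is unpublished): all names, sizes, and weights here are made up, and the "core" is a single attention step standing in for a full transformer stack. The point is just that text tokens and audio frames end up in one common embedding space and get processed as one sequence.

```python
import numpy as np

D_MODEL = 16  # shared embedding width (illustrative)
rng = np.random.default_rng(0)

# Modality-specific encoders: each maps its input into the shared space.
W_text = rng.standard_normal((100, D_MODEL)) * 0.02   # toy vocab of 100 tokens
W_audio = rng.standard_normal((8, D_MODEL)) * 0.02    # 8 features per audio frame

def encode_text(token_ids):
    return W_text[np.asarray(token_ids)]              # (n_tokens, D_MODEL)

def encode_audio(frames):
    return np.asarray(frames) @ W_audio               # (n_frames, D_MODEL)

def core(tokens):
    # Stand-in for the shared transformer stack: self-attention over the
    # whole mixed sequence, so text positions can attend to audio positions.
    scores = tokens @ tokens.T / np.sqrt(D_MODEL)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ tokens

# A "text question about a sound": 5 text tokens followed by 3 audio frames,
# processed as one sequence -- no translation step between separate models.
seq = np.concatenate([encode_text([1, 4, 9, 2, 7]),
                      encode_audio(rng.standard_normal((3, 8)))])
out = core(seq)
print(out.shape)  # (8, 16): one unified sequence in, one out
```

A real system would add positional information, many attention layers, and modality-specific decoders on the output side, but the routing idea is the same.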

The big advantage of this is especially in regard to audio: low latency. You don't have to "translate" between models and waste valuable time; you end up with data that is intrinsically compatible with the model's various internal representations of concepts, so it can move along the neural net really quickly. OpenAI claims an audio-input-to-audio-output latency as low as 232 milliseconds, averaging around 320 ms (!) (as opposed to anything between 3 and 10 seconds if you have different models working with each other), which is about the same as human response times. That's why the model's replies in the videos come across as so natural: there is no awkward pause in which the audio gets processed, because it doesn't need to be. This model can work as an almost real-time language translator.
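The latency gap is easy to see with back-of-the-envelope numbers. The per-stage times below are made-up but plausible figures for a cascaded speech-to-text → LLM → text-to-speech pipeline; the end-to-end number is OpenAI's stated average audio-to-audio latency for GPT-4o. A cascade pays every stage in sequence, while a native audio model pays only its own forward pass.

```python
# Illustrative stage latencies for a cascaded pipeline (milliseconds):
pipeline_ms = {
    "speech-to-text": 800,
    "text LLM response": 1500,
    "text-to-speech": 700,
}
cascaded = sum(pipeline_ms.values())

end_to_end = 320  # OpenAI's published average audio-to-audio latency

print(f"cascaded pipeline: {cascaded} ms")  # 3000 ms
print(f"end-to-end model:  {end_to_end} ms")
```

Roughly an order of magnitude, which is the difference between a walkie-talkie feel and a normal conversation.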

For image generation this could also mean stability in between generations. This means you come up with a design for a character (lets say a smoking hot redhead) and ask the model to generate pictures of this character (how you described her in detail) in five different situations. She'd look largely the same in all five pictures, essentially being the same recognizeable character, very unlike e.g. stable diffusion. This would allow you to e.g. make a webcomic with gpt4o, where the character design between the panels is consistent. (this is why I told the artfags in the AI art seething thread that their days are numbered, stable diffusion especially is just a computationally very efficent way to do image generation, not the only one)
 
It could become a fast way to suck money out of wallets.
At minimum it would be an amazing service for OF women to take advantage of. A lot of their time is spent DMing their simps in hopes of bilking money out of them; beyond that is the obvious customer service stuff, but this would abolish the low-level sex work stuff. Apparently a lot of romance novels are written by AI now, enough that it's hurting the niche market.

That said, most people probably don't care much.
I'm sure it's because it's not as accurate as you think. The actor they picked had a clear, crisp voice and clearly planned their responses and practiced what they'd say. It's not like they picked up some autistic or homeless person missing teeth and had them try to talk to the AI. When the Apple Newton was showcased it worked perfectly too, but consumers were constantly reminded to "eat up Martha".
at no point was anything turned from/to text.
They just don't show you that part. I've had translators from recent years that do the same thing.
Maybe that is what that rebellion last year was about? Of course I am speculating and might be very wrong, but it's interesting nonetheless.
Since you brought it up, maybe this is also how OAI avoids challenging the entertainment industry with their tech, sort of like how the NES wasn't a video game console but an "entertainment system" where you could play games and do fun things with your new robot pal! And after years of selling it, they suddenly restarted the video game industry.

considering bad actor Joseph Gordon Levitt's kike wife was on the board i'd say them going for direct to consumer business is telling, especially when even the AI model they showed last year could do better writing than what we see on Netflix shows today, this is probably just them focusing on a more harmless market until they can get powerful enough to not worry about being taken out by hollywood.
 
the actor they picked had a clear, crisp voice
Which one of them? They didn't even all speak accent-free English.
they just don't show you that part
It does not need textual representation, it would also not be possible at these latencies. (see above)
For image generation this could also mean stability in between generations
There are actual image examples of this when you scroll down on their official page:

[Attached images: robot-writers-block-01.jpg through -03.jpg]

[Attached images: sally-01.jpg through sally-06.jpg]

Funny that they didn't show this off in the demo at all. Probably thought artists are already suicidal enough as is.
 
they didn't even all speak accent-free English.
You've never met a redneck; those people were Frasier Crane-esque compared to someone by the bayou. Having said that, yes, it will work for most people. And holy shit are you right about the artist industry, especially when it comes to visuals. Most of the work I did in graphic design was being told "do this, no, like this" repeatedly. The fact that this was done so quickly doesn't help either. Think about storyboarding too; honestly we could probably get Crusader Rabbit-level cartoons out using AI right now. You've pretty much eliminated the need to actually have drawing ability to create, meaning anyone with an imagination can create.

I honestly can't wait for 4chan to figure out the bypasses too, like "not taylor swift" "not wearing see through plastic clothing"
 
NSFW would definitely need to be a part of that, and if you followed OpenAI over the last few years, you'll realize what a big shift that is, if true. Maybe that is what that rebellion last year was about? Of course I am speculating and might be very wrong, but it's interesting nonetheless.
Sam Altman is gay, Jewish, and in an open relationship, so this all tracks.
 
There are actual image examples of this when you scroll down on their official page:
Kind of reminds me of Joan Cornella.
 
you've never met a redneck
This thing has probably heard more rednecks than any of us ever can or will in our entire lives. It speaks all major world languages (albeit with limitations); I don't think regional dialects (which rednecks fall under, and which are probably well represented in the dataset) will pose much of a problem to it, tbh. This is not the 90s anymore; it'll be the audio equivalent of this:
[Attached image: sky.png]

I'd even bet money this thing won't have much trouble understanding Russell Greer.

Imagine the automated surveillance possibilities. Not only can it monitor a video call and write a summary, complete with estimates of the emotional state and relationships of everyone attending the call, it can also tell whether your bomb threat against the White House was serious or not. Beyond that, it will be able to tell if you make that joke perhaps a little too often, or if the room behind you indicates that you might have extremist tendencies, for example by having a certain flag on the wall. People who say this tech has no applications have obviously not thought it through. I feel it'll also be easy to push this kind of surveillance through. Since it's all automated software, no human ever listens in, and the data is never stored anywhere in its original form, it'll be hard to make an argument for privacy concerns, especially in a world where western democracies already constantly try to push laws where e.g. every chat program should have a backdoor for actual glowies to listen in.

so this all tracks
It's inevitable, really. Just reading this thread, it feels like it's something that people really want, and imagine the influence you get over people by making them build social, perhaps even intimate, relationships with your product.

Joan Cornella
I saw a somewhat more diverse and female version of the Pip-Boy, but yeah, that fits too. I'll enjoy the Twitter artists having melties when people can churn out artwork of original character designs by simply saying a name.
 
When can the common consumer get their hands on this is when it will matter to me. Uncensored and mostly unbiased at the very least because ever since got a taste of Claude Opus and that was like a shot of heroin and I only want more. Stupid 4chan niggers can't let anyone have fun so the various reverse proxys floating around get "spitefagged" (they find the keys and report them or ddos the landing page) it's all sketchy but damn Anthropic most of all, I'd pay good money per month to have fun with smart chatbots!
 
Sorry for the blogging, but I thought it was interesting: the co-lead of OAI's alignment team (this is what brings the refusals and is supposed to make AI "ethical") resigned, as did the lead scientist who was instrumental in ousting Altman from OAI last year. This lends credence to the theory I outlined in an earlier post.
 
Sam Altman is gay, Jewish, and in an open relationship, so this all tracks.
I hate kike fags so much, they always go out of their way to be extra deviant.
But I feel Sam would be a turbofag regardless.
 
Can we talk about how advanced AI is getting? The moment that GPT-4o presentation ended felt like the end of an era. It's like we closed the book on the 2010s once and for all, and now it's like I'm in a whole new world. Do you feel like that, or is it just me?
 
Co-lead of OAI's alignment team
He resigned because OAI put his work on the back burner; safety apparently became less of a priority:
[Attached image: kwflx4xdg01d1.png]

Do you feel like that or is it just me
When I was significantly younger I went through the same thing with computers. They were primitive, often could not really do what you wanted them to, and many things you couldn't do more than imagine. But still, the sheer potential felt electrifying, and it just wasn't the same world anymore. I had endless discussions, way into the 90s, with people who were assured that not only would nobody ever need a computer at home for anything, there was also nothing to be gained from the technology in the future, ever, because of how obviously limited it was at the moment. It was long ago, but not that long ago. Yes, 4o is still very limited and can't solve many complicated tasks satisfactorily, but it can also do a ton of things that were *impossible* (no other way to put it) to do with computers before. And useful things too. The way the critics gloss over that fact has the same vibe as those people I argued with back then.
 
Mailwoman images
>still has that gross wavy line effect every AI-generated image has for simplified artstyles
You will never be a real artist. You have no hands, you have no eyes, and you have no intentions. You are a metal machine built only on input from a billion other images, and your inconsistent and incomprehensible outputs are a dead giveaway.
 
Can we talk about how advanced AI is getting? The moment that GPT-4o presentation ended felt like the end of an era. It's like we closed the book on the 2010s once and for all, and now it's like I'm in a whole new world. Do you feel like that, or is it just me?
No I think that culture is still alive on kf
 
[Attached image: her.jpg]

Apparently Scarlett Johansson was approached to "voice" GPT-4o and declined. She's suing now, as indicated by this public statement that was obviously written by a lawyer to prepare the ground for that lawsuit. OpenAI took the voice "Sky" (from the demos) down. From OAI statements (EDIT: actually not entirely sure about that one, I can't seem to find a source), the voice actress was actually Rashida Jones. Not a good look, considering they actually approached Johansson and then later did very little (quite the contrary) to discourage the comparison. If you listen to the voices side by side in YouTube videos, they are actually not that similar; they just talk in a similar way to how Johansson did in that movie. It's kinda surprising that OAI was so bad at reading the room. The hate boner the "creatives" in Hollyweird have for generative AI was not exactly a secret. Still an incredibly poor move; right after she declined, everything even just remotely similar to her voice should've been off limits. Would probably have been smarter to never ask her at all.

OAI changed its release announcement for the voice feature from "rolled out in a few weeks" to "months".
 