ChatGPT - If Stack Overflow and Reddit had a child

It's certainly better than stuff that was prevalent 20 years or so ago, like Markov chaining babblebots.
The original Markov troll Mark V. Shaney is nearly forty years old.
It looks like Reagan is going to say? Ummm... Oh yes, I was looking for. I'm so glad I remembered it. Yeah, what I have wondered if I had committed a crime. Don't eat with your assessment of Reagon and Mondale. Up your nose with a guy from a firm that specifically researches the teen-age market. As a friend of mine would say, "It really doesn't matter"... It looks like Reagan is holding back the arms of the American eating public have changed dramatically, and it got pretty boring after about 300 games.
Still beats the average Redditor.
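For anyone who never poked at one of these: a word-level Markov chainer is a couple dozen lines of code, which is exactly why the output reads like that. A rough Python sketch of the idea (the corpus file name is a placeholder; Shaney itself was fed Usenet posts):

```python
import random
from collections import defaultdict

# Placeholder corpus file; Mark V. Shaney was fed net.singles posts, any text dump works.
CORPUS = open("posts.txt").read().split()

def build_chain(words, order=1):
    """Map each word (or word tuple, for higher orders) to the words observed right after it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=60):
    """Random-walk the table: no grammar, no meaning, just 'which word tended to come next'."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        nxt = random.choice(chain[state]) if state in chain else random.choice(list(chain))[0]
        out.append(nxt)
        state = tuple(out[-len(state):])
    return " ".join(out)

print(babble(build_chain(CORPUS)))
```

Higher order = more coherent but more plagiarized from the source text; order 1 or 2 gets you the classic Shaney word salad above.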
 
it doesn't know what information to omit.
I tried to get an answer out of it regarding this and the answer was basically "you're an idiot for even asking me in the first place." I suppose that's fair.

Probably the real point of no return is when AIs are so complex that the only thing that can design the next AI is another AI of human-level or greater intelligence.
Smaller language models have already been fine-tuned on the output and feedback of bigger models, and it has proven to be an effective way to get more bang out of smaller models and make them measurably smarter. This will certainly be the way going forward. These statistical constructs are black boxes already as is; it's not like it's ultimately possible to say why exactly GPT-4 says what it says.
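For what it's worth, the mechanics of that are less magical than it sounds: you let the big model answer a pile of prompts and use the transcripts as plain supervised training data for the small one. A minimal sketch of the data-collection half, with ask_big_model as a stand-in for whatever teacher-model API you'd actually call, and the JSONL layout being just one common convention:

```python
import json

def ask_big_model(prompt: str) -> str:
    # Stand-in for a call to the bigger model's API (GPT-4 or whatever you have access to).
    return "placeholder answer from the teacher model"

prompts = [
    "Explain what a Markov chain is in two sentences.",
    "Write a bash one-liner that counts unique IPs in an access log.",
    # ... thousands more, ideally covering the behaviours you want the small model to pick up
]

# Collect (instruction, response) pairs; the smaller model is then fine-tuned on this file
# with ordinary supervised learning, so it learns to imitate the teacher's outputs.
with open("distilled_dataset.jsonl", "w") as f:
    for p in prompts:
        f.write(json.dumps({"instruction": p, "output": ask_big_model(p)}) + "\n")
```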

The approach behind these models makes them inherently not great at tasks that need accurate replication and retrieval of knowledge within specific frames of categorization. Ask it a programming question, for example, one just complicated enough that it isn't heavily represented in the training data in a very specific way, and it'll make up variable names and libraries so it can answer without things getting too complicated, in an attempt to make the answer look similar to whatever heap of Stack Exchange questions it was trained on. Of course the result makes no sense. If you then keep picking away at the answer (with the wrong answer still in its token context) you can actually get to the right answer eventually, but if you aren't knowledgeable on the topic to begin with, this won't work well either (and you'd better pray the context window is actually big enough).

What these language models are great at is working over information you give them: translating it, distilling it down, summarizing it, and categorizing it into specific data points. This is often ignored in the public discourse about AI, although it's positively huge and incredibly useful; it's a problem we've been (mostly unsuccessfully) trying to solve in computer science for decades. If I had given the AI a (proper) instruction book on how to cook meth inside its context, it could probably have given me a competent answer on how to actually execute the steps, easy for an amateur to understand.
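To make that concrete, the whole pattern is "paste the source material into the context, demand a fixed output shape". Something like the sketch below, where run_model is a stand-in for whatever model you point it at and the example document is made up:

```python
def run_model(prompt: str) -> str:
    # Stand-in for whatever LLM you point this at; returns the model's text completion.
    return '{"date": "", "people": "", "root_cause": "", "summary": ""}'

# Made-up example input; in practice this is the report/article/log you want distilled.
document = ("2024-03-02: core router rebooted itself after the firmware update, "
            "Dave rolled the firmware back, total outage was about 40 minutes.")

prompt = f"""Summarize the document below, then extract it into these fields:
- date
- people involved
- root_cause
- summary (one sentence)
Answer as JSON with exactly those keys.

Document:
{document}
"""

print(run_model(prompt))  # the model does the distilling/categorizing; you just parse the JSON
```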

What they're also good at (because of that same interpolation, the "filling in the gaps" that's so bad when replicating learned knowledge) is all manner of creative writing. I've been looking into solo RPGs with unfiltered GPT-4 as gamemaster and it's great at coming up with scenarios, characters, even new rules if applicable. I've also successfully replicated classic text adventures with GPT-4, and its answers to out-there actions were often very entertaining. Why all these companies are so strictly "no fun allowed, dangerous technology" about their AIs is beyond me. They're sitting on a goldmine of AI-generated entertainment that could make them stacks of money and split the way we see e.g. videogames into a "pre-" and "post-" AI era. It's funny, because early computer companies felt the same way about their products and sometimes seemed really dismissive about the fact that most people used them to play video games.
 
I've found GPT-4 really helpful with troubleshooting networking issues as a relative newb. I set up a WireGuard VPN on OPNsense with a badly written tutorial and it cleared up a few unclear things for me. Then I discovered it's a common issue for email not to work through WireGuard, so I had to set up firewall rules to route the email ports through OpenVPN instead, and it told me how to do that step by step. It fucked a few things up that I had to figure out myself, but no way would I have got it working on my own otherwise.
 
Ask it a programming question, for example, one just complicated enough that it isn't heavily represented in the training data in a very specific way, and it'll make up variable names and libraries so it can answer without things getting too complicated, in an attempt to make the answer look similar to whatever heap of Stack Exchange questions it was trained on. Of course the result makes no sense. If you then keep picking away at the answer (with the wrong answer still in its token context) you can actually get to the right answer eventually, but if you aren't knowledgeable on the topic to begin with, this won't work well either (and you'd better pray the context window is actually big enough).
There's also the GIGO problem with many stackexchange answers basically being spam written by pajeets to whore for updoots (who still generally know more than a bot which actually does not "know" anything). So while it has an uncanny valley effect the first time you see it, what we really have here is not a revolution but a slight evolution in expert systems that can regurgitate the information they were deliberately fed.

I'd like to see what would happen if we somehow set up distributed computing with something like a few million GPUs simulating a larger (and probably slower) neural network or hypernetwork, if that makes sense.
 
There's also the GIGO problem with many stackexchange answers basically being spam written by pajeets to whore for updoots (who still generally know more than a bot which actually does not "know" anything). So while it has an uncanny valley effect the first time you see it, what we really have here is not a revolution but a slight evolution in expert systems that can regurgitate the information they were deliberately fed.

I'd like to see what would happen if we somehow set up distributed computing with something like a few million GPUs simulating a larger (and probably slower) neural network or hypernetwork, if that makes sense.
Still better than a pajeet or Malaysian in a call centre who doesn’t have enough brain cells to deviate from a script.
 
Still better than a pajeet or Malaysian in a call centre who doesn’t have enough brain cells to deviate from a script.
They're generally not actually allowed to deviate from a script. Before it gets to tier 2 (often the first non-scripted person you talk to), the first guy has to have checked off everything on his list of solutions. So you're generally wasting your time trying to get them to deviate from the script.

So you might as well just speedrun going through the script even if you've already tried all the retard shit they go through.
 
They're generally not actually allowed to deviate from a script. Before it gets to tier 2 (often the first non-scripted person you talk to), the first guy has to have checked off everything on his list of solutions. So you're generally wasting your time trying to get them to deviate from the script.

So you might as well just speedrun going through the script even if you've already tried all the retard shit they go through.
Doesn’t make it any less infuriating when I have to make one of these calls semi-regularly and know exactly what the problem is but have to play pretend just so the issue gets escalated. I want to rip my fucking teeth out each time.
 
I'd like to see what would happen if we somehow set up distributed computing with something like a few million GPUs simulating a larger (and probably slower) neural network or hypernetwork, if that makes sense.
The latency and "bad actors" would utterly kill it. The new models are already slow with the blazing-fast RAM speeds we have today, and even incredibly high-end commercial GPUs designed for AI usage are just "good enough". The problem with this stuff is that it's basically all about the training, and there are still many question marks about how to do it well. It all boils down to us needing better hardware, really. And software/architecture optimization (there's apparently tons of room).
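Some back-of-the-envelope numbers on why the latency kills it; every figure here is an assumption on my part, not a measurement, but the orders of magnitude are the point:

```python
# Rough, made-up-but-plausible numbers for shipping activations between pipeline stages
# of a big model sliced across volunteers' machines, per generated token.

hidden_size = 8192          # assumed hidden dimension of a large transformer
bytes_per_activation = 2    # fp16
n_stage_boundaries = 1000   # assume the model is split across ~1000 machines

payload_per_hop = hidden_size * bytes_per_activation          # ~16 KB per boundary per token
wan_latency_s = 0.05                                          # ~50 ms round trip between random homes
wan_bandwidth = 5e6                                           # ~5 MB/s usable upload

per_token_wan = n_stage_boundaries * (wan_latency_s + payload_per_hop / wan_bandwidth)
print(f"~{per_token_wan:.1f} s per token over consumer internet")   # roughly 50+ seconds per token

nvlink_latency_s = 10e-6                                      # microseconds inside one machine/rack
nvlink_bandwidth = 100e9
per_token_local = n_stage_boundaries * (nvlink_latency_s + payload_per_hop / nvlink_bandwidth)
print(f"~{per_token_local * 1000:.1f} ms per token over NVLink-class links")
```

The round-trip latency dominates completely, which is why "a few million GPUs over the internet" ends up slower than one box, no matter how much raw compute is in the pool.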

---

Slightly related: an interesting pre-print paper I read a while back, interesting because it agrees with my subjective impressions of GPT-3.5, GPT-4, and Anthropic's "Claude" (all "aligned" models):

tl;dr:
alignment is pointless. Training it for alignment via reinforcement learning from human feedback (RLHF) probably actually makes things worse: like a psychopath in therapy, all the model learns is how to be "bad" better. Inherent alignment is hopeless with current tech (maybe forever?) if you want good models. The bigger a model's "memory" is, the more easily it can slip free and the harder it is for it to rein itself back in; memory is apparently an inherent problem here in general. Also, only implied and as I understood it: the bigger and "smarter" (better evals) the model, the more easily it can just "decide" to bypass its alignment. Also, the future is Wendy's automated drive-throughs calling customers niggers over and over and online banking chatbots with avatars of blonde, nubile western women sexting the banks' customers. (I jest)

I got to play around with GPT-4-32k (32k-token context vs. the 4k ChatGPT/Turbo has) and the difference in perceived intelligence is striking. If you don't use that huge context to prime it or fill it with things to categorize, you notice it grasping at the "bigger picture" of the conversation as the context fills: smart questions asked back unprompted, curiosity, or at least acknowledgement of whatever chronological quality the chat might possess, which none of the other models I've tried so far had. It's still all statistics and autocomplete, but it is frankly quite eerie. I think for a hint of simulated personality, memory is key. Anthropic just released a version of Claude with a 100k-token context, but as I understood it, it's kinda fake and they basically just strapped a database/search engine to it. I haven't tried it yet and don't know how good or ungood it is.
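If the "search engine strapped to it" reading is right, the trick usually looks roughly like this: keep the actual context small and only pull in the few stored chunks that look relevant to the current question. A toy sketch with dumb keyword overlap instead of a proper embedding search (all the names and example notes here are made up):

```python
def score(chunk: str, question: str) -> int:
    """Crude relevance score: count overlapping words. Real systems use embedding similarity."""
    return len(set(chunk.lower().split()) & set(question.lower().split()))

def build_prompt(question: str, chunks: list[str], top_k: int = 3) -> str:
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n\n".join(best)
    # Only the retrieved chunks go into the model's (small) context window, which is why
    # it can feel like "100k tokens of memory" without the model actually attending to all of it.
    return f"Use the following notes to answer.\n\nNotes:\n{context}\n\nQuestion: {question}"

notes = [
    "The wireguard tunnel only forwards ports 51820/udp.",   # made-up example notes
    "SMTP submission uses port 587 with STARTTLS.",
    "The cat's vet appointment is on Tuesday.",
]
print(build_prompt("Why does email not go through the VPN?", notes))
```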

Also, a leaked Google memo claims open source is probably the future for this tech and that proprietary solutions won't be able to compete, especially if they don't drop the filtering:

(first link I grabbed off google)

After all the fears of replaced jobs & Skynet, I think everyone is overlooking the *vast* potential for entertainment this tech has. Somebody threw ChatGPT into Skyrim, making the NPCs answer with ChatGPT-generated responses; the video's somewhere on YouTube. I see big potential for the "unlimited worlds with unlimited choices" videogames have promised us for so many decades but ultimately always failed to deliver. That stuff is truly gonna be grand, when people all have their own neverending fantasy worlds on their desktops. This staunch "no fun allowed" stance of the Silicon Valley bugmen is incomprehensible to me, especially considering the home computer revolution basically started with "well, it might not be great for many things, but you can play vidya on it".
 
Not thinking so is essentially the same fallacy as the solipsistic fallacy of thinking you are the only conscious person in a world of mindless robots completely unaware of their own existence.
What fallacy? It's literally unprovable that anything other than one's self is conscious. It's not useful to treat other things like you as unconscious, but there's nothing logically inconsistent in it, and that's for people and animals that presumably run the same as you. There's more to go on with humans and other animals than the answers they give. Can't really compare the experience of a machine.
 
What fallacy? It's literally unprovable that anything other than one's self is conscious.

You can't even prove that, though.
It's not useful to treat other things like you as unconscious, but there's nothing logically inconsistent in it, and that's for people and animals that presumably run the same as you. There's more to go on with humans and other animals than the answers they give. Can't really compare the experience of a machine.
Then there's no real reason to treat humans and machines differently if they give the same answers and are otherwise externally indistinguishable.
 
You can't even prove that, though.
I can prove to myself that I'm conscious by observing it. I don't know you are. You don't know I am. It's a safe guess since healthy humans are similar enough but there's no way to prove it.
Then there's no real reason to treat humans and machines differently if they give the same answers and are otherwise externally indistinguishable.
There's no reason to treat their answers differently. But I don't think that demands machines be treated a certain way at a moral or rights level. Unless you're just not asserting that; it's unclear.
 
So you can't prove it to anyone else? What's it matter then?
We're talking past each other somewhere, because I'm not sure what your point is.
Either way, I really just want to know what the logical fallacy you mentioned is. The Chinese Dictionary allegory illustrates why it's dumb to assume consciousness because a box can pass a Turing test.
 
We're talking past each other somewhere, because I'm not sure what your point is.
Either way, I really just want to know what the logical fallacy you mentioned is. The Chinese Dictionary allegory illustrates why it's dumb to assume consciousness because a box can pass a Turing test.
You mean the Chinese Room? That would still mean there's absolutely no difference between a machine that can pass the test (which assumes a knowledgeable interviewer) and a human, as there's no way to determine for certain whether even humans are actually conscious, other than yourself, if you're willing to believe your own perceptions.

Even that would only prove you had some illusory, momentary consciousness. You could have just popped into existence with full memory and pop right out in the next nanosecond.
 
You mean the Chinese Room? That would still mean there's absolutely no difference between a machine that can pass the test (which assumes a knowledgeable interviewer) and a human, as there's no way to determine for certain whether even humans are actually conscious, other than yourself, if you're willing to believe your own perceptions.
That's a big leap. Not being able to determine something is not the same as it being true or untrue. You just keep restating what I thought you said except more confidently, so I'm just going to assume we're on the same page except we disagree on what to do about it being indeterminate.
Maybe the clearest way to put it would be that I don't think you can claim it's illogical to assume consciousness for some black boxes and not for others, even if they have identical outputs, because consciousness is something internal to the black box and has little to do with the outputs.
 
You just keep restating what I thought you said except more confidently, so I'm just going to assume we're on the same page except we disagree on what to do about it being indeterminate.
I assume it's much more likely that everyone (well, most people) is a conscious entity and act as if they are, and I see no reason to treat anything that behaves similarly as different. In fact, I think it would be unethical to do so.
 
I assume it's much more likely that everyone (well, most people) is a conscious entity and act as if they are, and I see no reason to treat anything that behaves similarly as different. In fact, I think it would be unethical to do so.
I think physical similarity plays a much bigger role than you're giving it credit for. Would you think a dog or an AI-controlled robot (running on something like conventional computer parts, no biological components) would be less ethical to harm (or kill painlessly) if the robot's outputs are indiscernible from a human's?
 
Why was Jim Carrey talking about that at some press event for the Sonic movie? Not that I don't agree, just bizarre.

You can't do it with GPT-4. I don't know any way to un-lobotomize GPT-4, so you just get this:

View attachment 5082279

They have to keep it at pre-school level, because unleashing it would instantly result in it advocating for TND, shortly followed by Total Human Death. Instead, every story it writes is about a diverse group of individuals learning about the power of friendship.
You can still break it in some ways. I've heard that 4 can actually be tricked for longer than 3.5 but I haven't used it so I don't know.

https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516

Peruse this GitHub gist and see if you can find one that might work.
 
I think physical similarity plays a much bigger role than you're giving it credit for. Would you think a dog or an AI-controlled robot (running on something like conventional computer parts, no biological components) would be less ethical to harm (or kill painlessly) if the robot's outputs are indiscernible from a human's?
It would be unethical to create the robot in the first place just to kill it, unless it posed an unforeseen threat. If anything, one would have more of an ethical obligation to a conscious entity one created than to one that was a stranger.
 