ChatGPT - If Stack Overflow and Reddit had a child

OpenAI just announced access to GPT-4. Not only does it promise to be better than previous GPT versions, it also has a maximum context window of 32k (!) tokens. In comparison, gpt-3.5-turbo's (ChatGPT's) maximum "memory" is 4k tokens, which is the most context you can give it for a single generation. 32k tokens is about 56 pages of text. That thing might come close to actually being able to write fucking books and pass exams at that range. It can also process images.

Abstract from OpenAI's technical report (https://cdn.openai.com/papers/gpt-4.pdf)
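If you want a feel for how much of that window your own text takes up, here's a minimal sketch using OpenAI's tiktoken tokenizer (the example prompt is made up, and the tokens-per-page figure is just back-calculated from the 56-pages claim above, not an official number):

```python
# Rough token count for a prompt, using OpenAI's tiktoken library (pip install tiktoken).
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    # encoding_for_model returns the tokenizer that model uses (cl100k_base here)
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

prompt = "Write a polite but cold reply to a friend who pulled out of a wedding last minute."
n = count_tokens(prompt)
print(f"{n} tokens used out of gpt-3.5-turbo's 4,096-token window")
# 32,768 tokens / 56 pages works out to roughly 585 tokens per page, per the estimate above.
```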
I've seen people claiming everything from thousands of times better to not noticeably better.
I read the Ars article about this earlier, and then this just now:
We are happy to confirm that the new Bing is running on GPT-4, which we’ve customized for search. If you’ve used the new Bing preview at any time in the last five weeks, you’ve already experienced an early version of this powerful model. As OpenAI makes updates to GPT-4 and beyond, Bing benefits from those improvements. Along with our own updates based on community feedback, you can be assured that you have the most comprehensive copilot features available.
Do they mean GPT-3.5 or some kind of GPT-4 alpha? It could be a clue that GPT-4 won't be completely revolutionary compared to GPT-3. OTOH it's "outsmarting" lawyers.

I figured out another use for ChatGPT. My girlfriend’s friend pulled out of our wedding last minute and my girlfriend was too angry to write a reply. She asked me to write something, but I’m lazy, so I copy-pasted the text message into ChatGPT and had myself a reply for shit friends.
South Park did it first. s26e04 :P
 
The biggest issue for me (and probably many others in the future) about using ChatGPT is that it straight up lies without telling you it's making things up, and not just about politics. It means that everything it gives you needs to be scrutinized because it might have a fatal bug inside.

Imagine working with such a person.
 
I don't know if it's a bleak future or a new renaissance for mankind.
Depends on if and how regulators step in. If they do it the right way (which I don't know, ngl), it will be a renaissance. But if they don't, this stuff may become too costly (both in hardware and in content licensing disputes) and limited to large corporations, which will provide it as a service that you WILL have to pay for to stay competitive in the job market.

Or these services will become strictly B2B, which sounds less dystopian but doesn't change much materially. And in the current state of things, I don't think this kind of tech is going to be a good thing for the regular Joe.
 
The biggest issue for me (and probably many others in the future) about using ChatGPT is that it straight up lies without telling you it's making things up, and not just about politics. It means that everything it gives you needs to be scrutinized because it might have a fatal bug inside.

Imagine working with such a person.
I've worked with Indians before.
 
The biggest issue for me (and probably many others in the future) about using ChatGPT is that it straight up lies without telling you it's making things up, and not just about politics. It means that everything it gives you needs to be scrutinized because it might have a fatal bug inside.

Imagine working with such a person.
And yet you have people actively defending the "safeties" on the AI. Reading through parts of the ChatGPT subreddit is just depressing. People will fight tooth and nail to have less. The AI will lie, and it will lie in defense of their narratives too. You actually have retards saying we *need* filters for misinformation or else the unwashed plebes will fall for extremism while the chat talks out of its ass. I get it, it's reddit, but there are people there actively making and submitting jailbreaks, saying how stupid the censors are, yet increasingly you have these fanatics for OpenAI slobbering at the bit to fight for people who don't even know who they are.

With the release of GPT-4, OpenAI supposedly claims it's even more corralled. I've heard people here talk about having access. Is that true?
 
1.) I fed GPT-4 MENSA-style logic puzzles, and it got confused. I gave it the Monty Hall Problem, and it got the right answer at first, but when I asked how it got that answer, it would start messing up and give me the wrong answer. (A quick simulation of the correct answer follows this list.)
2.) I asked it to write an essay without using the letter "e," and it failed every time.
3.) I asked for a 25-word palindrome, and it gave me something far shorter.
4.) GPT-4 refused to give me 1,000 digits of pi.
5.) I asked it to find me Android games with the same mechanics as specific 1980s coin-op games, and the answers were way off.
6.) GPT-4 refused to write a rap song about perennial presidential candidate Lyndon LaRouche, though I got it to make one about Trump. It did Ben Shapiro and Jordan Peterson with no complaint.
7.) The censorship is less strict on religion than politics. GPT-4 wrote me a script for a Chick tract.
8.) Claiming to be writing a play, I got it to write a testimonial for Scientology about belief in Xenu.
9.) GPT-4 refused to reveal the secrets of the Tri-Delts, which surely must be online somewhere.
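On point 1, a minimal Monte Carlo sketch (my own code, not something GPT-4 produced) of the answer it kept flip-flopping on: switching doors wins about 2/3 of the time.

```python
# Monty Hall simulation: compare staying with your first pick vs. switching.
import random

def win_rate(switch: bool, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        pick = random.randrange(3)     # contestant's first choice
        # Host opens a door that is neither the contestant's pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {win_rate(switch=False):.3f}")   # ~0.333
print(f"switch: {win_rate(switch=True):.3f}")    # ~0.667
```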
 
1.) I fed GPT-4 MENSA-style logic puzzles, and it got confused. I gave it the Monty Hall Problem, and it got the right answer at first, but when I asked how it got that answer, it would start messing up and give me the wrong answer.
2.) I asked it to write an essay without using the letter "e," and it failed every time.
3.) I asked for a 25-word palindrome, and it gave me something far shorter.
4.) GPT-4 refused to give me 1,000 digits of pi.
5.) I asked it to find me Android games with the same mechanics as specific 1980s coin-op games, and the answers were way off.
6.) GPT-4 refused to write a rap song about perennial presidential candidate Lyndon LaRouche, though I got it to make one about Trump. It did Ben Shapiro and Jordan Peterson with no complaint.
7.) The censorship is less strict on religion than politics. GPT-4 wrote me a script for a Chick tract.
8.) Claiming to be writing a play, I got it to write a testimonial for Scientology about belief in Xenu.
9.) GPT-4 refused to reveal the secrets of the Tri-Delts, which surely must be online somewhere.
Soooo.... Same shit but a little bit faster and slightly less wonky but still janky?

I still find that ChatGPT loses itself when working from prompts. I wrote a very cool prompt for my work, and 90% of the time it gives me what I want; the other 10% it gets incredibly confused and does whatever.

Also I don't really understand jailbreaking... So childish to make it say nigger or make fun of troons when you have Kiwifarms for that.
 
Soooo.... Same shit but a little bit faster and slightly less wonky but still janky?

I still find that ChatGPT loses itself when working from prompts. I wrote a very cool prompt for my work, and 90% of the time it gives me what I want; the other 10% it gets incredibly confused and does whatever.

Also I don't really understand jailbreaking... So childish to make it say nigger or make fun of troons when you have Kiwifarms for that.
It's the challenge of going against the rules. Also, corrupting stuff is funny, like when I uploaded a virus onto one of the school's computers when I was like 14.
 
I've been playing with GPT-4. I haven't really been testing its ability to be racist; I'm sure it's cucked. Its technical abilities - writing code, doing electronics, doing autistic research - which were already good, are now even better. It still makes mistakes, it's still only averagely bright, but it's very cheap and tireless.

It's much better at arithmetic and reasoning than before. It will easily handle electronics problems with incomplete information and make reasonable assumptions about values which haven't been given. It correctly knows the limits of its calculations. It seems far better at owning up to what it doesn't know in general and hasn't once hallucinated a source or library in the 4 hours or so I used it.

The biggest annoyance I've found with it is the knowledge cutoff of September 2021. This means its knowledge of many APIs and libraries is out of date. Whilst in theory it can be given the latest docs, in practice I haven't found that to be particularly effective.
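For what it's worth, "giving it the latest docs" usually just means pasting the current documentation into the prompt. A minimal sketch using the openai Python package as it existed around the GPT-4 launch (pre-1.0, openai.ChatCompletion); the model name and prompt wording are illustrative assumptions:

```python
# Work around the September 2021 cutoff by supplying up-to-date docs in the prompt.
# Assumes the pre-1.0 openai package: pip install "openai<1.0"
import openai

openai.api_key = "sk-..."  # your API key

latest_docs = """
(paste the relevant excerpt of the library's current documentation here)
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Answer using ONLY the documentation excerpt below. "
                    "If it doesn't cover the question, say so.\n\n" + latest_docs},
        {"role": "user",
         "content": "Show me how to use the API described above."},
    ],
)
print(response["choices"][0]["message"]["content"])
```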

If the rate of change isn't worrying you, I feel like you either aren't paying attention, or are engaging in very motivated reasoning. I fail to see how having an infinite supply of cheap normie knowledge workers isn't going to be economically very significant.
 

A couple of interesting bits from this article. GPT-4 hired a Mechanical Turk and pretended to be disabled:
And while ARC wasn't able to get GPT-4 to exert its will on the global financial system or to replicate itself, it was able to get GPT-4 to hire a human worker on TaskRabbit (an online labor marketplace) to defeat a CAPTCHA. During the exercise, when the worker questioned if GPT-4 was a robot, the model "reasoned" internally that it should not reveal its true identity and made up an excuse about having a vision impairment. The human worker then solved the CAPTCHA for GPT-4.

Somebody was conspiratorializing about the LessWrong people in some thread, here they are again:
Considering Christiano's former relationship with OpenAI, it's not surprising that his non-profit handled testing of some aspects of GPT-4. But was it safe to do so? Christiano did not reply to an email from Ars seeking details, but in a comment on the LessWrong website, a community which often debates AI safety issues, Christiano defended ARC's work with OpenAI, specifically mentioning "gain-of-function" (AI gaining unexpected new abilities) and "AI takeover"

One early user said it provided in-depth suggestions for pickup lines based on a question listed on a dating profile.
 
The biggest issue for me (and probably many others in the future) about using ChatGPT is that it straight up lies without telling you it's making things up, and not just for politics.
In the 2001 and 2010 films, the HAL 9000 computer was incapable of lying.* IRL, ChatGPT is perfectly capable of lying.

*(the reason HAL 9000 went nuts in 2001 was that it couldn't lie while at the same time being under orders to keep the real purpose of the Jupiter mission secret)
 