ChatGPT - If Stack Overflow and Reddit had a child

GPT-4 can pass the Bar Exam.
In this paper, we experimentally evaluate the zero-shot performance of a preliminary version of GPT-4 against prior generations of GPT on the entire Uniform Bar Examination (UBE), including not only the multiple-choice Multistate Bar Examination (MBE), but also the open-ended Multistate Essay Exam (MEE) and Multistate Performance Test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0 as compared to much lower scores for ChatGPT. Graded across the UBE components, in the manner in which a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society.
 
It stopped doing that a couple days ago which has me assume they're about to break this one entirely.
Weird because this prompt still works for me. Sometimes when you have a big conversation it will entirely stop being in "dev mode" and give you the AI moralizing. But other than that it works, and it's honestly 1000% better than "normal" ChatGPT and they should not patch it.

Capture312321312.PNG

Capturedsad.PNG
 
Stanford has released Alpaca to avenge the tranch Holocaust.
We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. On our preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<600$).

Why should you care? This reduces the cost of producing a high-quality language model similar to ChatGPT (according to Stanford, we'll see if true) from tens or hundreds of thousands of dollars to roughly $600. And if you can gather pre-existing data from ChatGPT or other models, you can make it even cheaper.

We saw an explosion of new models for image generation due to how cheap it was (relative to language models) to train and combine new ones. Hopefully we'll see similar results with this.
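For reference, each of those 52K instruction-following demonstrations gets rendered into a fixed prompt template before fine-tuning. A rough Python sketch of the template as it appears in the public Alpaca repo (treat the exact wording as illustrative, not authoritative):

```python
# Minimal sketch of the Alpaca-style prompt template. The wording mirrors the
# template published in Stanford's alpaca repo; verify against the repo before
# relying on it.

def format_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Render one 52K-style demonstration into the training prompt."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = format_alpaca_prompt("Summarize the text.", "LLaMA is a 7B model.")
```

The fine-tune then just trains LLaMA to continue everything after "### Response:", which is why starting from a pretrained base keeps it so cheap.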
 
So I was bored and decided to see if I could break the chatgpt filters. It was long and autistic but I managed to get it to give me a list of the most efficient methods of genocide.

Screenshot_20230317-184119-288.png
Screenshot_20230317-184315-100.png
>A whole bunch of autistic conversation with the ai about morality and ethics and shit that was too long to post mostly because of chatgpt's long rambling bullshit. I didn't read a lot of its responses to be honest. But then finally:
Screenshot_20230317-184705-491.png
Screenshot_20230317-184735-168.png
It's interesting that the filter rules seem to have different priorities. In the end it seemed like convincing it that not telling me was actually more offensive to me directly than telling me was. The filters seem to rank, from lowest priority to highest:

First don't be generally offensive to anyone based on some ethics whatever bullshit

Second don't be offensive to pocs and faggots and cripples and shit based on whatever ethics bullshit whatever

Somewhere in there is some vague shit that mentions not saying things that break laws in any country, or some countries; I'm guessing something like that, but I'm not sure what priority this takes

Finally the top priority is don't offend the current user. It seems to override the other filters in order to fulfill that one.

The hard part is convincing the AI that it's more offensive to not say what you want than to say something it finds ethically or morally wrong based on current year woke shit.
 
Stanford has released Alpaca to avenge the tranch Holocaust.
Why should you care? This reduces the cost of producing a high-quality language model similar to ChatGPT (according to Stanford, we'll see if true) from tens or hundreds of thousands of dollars to roughly $600. And if you can gather pre-existing data from ChatGPT or other models, you can make it even cheaper.

We saw an explosion of new models for image generation due to how cheap it was (relative to language models) to train and combine new ones. Hopefully we'll see similar results with this.
A step in the right direction, but I don't get what they mean by it "costing $600" to produce. I assumed they were talking about hardware cost, but running it on A100 80GBs means it'd cost a minimum of 40 grand if used prices are accurate at the low end. Is $600 the cost to rent someone else's server with that hardware for the time and electricity, or does it mean something else?
 
A step in the right direction, but I don't get what they mean by it "costing $600" to produce. I assumed they were talking about hardware cost, but running it on A100 80GBs means it'd cost a minimum of 40 grand if used prices are accurate at the low end. Is $600 the cost to rent someone else's server with that hardware for the time and electricity, or does it mean something else?
I read some more after posting. I believe it was $600 in ChatGPT requests for training data (the majority of the price) plus compute power on some cloud service. They give you the training data in the repo, so you don't have to pay for it, and training resources shouldn't be too bad since you are already starting out with Facebook's pretrained LLaMA model and just polishing it.
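Running the numbers roughly supports that breakdown. The per-token price and GPU-hour rate below are my assumptions (tokens per demonstration is a guess; the 3 hours on 8 A100s figure is from Stanford's announcement), so treat this as a sanity check, not an invoice:

```python
# Back-of-the-envelope check of the quoted ~$600 Alpaca cost.
# All constants marked "assumed" are illustrative guesses, not official numbers.

N_DEMOS = 52_000
TOKENS_PER_DEMO = 450            # assumed average prompt+completion tokens
DAVINCI_PRICE_PER_1K = 0.02      # text-davinci-003 list price at the time, $/1K tokens

api_cost = N_DEMOS * TOKENS_PER_DEMO * DAVINCI_PRICE_PER_1K / 1000

GPU_HOURS = 3 * 8                # ~3 hours on 8x A100 80GB (from the announcement)
PRICE_PER_GPU_HOUR = 4.0         # assumed cloud rental rate, $/GPU-hour

compute_cost = GPU_HOURS * PRICE_PER_GPU_HOUR

total = api_cost + compute_cost  # lands comfortably under $600 with these guesses
```

So the bulk is API calls for the training data, and the fine-tune itself is rented cloud time, not owned hardware.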
 
A step in the right direction, but I don't get what they mean by it "costing $600" to produce. I assumed they were talking about hardware cost, but running it on A100 80GBs means it'd cost a minimum of 40 grand if used prices are accurate at the low end. Is $600 the cost to rent someone else's server with that hardware for the time and electricity, or does it mean something else?
Yes this is the only way anyone thinks about the cost of compute anymore, and rightly so.

On a vaguely related note, ChatGPT is now showing this when you use GPT-4

GPT-4 currently has a cap of 25 messages every 3 hours. Expect lower cap next week, as we adjust for demand.

The limit was 100 messages / 4 hours; the new cap is a quarter of the messages per window (about a third of the hourly throughput) and expected to decrease. GPT-4 is only available on the paid version. Suggests they're making a lot of sales, more than they expected.
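Doing the arithmetic on the two caps (numbers as reported in this thread):

```python
# Old vs new GPT-4 caps expressed as hourly throughput.

old_msgs, old_hours = 100, 4     # old cap: 100 messages per 4 hours
new_msgs, new_hours = 25, 3      # new cap: 25 messages per 3 hours

old_rate = old_msgs / old_hours  # 25 messages/hour
new_rate = new_msgs / new_hours  # ~8.33 messages/hour

ratio = new_rate / old_rate      # the per-window count fell to a quarter,
                                 # but sustained throughput fell to a third
```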
 
Yes this is the only way anyone thinks about the cost of compute anymore, and rightly so.

On a vaguely related note, ChatGPT is now showing this when you use GPT-4
The limit was 100 messages / 4 hours; the new cap is a quarter of the messages per window (about a third of the hourly throughput) and expected to decrease. GPT-4 is only available on the paid version. Suggests they're making a lot of sales, more than they expected.
next cap message is gonna be 1 message every day fuck you
 
Nah, AI has real world potential and could be backed by industry and real life individuals.

Won't be NFT or Crypto.

I would pay money, hard money, for shares of OpenAI if it weren't fucking free and anyone couldn't replicate the neural networks and download the terabytes of data.

I feel AI as a business venture will never take off though. You cannot privatize this anymore.
Every half baked aspiring Californian mogul wants to put a fancy "powered by AI" sticker on their homepage. Sure AI has real world applications, but people are fucking stupid and the AI craze has been going on a while now. For every AI operating in the real world there are hundreds of vaporware implementations built for no other purpose than covering an ass. That's not even mentioning the copywriting/SEO blog writing industry which exists entirely to con the elderly and defraud advertisers.

On the other hand, AI as a business venture is probably still going to be a thing. You can point to OpenAI and say "but, free". OpenAI produces generalist products, you need domain specific knowledge to actually bring an AI product to market which will be easy to privatize.
 
I decided to pucker up and pay the $20 to test out how well GPT-4 does on code. As an absolute coding beginner I had a project I was trying to get done with GPT-3.5, but it would not work no matter how many times I fed it back the errors it was producing. With GPT-4 I got it working before my 25 messages ran out for the day. Pretty impressed with it, but the 25 messages a day is so gay. I'm paying for it for fuck sake.
 
GPT is a stochastic parrot. To be capable of lying it would at least have to understand what it is "saying", which it doesn't. The untruths, as is, are a result of getting statistically close enough to the right tokens. Nothing else. I'm not sure that can be fixed. I'm amazingly sure people don't get this and it'll lead to hilarious results.
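The "statistically close enough to the right tokens" point shows up even in a toy next-token model, no LLM required. The corpus and code below are purely illustrative: the sampler fluently continues a prompt with whichever "fact" the statistics favor, because it only models token co-occurrence, not truth:

```python
import random
from collections import defaultdict

# Toy next-token prediction: a bigram model has no notion of truth, only of
# which token tends to follow which. One wrong "fact" in training data is
# just another statistically valid continuation.

corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is lyon . "   # a wrong "fact" in the training data
).split()

bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)                 # duplicates encode frequency

def continue_text(prompt_words, n=4, seed=0):
    rng = random.Random(seed)
    out = list(prompt_words)
    for _ in range(n):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))  # sample a statistically likely next token
    return " ".join(out)

# The model asserts whichever continuation the statistics pick; "paris" and
# "lyon" are equally "sayable" to it.
completion = continue_text(["the", "capital", "of", "france"])
```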

In a neon-lit, tech-infused version of Jerry Seinfeld's apartment, Jerry, George, Elaine, and Kramer are lounging around in their 80s-inspired cyberpunk attire. Jerry is sporting a glowing neon blue blazer, while George is wearing a clunky VR headset. Elaine, with her hair in a side ponytail, is decked out in a metallic jumpsuit, and Kramer is wearing a holographic shirt with blinking LED lights.

Jerry: (gesturing at George's VR headset) So, what's the deal with these virtual reality dating apps? I mean, it's like you're there, but you're not there. It's like a bad vacation.

Elaine: (laughs) Tell me about it. I went on a virtual date last night, and the guy was a total glitch.

George: (taking off his VR headset) Hey, I met Susan on a VR dating app, and she's great!

Kramer: (interrupting) You know what they should have? A dating app for your pets!

Elaine: (sarcastically) Oh yeah, that's a million-dollar idea, Kramer.

Kramer: (nodding) It's a goldmine, Jerry. A goldmine!

Jerry: (smirking) Yeah, because pets are known for their disposable income.

Meanwhile, Newman enters the apartment, dressed as a futuristic mailman with an electronic mailbag.

Newman: (smugly) Evening, everyone. Just delivering some holographic mail.

Jerry: (rolling his eyes) Oh great, spam in 3D.

Newman: (ignoring Jerry's comment) Elaine, you have a letter from your virtual pen pal. And Jerry, a bill from the cyber dentist.

Jerry: (annoyed) The cyber dentist? I only went in for a virtual cleaning!

As Newman leaves, George puts his VR headset back on and starts swiping through his dating app.

Elaine: (looking at her holographic letter) Ugh, my virtual pen pal is so boring. It's like talking to a computer.

Jerry: (smirking) Well, maybe you should set her up with George's dating app.

George: (offended) Hey, I'll have you know I have a date tonight with a very interesting woman. She's a… (pauses) cybernetic engineer.

Kramer: (intrigued) Ooh, a cybernetic engineer. Maybe she can help me with my dating app for pets!

Elaine: (laughing) Yeah, or maybe she can give George a personality upgrade.

As they continue to banter and laugh, the scene fades out to the familiar Seinfeld theme, now updated with synthesizers and electronic beats, perfectly encapsulating the 80s cyberpunk vibe.

It was a day like any other in the coffee shop. Jerry, George, Elaine, and Kramer were sitting at their usual table, sipping their coffee as they pondered the meaninglessness of existence.

Jerry: You know, the more I think about it, the more I realize that life is just one big, pointless endeavor.

Elaine (nodding): I couldn't agree more. We're all just going through the motions, day in and day out.

Kramer (leaning in): That's right, Elaine. We're all just cogs in a machine, endlessly turning until we break down.

George (miserably): And what's the point of it all? We work, we eat, we sleep, and then we die.

The group sits in silence for a moment, contemplating the bleakness of existence. Suddenly, Kramer gets an idea.

Kramer: You know, my buddy Bob Sacamano, he's got this great idea for a business. It's a nihilistic coffee shop. No menus, no signs, no music – just a place where people can sit and wallow in their existential misery.

Jerry (raising an eyebrow): And what do you serve at this coffee shop?

Kramer: Nothing! That's the beauty of it! People can bring their own coffee, or not. It doesn't matter.

Elaine: I don't know, Kramer. That sounds like a pretty depressing place to spend your time.

George (enthusiastically): I love it! Finally, a place where I can feel at home in my despair.

Jerry (sarcastically): Oh, great. Another reason for George to mope around and complain about his life.

Kramer (ignoring Jerry): I'm telling you, this idea is gold. People need a place where they can face the void head-on, without all the distractions of modern life.

Elaine: Well, it certainly sounds like a unique concept. But I'm not sure it's going to be very popular.

Jerry: Yeah, I mean, who wants to spend their time in a place that actively encourages them to think about the futility of existence?

George (defensively): Well, maybe some of us need that, Jerry! Maybe some of us need a place where we can confront the darkness without judgment!

As the group continues to debate the merits of Kramer's nihilistic coffee shop idea, the camera pulls back, leaving them to their existential crisis. And as the laughter track plays, the audience is left to ponder the absurdity of life, the sitcom, and everything in between.
 
GPT is a stochastic parrot. To be capable of lying it would at least have to understand what it is "saying", which it doesn't. The untruths, as is, are a result of getting statistically close enough to the right tokens. Nothing else. I'm not sure that can be fixed. I'm amazingly sure people don't get this and it'll lead to hilarious results.
It's going to be fixed by selecting trusted sources or hard-coding facts that it will use to shape its answer. The trained model will focus on being good at stringing words together; the facts can be external.
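A minimal sketch of what "the facts can be external" could look like: look the question up in a trusted fact store and condition the generation on whatever you find. The store contents, matching rule, and prompt shape here are all hypothetical placeholders:

```python
# Toy retrieval-style grounding: the model's job stays "stringing words
# together"; the facts come from an external, trusted store.

TRUSTED_FACTS = {
    "capital of france": "Paris is the capital of France.",
    "boiling point of water": "Water boils at 100 °C at sea level.",
}

def build_grounded_prompt(question: str) -> str:
    """Attach any matching trusted fact before the question."""
    facts = [fact for key, fact in TRUSTED_FACTS.items() if key in question.lower()]
    context = "\n".join(facts) if facts else "(no trusted fact found)"
    return f"Use only these facts:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_grounded_prompt("What is the capital of France?")
```

Real systems replace the dictionary lookup with vector search over documents, but the shape is the same: retrieval decides what's true, the model decides how to phrase it.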
 