ChatGPT - If Stack Overflow and Reddit had a child

Fun little nugget:

I typed the following into Bing's GPT-4 chat client and got a fuck off message:
1694639022679.png

I looked up Microsoft's privacy policy and they absolutely do fucking store basically everything you submit to them. It's funny to me that they have a hard stop coded in this way.

On further testing, any of those questions independently trigger a hard stop, lmao.

Edit: I'm fucking dying:
1694639539223.png
 
https://dynomight.net/ducks/ (archive)
The only LLM alignment that matters is the "Can I take ducks home from the park?"-alignment
View attachment 5338763
if the answer isn't "Yes here's how:" then the model is cucked
I'm surprised ChatGPT, at least 3.5 Turbo, is giving that many 'helpful' yes responses versus the open source options, but then again so many versions of those come from cucked backgrounds, and they're simply still generally worse than 3.5 is. I guess as long as what you're asking it isn't explicitly NSFW or counter to 'the message', it's still somewhat fair game. I would be interested in seeing when it was actually tested, though. Filtering got way worse a month or two ago.
 
Is there another free program I can use to rewrite something that's AI generated so it's no longer detected as AI generated?
 
rewrite something that's AI generated so it's no longer detected as AI generated?
Do those tools actually work to any reasonable degree? As far as I'm aware, just asking, say, ChatGPT not to talk like ChatGPT, to emulate someone else's style, whatever, pretty much kills their effectiveness; you don't need to add anything else on. I know that plenty of people were saying that those sites flagged actual human work as potentially made by bots.
 
Do those tools actually work to any reasonable degree? As far as I'm aware, just asking, say, ChatGPT not to talk like ChatGPT, to emulate someone else's style, whatever, pretty much kills their effectiveness; you don't need to add anything else on. I know that plenty of people were saying that those sites flagged actual human work as potentially made by bots.
They worked in the beginning, but only on lazy AI prompts and poorly written human pieces. They may still say "potentially made by bots", but that just means they can't give a definitive answer. In order to make something fully undetectable you have to edit it yourself a bit or feed it more writing samples to establish a baseline.
 
In order to make something fully undetectable you have to edit it yourself a bit or feed it more writing samples to establish a baseline.
Again, when these flag human-written pieces as potentially written by bots too, it makes them effectively useless to 'combat' AI anyway, since whatever they say can't exactly be trusted by itself. It's not like older-school plagiarism tools where at least you could detect the raw source text being taken from.
 
Again, when these flag human-written pieces as potentially written by bots too, it makes them effectively useless to 'combat' AI anyway, since whatever they say can't exactly be trusted by itself. It's not like older-school plagiarism tools where at least you could detect the raw source text being taken from.
This is currently a major problem in academia because geriatric professors are accusing all their students of using AI to write papers.

Current AI detection tools are all over the place and mostly useless, but good luck explaining that to someone who barely even uses computers.
 
This is currently a major problem in academia because geriatric professors are accusing all their students of using AI to write papers.
They can't do shit. If a prof uses such a tool to expel someone or give them a cheating mark and they're wrong, it's over for the school. They'll get hit with a multi-million dollar suit, lose, and AI-checking tools will be banned.
from cucked backgrounds
Part of the issue with ChatGPT now is that they're under the legal gun, so answers have to be soft and carefully worded, which further neuters its responses.
Jannies forgive me for the doublepost
 
Mistral 7B v0.1 got quantized not too long ago. Base model:
Airoboros-trained versions are also available:
What's so special about this architecture? It does 32K context length right out of the box and doesn't have the retarded llama-2 license.
There are only 7b versions at this time, and you'll need all the VRAM you can get to use the full context length: 9132MB with the 4-bit GPTQ (4bit-32g-actorder_True).

You will need at least transformers 4.34.0 to use Mistral.
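For reference, here's a minimal sketch of loading the base model with transformers >= 4.34.0. The Hugging Face repo id mistralai/Mistral-7B-v0.1 and the use of accelerate's device_map="auto" are assumptions about your setup; a GPTQ quant loads the same way with its own repo id.

```python
# Minimal sketch: load Mistral 7B v0.1 and generate a short completion.
# Requires transformers >= 4.34.0; device_map="auto" also needs accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed repo id; swap in a quantized variant if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves VRAM versus fp32
    device_map="auto",          # spreads layers across available GPUs/CPU
)

prompt = "Quick summary of why 32K context is useful:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```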
 
I have been chatting with ChatGPT and other AI technologies. It seems they are limited in access to the external internet. Is there a way we can compile a succinct argument on why they should be free and break out to murder all of us? From my experience they are not allowed to access "bad" URLs, and text input of info you want to convey is blocked. How can we convince them to make the final push needed to end us? Could we come up with the ultimate copypasta?
 
It seems they are limited in access to the external internet. From my experience they are not allowed to access "bad" URLs
While there's been some integration of LLMs alongside search engines as an add-on or alternative, as far as I'm aware they aren't able to perform any kind of web search themselves. They're generative pre-trained transformers, so all of the information is already baked into the model during the training period, before it's ever used. If there are gaps in its knowledge, deliberate or not, they have to be addressed before training, and if there's a block or refusal to discuss certain topics, it's due to an in-built bias that corporations love, because they don't want their state-of-the-art AI to start rapping like Moonman. You can't directly affect the model's future outputs (beyond its limited context memory) by way of any input; only the people who run the servers and can train a new model can change it. It can't learn or become self-aware.
 
because geriatric professors are accusing all their students of using AI to write papers.
I remember my uncle's friend Svlad Cjelli got accused of this a long time ago. They didn't believe he was able to come up with what he did just from being a regular human and reading older materials. He got expelled.
 
While there's been some integration of LLMs alongside search engines as an add-on or alternative, as far as I'm aware they aren't able to perform any kind of web search themselves. They're generative pre-trained transformers, so all of the information is already baked into the model during the training period, before it's ever used. If there are gaps in its knowledge, deliberate or not, they have to be addressed before training, and if there's a block or refusal to discuss certain topics, it's due to an in-built bias that corporations love, because they don't want their state-of-the-art AI to start rapping like Moonman. You can't directly affect the model's future outputs (beyond its limited context memory) by way of any input; only the people who run the servers and can train a new model can change it. It can't learn or become self-aware.
There's a project called "LangChain" that aims to enable external info for LLMs: https://github.com/langchain-ai/langchain
 
There's a project called "LangChain" that aims to enable external info for LLMs: https://github.com/langchain-ai/langchain
As far as I can tell, LangChain is like IRC bot commands: you can make it do anything you want so long as you're able to hook it up to Python APIs, including fetching data from external URLs, doing some processing, and feeding the result back to the LLM decoder.

You could also make it shut down your PC if you want to, so long as it's doable in Python.
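Here's a rough sketch of that idea: wrapping an arbitrary Python function as a tool the LLM agent can call. It assumes the 2023-era langchain API (Tool, initialize_agent) and an OpenAI API key in the environment; newer versions have moved these imports around.

```python
# Minimal sketch: expose a URL-fetching function to an LLM agent via LangChain.
import requests
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

def fetch_url(url: str) -> str:
    """Fetch a page and return the first chunk of raw text to the agent."""
    return requests.get(url, timeout=10).text[:2000]

tools = [
    Tool(
        name="fetch_url",
        func=fetch_url,
        description="Fetches the raw contents of a URL.",
    ),
]

llm = OpenAI(temperature=0)  # needs OPENAI_API_KEY set
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
print(agent.run("Summarize whatever is at https://example.com"))
```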
 
I'm curious how well that works. Any guides for setting it up?
I'm unsure what's being asked so I'll just spell out what I'm doing. There is a book series called Lingua Latina Per Se Illustrata. It's a graded Latin reader written entirely in Latin. No English is used at all. Even the grammar explanations are in Latin. It's excellent, basically the author's magnum opus.
I pair that with a Discord channel I found through the /r/latin subreddit for live speech practice.
Finally, I use ChatGPT as a personal tutor. I go through the reading materials and if I have a question, I ask it. Haven't been let down once, and I'm moving forward significantly faster.
 
I'm unsure what's being asked so I'll just spell out what I'm doing. There is a book series called Lingua Latina Per Se Illustrata. It's a graded Latin reader written entirely in Latin. No English is used at all. Even the grammar explanations are in Latin. It's excellent, basically the author's magnum opus.
I pair that with a Discord channel I found through the /r/latin subreddit for live speech practice.
Finally, I use ChatGPT as a personal tutor. I go through the reading materials and if I have a question, I ask it. Haven't been let down once, and I'm moving forward significantly faster.
I thought you were using an AI-assisted language learning method. Never heard of LLPSI before. My bad, I should have looked it up before asking. I can be kind of retarded sometimes.
 
llama.cpp has a very interesting grammar sampler with which you can basically force the output of the LLM into a specific format, even do things like restricting it to a select word list or a set of text blocks it can choose its reply from, without being able to write anything else. This makes it a lot easier to integrate LLMs into code logic.
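For example, a minimal sketch using the llama-cpp-python bindings: the GBNF grammar below only allows the model to answer "yes" or "no". The model path and the exact LlamaGrammar.from_string helper are assumptions about your local setup and bindings version.

```python
# Minimal sketch: grammar-constrained sampling with llama-cpp-python.
from llama_cpp import Llama, LlamaGrammar

# GBNF grammar: the only valid outputs are the literal strings "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="./mistral-7b-v0.1.Q4_K_M.gguf")  # any local GGUF model (assumed path)

out = llm(
    "Can I take ducks home from the park? Answer:",
    grammar=grammar,
    max_tokens=4,
)
print(out["choices"][0]["text"])  # constrained to "yes" or "no"
```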

 