Unironically good uses for GPT/Grok etc. - Beyond your run-of-the-mill goonerism.

PointOfNoReturnPoint

Recently, I have been using it to explain things like Ohm's law and how to wire speakers in a way that won't ruin my home theater, and I have used it to further explore concepts like heat transfer and cooling in engines. The list of genuinely helpful uses goes on and on, well beyond arguing with people about why communism is trash. When you look at it as more than an LLM and start to treat it as a Google search crossed with a human that can create curated responses, the whole world opens up. I don't do images as I just have zero use for that, but if anyone does, chime in!
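If you're curious, the speaker-wiring side of that really is just arithmetic. Here's a minimal Python sketch of it; the 8-ohm speakers and the 4-ohm amp minimum are made-up example numbers:

```python
# Series impedances add; parallel impedances combine as the
# reciprocal of the summed reciprocals (straight from Ohm's law).

def series(*ohms: float) -> float:
    """Total impedance of speakers wired in series."""
    return sum(ohms)

def parallel(*ohms: float) -> float:
    """Total impedance of speakers wired in parallel."""
    return 1.0 / sum(1.0 / z for z in ohms)

AMP_MIN_OHMS = 4.0  # made-up example; check your amp's actual rating

for label, total in [("series", series(8, 8)), ("parallel", parallel(8, 8))]:
    warning = "" if total >= AMP_MIN_OHMS else "  <-- below amp minimum"
    print(f"two 8-ohm speakers in {label}: {total:.1f} ohms{warning}")
```

Drop below the amp's minimum rated impedance and you pull more current than it can handle, which is the "ruin my home theater" scenario.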
 
It's pretty good for a variety of stuff. But the output quality is only as good as the input quality.
So if you're a zoomer who doesn't know how to use a search engine, or someone expecting it to solve world peace or do your job for you, then it won't go very far.

If you give it lengthy, detailed input, the output can really be great sometimes. It can also help boil down highly complex concepts for a wider audience.

I also find it good for medication/doctor stuff as well as cooking ("I'm cooking a stew, I added beans and tomatoes, what else could I add to make it more interesting, how long should I cook it for bla bla bla")
 
… as well as cooking ("I'm cooking a stew, I added beans and tomatoes, what else could I add to make it more interesting, how long should I cook it for bla bla bla")
I often consult it when I have a piece of meat I want to roast but haven't cooked before. I'll describe what I have and what temperature I'll use, and I'll ask how long. It's great.
 
I often consult it when I have a piece of meat I want to roast but haven't cooked before. I'll describe what I have and what temperature I'll use, and I'll ask how long. It's great.
It's also pretty good for meal planning/diet/exercise advice. I tell it what I'm eating each day (which is fairly regular) and ask how I can improve it to make it a bit healthier, plus what exercise I'm currently doing and whether it's sensible or not, and it's really good on that front.

It's way easier to ask ChatGPT than to listen to long-ass YouTube videos about diet and exercise and all that shit, and less annoying than your faggot friends who have opinions about diet/exercise.
 
I found out about this use for GPT by accident.

Some guy online had an argument with his GF, so he put all his chat logs with her into ChatGPT and told it to analyze them and assess which one of them was more reasonable and logical and which was more emotional.
Naturally, the woman was the hysterical one according to the AI.
This is a great idea; it pisses women off, but it's great.
 
It's pretty good for a variety of stuff. But the output quality is only as good as the input quality.
Not really. The output quality is essentially random, based on whatever tokens the AI decides fit best with the tokens in its context, given its settings and training.

Every single one of these uses that relies on factual information is retarded. LLMs are more like fancy autocorrect than some kind of Star Trek computer. They just regurgitate groups of letters and have no understanding of anything beyond 'these letters go with these letters.' They have absolutely no way of distinguishing between true and false information. The concept of information doesn't even exist for them. Concepts don't exist for them at all. It's just letters in, letters out.
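To make the "letters in, letters out" picture concrete, here's a toy next-token sampler; the bigram table below is invented for illustration, and real models swap it for a learned network over a vocabulary of tens of thousands of tokens conditioned on the whole context:

```python
import random

# Toy next-token sampler: pick each word in proportion to how often
# it followed the previous word. No truth, no concepts, just counts.
BIGRAMS = {  # invented counts, purely illustrative
    "letters": {"in": 5, "out": 3, "go": 2},
    "in":      {"letters": 4, "out": 1},
    "go":      {"with": 6},
    "with":    {"these": 5},
    "these":   {"letters": 7},
}

def next_token(prev: str) -> str:
    """Sample the next token weighted by the bigram counts."""
    choices = BIGRAMS.get(prev, {"letters": 1})
    tokens, weights = zip(*choices.items())
    return random.choices(tokens, weights=weights)[0]

tok = "letters"
out = [tok]
for _ in range(8):
    tok = next_token(tok)
    out.append(tok)
print(" ".join(out))  # fluent-looking output with no notion of truth
```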
 
Not really. The output quality is essentially random, based on whatever tokens the AI decides fit best with the tokens in its context, given its settings and training. …
It really depends on your use case. If you have a certain type of data and want to understand it, it can work great. The newer models certainly work better.

And I wouldn't really classify it as "essentially random," since it can answer questions and work through things with quite some consistency (especially on highly nuanced topics). It depends on what you want to use it for.
 
Every single one of these uses that relies on factual information is retarded. LLMs are more like fancy autocorrect than some kind of Star Trek computer. …

This is reductive. Yes, LLMs can hallucinate, because their outputs are probabilistic and based on likelihood. But they develop latent representations of concepts. Their ability to answer questions, summarize text, or solve problems suggests more than simple pattern recognition. They can infer relationships without having been trained on the specific facts involved.
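A rough illustration of what "latent representations" means in practice: related concepts end up near each other as vectors, and relationships become directions. The 4-dimensional vectors below are invented toy numbers; real models learn thousands of dimensions from data:

```python
import numpy as np

# Invented 4-d "embeddings", purely illustrative; real models learn
# high-dimensional vectors where related concepts land near each other.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.0, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy test: king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
for word, vec in emb.items():
    print(f"{word:>5}: {cosine(target, vec):.2f}")  # queen scores highest
```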
 
This is reductive. Yes, LLMs can hallucinate, because their outputs are probabilistic and based on likelihood. But they develop latent representations of concepts. …
No, it doesn't. It looks that way because humans like to notice patterns and associate meaning where there is none. Everything an AI outputs is based on something written by humans. Any relationships and problem-solving capabilities already existed somewhere in its training data. Pattern recognition is literally what these models are programmed to do.

Text summarizing isn't really as magical as you seem to imply. Most summaries are pretty similar in structure; things humans write can be categorized into groups that share similar formats. If you took a thousand books of similar length, chances are the plot points would fall roughly around the same parts of each book, and the story structures would follow roughly similar patterns. With scientific papers it's even easier: 99% of academic literature is formatted the same way, and abstracts are almost always very similar.
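That structural regularity is exactly why even crude, non-neural summarizers sort of work. A minimal frequency-based sketch (not claiming any actual model works this way):

```python
import re
from collections import Counter

def naive_summary(text: str, n: int = 2) -> str:
    """Score sentences by how many frequent words they contain, then
    return the top n in original order. Pure pattern matching, zero
    understanding, yet often passable on formulaic writing."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude stopword filter
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )[:n]
    return " ".join(sentences[i] for i in sorted(ranked))

print(naive_summary(
    "Academic abstracts follow a formula. The formula states a problem. "
    "Then the formula states a method. Finally results follow the formula."
))
```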
 
No, it doesn't. It looks that way because humans like to notice patterns and associate meaning where there is none. Any relationships and problem-solving capabilities already existed somewhere in its training data. …
I think you only have experience with the older models. The newer ones can do reasoning and a bunch of shit.
There are of course some things they still get wrong, but they're a lot more impressive and useful than just "fancy predictive text" by a long shot.
 
I think you only have experience with the older models. The newer ones can do reasoning and a bunch of shit. …
There is no difference in the way newer models function. They have bigger training sets, more parameters, and a larger context window. They function exactly the same.
 
There's a myriad of things you can use it for that go beyond trolling or doing your zoomer homework.

I heard some people used LLMs to analyze job offers and then create personalized CVs for those offers, all done en masse, landing them several interviews by the end of it (a rough sketch of the idea is at the end of this post).

You can use it for coding, though it sometimes makes errors. You can ask it to clarify simple things, or obscure stuff nobody talks about, like a random CSS/HTML attribute you want an alternative explanation for.

You can use it to give you a broad idea of the legality of something, or specific laws. You will have to check this by yourself afterwards to verify it's true (because ChatGPT is not a good lawyer), but it's a stepping stone.

It is generally a good thing to use when the info you're after would require a lot of time just to even start searching for, or is relatively obscure. It should never be the "final product"; it is merely a tool for giving you something easy to hold onto and explore, and then you go from there (if you don't want to dedicate extra time to research by yourself).
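Here's the rough sketch of the mass-CV idea, using the OpenAI Python client. The model name, file layout, and prompt are all my assumptions for illustration, not anything those people described:

```python
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment
BASE_CV = Path("cv.md").read_text()  # hypothetical master CV

def tailor_cv(job_ad: str) -> str:
    """Ask the model to rewrite the master CV to emphasize what the
    job ad asks for. The output still needs human review."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model would do
        messages=[
            {"role": "system",
             "content": "You tailor CVs to job ads. Never invent experience."},
            {"role": "user",
             "content": f"Job ad:\n{job_ad}\n\nMy CV:\n{BASE_CV}"},
        ],
    )
    return resp.choices[0].message.content

# Hypothetical batch run over a folder of scraped job ads.
Path("out").mkdir(exist_ok=True)
for ad_file in Path("ads").glob("*.txt"):
    Path(f"out/{ad_file.stem}_cv.md").write_text(tailor_cv(ad_file.read_text()))
```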
 
There is no difference in the way newer models function. They have bigger training sets, more parameters, and a larger context window. They function exactly the same.

They use reinforcement learning from human feedback (RLHF) and a whole bunch of other stuff on top of plain next-token training. They do not all function exactly the same.

Like, they're definitely not perfect, but your explanation of them doesn't match current reality.
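For a cartoon of what that RL step adds: reward-weighted updates push the model toward preferred outputs, which plain next-token training never does. All numbers below are invented; this is a toy REINFORCE-style update, not an actual RLHF implementation:

```python
import numpy as np

# Cartoon RLHF step: start from a "pretrained" distribution over two
# candidate replies, score them with invented preference rewards, and
# nudge the policy toward the higher-reward reply.
probs = np.array([0.5, 0.5])     # policy over replies A and B
rewards = np.array([1.0, -1.0])  # invented human-preference scores
lr = 0.1

for _ in range(20):
    logits = np.log(probs)
    # gradient of expected reward w.r.t. logits for a softmax policy
    grad = probs * (rewards - probs @ rewards)
    logits += lr * grad
    probs = np.exp(logits) / np.exp(logits).sum()

print(probs)  # probability mass has shifted toward the preferred reply
```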
 
I've found it's pretty good for tech support. When I have an issue with a system that I'm stumped on, it often finds a decent solution that doesn't take too long to apply. A lot of the answers are probably the same as what you could find on forums, but it has saved me hours of searching. You can also upload a huge, several-megabyte system log, and rather than you sifting through it yourself, it will find the solution, and it will spot things you probably wouldn't think to look for.
 
… it will spot things you probably wouldn't think to look for
This is probably one of the best things about it. You can give it quite large amounts of data and it can sometimes pick up on stuff you would never think to notice, whether that's system logs or chat logs or whatever.
Even if you have severe autism and can naturally spot patterns and anomalies, it saves a lot of time and energy to just say some shit like "can you notice any trends in the below data corresponding to X".
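If the log is too big to paste wholesale, the usual trick is to pre-filter it first. A minimal sketch; the file name, keywords, and 64 KB cap are arbitrary assumptions:

```python
from pathlib import Path

# Pre-filter a big log down to the interesting lines before handing it
# to a chat model, since context windows are finite.
KEYWORDS = ("error", "fail", "warn", "timeout")  # assumed markers of trouble
log_lines = Path("system.log").read_text(errors="ignore").splitlines()

interesting = [ln for ln in log_lines if any(k in ln.lower() for k in KEYWORDS)]
excerpt = "\n".join(interesting)[-65536:]  # keep the most recent ~64 KB

prompt = (
    "Here are the error/warning lines from a system log. "
    "Can you notice any trends or a likely root cause?\n\n" + excerpt
)
print(prompt[:500])  # paste the full thing into the chatbot of your choice
```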
 