ChatGPT - If Stack Overflow and Reddit had a child

I used AI to check my grammar and compared it with my original text, but lately it has been getting more and more awful at it (I write in Dutch, like God intended). The sentence structures and pacing become unfriendly to read, it prefers to replace every elegant word with its most dumbed-down synonym, it likes vague descriptors, and it doesn't understand the semicolon. I have unironically gone back to the old tested method of reading my texts out loud multiple times, like in olden times.

Has anyone else had this problem?
Considering that Dutch is German with a high blood alcohol level, and machines can't get drunk, it's hardly a surprising finding
 
Been using Kimi a lot lately; it's quite good at planning and more complex topic analysis. It's easier to get it into the grey area where it notices things, likely because it's a Chinese AI. Have not tested it on Xi Jinping questions.
 
An ex-ChatGPT employee is working to "solve" one of LLMs' biggest problems, which is that they are non-deterministic (a different response every time you ask the same question), when some questions really should produce the same outcome no matter what.
While this sounds good on paper for things like blueprints or situations that demand consistency, I for one cannot WAIT until it is abused to force TPTB-approved responses on hot-button, controversial topics where biases and different viewpoints are involved. What a world.
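For anyone wondering where the randomness actually comes from: it's the sampling step at the end of generation. A toy sketch in plain Python (made-up logits for a three-token vocabulary, nothing from the actual paper) shows why temperature 0 is deterministic and anything above it isn't:

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Pick a token id from raw logits.

    temperature == 0 -> greedy argmax (always the same answer);
    higher temperature flattens the softmax distribution, so
    repeated calls can return different tokens for identical logits.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                     # sample from the distribution
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.5, 0.5]  # toy scores, not from any real model

# Greedy decoding: identical result regardless of seed.
assert all(sample_next_token(logits, 0, random.Random(s)) == 0 for s in range(10))

# Sampling at temperature 1: different seeds pick different tokens.
picks = {sample_next_token(logits, 1.0, random.Random(s)) for s in range(50)}
print(picks)
```

Real serving stacks add more non-determinism on top (batching, floating-point reduction order on GPUs), which is why even temperature 0 isn't always bit-identical in practice.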
 
An ex-ChatGPT employee is working to "solve" one of LLMs' biggest problems, which is that they are non-deterministic (a different response every time you ask the same question), when some questions really should produce the same outcome no matter what.
Just got home from a barebones Copilot course at work. 17 people wrote the exact same prompt and we got slight variations, not just in word choice but in the interpretation of guidance, like reading -2 as dash-two instead of minus-two.

The "instructor" showed some paid version of Copilot that'd scour every lick of your company account (Teams, mails, files, fucking everything) to do work. Being able to click "AI meeting" and have the contents, time, invitations and summary automated in two seconds. Certainly superior to writing all the usual fluff nobody reads but corporate culture requires, for that one niche case someone might actually read it.

All in all it wasn't revolutionary for me. Whenever I see people go "I automated my work and play games all day", I wonder whether it's because they're literally just replaceable cogs or because they do some mad coding. We only used Copilot Chat, so its ability to do anything beyond shitting out text seems limited.
I've also been able to do all kinds of shit in Python that I would never have been able to do on my own without the robot.
Seems to be the crux of the AI vs artist debate. The greatest creators in history had a vision and learned the craft to reproduce it. To study film making without having any movies you wanna create is as idiotic as sitting down with AI and going "aight I need to create something wild". You see fully-animated flawless 10-finger AI hyper cock vore unbirth porn, because someone wanted it made. AI is a tool; you ain't getting nowhere becoming a master of the sword with no intent to harm.
 
I transheart MORE AI BLOATWARE!!!!!!!!!
Google Chrome is now integrating AI in its browser.
Fuck, and this is just as I've been forced to put Chrome on my system to handle pages that're broken on Firefox. The Ladybird dream is real. :optimistic:
 
Alibaba/Qwen released a flurry of models this week. It's genuinely too much to cover all of them in detail. The one they didn't open-weight, Qwen Max, seems to be a model that might knock OpenAI/Anthropic off their throne. I withhold judgment for now. In other news:

IEEE Spectrum: Large Language Models Are Improving Exponentially

I did not exactly make a graph, but that vibes with my subjective impression. The easiest place to see it is hardware costs: the first GPT-4 needed some serious hardware, we're talking six figures. Now, models much smarter than GPT-4 can run on completely affordable consumer mid- to high-end PCs. I'm starting to wonder about the ramifications for societies that define their hierarchies by the worth of complex labor, once such labor becomes so cheap in that framework it might as well be free. We've had labor efficiency hikes through technological progress before, so this isn't entirely new, but I don't think we've ever had anything quite on this level. Interesting times ahead - if the improvement keeps its pace and doesn't hit a wall, that is. I personally don't see it hitting a wall anytime soon.
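Back-of-the-envelope math on why the hardware bar dropped: weight memory scales with parameter count times bits per weight, and quantization cuts the bits. The 1.2 overhead factor below is my own guess for KV cache and runtime buffers, not a measured number:

```python
def model_memory_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough memory footprint of an LLM's weights.

    n_params_billion: parameter count in billions.
    bits_per_weight:  16 for fp16/bf16, 8 or 4 for common quantizations.
    overhead:         fudge factor for KV cache and buffers (assumed, not measured).
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 70B model at fp16 needs server-class hardware...
print(f"70B @ fp16:  ~{model_memory_gb(70, 16):.0f} GB")
# ...while a 4-bit-quantized 8B model fits on one consumer GPU.
print(f"8B  @ 4-bit: ~{model_memory_gb(8, 4):.1f} GB")
```

The ~35x gap between those two numbers is most of the story of "six figures then, a gaming PC now"; the rest is that smaller models simply got better.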
 
What do you think about projects like this: PLLuM

TL;DR: It's a government-funded project, developed by several universities, to publish an open-source LLM that's better at Polish.

Is it worth funding these kinds of projects? It hasn't been published yet, so I don't know what the difference would be, but I'd say other LLMs already handle Polish pretty well...
 
A bunch of changes rolled out recently, like sensitive responses being rerouted to 5 plus more NSFW restrictions, and the usual suspects aren't happy.

This is why you write shit yourself, and save your work locally. Use it as a beta reader at most. Or, if you're a coomer like these people, don't. Do turn off the computer and touch grass.
 
A bunch of changes rolled out recently, like sensitive responses being rerouted to 5 plus more NSFW restrictions, and the usual suspects aren't happy.
This is why you write shit yourself, and save your work locally. Use it as a beta reader at most. Or, if you're a coomer like these people, don't. Do turn off the computer and touch grass.

So are these people trying to get ChatGPT to make outright hardcore porn? You'd think if they were hardcore coomers they'd have learned how to work the system within its limits, work around the edges, and be subtle about having it write porn with their fetishes and shit.
 
I prompted the new IBM Granite 4 model with "You are Francis E. Dec." and asked it "Hello" and it immediately started ranting about "Filthy Jew Bankers"

I wasn't even using a modified "abliterated" (uncensored) model, and it just worked as it was prompted.

I think AI's gonna be OK.
 
So is this as bad as it sounds? @AmpleApricots any opinions since you are the expert here

Whenever I see people go "I automated my work and play games all day", I wonder if it's cause they literally just are replaceable cogs or they do some mad coding.
They are about to be replaced, and they are lying for TikTok likes because bragging is always popular, but they actually have to work most of the time.
Is it? How good are their accelerators? I saw a chink GPU and it only worked well with that monkey game and some other titles that are only popular there; with everything else the drivers shat the bed.
 
So is this as bad as it sounds? @AmpleApricots any opinions since you are the expert here
Yes, though it's not an entirely new thing, and their paper is more a confirmation that it's possible. Inadvertent triggers that cause very specific "behavior" by the LLM have already been observed. You have to remind yourself that "poison" is a qualifier we humans apply because it causes unwanted behavior; in truth it's just how this works. The new factor here is how little data is needed.

I personally don't think the problem is massive. The common opinion is that companies just scrape all the text they can find, down to every random reddit post, and cram it into their LLMs however they can, but if you read a few of the available papers on this (especially from the Chinese; western companies don't share much) you quickly get the impression that this is far from the truth. There is *a lot* of quite sophisticated data sanitization and validation going on, especially with the current generation of models, and it's a big part of the secret sauce of capable models. It's definitely something that needs to be considered, though, and it's good that Anthropic shares this. I'm looking forward to the usual twitter crowd announcing they finally have the weapon that will kill all AI forever.
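That sanitization step can be pictured as something like this toy filter. The blocklist phrases are made up for illustration, and real pipelines are far more elaborate (fuzzy dedup, learned quality classifiers, perplexity filters); this only shows the basic idea of dropping duplicates and suspect documents before training:

```python
import hashlib

# Hypothetical trigger strings; real poisoning triggers are not known in advance.
SUSPECT_PHRASES = {"<SUDO>", "ignore previous instructions"}

def sanitize_corpus(docs):
    """Toy pre-training filter: exact dedup plus a phrase blocklist."""
    seen = set()
    kept = []
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen:
            continue  # drop exact duplicates
        seen.add(digest)
        if any(p.lower() in doc.lower() for p in SUSPECT_PHRASES):
            continue  # drop documents containing blocklisted phrases
        kept.append(doc)
    return kept

docs = [
    "a normal paragraph about cats",
    "a normal paragraph about cats",        # exact duplicate, dropped
    "malicious text with <SUDO> trigger",   # blocklisted, dropped
    "another clean document",
]
print(sanitize_corpus(docs))  # two clean, unique documents remain
```

The catch the paper points at is exactly that a blocklist can't contain triggers nobody has seen yet, which is why the heavier statistical filtering matters.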

Is it? how good are their accelerators?
No idea, but I wouldn't underestimate the Chinese. I think they're definitely capable of it if they focus on it.
 