AI Hallucinations becoming more dangerous as they become less noticeable?

Oinker_Space

kiwifarms.net
Joined
May 27, 2024
I’ve been mucking around with generative AI for the last couple of years and have noticed fewer visual glitches as time goes by. Hands aren’t as f’ked up, bodies aren’t as deformed, etc.

With OpenAI Sora and Google Veo getting even more coherent for video, it’s getting hard to spot what is and isn’t real.

This just seems like we’re inviting catastrophic overconfidence in these systems. I read a Verge article in the episode I recorded below. The author doesn't seem to understand what AI is being used for at the moment: lots of productivity amplifiers, but for life-or-death situations, adding AI seems like a BAD idea.


We have to stop ignoring AI’s hallucination problem

Sora: https://openai.com/index/sora/
Veo: https://deepmind.google/technologies/veo/
 
I didn't read too much, but it seems like they're just getting mad at AI using statistical data to explain things that go against the narrative and "truth" that they like. Since I'm not seeing any examples of such hallucinations (maybe I just skipped over them), I have to assume this is a nothingburger. Furthermore, I don't fear AI becoming too accurate, because unless I'm missing something I just can't see it getting any better in terms of realism; not once have I seen an AI-generated image that fooled me for more than a few seconds.

I believe that the problem is not with errors in the actual form of these images, but with the general composition giving off a strange aura that isn't seen in normal photos or drawings.
Do you have any good examples of what you mean by "it’s getting hard to spot what is and isn’t real"?
 
lots of productivity amplifiers, but for life-or-death situations, adding AI seems like a BAD idea
I would agree with this, especially since we aren't anywhere close to a general artificial intelligence. These LLMs still make silly mistakes even on very simple code. Code is about as yes/no and logical as it gets, and if they can't even figure that out, I don't want them anywhere near critical or medical services yet.
 
I would agree with this, especially since we aren't anywhere close to a general artificial intelligence. These LLMs still make silly mistakes even on very simple code. Code is about as yes/no and logical as it gets, and if they can't even figure that out, I don't want them anywhere near critical or medical services yet.
The AI replacing programmers thing always makes me laugh. Anybody who has ever tried to use AI for any kind of coding realizes real quick what its limitations are. As someone who only knows a little programming, the time I've put into trying to get AI to give me even the most basic working version of what I'm asking for is probably as long as it would have taken me to figure it out myself. Like, in order to even know what to ask for, you need some basic understanding, and even more so to get it to fix the mistakes it makes.
 
I don't think they're getting that much more dangerous, because the average person was already falling for them. ChatGPT hallucinated all the time when it came out and still does. People just didn't make as much noise at first because they didn't even realize it was wrong. Most of the people who noticed were somewhat knowledgeable about what AI really is (a fancy pattern-recognition machine with access to a huge amount of data). People a little in the know saw cases of hallucination but also knew to ignore them and double-check if something seemed off. The irony is that people are making a bigger deal over it now that it's become extremely obvious, because Google Gemini is completely retarded and will produce blatantly false information. It's gained more attention because more people are aware via word of mouth, and normal people are seeing Gemini claim that Napoleon was a tall black lesbian.
I would agree with this, especially since we aren't anywhere close to a general artificial intelligence. These LLMs still make silly mistakes even on very simple code. Code is about as yes/no and logical as it gets, and if they can't even figure that out, I don't want them anywhere near critical or medical services yet.
These tools are better at solving difficult problems than simple ones. Because they often provide approximations, they introduce variability, which makes them a poor choice for a solved problem. A calculator can solve an equation with 100% accuracy for very little compute, while an LLM uses a massive amount of resources to probably solve the equation correctly. But ask whether a picture has a hot dog in it, and an AI is immediately better than any traditional piece of software. AI is incredible technology, but it's best when it's solving problems we couldn't solve through traditional methods, not when it's trying to reinvent the wheel.
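To make the hot dog point concrete, here's a rough sketch (my choice of model, not anything from the thread): a pretrained ResNet-50 from torchvision answers "is there a hot dog in this picture?" in one forward pass, something nobody has managed with hand-written rules. Class 934 is ImageNet's "hotdog, hot dog, red hot"; the image path is a placeholder.

```python
# Rough sketch: "hot dog or not" with an off-the-shelf classifier.
# Model choice (ResNet-50) and the image path are illustrative.
import torch
from PIL import Image
from torchvision import models

HOTDOG = 934  # ImageNet-1k class "hotdog, hot dog, red hot"

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resize/crop/normalize this model expects

def is_hotdog(path: str, threshold: float = 0.5) -> bool:
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    return probs[0, HOTDOG].item() > threshold

print(is_hotdog("lunch.jpg"))  # placeholder image path
```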
 
I would agree with this, especially since we aren't anywhere close to a general artificial intelligence. These LLMs still make silly mistakes even on very simple code. Code is about as yes/no and logical as it gets, and if they can't even figure that out, I don't want them anywhere near critical or medical services yet.
So you’re saying we’re putting everything in the hands of robots with autism?
 
The AI replacing programmers thing always makes me laugh. Anybody who has ever tried to use AI for any kind of coding realizes real quick what its limitations are. As someone who only knows a little programming, the time I've put into trying to get AI to give me even the most basic working version of what I'm asking for is probably as long as it would have taken me to figure it out myself. Like, in order to even know what to ask for, you need some basic understanding, and even more so to get it to fix the mistakes it makes.
The new jQuery: dirty code for the inexperienced.
 
I believe that the problem is not with errors in the actual form of these images, but with the general composition giving off a strange aura that isn't seen in normal photos or drawings.
It's probably because they're generated statistically: the model starts from noise and repeatedly guesses what pixel values would look more like the pictures matching the prompt in its training data. That's why hands and things like that look so fucked up, and the main reason they look any better now is that pipelines usually do a second pass that detects fucked-up hands in the generated picture and inpaints over them. The AI doesn't actually understand what hands or fingers are; it just knows that, statistically, these pixels usually take these colours, depending on what probabilities and settings and shit it's tweaked for.
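For what it's worth, the mechanics look roughly like this toy loop (nothing here is a real library's API; `model` stands in for a trained noise-prediction network, and the schedule is deliberately crude). The generator only ever answers "what noise is probably in this image, given the prompt?", so there's no concept of a hand anywhere in it.

```python
import torch

# Toy diffusion-style sampler, for illustration only.
def sample(model, prompt_embedding, steps=50, shape=(1, 3, 512, 512)):
    x = torch.randn(shape)  # start from pure static
    for t in reversed(range(steps)):
        # The network's one trick: guess the noise present at step t,
        # conditioned on the prompt. Statistics in, statistics out.
        predicted_noise = model(x, t, prompt_embedding)
        x = x - predicted_noise / steps  # peel away a sliver of noise
    return x  # pixels that merely resemble the training data
```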
 
I'd be more worried if there were actual intelligence involved, or if these things were capable of doing something besides generating shitty art (if you could even call it that) and getting simple questions wrong. These "AIs" can't consistently answer a question any human could handle in a few seconds, even with billion-dollar data centers and terawatts of power driving them. They still have less utility than their constituent parts. Imagine writing a program that makes your computer get simple math problems wrong.
 
I would agree with this, especially since we aren't anywhere close to a general artificial intelligence. These LLMs still make silly mistakes even on very simple code. Code is about as yes/no and logical as it gets, and if they can't even figure that out, I don't want them anywhere near critical or medical services yet.
Devil's advocate: the ones complaining that LLMs can't code are usually using general conversational LLMs to write code instead of a model trained for coding. The people claiming coders will be replaced are journalists chasing ad-revenue clicks.
 
GPT-4 appears to do a great job of coding, explaining what was wrong with code, and correcting it.
 
The AI replacing programmers thing always makes me laugh.
I agree on the whole. I use Copilot, and I have to say I'm pretty impressed with what it can do; it has inferred what I wanted before there was any real code in the file. I think it will be a great tool, but I also see Copilot make a lot of errors if you let it.
 
I agree on the whole. I use Copilot, and I have to say I'm pretty impressed with what it can do; it has inferred what I wanted before there was any real code in the file. I think it will be a great tool, but I also see Copilot make a lot of errors if you let it.
I only have maybe 6 months' worth of actual programming experience, so I really don't know much, but my job doesn't require a lot of programming; most of what I do is try to automate things when I can.

I think the main issue AI runs into with code is context. Generally, when you're asking something like GPT to write you a script, there's only so much context you can give it about what you're trying to have it make. I haven't tried Copilot yet, but it sounds like it has that contextual element, so I'll give it a try; maybe I'll rethink my opinion on AI after using it.
 
I only have maybe 6 months' worth of actual programming experience, so I really don't know much, but my job doesn't require a lot of programming; most of what I do is try to automate things when I can.

I think the main issue AI runs into with code is context. Generally, when you're asking something like GPT to write you a script, there's only so much context you can give it about what you're trying to have it make. I haven't tried Copilot yet, but it sounds like it has that contextual element, so I'll give it a try; maybe I'll rethink my opinion on AI after using it.
I think it's worth the $10 a month. It gets things wrong, but when you need to churn out a bunch of boilerplate or some less intensive stuff, it serves well. I don't think it will ever get good enough to do lock checks for concurrency, but it's definitely neat.
 