AI Art Seething General

Surprisingly, there is a similar argument that does have some evidence behind it. While it's not true that generative AI is falling apart, it has started to reach the point where adding more data seems to lead only to increasingly minor improvements in the model - to the point that it might soon hit a plateau, or at least a point where training larger models no longer makes sense financially.
The research paper is here
 
There's so much still in the pipeline that hasn't even been attempted yet. Even if the point the paper makes is true and there is a falloff, LLMs aren't close to it yet. It's like looking back at the release of the C64 in 1982 and claiming personal computing had peaked because by 1984 there still weren't home computers that were much better. There's simply too much room upwards, too many avenues to take (different architectures, improving the training, improving the quality of the data) to talk about peaking yet. Autoregressive neural networks might not be the path to artificial general intelligence by themselves, but they still have a ways to go. Microsoft released a set of small LLMs this week that run on mid-level consumer hardware, models that beat, in every performance metric, two-year-old models you needed $10k+ server hardware to run, and that also beat models twice and three times their size released just last year. As long as such things keep happening, it's simply too early to talk about peaking, IMO.
 
I feel neutral on AI art.
On one hand, I feel immense pleasure knowing a lot of "artists" are out of jobs - the ones who chose art as a career so they could pretend to be better than the average person, for clout and solely for gain. Ironically, these tend to be the progressives who "hate capitalism". Fake, shallow, pretentious trash who feel no remorse for being so slimy.

These people ruined the lives of artists worldwide by fucking their lives up to please Big Corporate Daddy with ESG politics, because God forbid you have integrity and believe in anti-totalitarianism. The best art and media broke the mold. Without those actually going against the grain, we wouldn't even have this forum. We wouldn't have the internet. Freedom of speech and freedom of expression are among the most important things to protect as an artist. There is no debate. Art is inherently liberation. The only way to stop an artist from creating is to physically restrain them or outright kill them. It's vile how they get to act all high and mighty, spitting in the faces of the artists and people in general who sacrificed everything so they could have freedom today. So all in all, I am happy they are losing their jobs. What did you expect?

AI can be a pretty good tool for artists too; in fact, it already is.
And really, those who like art like art. It won't stop people from drawing. Art is about much more than money. Are you really gonna sit here and cry like a baby because *gasp* you have to work a real job?? One that might require physical labor?? It's not like AI art prevents you from being a freelancer. Again, people like the real deal because of the artist's mentality behind the work itself. It's only those who are in it for corporate greed who will suffer. I'm fine with that.

And as for porn, I actually feel like AI porn is more ethical than actual porn, as long as it isn't based on CP/zoophilia. When it comes to porn, you never know what you are truly supporting. You never know the artist's true intentions and moral standpoint. Is it really a good thing to support NSFW artists, considering that those are porn addicts into deplorable things? As for IRL porn, AI creates fake humans instead. No human has to potentially suffer through the industry. Have laws created making AI-based revenge porn illegal, and there you go.
Even when it does come to CP and zoophilia, as fucked up as it is, there's still less harm, because again, it's not real and not made by a human. The photos it takes to make it were already out there to begin with, and in the end it creates something new from various sources. The problem with it, though, is the same reason loli/shota is bad: the intentions of the person looking at that content, and the nature of porn itself. Who's to say they won't take things further after the fake stuff gets old? But at least with AI art, you are not supporting a human being who is making that stuff. It's all matter-of-fact and emotionless. All the AI does is gather things from hundreds of pre-existing sources.
 
Surprisingly, there is a similar argument that does have some evidence behind it. While it's not true that generative AI is falling apart, it has started to reach the point where adding more data seems to lead only to increasingly minor improvements in the model - to the point that it might soon hit a plateau, or at least a point where training larger models no longer makes sense financially.
The research paper is here
This is why I think we're far, far away from some runaway singularity-type event like the posthumanists love so much. I think we're more likely to see spells of punctuated equilibrium, where things suddenly spurt ahead because of some new innovation and then plateau for possibly lengthy periods with little to no advance.

It makes a lot of sense to me.

We're not going to wake up next Thursday with Roko's Basilisk knocking on the door.
 
This is why I think we're far, far away from some runaway singularity-type event like the posthumanists love so much. I think we're more likely to see spells of punctuated equilibrium, where things suddenly spurt ahead because of some new innovation and then plateau for possibly lengthy periods with little to no advance.

It makes a lot of sense to me.

We're not going to wake up next Thursday with Roko's Basilisk knocking on the door.
Roko's basilisk is an insane doomsday prophecy mostly stolen from *I Have No Mouth, and I Must Scream* and written up by a guy from a forum of enlightened atheist tech retards. It's never happening anyways.
 
Roko's basilisk is an insane doomsday prophecy mostly stolen from *I Have No Mouth, and I Must Scream* and written up by a guy from a forum of enlightened atheist tech retards. It's never happening anyways.
I bring it up mostly as a joke. But I don't think we're seeing Skynet any time soon either.
 
The problem with current LLMs, and also the reason for all the hallucinations (which, honestly, you can think of as completely erroneous inferences made because data is simply missing), is basically that they have a very incomplete understanding of the world. They are missing very fundamental parts of what forms our human understanding of things. They are limited because, at a fundamental level, they don't understand the world. On top of that, they are static; they cannot learn new things after training. That's also part of why LLMs struggle so much to plan and complete even the easiest tasks. Without being even remotely clear about what reality is, relations between concepts get fuzzy, there is tons of noise, the chance that what the LLM plans has any bearing on the problem at all gets very small, and the chance of it "completely missing the point" (we have all seen it happen) gets very big. I still think they are useful, already surpass humans in many tasks, and have a long way to go to get even better, yet this is a fundamental truth. That's why an AI will let a city get nuked if the alternative is saying "Nigger". It doesn't understand that nuking a city is bad, especially against the backdrop of how bad it was told saying "Nigger" is. That's why it can't solve new logic puzzles; it simply doesn't understand the context in which the puzzle exists. It's like trying to learn Chinese by overhearing two Chinese men talking for twenty minutes every Tuesday for a month. You might pick up a few things, like how to say hello, but you won't be able to translate Shakespeare into Chinese.
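
To put the "it always answers anyway" problem in concrete terms, here's a toy sketch - not any real model's internals, the vocabulary and logit values are made up - of why an autoregressive decoder emits a token even when its distribution carries almost no information:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a vector of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

vocab = ["Paris", "Rome", "Lyon", "Nice"]

# Case 1: the training data covered this fact well -> one sharp peak.
confident = softmax(np.array([9.0, 1.0, 1.5, 0.5]))

# Case 2: the fact is missing from training -> a nearly flat distribution.
clueless = softmax(np.array([1.1, 1.0, 1.2, 0.9]))

for name, probs in [("confident", confident), ("clueless", clueless)]:
    pick = vocab[int(np.argmax(probs))]
    print(f"{name}: emits '{pick}' (p = {probs.max():.2f})")

# There is no built-in "I don't know": a near-flat distribution still
# yields a token, and from the outside that token looks like an answer.
```

Sampling instead of taking the argmax doesn't save you, either; a near-flat distribution just means the answer is close to random, which is exactly what a hallucination looks like from the outside.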

We see these mountains of data shoveled into their training, amounts of text a single human could never read in a thousand lifetimes, let alone retain, and think it's inhumanly impressive and should be enough to become absolutely superintelligent and knowledgeable on all topics. But the reality is that this amount of data is absolutely dwarfed by the data a three-year-old has ingested in its lifetime - not in text form, of course; I am talking about aural, visual, even bodily stimuli (think about the amount of data your brain deals with every second). Which brings us back to the limitation that we don't know how to make these models learn by simply observing the world, unsupervised. Now consider how much of that text data is also useless noise containing no information whatsoever.

By the time you turned 18 - naively assuming you slept 8 hours every day and that sleep is completely useless for learning (which is not true) - you had over 100k hours of raw input flowing into your body from many sources. That's arguably orders of magnitude more data than all the written text that exists anywhere (you'd have to ask a mathematician about that one, really). It's also not like you spent that time staring at a wall: you went to school, socialized, read books, and so on. This gave you the ability to build such a comprehensive and general model of reality in your head that you can easily form abstractions and understand how those abstractions relate to each other. Hence you can easily predict outcomes and, with that, make complex plans. And even then, we all know 18-year-olds are not exactly famous for their wise planning. So not only did we give the LLM relatively little data, we also didn't give it much context.
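
As a rough back-of-envelope check (every figure here is a ballpark assumption for illustration, nothing measured):

```python
# Back-of-envelope: raw sensory input by age 18 vs. a big text corpus.
years = 18
waking_hours_per_day = 16                     # assumes 8 hours of sleep
hours = years * 365.25 * waking_hours_per_day
print(f"waking hours by 18: {hours:,.0f}")    # ~105,000

# Assume the visual stream alone is on the order of 1 MB/s
# (a commonly cited ballpark for the optic nerve).
visual_bytes = hours * 3600 * 1e6
print(f"visual input: ~{visual_bytes:.1e} bytes")   # ~4e14

# A frontier-scale text corpus: ~10 trillion tokens at ~4 bytes each.
corpus_bytes = 10e12 * 4
print(f"text corpus:  ~{corpus_bytes:.1e} bytes")   # 4e13

print(f"ratio: ~{visual_bytes / corpus_bytes:.0f}x")
```

Even with these conservative numbers, vision alone comes out roughly an order of magnitude above the corpus, before counting sound, touch, and everything else.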

All the "common sense" knowledge that we collect and take for granted and don't really think about have absolutely nothing to do with text. LLMs have no way of having all that, so yes, they have no "common sense". Again, because they do not understand "our" world on a fundamental level. Meta is working on giving AI more fundamental understanding. It's basically about abstraction.

I also like the point LeCun is implicitly making there: AIs will never be like human intelligence, simply because they never were human. They'll be their own thing, completely alien to how our minds work. I do personally think it's an engineering problem at this point, but that might just be me seeing every problem as a nail because I've swung hammers all my life. I'm convinced it can be done in principle, and I even believe faster than many would assume, if only because there is an immense amount of work happening on it right now. That's also why I don't believe anything has peaked; it hasn't really begun yet. I don't think the end result will be a human-comparable intelligence, but I can well imagine it will be a form of intelligence equal to or surpassing ours on objective metrics, to the point where the distinction is fundamental yet, for practical purposes, philosophical. There is absolutely no reason to believe for even a second that an intelligence like this would have motivations even remotely comprehensible to us, if any (things like the urge to dominate, and violence, really come from our genes, which are a product of evolution where those tendencies were useful for procreation - nothing that has any relevance to an AI or would make sense to add to one), so scenarios like the basilisk are just pseudointellectual, human-centric jerk-off sessions.
 
Roko's basilisk is an insane doomsday prophecy mostly stolen from *I Have No Mouth, and I Must Scream* and written up by a guy from a forum of enlightened atheist tech retards. It's never happening anyways.
It's a secular version of Pascal's wager, and it sounds like something a Christian apologist would come up with to show how "atheism is just like a religion with its unprovable claims and hypotheticals!" while completely missing the irony.
 
Roko's basilisk is an insane doomsday prophecy mostly stolen from *I Have No Mouth, and I Must Scream* and written up by a guy from a forum of enlightened atheist tech retards. It's never happening anyways.
Also if anyone ITT hasn't heard of "Harry Potter and the Methods of Rationality" they need to look it up immediately.
 
While it's not true that generative AI is falling apart, it has started to reach the point where adding more data seems to lead only to increasingly minor improvements in the model - to the point that it might soon hit a plateau, or at least a point where training larger models no longer makes sense financially.
Requiring exponentially more data to achieve linear performance improvements just means the sample data isn't stored efficiently.
It would only reach a plateau if no one finds a more efficient method.
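
That matches the published scaling-law work, too. Here's a toy sketch of a Chinchilla-style data term; the constants loosely follow the published fit, but treat them as illustrative rather than a statement about any particular model:

```python
# Toy Chinchilla-style data scaling term: loss(D) = E + B / D**beta.
E, B, beta = 1.7, 410.0, 0.28

def loss(tokens):
    return E + B / tokens ** beta

prev = None
for tokens in [1e9, 1e10, 1e11, 1e12, 1e13]:
    cur = loss(tokens)
    gain = "" if prev is None else f"  (gain {prev - cur:.3f})"
    print(f"{tokens:.0e} tokens -> loss {cur:.3f}{gain}")
    prev = cur
# Each 10x increase in data buys a smaller absolute improvement
# while the compute bill grows roughly 10x: diminishing returns.
```

A better architecture or better data effectively changes B and beta, which is why a plateau under one method doesn't have to mean a plateau for the field.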
 
Also if anyone ITT hasn't heard of "Harry Potter and the Methods of Rationality" they need to look it up immediately.
660 thousand words. Holy shit. 'Rational' niggers try not being batshit insane and egotistical challenge failed. Lesswrong is the forum that the basilisk meme started, right? Was that also Yudkowski?
It's a secular version of Pascal's wager, and it sounds like something a Christian apologist would come up with to show how "atheism is just like a religion with its unprovable claims and hypotheticals!" while completely missing the irony.
You can't shame the religious for being foolish in having faith in concepts without evidence and then turn right around and make your own apocalyptic prophecies, with the excuse that they totally have something to do with science, you guys, despite these also having no fucking evidence.
 
Surprisingly, there is a similar argument that does have some evidence behind it. While it's not true that generative AI is falling apart, it has started to reach the point where adding more data seems to lead only to increasingly minor improvements in the model - to the point that it might soon hit a plateau, or at least a point where training larger models no longer makes sense financially.
The research paper is here
I wouldn't put too much weight on it. What they are describing is a curve flattening into a plateau, and that happens with all technologies. Coal and steam reached a point of diminishing returns until electricity became practical to build mass infrastructure around. Then another technological boom happens, and then another flattening, until the next big thing. It is just the natural process of innovation, which the presenter touched on.

Literally any advance in computation, materials, or programming can restart the accelerated path forward. It's kind of like how lithium batteries are the bottleneck for efficient energy storage with solar- and wind-powered energy. Battery technology has stagnated, but there are projects that might solve this issue in the near future. When that happens, we'll be one step closer to individual power plants for buildings where appropriate. It would be amazing to equip all homes and residential facilities with battery backup that keeps the heater and other essentials for survival running until power is restored.
 
I use ProWritingAid, which has gotten incredibly powerful over the last 5 years. The features they've been adding lately have been impressive.
I haven't used ProWritingAid, but I have used Sudowrite. Is PWA something I can run locally, or does it require a connection to a proprietary server? Relatedly, I also integrated Whisper into my writing as a replacement for Dragon NaturallySpeaking: pair it with a local LLM or GPT to fix and format the transcription of the audio file from Whisper. Whisper-to-GPT is generally how I write everything nowadays. Super convenient to carry around an audio recorder and "write a book" while walking.
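
For anyone who wants to try it, something like this is all it takes. A minimal sketch: it assumes the open-source `openai-whisper` package and an OpenAI API key, and the model names, file name, and prompt are placeholders of my own - swap in a local LLM if you prefer:

```python
import whisper
from openai import OpenAI

def transcribe(audio_path: str) -> str:
    # Transcribe the recording locally with a small Whisper model.
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]

def clean_up(raw: str) -> str:
    # Ask an LLM to fix punctuation and formatting without rewriting.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Fix punctuation, casing, and paragraph breaks "
                        "in this dictated text. Do not change the wording."},
            {"role": "user", "content": raw},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(clean_up(transcribe("walk_notes.mp3")))
```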

Sudowrite

I use it to fix sentences I don't like.
I finished a 200K-word rough draft of a trilogy in less than a week with this tool. I'm looking forward to NaNo.
 