AI Art Seething General



>sorry chud, we just made the arbitrary decision that gender is a social construct, so over the course of the next 5 years, we're going to upend thousands of years of civilization by completely destroying the foundational concept of biological gender. Also we will normalize raping children with puberty blockers. If you don't accept this you are just a bigot who can't accept change.

>NOOOOOOO AI CAN'T GENERATE VIDEOS THAT IS TOO SCARY IT WILL CHANGE SOCIETY TOO MUCH NOOOOOOOO BAN IT BAN IT NOOOOO
 
While we don't have much in terms of AI-generated writing (as far as I know), we do have discussions on it, and it's certainly popular as a pastime outside of the forum, perhaps limited simply by being more difficult (for now).
AI writing is less popular because you need exceptionally good input if you want better results—it's suited for writers and copy editors who are willing to put in the effort, which can't be said for image generation
 
AI writing is less popular because you need exceptionally good input if you want better results
For the commercially available bots like Bing Copilot, Bard, and especially the piece of shit ChatGPT, yes, you spend the majority of your time tardwrangling the bot, to the point where you are almost better off writing the stuff yourself, but with models you can pull from Hugging Face it is a whole different ball game. I have a post here that explains how to set up your own LLMs, and with Mistral's models you can easily get the same type of deal as with ChatGPT 3.5/4 before it was enshittified by the woke inspectors at OpenAI. Those models are actually really, really good, but most people don't know about them.
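One practical tip for anyone who skips the setup post: local instruct models expect a specific chat template, and getting it wrong is the most common cause of garbage output. Here's a minimal sketch of the `[INST]` format Mistral's instruct models use (format taken from the model card; double-check it against whatever checkpoint you actually download):

```python
def build_mistral_prompt(turns):
    """Build a prompt in the [INST] chat format used by Mistral's
    instruct models. `turns` is a list of (user, assistant) pairs;
    pass None as the assistant reply for the turn being generated."""
    parts = ["<s>"]
    for user, assistant in turns:
        parts.append(f"[INST] {user} [/INST]")
        if assistant is not None:
            parts.append(f" {assistant}</s>")
    return "".join(parts)

# Single-turn prompt, ready to tokenize and feed to the model:
prompt = build_mistral_prompt([("Tighten this paragraph: ...", None)])
# -> "<s>[INST] Tighten this paragraph: ... [/INST]"
```

Most loaders (llama.cpp, text-generation-webui) can apply the template for you, but knowing what it looks like makes it obvious when a frontend is feeding the model raw text and getting lobotomized answers back.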
 
For the commercially available bots like Bing Copilot, Bard, and especially the piece of shit ChatGPT, yes, you spend the majority of your time tardwrangling the bot, to the point where you are almost better off writing the stuff yourself, but with models you can pull from Hugging Face it is a whole different ball game.
i was talking about novelai but koboldai models are much worse than anything commercially available
if i had to choose between a moral-policing but high-quality AI and a low-quality but uncensored AI, i'd choose the former
 
if i had to choose between a moral-policing but high-quality AI and a low-quality but uncensored AI i'd choose the former
There never really was an uncensored LLM before Mistral's latest model; they were all derivatives of LLaMA. Yes, you can "train away" the censorship to some extent, but only to an extent, and if you take it too far, the whole model seizes up and becomes completely unusable. In fact, if you try to "uncensor" a model, you more often than not end up lowering its quality, which you correctly observed. The reverse is true as well: further censoring a model after it has already been trained also results in total enshittification, which is precisely what is happening with OpenAI's ChatGPT, and why it has had such a dramatic dip in quality recently.

Mistral's model is not based on LLaMA (what you call "koboldai", I believe), being trained from the ground up with the same data set that was used for training ChatGPT 4. Completely clean slate. You can even remove the wrongthink guardrails. I have already used it and for the vast majority of applications it definitely is on par with ChatGPT 3.5/4, although funnily enough I did not have as good success with editing creative writing as with other models. However, because we finally have an actually uncensored LLM and it's open-source, we just have to wait for some clever boys to fine-tune the model to work better with certain applications.
 
AI writing is less popular because you need exceptionally good input if you want better results—it's suited for writers and copy editors who are willing to put in the effort, which can't be said for image generation
Right. I think the difficulty in getting high-quality output is certainly one of the reasons, along with, ironically, the higher complexity of a text-generation model versus a model generating images (although the latter relies on the same modeling of language as the former to understand concepts).
For the commercially available bots like Bing Copilot, Bard, and especially the piece of shit ChatGPT, yes, you spend the majority of your time tardwrangling the bot, to the point where you are almost better off writing the stuff yourself, but with models you can pull from Hugging Face it is a whole different ball game. I have a post here that explains how to set up your own LLMs, and with Mistral's models you can easily get the same type of deal as with ChatGPT 3.5/4 before it was enshittified by the woke inspectors at OpenAI. Those models are actually really, really good, but most people don't know about them.
I've found that with a decent enough jailbreak, I've avoided a lot of the "As a large language model..." handwringing over content from things like Claude or GPT-4, but it's likely highly dependent on the task and the model itself. Running locally will always be superior in that regard, letting you make whatever you want, and I have no doubt it'll be the same for video generation in the future. I'd love to see Stability AI go and toss out a video-generation model ahead of OpenAI making Sora available to the wider public, but I imagine they're a bit behind in terms of having it ready to go.

Here's the section on Safety from OpenAI's Sora announcement, for those curious about how this thing's going to be lobotomized with an icepick "made safe" before you get your hands on it:

Safety

We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model.
We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora. We plan to include C2PA metadata in the future if we deploy the model in an OpenAI product.
In addition to us developing new techniques to prepare for deployment, we’re leveraging the existing safety methods that we built for our products that use DALL·E 3, which are applicable to Sora as well.
For example, once in an OpenAI product, our text classifier will check and reject text input prompts that are in violation of our usage policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others. We’ve also developed robust image classifiers that are used to review the frames of every video generated to help ensure that it adheres to our usage policies, before it’s shown to the user.
We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.
I'm expecting a fun back-and-forth between the code monkeys and the internet when it comes to making some forms of jailbreaks for the text classifier, same as with DALL-E and Bing and whatnot.
 
However, because we finally have an actually uncensored LLM and it's open-source, we just have to wait for some clever boys to fine-tune the model to work better with certain applications.
the problem with crowdsourced development is that nobody has a financial incentive to cater to the users, so new features take more time to be implemented than with commercially funded development
 
new features take more time to be implemented than with commercially funded development
Not really true in this case. In most cases, yes, it is true, but in the case of LLMs it is the "crowdsourced development" that mogged Google, Microsoft, Meta, and every other major company working on LLMs. For a long time these companies were at an impasse with LLMs, unable to make them produce more accurate output, and the models ran highly inefficiently, requiring orders of magnitude more resources than they do now. Progress had stalled.

Then LLaMA got leaked, and within a week someone was able to run it on their Google Pixel 6. It turns out that when you stop relying on the world's largest data centres as a crutch to do all the hard work for you, and start thinking about how to actually make these things run on limited hardware, you end up with progress. The same happened with Stable Diffusion when it leaked: Stable Diffusion was pretty mid compared to DALL-E and the alternatives, but users were able to make it better within a similarly short timespan. DALL-E and the alternatives are again better for most applications than Stable Diffusion and Stable Diffusion XL, but I don't doubt that's largely due to the research regular people have already done for everyone.
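For what it's worth, the biggest single trick behind the "LLaMA on a Pixel" stunt was quantization: squashing 16/32-bit float weights down to a few bits each so the whole model fits in a phone's RAM. A toy sketch of the principle (symmetric 8-bit per tensor, nothing like llama.cpp's actual grouped 4-bit schemes, just the idea):

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: store one float scale per tensor
    plus one signed byte per weight, instead of 4 bytes per weight."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# Each weight is recovered to within one quantization step (scale):
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

Real implementations quantize per block of weights rather than per tensor, which keeps the rounding error down, but the memory math is the same: roughly one byte (or half a byte, at 4-bit) per weight instead of four.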
 
Obviously nothing precludes someone from making art for art's sake, but I imagine there is a certain kind of spiritual fulfillment and satisfaction in having your work become popular and appreciated by thousands of others across the world, and that opportunity will be effectively gone soon.
I addressed this earlier; to quote myself:

A lot of Twitter artists are in incredible denial that the greatest determinant of success in art in terms of popularity is who you are or who you know that is willing to promote you.

That 100 days thing would not be even a blip if it was someone else doing it. The man receives constant encouragement, with such encouragement being very helpful to an artist to at least continue.

That opportunity is already mostly gone due to the masses being ADD-addled by the Algorithm and similar things. Oftentimes a person has to already be established, be a drone who works constantly, draw coom art, draw fanart of Flavor of the Month, or some combination of the above, to get noticed. In the case of Twitter, buying Twitter Blue helps too.

@Mr.Miyagi is more or less correct, but I would take a more negative bent and say nothing will change.
 
video evidence is fucked.
Good.
If people are falling for AI-generated videos and images, that implies the average Twitter user takes things at face value. Therefore, "Sora" shouldn't be blamed. This isn't an AI issue; it's a human issue.
However, I think Sora is a double-edged sword, because REAL VIDEOS could be dismissed as AI-generated.
There are many elections this year in many countries. Political debate will be ruined by people who fall for AI-generated videos and by the bad actors who make them.
Also, what guarantees the feds don't already have GPT-8 or something like a Sora 2?
 
Good.
If people are falling for AI-generated videos and images, that implies the average Twitter user takes things at face value. Therefore, "Sora" shouldn't be blamed. This isn't an AI issue; it's a human issue.
However, I think Sora is a double-edged sword, because REAL VIDEOS could be dismissed as AI-generated.
There are many elections this year in many countries. Political debate will be ruined by people who fall for AI-generated videos and by the bad actors who make them.
Also, what guarantees the feds don't already have GPT-8 or something like a Sora 2?


All credibility should be fucked. It'll be the death of celebrity, and all sorts of fun stuff.

Also the Sora seething has been kino. They can't complain it's bad anymore so they have to say just ban cuz my misinformation.
 
Any artist who kills themselves over these advancements was, realistically, not going to make it into the .01% of artists anyway.
I couldn't agree more. And any artist, especially the hobbyists, who only does art to show it to others and doesn't enjoy the craft of creating wasn't that serious about it anyway. I understand if professional artists get burnt out, since it's their job, but if you are just someone who enjoys drawing or painting, I don't see how AI could take that pleasure away.
 
One other thing about the artfag Twitter response to all the Sora videos: remember that the Tumblr artists ran en masse to Xitter a few years back which is why the platform shat itself so badly in the first place. Of fucking course if you go to Xitter to see the response to AI art, you're going to see seething and hate because all the they/thems and was/weres are on there. Same thing with ResetERA which Stalinistically purged like 40% of its own userbase over the years and won't even let you join without your work email (so you know, if the admins don't like your wrongthink, they can bring it up with your boss).
These are the same people who are journalists and creatives in places like video game HR departments. Of fucking course they're terrified by all this and trying to appeal to your emotions (the only thing they understand).
 
independent creator empowering
Can you really call yourself a creator if the computer did all the work for you? YouTube is gonna be flooded with shit from 'idea guys' who were filtered out of the creative field by the actual creating part. Look at Shadiversity, a guy who wants to be an artist but hates drawing, and thinks other artists are pretending to enjoy it.

Sora might be a good tool to help artists, but let's be honest, it's just gonna be used to make spicy memes. Then it will be shut down or regulated because people were using it 'wrong' and having too much fun.
 
They can't complain it's bad anymore so they have to say just ban cuz my misinformation.
And you know none of them care about what crimes may be committed with it. Hell, they're such shut-in they/them children they think all it takes is video evidence to convince a court of a crime. They don't care about 'misinformation' when they parrot what someone else says without AI. They don't care that videos/photos get staged without AI. They don't care about revenge porn, because their attention whoring drives them to share all their info with strangers online anyway.
Most of these arguments can be dismissed if these retards learn, better late than never, that the internet is not their 'safe space'. And stranger danger should absolutely return as a mindset.
 
Can you really call yourself a creator if the computer did all the work for you?
Yes. Without you the computer would have done nothing at all.
If you follow this line of argument to its inevitable conclusion, you have to concede that digital artists aren't creators as much as traditional artists are, because they have a computer helping them.
Take photography, for instance. You didn't create the photograph; the camera did. But you took the shot. You aligned it, shifted the focus, and decided when to shoot. It still has your fingerprints all over it, because you put in the effort to make it the way you wanted.
The amount of machinery between the creator and the creation is irrelevant if you put in the time and effort to guide your idea through all of it.

Imagine this: if you wrote a novella's worth of prompts, kept generating and regenerating, and along the way delicately adjusted your prompt so that through trial and error it came out looking exactly as you imagined, and you spent hours and hours on this, did you really create it?

Just because most people are lazy with a technology doesn't mean the technology itself is only for lazy use.
 