Stable Diffusion, NovelAI, Machine Learning Art - AI art generation discussion and image dump

Since LLMs took off, I've seen several posts in various places that basically amounted to "I have a ton of chat logs, how can I resurrect my dead friend/spouse/family member with this?" It's kind of ghastly and probably unhealthy, and as soon as this stuff works better, people really are going to do it. Hollywood's probably already taking notes. James Earl Jones, the voice of Darth Vader, died recently, and AFAIK he signed over the rights, I think to Disney, to reproduce his voice with tech like this, forever (don't remember the details, but something like that).
Another way to think of this: with a sufficiently sophisticated model of you, your digital replica can be identifiably you, except digital and able to reason and interact at a far higher rate than the real you. Feed it an order of magnitude more information about yourself and you can be immortal.
 
Before the LLMs, before the Black Mirror episodes, there was... futurist Ray Kurzweil:

Futurist Ray Kurzweil Says He Can Bring His Dead Father Back to Life Through a Computer Avatar (archive) - August 2011
Kurzweil's father, an orchestra conductor, has been gone for more than 40 years.

However, the 63-year-old inventor has been gathering boxes of letters, documents and photos in his Newton, Mass., home with the hopes of one day being able to create an avatar, or a virtual computer replica, of his late father. The avatar will be programmed to know everything about Kurzweil's father's past, and will think like his father used to, if all goes according to plan.
 
I was gonna try fine-tuning Llama on my wife's text messages, just to learn how to do it all. I never got around to it, but yeah, if she died it would be strange and I'd probably delete it. Pretty sure there are services out there already where you can plug in your or your loved one's social media accounts and it creates a bot based on their responses.
 
The Chinese are really into it, some kind of warped filial piety (孝顺), smdh:

 
What would you all use to train Flux.1, especially the Dev version, on Ubuntu Linux? And what steps do you use to train or fine-tune your Flux models?
I've been using https://github.com/kohya-ss/sd-scripts.git and the "sd3" branch (which supports both sd3 and flux) to make flux LoRAs. My early results have been very promising. I use ComfyUI for inference and I've found that setting LoRA strength > 1.0 is actually a good idea and helps a lot. It only starts to break down past 3.0.
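For context on why strength > 1.0 is a meaningful knob: at inference, a LoRA is just a low-rank delta added to each base weight matrix, and "strength" linearly scales that delta. A minimal sketch of the math (function name and the optional `alpha` rescaling are illustrative, not any specific tool's API):

```python
import numpy as np

def apply_lora(W, A, B, strength=1.0, alpha=None):
    """Merge a LoRA delta into a base weight matrix.

    W: (out, in) base weight; A: (r, in) and B: (out, r) low-rank factors.
    The delta B @ A is scaled by strength * (alpha / r) -- strength is the
    same knob UIs like ComfyUI expose as "LoRA strength".
    """
    r = A.shape[0]
    scale = strength * ((alpha if alpha is not None else r) / r)
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
A = rng.standard_normal((2, 8))   # rank-2 adapter
B = rng.standard_normal((8, 2))

W1 = apply_lora(W, A, B, strength=1.0)
W3 = apply_lora(W, A, B, strength=3.0)
# The applied delta grows linearly with strength:
assert np.allclose(W3 - W, 3 * (W1 - W))
```

That linearity is why cranking strength past 1.0 keeps amplifying the LoRA's effect smoothly for a while, then eventually pushes the weights far enough from the base model that outputs fall apart.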
 
I just found this site today, and I'm having so much fun with it.




 
There’s a 4chan threadshot generator on Glif.app and it is fucking hilarious.


Also, Job's Finished, lol:

 
There's a trend on /v/ of feeding a poorly described video game prompt into a video generator and then posting the result. It ranges from "fat Italian man with a Brand™ mustache jumping out of a painting with a star" to "that girl in Resident Evil clothing is running into and inside a dog and transforms into a car".

Give it three years and they'll produce proper videos.
 
This C++/C implementation will let you run Stable Diffusion and Flux on any GPU that supports Vulkan (most GPUs from roughly the last decade), as well as on CPU: https://github.com/leejet/stable-diffusion.cpp
This is because it uses ggml, the same ML library that llama.cpp is built on. So now poorfags with older AMD GPUs have an option.
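For anyone wanting to try it, the build-and-run flow looks roughly like this. This is a sketch from memory, not gospel: the CMake switch name and CLI flags may differ between revisions, so check the repo's README before copying.

```shell
# Clone with submodules (ggml is vendored) and build the Vulkan backend.
# Assumption: SD_VULKAN is the relevant CMake option in current revisions.
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
cmake -B build -DSD_VULKAN=ON
cmake --build build --config Release

# Run inference: -m is the model checkpoint, -p the prompt, -o the output.
./build/bin/sd -m models/sd-v1-5.safetensors \
    -p "a photo of a cat wearing a wizard hat" \
    -o output.png --steps 20
```

Leaving off `-DSD_VULKAN=ON` gives you the plain CPU build, which is the fallback for cards with no usable Vulkan driver.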
Well, that's nice. When it first came out, people said you needed a 4080 or 4090, but I only have a 3080. Might try it later.
 
Whether or not you can run it depends on the model you use, the amount of VRAM your card has, and whether or not your card supports 16-bit floating point operations.
Before I heard of the C++ implementation, I was able to run SD 1.5-based models (5-7GB VRAM for 16-bit floats) on a GTX 1070 with 8GB. Your GPU should be good enough to run SD 1.5, as well as some of the larger models too.
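The 16-bit point matters because weight storage alone sets the VRAM floor. A back-of-envelope calculation (the parameter counts below are rough assumptions, not exact figures):

```python
def fp16_weight_gib(n_params: float) -> float:
    """Rough VRAM needed just to hold model weights at 16-bit precision."""
    return n_params * 2 / 2**30  # 2 bytes per parameter

# Approximate parameter counts -- assumptions for illustration:
SD15_PARAMS = 1.0e9    # SD 1.5: UNet + text encoder + VAE, ~1B total
FLUX_PARAMS = 12.0e9   # Flux.1-dev transformer, ~12B

print(f"SD 1.5 weights: ~{fp16_weight_gib(SD15_PARAMS):.1f} GiB")
print(f"Flux.1 weights: ~{fp16_weight_gib(FLUX_PARAMS):.1f} GiB")
```

Actual usage runs higher than the weight count alone, since activations, attention buffers, and the sampler's working memory come on top; that overhead is what pushes SD 1.5 into the 5-7 GB range quoted above, and why Flux needs quantization or RAM offloading on consumer cards.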
 
Oh yes, I've done a lot of playing around with SD 1.5 and some of the newer models.
Trained a hypernetwork back in the old days before LoRAs came out. Not sure exactly how to make those, but I think it's a similar process, just small weight deltas instead of one giant file. Did some animations too, with scripts and changing things slowly, before any mainstream video release existed. Every now and then I come back and dabble. I even got one of the smaller Meta chatbots running, but it's so shit compared to just using their site with the gigachad model, which I hate, because I want to do everything locally.
 
I have a 3070, and Flux runs after setting it up via ComfyUI. There's a "--lowvram" flag that lets it use system memory as well; it seems to suck up to 30 GB of extra RAM while running. Slow, but it works.
I also had SDXL set up before, and it worked too.
 
Ah, you meant training. stable-diffusion.cpp is only capable of inference right now, sadly.
 
>AI replaces artists
>I sleep
>AI replaces outsourced customer service
>I sleep
>AI is now replacing shitposters

REEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE! This is evil we have to stop it!

Also too bad it requires Google or Discord. It can fuck off.
 
That's actually a great prompt.

A shitposter is angry that he is going to be replaced with AI, and rants away in the OP. Meanwhile, in the replies, people discuss the Dead Internet Theory.


Damn. This is kinda fucked up.
 
you can be immortal
But it wouldn't be me; it'd be a digital alter ego, and the moment our experiences diverged, we'd probably grow into very different persons. Same with people "brought back" with this tech. Kinda reminds me of Neuromancer, actually.

futurist Ray Kurzweil
These people struggle with the truth that nothing is forever. It sucks, it truly does, especially as you get older and the realization sinks in that you don't have time for everything you'd like to do and experience, because you simply won't live that long. Nonetheless, this isn't the way. Our society doesn't have a healthy relationship with aging and death; we see it as a problem to be solved at best and an unpleasant reminder to be shunned at worst. We just cannot let go of things, and a lot of this stuff comes from that.

Pretty sure there’s services out there already where you can plug in yours or your loved one’s social media accounts and it creates a bot based off their responses.
The "it's free real estate" meme just with personal data.
---

It still takes some skill to get actually good LLM outputs, and probably always will to some degree with the current iteration of the tech. Every time you see an AI toy whose outputs are actually fun and kinda uncanny, it's because it was properly set up and not prompted by some ESL grifter. Exceedingly rare, as those types are drawn to the current hype like moths to a flame.
 
But it wouldn't be me, it'd be a digital alter ego and the moment our experiences diverge, we'd also grow to be probably very different persons. Same with people "brought back" with this tech. Kinda reminds me of Neuromancer, actually.
Train the AI to replicate you perfectly, then kill yourself. Without new experiences you won't diverge, and in terms of "is it still you?", turning the AI on would be no different from waking up having forgotten the five minutes before you fell asleep. The only thing holding you back from living forever as an AI is squeamishness about killing your biological copy.

If I was confident the AI was sufficiently well trained, and able to develop from its experiences the way a human would, I’d do it. Immortality is worth whatever sacrifices it requires.
 