"I recommend you play them with Woof."
Might be a retarded question, but why Woof and not GZDoom or at least Nugget?
I like Doomkid. He makes really cool retro wads that are Boom compatible. I recommend you play them with Woof.
"He actually works in a leisure/sport center as a receptionist"
Dude's entire job is to give out passes and punch tickets, and he probably earns more than me. I doubt there are many Karens at Planet Fitness complaining about the colour of the weights or something like that. What's so stressful, issues with his body image? He's probably seen lard asses get fit in three months from behind that desk, so there's enough motivation to get him going. So what's the problem?
"my military-time-self. I mean just as self-loathing and egoistic, lost, angered as I was."
Wasn't there a dude calling you out for suicide baiting recently?
"There is this 'Doom Awards' that started last year to 'counter' the Cacowards by letting people vote for the wads instead of the KKKlub, but it's not really any different. Wads from the KKKlub still get the most votes, plus 'Doom Awards' is allegedly using AI to write huge texts, so... no, we really don't have any good alternatives."
That's unfortunate. I'm explicitly a fan of generative AI, but I've been moving away from LLMs lately because I don't like their output. Maybe they'll get better in time.
"Wasn't there a dude calling you out for suicide baiting recently?"
No.
"We do use AI-generated images"
Yeah, it shows. Wouldn't hurt to touch them up a bit or use better embeddings, because the hands are fucky and the eyes are weird. AI can do hands pretty well now!
"No."
Can't really judge by the out-of-context phrases, but you are angry all the time, so I asked.
Lemme ask, since when did the phrase "I'll achieve happiness or die trying" become suicide bait? What niggercattle logic do some of you autistic cunts use to read such a message into that phrase?
"Yeah, it shows. Wouldn't hurt to touch them up a bit or use better embeddings, because the hands are fucky and the eyes are weird. AI can do hands pretty well now!"
We're getting better about that, but I'm also going to get a cash injection in the next year or two from taking on a second job, so we'll likely have more commissions. The biggest criticism of our AI-generated images isn't hands or eyes; it's signage, where the text comes out as shitty gibberish and doesn't render right at all.
"And we may be willing to put new releases of Doomshit into print, where we just print some screenshots and the description of the WAD. But the thing is, I wouldn't want to rely on an LLM to generate a description of said WAD, because I literally CANNOT get that LLM to understand the context of the map unless I feed that information into it."
Very interesting, though the mind shudders: how do you even train this thing? What data do you feed it? Besides, the merits of such an AI seem minuscule, though the opposite direction seems interesting. Wonder if we can get it to do the reverse: generating maps from said descriptions. We have a rather large WAD database to feed it, so it shouldn't really be a problem in the near future.
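For what it's worth, the "feed that information into it" step doesn't have to be exotic. A minimal sketch, not anyone's actual pipeline: extract the map facts some other way, then hand the LLM a prompt that contains nothing but those facts. Every field name and number below is invented for illustration.

```python
# Hypothetical sketch: turn already-extracted map facts into the only context
# the LLM gets to see. All values here are made up, not from any real WAD.
map_facts = {
    "title": "MAP07: Foundry",
    "themes": ["techbase", "nukage"],
    "monsters": {"imp": 42, "mancubus": 6, "arachnotron": 4},
    "size_units": (4096, 3072),
}

def description_prompt(facts: dict) -> str:
    monster_list = ", ".join(f"{count} {name}s" for name, count in facts["monsters"].items())
    width, height = facts["size_units"]
    return (
        f"Write a two-sentence readme blurb for the Doom map '{facts['title']}'. "
        f"It is a {' / '.join(facts['themes'])} map, roughly {width}x{height} map units, "
        f"featuring {monster_list}. Do not invent details that are not listed above."
    )

print(description_prompt(map_facts))
```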
"Very interesting, though the mind shudders: how do you even train this thing? What data do you feed it? Besides, the merits of such an AI seem minuscule, though the opposite direction seems interesting. Wonder if we can get it to do the reverse: generating maps from said descriptions. We have a rather large WAD database to feed it, so it shouldn't really be a problem in the near future."
I've fine-tuned Llama 2 on TTRPG manuals. The merits of such an AI are in things that are relatively formulaic. For example, when writing a new setting in the Narration System we have, we need to come up with about 20-30 Proficiencies (such as Brawling, Lore, or Haggling; think Skills). There's a certain format they follow, where each one has an illustration, a scenario of the Proficiency in real use, a couple of paragraphs describing what it does and what it's about, and a description of different roll results and how to interpret them. We used an LLM to generate this text.
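For readers wondering what "fine-tuned on formulaic entries" looks like in practice, here's a rough sketch, not the poster's actual setup: most Llama 2 instruction-tuning scripts consume a JSONL file of instruction/output pairs, so the prep step is mostly reshaping the existing entries into that form. The field names and the example Proficiency below are made up, and the exact schema depends on whichever training script you use.

```python
import json

# Hypothetical existing entries pulled from the manuals; content invented here.
proficiencies = [
    {
        "name": "Haggling",
        "body": "Talking a merchant down from robbery to mere extortion...",
        "rolls": "Failure: the vendor is insulted. Partial: a small discount. Success: a bargain plus some gossip.",
    },
    # ...the other 20-30 Proficiencies would follow the same shape...
]

# One instruction/output pair per entry, written as JSONL.
with open("proficiency_finetune.jsonl", "w", encoding="utf-8") as f:
    for p in proficiencies:
        record = {
            "instruction": f"Write the rulebook entry for the Proficiency '{p['name']}'.",
            "output": f"{p['body']}\n\nRoll results: {p['rolls']}",
        }
        f.write(json.dumps(record) + "\n")
```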
"Can't really judge by the out-of-context phrases, but you are angry all the time, so I asked."
My brother in Kiwi Christ, are you aware that I'm Russian? If not, you are now.
"See, LLMs under the hood are text completion models with a dash of randomness thrown in."
I mean, isn't that how the brain processes information too? I have made some basic image recognition AI, and as far as I can see, the neural network pretty much does the same thing as the brain in terms of generating output. Most of the stuff we create is based on already-established narratives that we subconsciously consume, but I digress.
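That "dash of randomness" is essentially temperature sampling over the model's next-token scores. A toy sketch (the logits below are made up; a real model produces them over its whole vocabulary):

```python
import math
import random

def sample_next_token(logits: dict, temperature: float = 0.8) -> str:
    """Pick the next token from {token: logit}. Low temperature makes the
    completion near-deterministic; higher temperature adds the randomness."""
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    peak = max(scaled.values())                    # subtract max for numerical stability
    weights = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] / total for t in tokens], k=1)[0]

# Made-up scores for what might follow "the marine opened the door and saw a".
print(sample_next_token({" demon": 2.1, " imp": 1.7, " switch": 0.3, " cacodemon": 1.2}))
```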
"Here's a better use of generative AI for Doomshit: generating assets like textures and monsters. Fine-tune Stable Diffusion on 1990s-era FPS games in general, not just Doom, and then you can have it generate pixeltastic wall textures or something."
Actually a great idea, not sure why I haven't thought of that yet - very clever.
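A rough sketch of the generation half of that idea, assuming the diffusers library and the stock SD 1.5 checkpoint (the model id is only a placeholder, and the hypothetical fine-tune on 90s FPS screenshots would be a separate LoRA/DreamBooth job layered on top):

```python
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

# Placeholder checkpoint; swap in whatever fine-tuned model you end up with.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "seamless sci-fi metal wall texture, rusted panels, 1990s FPS game art",
    num_inference_steps=30,
).images[0]

# Crush the output down to a Doom-sized 128x128, then blow it back up with
# nearest-neighbour so the preview keeps hard, pixeltastic edges.
small = image.resize((128, 128), Image.Resampling.LANCZOS)
small.save("wall_texture_128.png")
small.resize((512, 512), Image.Resampling.NEAREST).save("wall_texture_preview.png")
```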
"I mean, isn't that how the brain processes information too? I have made some basic image recognition AI, and as far as I can see, the neural network pretty much does the same thing as the brain in terms of generating output. Most of the stuff we create is based on already-established narratives that we subconsciously consume, but I digress."
Neural networks do work similarly to the brain, yes; that's where they got their name from. It's a bit simplified, but yeah, you're on the right track. You could maybe also do monster, weapon, and projectile assets. I'm not sure if fine-tuning all of them in a single job would be better than doing them as separate fine-tunes, but that's for someone more interested in that experiment than me.
Still, our current best map generation systems are Oblige (which I think was based on SLIGE) and OBSIDIAN (which, as far as I understand, splices random room presets together). I haven't tested them yet, though I generally hear they are meh at best.
"how do you even train this thing?"
If you aren't a nerd, you take a Stable Diffusion GUI from Git, grab a bunch of pictures you want to train the AI on, write a text file with danbooru-style tags for said pictures, then press the "train" button in the SD GUI.
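For the curious, here's a minimal sketch of the dataset layout many SD trainers and training GUIs expect: one image plus a same-named .txt of comma-separated danbooru-style tags. The folder name and tags below are invented; check your particular trainer's docs for the exact format it wants.

```python
from pathlib import Path

# Hypothetical dataset folder and tag set.
dataset = Path("train/doom_textures")
base_tags = "doomtex, pixel art, wall texture, 1990s fps, seamless"

for img in sorted(dataset.glob("*.png")):
    caption = img.with_suffix(".txt")
    if not caption.exists():              # don't clobber hand-written tag files
        caption.write_text(base_tags + "\n", encoding="utf-8")
        print(f"tagged {img.name}")
```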
"What does everyone here think of his megawad Jenesis? I replayed it and while it looks decent to the eyes IMO, it likes to spam damaging floors like there's no tomorrow."
I played it when it was new, and I seem to recall thinking it was overall quite decent. Not perfect, but enjoyable, and it didn't feel like a waste of time, though my memory of it is kinda fuzzy by now. It did rely too much on hurtfloors; that part I still remember. I enjoyed it more than I enjoyed MyHouse.wad.
"He's bri*ish Doomkid"
Has he had the same kind of deranged freakouts as Doomkid?
If he's reading this, he should know that online dating services are actually a really horrible way to meet people.
"Has he had the same kind of deranged freakouts as Doomkid?"
Does the attempt to come out to Dragonfly count? Oh wait, we live in 2024, never mind, carry on...
"Does the attempt to come out to Dragonfly count? Oh wait, we live in 2024, never mind, carry on..."
Dragonfly was engaged then, no way.
"If you aren't a nerd, you take a Stable Diffusion GUI from Git, grab a bunch of pictures you want to train the AI on, write a text file with danbooru-style tags for said pictures, then press the "train" button in the SD GUI."
Sure, but how can we apply this to Doom WADs? How can we instruct the AI to parse the WAD and derive context from its data? Or maybe the WAD must first be parsed and then have specific sections of it fed to said AI? Either way, sounds painful.
"Sure, but how can we apply this to Doom WADs? How can we instruct the AI to parse the WAD and derive context from its data? Or maybe the WAD must first be parsed and then have specific sections of it fed to said AI? Either way, sounds painful."
Well, in an ideal world you could just feed whatever learning algorithm you're using the contents of the LINEDEFS, SIDEDEFS, THINGS and SECTORS lumps of a limit-removing map and let it run for a while searching for patterns. It's hard: you need to write custom code, and the data is noisy. In a less-than-ideal world you can grab UDB, screenshot map layouts, and feed those to Stable Diffusion so it can gen you a map or a room layout. This is easy, funny, and guaranteed to make map makers seethe.
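The "write custom code" half is less scary than it sounds, at least for getting the lumps out. A minimal sketch that pulls the vanilla-format map lumps out of a WAD with nothing but the standard library (lump layouts per the original Doom specs; the map name and file name are just example placeholders):

```python
import struct

# Lumps that make up one map in a vanilla/limit-removing WAD, in directory order.
ALL_MAP_LUMPS = {"THINGS", "LINEDEFS", "SIDEDEFS", "VERTEXES", "SEGS",
                 "SSECTORS", "NODES", "SECTORS", "REJECT", "BLOCKMAP"}
WANTED = {"THINGS", "LINEDEFS", "SIDEDEFS", "VERTEXES", "SECTORS"}

def read_directory(path):
    """WAD header is magic, lump count, directory offset; each directory entry
    is offset, size, and an 8-byte null-padded name."""
    with open(path, "rb") as f:
        magic, numlumps, diroffs = struct.unpack("<4sii", f.read(12))
        assert magic in (b"IWAD", b"PWAD"), "not a WAD file"
        f.seek(diroffs)
        return [struct.unpack("<ii8s", f.read(16)) for _ in range(numlumps)]

def map_lumps(path, map_name="MAP01"):
    """Return {lump_name: raw bytes} for the wanted lumps after the map marker."""
    lumps, in_map = {}, False
    with open(path, "rb") as f:
        for offset, size, raw_name in read_directory(path):
            name = raw_name.rstrip(b"\0").decode("ascii", "ignore")
            if name == map_name:
                in_map = True
            elif in_map and name not in ALL_MAP_LUMPS:
                break                                   # left this map's block
            elif in_map and name in WANTED:
                f.seek(offset)
                lumps[name] = f.read(size)
    return lumps

def parse_linedefs(raw):
    # Vanilla linedef: start/end vertex, flags, special, tag, front/back sidedef (7 uint16).
    return [struct.unpack("<7H", raw[i:i + 14]) for i in range(0, len(raw), 14)]

def parse_vertexes(raw):
    # Vertex: signed 16-bit x and y map coordinates.
    return [struct.unpack("<2h", raw[i:i + 4]) for i in range(0, len(raw), 4)]

lumps = map_lumps("doomshit.wad", "MAP01")              # hypothetical file name
print(len(parse_linedefs(lumps["LINEDEFS"])), "linedefs,",
      len(parse_vertexes(lumps["VERTEXES"])), "vertexes")
```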