Weren't a lot of the overworld and dungeons in Oblivion "computer generated" automatically like this? Just a really shit version of modern AI
I don't know if Oblivion was, but Daggerfall was procedurally generated and was massive compared to anything Bethesda has done since in terms of an explorable game world (and it also had various POIs set in place). Granted, a lot of it was empty nothing. Hell, Minecraft uses procedural generation. It's kind of a shit version of modern AI, but not really.
Most of the overworld was procedurally generated. The dungeons, to my knowledge, weren't, but they all look the same anyway because there was only like one person designing them all.
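For what it's worth, procedural generation of that era is just fixed rules plus a random seed, with no training data or model involved, which is why it only superficially resembles modern generative AI. A minimal, purely illustrative sketch (a drunkard's-walk dungeon carver, not a reconstruction of Bethesda's actual tooling):

```python
import random

def generate_dungeon(seed: int, width: int = 16, height: int = 8) -> list[str]:
    """Carve a tiny grid 'dungeon' from nothing but a seed and fixed rules.

    The same seed produces the same layout every time: deterministic rules,
    no learning, which is the key difference from modern generative AI.
    """
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]

    # Drunkard's walk: start in the middle, wander, knock out walls as you go.
    x, y = width // 2, height // 2
    for _ in range(width * height * 2):
        grid[y][x] = "."
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = max(1, min(width - 2, x + dx))
        y = max(1, min(height - 2, y + dy))

    return ["".join(row) for row in grid]

if __name__ == "__main__":
    for row in generate_dungeon(seed=42):
        print(row)
```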
"If people get smart with the AI, it might in the end create more jobs. The main issue is that the machine won't just work by itself, and if it does, it will create nonsense. While I do appreciate people pointing out blatantly bad uses of AI, sadly a lot of the criticism against AI isn't very informed. People are only now getting concerned about computers taking away jobs, while this shit has been happening for over a decade. I'm guessing that's mostly because the current rage against AI is led by Twitter and Reddit, so nothing good or productive will come out of it."
Hell, localization of games is going to end up much easier for smaller studios. You'll obviously still want a human proofreading it, and there will be some fixes needed for whatever in-game words a developer uses to describe things in their game world, but in the long term this means more possible sales for smaller studios thanks to a broader market to sell into at a reduced localization cost, which means more income to develop more games, which means more game devs, and so on.
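To make the "AI drafts, human proofreads" workflow concrete, here's a hypothetical batch-localization sketch: a per-game glossary pins down made-up in-game terms, and nothing ships until a human flips the approval flag. `llm_translate`, the `Draft` class, and the glossary contents are all illustrative placeholders, not any studio's real pipeline.

```python
# Hypothetical batch-localization sketch: the LLM drafts, a human signs off.
from dataclasses import dataclass

@dataclass
class Draft:
    key: str
    source: str
    translation: str
    approved: bool = False   # flipped only by a human reviewer

def llm_translate(text: str, target_lang: str, glossary: list[str]) -> str:
    """Stand-in for a real model call; a real version would send `prompt`
    to whatever LLM the studio uses and return its reply."""
    prompt = (
        f"Translate the following game text into {target_lang}. "
        f"Keep these in-game terms exactly as written: {', '.join(glossary)}.\n\n{text}"
    )
    return f"[{target_lang} draft pending model call] {text}"

def localize(strings: dict[str, str], target_lang: str, glossary: list[str]) -> list[Draft]:
    # Every string gets a machine draft; none of them ship until reviewed.
    return [Draft(k, s, llm_translate(s, target_lang, glossary)) for k, s in strings.items()]

if __name__ == "__main__":
    strings = {"intro_01": "Welcome, traveler, to Sunspire Keep."}   # made-up example line
    for d in localize(strings, "German", glossary=["Sunspire Keep"]):
        print(d.key, "->", d.translation, "| approved:", d.approved)
```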
"I didn't even mention visual novels in the first place! I specifically mentioned ADDITIONAL REACTIVITY FOR ROLE PLAYING GAMES! VNS AIN'T IT CHIEF!"
Not happening.
"Why the hell are you talking about visual novels? The usage would be something like the 'radiant AI, offering infinite quests' that Skyrim tried (and failed) to deliver, or more dynamic dialogue options in something like Baldur's Gate or Rogue Trader."
You'll need a monster GPU unless it's pregenerated.
What the fuck are you talking about? Even a 4060 ($300) comes with 8GB now, and that is not a "monster GPU". Even a damn Radeon 7600 base model ($270) has 8GB. No one is talking about visual novels but you, and no one is talking about a "monster GPU" but you, who apparently thinks the amount of VRAM included on the lowest-end models from both Nvidia and AMD is somehow unobtainable. Even the damn Intel Arc GPUs have 8 and 16GB once you get to the 700 series ($250 for the A750), which is still at the bottom end of current GPU lineups.
So you're going to somehow magically make Skyrim use 0GB of VRAM and dedicate it all to your LLM. Nice one.
Skyrim's recommended GPU requirement (and remember that game is over 12 years old now) was only 1GB, with the specifically recommended card being a fucking GTX 460, and the minimum requirement was only 512MB. What potato of a gaming computer are you on that you haven't updated your shit in over a decade?
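For a rough sense of the actual budget being argued about, here's some back-of-the-envelope arithmetic with openly assumed numbers (a ~7B-parameter model quantized to 4 bits, a generous allowance for vanilla Skyrim, an 8GB card); none of this reflects any specific shipping setup.

```python
# Back-of-the-envelope VRAM budget; every figure here is an assumption.
GPU_VRAM_GB = 8.0        # entry-level card, e.g. RTX 4060 / RX 7600 class
GAME_VRAM_GB = 1.5       # generous allowance for vanilla Skyrim (1GB recommended spec)

# Assumed local model: ~7B parameters with 4-bit quantized weights.
params_billions = 7
bytes_per_param = 0.5                                   # 4 bits per weight
weights_gb = params_billions * bytes_per_param          # ≈ 3.5 GB
cache_and_overhead_gb = 1.0                             # rough allowance for context/KV cache

llm_total_gb = weights_gb + cache_and_overhead_gb
headroom_gb = GPU_VRAM_GB - GAME_VRAM_GB - llm_total_gb

print(f"LLM ≈ {llm_total_gb:.1f} GB, game ≈ {GAME_VRAM_GB:.1f} GB, "
      f"headroom ≈ {headroom_gb:.1f} GB of {GPU_VRAM_GB:.0f} GB")
# ≈ 2.0 GB of headroom under these assumptions; heavy texture mods or a
# larger model would eat that quickly, which is the other side's point.
```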
So you can't use any graphics mods. You clearly didn't even read the article you linked, since it's mostly just AI TTS. The few LLMs that are there are cloud-based.
Also, these mods already exist for Skyrim.
I never said there weren't cloud-based mods using an API. In fact, no one said this, much like how no one other than you brought up visual novels. But since that specific portion of the discussion was about RPGs, they don't necessarily need "zero" latency. It's an RPG, not a first-person shooter.
"The memory and processing demands of LLMs are falling fast as demand is motivating optimizations."
It also doesn't necessarily mean relying on a live LLM to handle better dialogue options in something like an RPG (or other genres, for that matter). An LLM could just be used to generate the dialogue trees for a game and have those already done, without needing access to a cloud service or running the LLM on the player's machine. Much like how AI can be used to generate concepts and reference material to speed up development, it can be done with dialogue as well, whether for random background NPC chatter or for actual interactive dialogue options.
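A sketch of that "bake it at build time" idea: the model runs on the developer's machine during content authoring, and what ships is an ordinary static dialogue file. `draft_npc_lines` is a placeholder for whatever model or API a studio might use; nothing below runs on the player's hardware.

```python
# Build-time dialogue baking sketch: the LLM is a dev-side tool, not a runtime dependency.
import json

def draft_npc_lines(npc: str, topic: str, n: int = 3) -> list[str]:
    """Placeholder for a real model call made once, on the developer's machine."""
    return [f"({npc} line {i + 1} about {topic}, pending model call)" for i in range(n)]

def bake_dialogue_tree(npc: str, topics: list[str]) -> dict:
    """Generate a plain dialogue tree offline; writers edit and prune it afterwards."""
    return {"npc": npc, "topics": {t: draft_npc_lines(npc, t) for t in topics}}

if __name__ == "__main__":
    tree = bake_dialogue_tree("Innkeeper", ["rumors", "rooms", "local history"])
    # The shipped game only ever loads this static file: no cloud calls, no local LLM.
    with open("innkeeper_dialogue.json", "w") as f:
        json.dump(tree, f, ensure_ascii=False, indent=2)
```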
We're going to see more and more use of generative AI to produce assets. Even if the big lawsuits about scraping content don't go OpenAI's way, it's just a bump in the road. Anyone with large asset libraries will still be able to train neural networks on them.
I wanted to say that I am not entirely sure localization by AI is a good idea, since the machine often doesn't understand the context and complexities two different languages might have. But then most human localizers don't really care about that either, or they try to push something of their own into it. However, there are already rephrasing tools online that I figure are using something like what you brought up.
The relative handful of morons in the industry of anime and Japanese game translation to English crying about losing their jobs to AI can go fuck themselves. They made their beds by doing a bad job, and the sooner they're gone, the better. Having an AI model do 90% of the work isn't going to mean eliminating 90% of the jobs. It would mean potentially getting 10 times the original content translated (more movies, TV shows, books, manga, etc.), since the reduced cost opens translation/localization services up to a larger business customer base, which in turn grows their potential consumer base.
Hence the human is still needed to do the proofreading (especially with the goofy nonsense verbiage found in something like a JRPG). Even then, just within the past few months these models have gotten far better at translation, including the issues you mentioned (they still need to be supervised, as any AI let loose on its own tends to get very dumb, very quickly). There have even been some Japanese YouTube creators using ChatGPT-4 to translate their scripts to be used as captions/subtitles in their videos, and it works just fine. So instead of one person manually handling a project, that same person can manage multiple projects. It's a work multiplier if used correctly.
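The caption case is the easiest to picture: the model only touches the spoken text, the timing stays untouched, and a human skims the result. The sketch below assumes SRT-style captions and uses a placeholder `translate_line` instead of any particular model or API.

```python
# Caption-translation sketch: translate spoken text only, keep cue numbers and
# timestamps byte-for-byte, then hand the draft to a human checker.
import re

TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2}[,.]\d{3} --> ")

def translate_line(text: str, target_lang: str) -> str:
    """Stand-in for a real model call (e.g. prompting a chat model per chunk)."""
    return f"[{target_lang}] {text}"

def translate_captions(srt_text: str, target_lang: str) -> str:
    out = []
    for line in srt_text.splitlines():
        # Cue numbers, timing lines, and blank lines pass through unchanged.
        if not line.strip() or line.strip().isdigit() or TIMESTAMP.match(line):
            out.append(line)
        else:
            out.append(translate_line(line, target_lang))
    return "\n".join(out)

if __name__ == "__main__":
    sample = "1\n00:00:01,000 --> 00:00:03,500\nこんにちは、皆さん\n"
    print(translate_captions(sample, "English"))
    # One person can review many of these drafts instead of typing each from scratch.
```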
So what is your point?
You claim an 8GB video card is a "monster GPU", when that is the bottom of the fucking barrel in terms of specs for a current-generation GPU from every company other than Intel.
You claim that Skyrim wouldn't have enough VRAM remaining to run while using an LLM, while failing to realize how little VRAM Skyrim actually requires.
You then shift the goalposts and now want to run a bunch of graphics mods alongside an LLM, which would require a newer GPU anyway. This still doesn't have anything to do with current and future game development.
What the fuck is it you want out of a game, when you demand the AI model run locally, refuse to buy hardware that isn't obsolete, and keep dragging in visual novels that no one else cares about? Unless it's because you want a VN to act like an AI sex chatbot... who the fuck cares? There's more to gaming than whatever version of FappyDoki Saber-Face Desu Parade you're imagining.
This hits up ChatGPT in the cloud.
It doesn't matter if you can fit everything into the card, because the game's performance will eat shit when you begin inference on the LLM. Doing it in real time on the same machine is not a good idea, but you could always fetch outputs during loading or before the game starts, if you don't intend for the player to be able to say anything that isn't premade canned choices.
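A sketch of that prefetch idea: if the player can only pick from premade choices, every possible reply can be generated on a background worker while the loading screen is up, so inference never competes with the renderer mid-gameplay. `generate_reply` is a placeholder for a local or cloud model call.

```python
# Prefetch-during-loading sketch: generate all canned-choice replies up front,
# so the only thing that happens mid-gameplay is a dictionary lookup.
from concurrent.futures import ThreadPoolExecutor

def generate_reply(npc: str, player_choice: str) -> str:
    """Stand-in for the actual (slow) local or cloud model call."""
    return f"{npc} responds to '{player_choice}' (pending model call)"

def prefetch_dialogue(npc: str, canned_choices: list[str]) -> dict[str, str]:
    """Kick off one generation per premade choice and collect them during the load screen."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {choice: pool.submit(generate_reply, npc, choice)
                   for choice in canned_choices}
        return {choice: fut.result() for choice, fut in futures.items()}

if __name__ == "__main__":
    replies = prefetch_dialogue("Guard", ["Ask about the dragon", "Offer a bribe", "Walk away"])
    # By the time gameplay starts, every response is an instant lookup.
    print(replies["Offer a bribe"])
```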
So you did want to use it just to jerk off? You were the one who brought up this bullshit in the thread and made absurd claims about VRAM on cards, trying to call low-end cards "monster" GPUs, etc., while whining about visual novels. Fucking coomers...
Of course it would if it were just tacked onto a game that wasn't developed for it, or onto something that isn't over a decade old. As far as hitting up ChatGPT in the cloud, it's a non-issue. Plenty of games require being online anyway for shitty DRM; might as well make use of the cloud connection for something that actually involves the game.
"I honestly couldn't care less so long as it isn't distractingly bad in implementation. People want to act like the soulless, approved corporate art direction of most games these days is somehow not as bland and utterly worthless as something a computer could make, but in most cases it really isn't. If I can pick up a game, play it, and not immediately go 'this is distractingly bad and takes me out of the experience' (like what happens a lot today anyway), then I don't give a fuck."
Absolutely. The current state of AAAAAA games is a fucking joke right now without using AI for anything; it'll either stay shit regardless of the technology used, or someone might figure out a way to make something actually immersive with it.
Why would any company incur the massive costs of hosting/renting an LLM for the equivalent of fucking radiant quests?
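For what the cost question actually hinges on, here's some purely illustrative arithmetic; every number below is an assumption, not anyone's real pricing or real player behaviour.

```python
# Illustrative hosting-cost arithmetic; all three inputs are assumptions.
PRICE_PER_1K_TOKENS = 0.002   # assumed blended $/1K tokens for a hosted model
TOKENS_PER_EXCHANGE = 600     # assumed prompt + reply size for one dialogue exchange
EXCHANGES_PER_HOUR = 30       # assumed radiant-style interactions per play hour

cost_per_player_hour = (TOKENS_PER_EXCHANGE * EXCHANGES_PER_HOUR / 1000) * PRICE_PER_1K_TOKENS
print(f"≈ ${cost_per_player_hour:.3f} per player-hour under these assumptions")
# ≈ $0.036 per player-hour here; whether that counts as "massive" depends on the
# game's revenue per player-hour, which is really what both sides are arguing about.
```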