Video Game Chat Thread - Pre-Alpha Experimental Version


Are videogames for children?
Poll closed (8 voters).
I'm just waiting for @The Mass Shooter Ron Soye to give his take since his opinions are neat and informative.

Why wouldn't this be optional? Not every system will handle it well, if at all, given what else the player might be running at the same time. And DLSS5 won't be coming to older GPUs like the 3000 series.

Yeah it has to be optional or else devs are literally gonna lose 99% of their sales because less than 1% of gamers can afford TWO FUCKING RTX 50 SERIES CARDS THAT THIS SHIT FILTER NEEDS.
 
I'm just waiting for @The Mass Shooter Ron Soye to give his take since his opinions are neat and informative.
My take was "Hey, this is sparking a lot of AI derangement!" We've only seen a few short clips of it, we don't know how it will work outside of these curated clips, most of the attention was on faces rather than background visuals, we don't know how much control developers will have over it, etc.

A lot of the folx crying "SLOP" learned the words "ART DIRECTION" overnight.

While Nvidia hypes up image-to-image generation on every frame, there are things going under the radar, like Xbox Helix / RDNA5 (and likely PS6) supporting neural texture compression, which could slash VRAM requirements even as the new consoles have around twice the memory of the current ones.
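For a rough sense of the VRAM stakes: BC7 really is 8 bits per texel, but the 3x-over-BC7 neural ratio below is purely an assumption for illustration, not an announced Xbox Helix / RDNA5 figure.

```python
# Rough VRAM math for a single 4096x4096 color texture with a full mip chain.
# BC7 at 8 bits per texel is a real format; the neural compression ratio
# below is a placeholder assumption, not a published spec.

MIP_FACTOR = 4 / 3  # a full mip chain adds ~1/3 on top of the base level

def tex_bytes(width, height, bits_per_texel):
    return width * height * bits_per_texel / 8 * MIP_FACTOR

raw    = tex_bytes(4096, 4096, 32)  # uncompressed RGBA8
bc7    = tex_bytes(4096, 4096, 8)   # BC7 block compression
neural = bc7 / 3                    # ASSUMED ~3x improvement over BC7

for name, b in [("RGBA8", raw), ("BC7", bc7), ("neural (assumed)", neural)]:
    print(f"{name:18s} {b / 2**20:7.1f} MiB")
```

Even with a made-up ratio, the point stands: shrinking per-texture cost by a few multiples matters more than doubling console memory does.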

Yeah it has to be optional or else devs are literally gonna lose 99% of their sales because less than 1% of gamers can afford TWO FUCKING RTX 50 SERIES CARDS THAT THIS SHIT FILTER NEEDS.
They pinky promised that DLSS 5 will not need two RTX 5090s when it's ready. It's planned for a Fall release, not anytime soon.

Even if it did have insane requirements, AMD has burnt a lot of goodwill by making FSR4 require the latest cards to work (and reportedly this will continue with FSR "Diamond" and RDNA5). As the market leader, Nvidia can get away with doing the same thing extremely easily, locking the new feature to the RTX 60 series if necessary.

It's not clear to me whether this can be swapped into games that support DLSS 3/4, or whether it requires the developers to be on board. If it's the latter, not many games will have it at launch. If modders can get it working in old games, ideally more easily than RTX Remix, that will be fun to see.

Edit: Great reacts, how about a reply? What do you disagree with?
 
Last edited:
I am certain that these videogame companies and Nvidia are colluding to force people to buy higher-end GPUs: Nvidia has the game companies ship badly optimized games with ultra-photorealistic graphics and unnecessary tiny visual details and animations that nobody asked for.
But there's a fatal flaw in their plan... The games have to actually be good first so people would want to spend money to play them.
 
we don't know how it will work outside of these curated clips
We've only seen faces from the front (I think). It would be interesting to see how it looks as they turn all the way away from the camera, to see if it can keep up with that.

I generally like what it does to the hair in the examples of Starfield and RE9, but it is a beautification filter and not something that makes things pop. Just compare the lips of Grace in the RE9 screenshots. I mourn the death of the DX reference rasterizer standards...
 
With Sony losing money on Concord and Marathon, will they finally understand that they should stop investing money in live-service projects?
Triple A looks like a delusional gambler rn, "just one more live service, please I promise, one more live service project and we will hit the jackpot!"
:lossmanjack:
That's retarded, why wouldn't Sony invest where the good money is? Good live-service games are going strong and still printing money; Sony just needs to make a good game first and be efficient about it.
 
I didn't think the Lara Croft show could get worse, but it continually does.
 
I'm not opposed to neural rendering, but it's obvious Nvidia hasn't given developers the creative control they need, and it doesn't have NPBR workflows. I remember something called Barracuda that Unity was experimenting with in 2019 that let you train a GAN for NPBR rendering, but DLSS5 doesn't even seem to have the ability to be fine-tuned to a specific style or game.
A dream sequence in a game with a really shitty neural imagegen filter would go hard. Something to the same effect as that AI hallucination video of a picture of a barn where a bunch of faces appear halfway through.
 
I'm gonna need to be the one to point out there have been people in this thread who wanted this exact thing in their games, and now that it's a thing, nobody wants it
I think using neural networks to improve image quality is a good thing and want to see more of it.

Yeah it has to be optional or else devs are literally gonna lose 99% of their sales because less than 1% of gamers can afford TWO FUCKING RTX 50 SERIES CARDS THAT THIS SHIT FILTER NEEDS.
Almost makes you think that maybe they were using dual 5090s so they could render the games twice at maximum ultra settings for marketing purposes, and it isn't actually a hardware requirement that NVIDIA announced.
 
They confirmed it's a live AI recreation of the scene using the native scene as input. Img2Img in Stable Diffusion terms. It's quite literally the very definition of AI Slop.
They should have made an AI that focused on predicting caustics and subsurface scattering for objects based on geometry, light placement and materials. These are currently impossible to do in real time since they take 3+ bounces to look right, but they would make glass, skin, and water look way better. It could also happen before any post-processing by the developer.
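Napkin math on why those effects blow past a real-time budget: the samples-per-pixel needed for clean caustics and the GPU ray budget below are both illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope ray budget for the "3+ bounces" claim above.
# The spp needed for clean caustics and the real-time ray budget are
# assumptions for illustration, not benchmarks of any specific card.
width, height, fps = 1920, 1080, 60
pixels_per_sec = width * height * fps

spp_noisy = 1      # what real-time path tracers actually shoot
spp_clean = 64     # ASSUMED spp for low-noise caustics without a denoiser
bounces   = 3      # minimum-ish path depth before caustics/SSS show up

budget = 2e9       # ASSUMED real-time budget: ~2 Grays/s

for spp in (spp_noisy, spp_clean):
    rays = pixels_per_sec * spp * bounces
    print(f"{spp:2d} spp x {bounces} bounces: {rays / 1e9:6.2f} Grays/s "
          f"= {rays / budget:.1f}x the assumed budget")
```

One noisy sample fits; the sample counts needed for these effects to converge do not, which is exactly where a learned approximator would earn its keep.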
 
They should have made an AI that focused on predicting caustics and subsurface scattering for objects based on geometry, light placement and materials. These are currently impossible to do in real time since they take 3+ bounces to look right, but they would make glass, skin, and water look way better. It could also happen before any post-processing by the developer.

The videos they showed clearly demonstrated the ability to add all that to the image.

DNNs are inefficient as small-scale approximators and often more expensive than just doing the math. They do best when they have a large amount of data to work with. Like with DLSS1-4, you don't want separate DNNs for upscaling each individual model and texture in a game. It would be expensive and get very bad results. Training a single DNN on 16K images of the game with motion vectors does a much better job.

So in the case of rendering enhancement, you don't want separate DNNs for each physics model you would normally have in the pipeline. You just want to train a single, large model to map inputs to outputs.
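The "one big model over all inputs" idea can be sketched in a few lines. This is a pure-Python toy with random weights and no training, purely to show the data flow; the feature layout is made up, not anything Nvidia has described.

```python
# Toy sketch of one learned mapping over concatenated per-pixel inputs,
# instead of separate small nets per physics effect (SSS, caustics, ...).
# Random weights, no training: this only illustrates the data flow.
import random

IN_FEATURES = 3 + 3 + 1 + 2   # albedo(3) + normal(3) + depth(1) + motion(2)
OUT_FEATURES = 3              # mapped straight to output RGB

random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(IN_FEATURES)]
           for _ in range(OUT_FEATURES)]

def shade(pixel_features):
    """Single learned mapping from all G-buffer inputs to final color."""
    return [sum(w * x for w, x in zip(row, pixel_features))
            for row in weights]

gbuffer_pixel = [0.5] * IN_FEATURES
rgb = shade(gbuffer_pixel)
print(len(rgb))  # 3 output channels
```

A real model would be a deep network over whole frames, but the shape of the argument is the same: every effect shares one input-to-output mapping.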
 
The videos they showed clearly demonstrated the ability to add all that to the image.

DNNs are inefficient as small-scale approximators and often more expensive than just doing the math. They do best when they have a large amount of data to work with. Like with DLSS1-4, you don't want separate DNNs for upscaling each individual model and texture in a game. It would be expensive and get very bad results. Training a single DNN on 16K images of the game with motion vectors does a much better job.

So in the case of rendering enhancement, you don't want separate DNNs for each physics model you would normally have in the pipeline. You just want to train a single, large model to map inputs to outputs.
Read what I wrote again. Adding this stuff adds substantial time to path tracing and requires a minimum number of bounces before the effect shows up. Turning them on turns a 5-minute render into a 1-hour render.
[Example images: caustics, subsurface scattering in a hand, fog in a forest]
If you want to do this convincingly in real time, you need an AI approximator, and it needs to operate pre-render on scene layout info, not post-render on the image, because these are lighting effects, not texture effects.
 
If you want to do this convincingly in real time, you need an AI approximator, and it needs to operate pre-render on scene layout info, not post-render on the image, because these are lighting effects, not texture effects.
No, you really don't. AI-generated images & videos are full of lighting effects that don't have specific, tuned models. Note the reflections on the road in this video. It's just a Nintendo 64 game passed through an AI filter.

Are they physically correct? No - AI can't guarantee correctness by nature. But the point is the transformer is generating all kinds of diffusion, scattering, translucency, and so on without having AI-based physics models in the rendering path to achieve this.


Obviously, this is pure image processing, and this model is far too heavy to run in real time on consumer hardware, but you get the idea. You don't need to teach an AI about specific lighting effects for it to create them.

I highly doubt DLSS5 takes only the image as an input.
 
Alright, Digital Foundry got something else up in light of the DLSS 5 chatter.
Look at that thumbnail, I have to archive this.
Description:
It's been a wild few days. When we posted about DLSS 5, we were impressed with the technology and what it was doing - but we posted too soon. When generative AI is so divisive and with the future of games development possibly at stake, we should have taken the time to get the story straight and to analyse what we saw more thoroughly, drawing on the thoughts of the whole team. In this video, we try to do just that while tackling the big questions raised by our supporters.

00:00 Explaining our initial DLSS 5 video
02:26 How can developers maintain artistic control with DLSS 5, especially for faces?
15:40 When the game industry is hurting, what are the ethics of using DLSS 5?
23:23 Why show off DLSS 5 now? Why call it DLSS 5?
30:38 Our technical reactions to DLSS 5
47:15 Dealing with harassment and concluding thoughts
A clip they shared on twitter, it's the beginning of the video:

An interesting YouTube comment reacting to this, from a user named C0manso:
The intro to this video seems weird to me as well, honestly; the last video left a bad taste in my mouth from what practically appeared to be an advertisement for DLSS5. You only said "we should've waited to see the feedback first", but no, not necessarily. You don't go with the crowd; you're meant to be the professionals at this. Your opinion is highly regarded for the information you guys usually deliver, and THAT is why there was such a negative reaction: because it was delivered like an advertisement instead.

You don't change your opinion based on the crowd reaction to save face. We want genuine thoughts.
The overall video does have real critique, but yeah, that intro...
 