GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

  • 🔧 At about Midnight EST I am going to completely fuck up the site trying to fix something.
Physics in games kind of ended up dying because CPUs can't run instructions fast enough to simulate real-time physics. It's a hardware limitation: going much above 4 GHz gets them really, really hot, and we don't have a way to cool CPUs down fast enough for it to be feasible. Having the GPU do the physics for you isn't performant enough, and the only other option is faking the physics through other means. Hence the pivot to making graphics the seller in games, which clearly worked.
Ironically enough, I see AI working really well here, as modeling physics and destruction requires a lot of small equation calculations.
 
  • Informative
Reactions: Brain Problems
Physics in games kind of ended up dying because CPUs can't run instructions fast enough to simulate real-time physics. It's a hardware limitation: going much above 4 GHz gets them really, really hot, and we don't have a way to cool CPUs down fast enough for it to be feasible. Having the GPU do the physics for you isn't performant enough, and the only other option is faking the physics through other means. Hence the pivot to making graphics the seller in games, which clearly worked.
The situation could change. On clocks, CPUs are starting to boost to around 5.5-6 GHz now, with base clocks above 4 GHz, and those could continue to creep up. GPUs now boost above 3 GHz.
Ironically enough, I see AI working really well here, as modeling physics and destruction requires a lot of small equation calculations.
This. There are many little advancements being made that make physics simulations faster and easier to fake. You don't need 100% accuracy for gaming if a shortcut cuts the computation needed by something like 100x. See the Two Minute Papers channel, though I'm too lazy to pick out a video right now: https://www.youtube.com/@TwoMinutePapers/search?query=physics
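To make the "shortcut" idea concrete, here's a rough sketch of one of the most common fakes: position-based dynamics (PBD) for a rope. Instead of integrating stiff spring forces (which needs tiny, expensive timesteps), it predicts positions and simply projects points back onto their distance constraints. This is a minimal illustration, not any particular engine's implementation, and every parameter here is an assumed, illustrative value.

```python
# Sketch of a physics "shortcut": position-based dynamics (PBD) for a rope.
# Instead of solving stiff spring forces, we predict positions and then
# project points back onto their distance constraints.
# All parameters are illustrative.

REST = 1.0            # rest length between adjacent rope points
GRAVITY = -9.8
DT = 1.0 / 60.0
DAMP = 0.99           # crude velocity damping so the rope settles

def step(points, velocities, iterations=4):
    """Advance the rope one frame. points/velocities are lists of [x, y]."""
    # 1. Integrate: apply gravity, damp, and predict new positions.
    pred = []
    for (x, y), (vx, vy) in zip(points, velocities):
        vx *= DAMP
        vy = (vy + GRAVITY * DT) * DAMP
        pred.append([x + vx * DT, y + vy * DT])
    pred[0] = list(points[0])          # pin the first point in place

    # 2. Constraint projection: nudge each pair back toward rest length.
    for _ in range(iterations):
        for i in range(len(pred) - 1):
            ax, ay = pred[i]
            bx, by = pred[i + 1]
            dx, dy = bx - ax, by - ay
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = (dist - REST) / dist * 0.5
            if i == 0:                 # pinned end: move only the free point
                pred[1] = [bx - dx * corr * 2, by - dy * corr * 2]
            else:
                pred[i] = [ax + dx * corr, ay + dy * corr]
                pred[i + 1] = [bx - dx * corr, by - dy * corr]

    # 3. Recover velocities from how far each point actually moved.
    vel = [[(p[0] - q[0]) / DT, (p[1] - q[1]) / DT]
           for p, q in zip(pred, points)]
    return pred, vel

# A 5-point rope starts horizontal, swings down, and settles hanging.
pts = [[float(i), 0.0] for i in range(5)]
vel = [[0.0, 0.0] for _ in pts]
for _ in range(600):                   # ten seconds at 60 fps
    pts, vel = step(pts, vel)
```

It's nowhere near physically accurate, but it's stable at big timesteps and a few multiply-adds per point, which is exactly the kind of trade the post is describing.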

But the console generations decide what you're going to get, so we live in a 3.5 GHz, 8-core Zen 2, 16 GB world for now. We can't expect too much from LLMs until a PS6. Maybe LLMs will convince them to ship with 32-48 GB instead of 24 GB.
 
Physics in games kind of ended up dying because CPUs can't run instructions fast enough to simulate real-time physics. It's a hardware limitation: going much above 4 GHz gets them really, really hot, and we don't have a way to cool CPUs down fast enough for it to be feasible. Having the GPU do the physics for you isn't performant enough, and the only other option is faking the physics through other means. Hence the pivot to making graphics the seller in games, which clearly worked.
There also wasn't, and still really isn't, much use for it. Stuff breaking more "realistically" or cloth waving more "realistically" doesn't matter much for the few seconds it happens in a several-hour-long game. It's one of those "good enough" states: it might be interesting for certain games (although physics puzzle games don't blow people away the way they used to), but not enough for the whole industry to put in much effort, because they don't see it making big bucks or enough of a difference, for now at least.
 
Ironically enough, I see AI working really well here, as modeling physics and destruction requires a lot of small equation calculations.
GPUs are well-suited to running physics calculations for the same reason that they're well-suited to running AI, but moving data between CPU and GPU has significant latency.

AI methods are not well-suited to physics simulations and cannot be run on the CPU in any reasonable time. Collision detection equations tend to be extremely unstable (consider throwing some rubber balls into a pit and trying to model the position of the balls), so training would be extremely difficult to say the least; and projectile motion has performant exact solutions anyway.

On clocks, CPUs are starting to boost to around 5.5-6 GHz now ... we live in a 3.5 GHz 8x Zen 2 and 16 GB world for now
Most CPUs have significant limitations on boost clocks under vectorized workloads. Intel, for example, has the "AVX offset", which explicitly reduces your CPU's clock speed when running vector operations. AMD didn't have such a rigid clock reduction, but it had stability issues running these instruction sets at high clocks until Zen 5.

Physics in games kind of ended up dying because CPUs can't run instructions fast enough to simulate real-time physics. It's a hardware limitation: going much above 4 GHz gets them really, really hot, and we don't have a way to cool CPUs down fast enough for it to be feasible. Having the GPU do the physics for you isn't performant enough, and the only other option is faking the physics through other means. Hence the pivot to making graphics the seller in games, which clearly worked.
How do you incorporate physics into gameplay in an interesting way?

For networked games, if you run the physics on the server, the server is suddenly significantly more expensive. If you run it on the client, you run into sync issues very quickly - you only get the same result if you run exactly the same operations in exactly the same order on exactly the same chip.
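The usual workaround for the sync problem is deterministic lockstep: keep all physics state in scaled integers so that every client replaying the same inputs in the same order gets a bit-identical world, regardless of compiler or FPU quirks. This is a toy sketch of the idea (a 16.16 fixed-point scheme with made-up state and inputs), not any particular engine's netcode.

```python
# Sketch of deterministic lockstep via fixed-point math. All physics state
# lives in scaled integers (16.16 fixed point here), so any client that
# applies the same inputs in the same order computes a bit-identical world.
# Values are illustrative.

FP = 1 << 16                          # 16.16 fixed-point scale

def to_fp(x):
    return int(round(x * FP))

def fp_mul(a, b):
    """Fixed-point multiply with truncation (deterministic everywhere)."""
    return (a * b) >> 16

def step_world(state, inputs, dt_fp):
    """One tick: apply each player's thrust, then integrate velocity."""
    out = []
    for (pos, vel), thrust in zip(state, inputs):
        vel = vel + fp_mul(thrust, dt_fp)
        pos = pos + fp_mul(vel, dt_fp)
        out.append((pos, vel))
    return out

def run(input_stream):
    """Replay a full input stream from the initial state."""
    state = [(to_fp(0.0), to_fp(0.0)), (to_fp(10.0), to_fp(-1.0))]
    dt = to_fp(1.0 / 60.0)
    for inputs in input_stream:
        state = step_world(state, inputs, dt)
    return state

# Two "clients" replay the same inputs and must agree bit-for-bit.
stream = [[to_fp(0.5), to_fp(-0.25)] for _ in range(600)]
client_a = run(stream)
client_b = run(stream)
```

The catch, and why big-budget games rarely do this for physics, is that fixed-point math forfeits the hardware FPU and vector units, so you pay a hefty performance tax for the determinism.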

For single-player physics games, you don't have that issue, but the trick with those is that they tend to be puzzle games that don't have a lot of physics objects in the first place. You could run most of their physics simulations on a PC from decades ago.

CPU-bound games tend to instead be discrete simulation games like Dwarf Fortress.
 
Physics in games kind of ended up dying because CPUs can't run instructions fast enough to simulate real-time physics. It's a hardware limitation: going much above 4 GHz gets them really, really hot, and we don't have a way to cool CPUs down fast enough for it to be feasible. Having the GPU do the physics for you isn't performant enough, and the only other option is faking the physics through other means. Hence the pivot to making graphics the seller in games, which clearly worked.
Most people just have no idea how much computational power it takes to simulate anything more than rigid-body physics. Going from Half-Life 2's boxes and crates bouncing around to a piece of metal deforming as it impacts a piece of concrete isn't 2x-3x the compute load; it's more like 100x-1000x. Going from solids to fluids is roughly another 100x jump in compute time.
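A back-of-envelope calculation shows where that gap comes from. Every number below is an assumed, purely illustrative figure (element counts, flops per element, solver substeps); the point is the structure of the multiplication, not the exact values.

```python
# Back-of-envelope with purely illustrative, assumed numbers: why one
# deforming panel can cost ~1000x an entire rigid-body scene per frame.

rigid_bodies = 1_000        # assumed: props bouncing around an HL2-style scene
ops_per_rigid = 50          # assumed: flops to integrate one rigid transform
rigid_scene = rigid_bodies * ops_per_rigid              # 50,000 flops/frame

fem_elements = 20_000       # assumed: shell elements in one deforming car panel
ops_per_elem = 200          # assumed: flops for one local stress/strain solve
solver_substeps = 10        # assumed: implicit solver iterations per frame
fem_panel = fem_elements * ops_per_elem * solver_substeps  # 40M flops/frame

# One panel vs. a whole scene of rigid props.
ratio = fem_panel // rigid_scene
```

With these made-up but not unreasonable magnitudes, a single deformable panel lands in the 100x-1000x range over a whole rigid scene, and fluids add further per-cell pressure solves on top.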

This is a low-fidelity crash simulation, not quite accurate enough for engineering purposes but enough to look qualitatively good.


This represents a few milliseconds of real time and takes a Ryzen 9900X about 12 minutes to run. You're not getting this kind of crash modeling in games because we won't ever have this kind of compute power in consumer devices, at least not as long as we're still building computers out of semiconductors.

Ironically enough, I see AI working really well here, as modeling physics and destruction requires a lot of small equation calculations.

Using inference rather than heuristics to fake real-time physics could produce visually pleasing results without actually having to get the physics correct at all. The guys trying to push physics AI are running into brick walls in engineering, because the results look pretty but are catastrophically wrong once you move even a tiny bit outside the training set for the surrogate model.
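That out-of-training-set failure mode is easy to show in miniature. Below, the "physics" is a stand-in damped-oscillation function (purely hypothetical, chosen just to have something smooth to fit) and the "surrogate" is a polynomial fit only on x in [0, 2]: inside the training range it's excellent, slightly outside it is off by orders of magnitude.

```python
# Sketch: why a surrogate can be "catastrophically wrong" outside its
# training set. The "physics" is a stand-in damped oscillation; the
# surrogate is a degree-5 polynomial interpolant fit only on x in [0, 2].
import math

def truth(x):
    # Stand-in for an expensive simulation (hypothetical response curve).
    return math.exp(-x) * math.cos(3 * x)

nodes = [i * 0.4 for i in range(6)]      # "training set": x in [0, 2] only
values = [truth(x) for x in nodes]

def surrogate(x):
    """Degree-5 Lagrange interpolant through the training samples."""
    total = 0.0
    for i, xi in enumerate(nodes):
        term = values[i]
        for j, xj in enumerate(nodes):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

err_inside = abs(surrogate(1.0) - truth(1.0))    # interpolation: tiny
err_outside = abs(surrogate(4.0) - truth(4.0))   # extrapolation: blows up
```

Real surrogate models are neural nets rather than polynomials, but the qualitative behavior is the same: fine for pretty in-distribution visuals, useless for engineering once the crash leaves the sampled regime.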
 
Last edited:
This represents a few milliseconds of real time and takes a Ryzen 9900X about 12 minutes to run. You're not getting this kind of crash modeling in games because we won't ever have this kind of compute power in consumer devices, at least not as long as we're still building computers out of semiconductors
Is it necessary to compute that in real time? Couldn't you have a bunch of pre-computed deformations for different types of crashes, have the game pick whichever one is closest, and use 2-3 intermediate meshes as keyframes or something?
 
  • Like
Reactions: Brain Problems
Going from Half-Life 2's boxes and crates bouncing around to a piece of metal deforming as it impacts a piece of concrete isn't 2x-3x the compute load; it's more like 100x-1000x. Going from solids to fluids is roughly another 100x jump in compute time.
Sure, applying FEM to a whole car body is going to be computationally intensive, but what is the gameplay application of this? Most fluids in games don't even try to be realistic, for similar reasons: you can capture most behavior necessary for gameplay with a very basic simulation that just compares the level/temperature of adjacent nodes, with very large distances between grid points. Even those have some problems with high CPU use, due to fluid "sloshing" keeping the simulation permanently alive.
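The "compare adjacent node levels" scheme fits in a few lines, and a minimum-transfer threshold is the usual fix for the sloshing problem: once differences drop below it, cells go to sleep instead of trading fluid forever. This is a one-dimensional toy with illustrative values, not any particular game's implementation.

```python
# Sketch: a "compare adjacent node levels" grid fluid in 1D, with a minimum
# transfer threshold so cells can go to sleep instead of sloshing forever.
# All values are illustrative.

EPS = 0.01                    # ignore level differences below this

def tick(levels):
    """One update pass; returns (new_levels, whether anything moved)."""
    new = levels[:]
    moved = False
    for i in range(len(new) - 1):
        diff = new[i] - new[i + 1]
        if abs(diff) > EPS:
            flow = diff / 4.0         # move a fraction toward equilibrium
            new[i] -= flow
            new[i + 1] += flow
            moved = True
    return new, moved

# Dump a column of fluid in the leftmost cell and let it spread out.
levels = [8.0] + [0.0] * 7
moved = True
ticks = 0
while moved and ticks < 10_000:
    levels, moved = tick(levels)
    ticks += 1
```

Fluid is conserved, the surface levels out, and crucially the loop terminates: without EPS, the fractional flows would keep every cell active indefinitely, which is exactly the permanent-sloshing CPU cost described above.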
 
Is it necessary to compute that in real time? Couldn't you have a bunch of pre-computed deformations for different types of crashes, have the game pick whichever one is closest, and use 2-3 intermediate meshes as keyframes or something?

"Necessary" is a pretty broad term. Games have had canned destruction for decades now, whether we are talking about a car crash in the original Burnout or a building collapse in Battlefield: Bad Company 2. Strictly from a "necessary to make games fun" standpoint, I don't think we've seen a single technology advance since 2015 that has been necessary. But, of course, graphically it's somewhat jarring ("waiter, there are not enough gold flecks in my caviar") that we have these beautiful, almost photo-realistic environments, and then a grenade tossed into an office always breaks the computer screens in exactly the same way, or a warhammer swung against a wooden door always knocks out a chunk of exactly the same shape, or smacking my Ferrari into a light post always results in exactly the same bumper deformation, or driving tanks across farm fields merely results in track decals appearing on the mud. And, of course, unless the developer specifically created a canned destruction animation for something, it is made of adamantine and surrounded by a force field. Even with more dynamic approaches, e.g. where tessellation is applied at the point of impact, once you've seen it a few times, it looks pretty same-y.

My point is not that it's necessary, but that the reason you don't see realistic physical environments is not that developers are lazy and stupid, nor that the PlayStation is holding PC game development back, but that the kind of physical realism people often fantasize about seeing in games requires a DOE supercomputer to run in real time, not a high-end Ryzen or even a second consumer-grade GPU.
 
Last edited:
  • Like
Reactions: Brain Problems
"Necessary" is a pretty broad term. Games have had canned destruction for decades now, whether we are talking about a car crash in the original Burnout or a building collapse in Battlefield: Bad Company 2. Strictly from a "necessary to make games fun" standpoint, I don't think we've seen a single technology advance since 2015 that has been necessary. But, of course, graphically it's somewhat jarring ("waiter, there are not enough gold flecks in my caviar") that we have these beautiful, almost photo-realistic environments, and then a grenade tossed into an office always breaks the computer screens in exactly the same way, or a warhammer swung against a wooden door always knocks out a chunk of exactly the same shape, or smacking my Ferrari into a light post always results in exactly the same bumper deformation, or driving tanks across farm fields merely results in track decals appearing on the mud. And, of course, unless the developer specifically created a canned destruction animation for something, it is made of adamantine and surrounded by a force field. Even with more dynamic approaches, e.g. where tessellation is applied at the point of impact, once you've seen it a few times, it looks pretty same-y.

My point is not that it's necessary, but that the reason you don't see realistic physical environments is not that developers are lazy and stupid, nor that the PlayStation is holding PC game development back, but that the kind of physical realism people often fantasize about seeing in games requires a DOE supercomputer to run in real time, not a high-end Ryzen or even a second consumer-grade GPU.
You could still do some more work. For example, for a car crash you could pre-compute the results of collisions from different directions, with a few permutations for similar ones, and possibly break the car down into sections so that compound collisions are somewhat independent, to cut down on the number of permutations needed.

A monitor you hit with a grenade could have 5-10 variants depending on the direction the grenade is thrown from. If you know what you're doing, you could just throw the model into a simulator that generates a thousand permutations of damage for each potential weapon or collision possible in the game.
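The runtime side of that scheme is basically a nearest-variant lookup plus keyframe blending. Here's a minimal sketch of the idea; the variant table, the 2D stub "meshes", and the angle keys are all hypothetical stand-ins for baked deformation data.

```python
# Sketch: canned-deformation lookup. Offline, a deformed mesh would be baked
# per impact direction; at runtime we pick the nearest variant and blend
# keyframes over the crash animation. Meshes are stubbed as short 2D vertex
# lists; all data here is hypothetical.
import math

# Precomputed variants: impact angle (radians) -> list of keyframe "meshes".
VARIANTS = {
    0.0:         [[(0, 0), (1, 0)], [(0.2, 0), (1, 0)]],   # head-on
    math.pi / 2: [[(0, 0), (1, 0)], [(0, 0), (1, 0.3)]],   # side hit
    math.pi:     [[(0, 0), (1, 0)], [(0, 0), (0.8, 0)]],   # rear
}

def angle_dist(a, b):
    """Shortest angular distance, wrapping at 2*pi."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def pick_variant(impact_angle):
    """Nearest baked variant to the actual impact direction."""
    return min(VARIANTS, key=lambda k: angle_dist(k, impact_angle))

def blend(mesh_a, mesh_b, t):
    """Linear interpolation between two keyframe meshes (t in [0, 1])."""
    return [(ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(mesh_a, mesh_b)]

def deformed_mesh(impact_angle, t):
    frames = VARIANTS[pick_variant(impact_angle)]
    return blend(frames[0], frames[-1], t)
```

This is cheap and predictable at runtime; the cost is storage for every baked variant and, as the next post argues, the repetition is exactly what players notice.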
 
  • Like
Reactions: Brain Problems
You could still do some more work. For example, for a car crash you could pre-compute the results of collisions from different directions, with a few permutations for similar ones, and possibly break the car down into sections so that compound collisions are somewhat independent, to cut down on the number of permutations needed.

A monitor you hit with a grenade could have 5-10 variants depending on the direction the grenade is thrown from. If you know what you're doing, you could just throw the model into a simulator that generates a thousand permutations of damage for each potential weapon or collision possible in the game.
Sure, and they already do this. It has its limits, though, because there aren't 5-10 places for a piece of sheet metal to bend or for a screen to crack; there are basically infinitely many, and humans are really good at noticing patterns and repetition. What you need for convincing crashes isn't a few predefined crunches; you need the car body to react to the thing it's impacting. See, for example, the slow-mo crashes in the latest Need for Speed.


None of those crashes look remotely realistic at all. Now what you could do is, rather than predefine 5-10 crashes, run 500-1000 crash simulations, use those simulations to train a surrogate model, and use that surrogate model in real time to deform your car mesh. This would be much cheaper than full physics computations, and despite technically being "AI/ML," wouldn't even need to run on the GPU.
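The shape of that offline-sims-to-surrogate pipeline can be sketched very simply. Everything below is synthetic: the "expensive simulator" is a stand-in function relating impact speed to deformation depth, and the surrogate is a closed-form least-squares line; a real pipeline would fit mesh displacements from hundreds of FEM runs with a far richer model.

```python
# Sketch of the surrogate pipeline: offline "simulations" produce a
# deformation depth per impact speed; we fit a tiny least-squares line
# (deformation ~ a * speed + b) offline and query it at runtime.
# The "simulator" is a hypothetical stand-in function.

def expensive_sim(speed):
    # Stand-in for a minutes-long crash solve (hypothetical relation).
    return 0.04 * speed + 0.1

# Offline: gather training pairs from the "simulator".
speeds = [10.0 + i for i in range(50)]
deforms = [expensive_sim(s) for s in speeds]

# Fit y = a*x + b by ordinary least squares, closed form.
n = len(speeds)
sx, sy = sum(speeds), sum(deforms)
sxx = sum(x * x for x in speeds)
sxy = sum(x * y for x, y in zip(speeds, deforms))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def surrogate(speed):
    """Runtime query: a couple of flops instead of minutes of solving."""
    return a * speed + b
```

The runtime query is trivially cheap, which is why this genuinely could run on the CPU: all the expense is paid offline when generating the training runs.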
 
NVIDIA Rumored To Cut GeForce RTX 50 Series Production In Order To Increase Production Of AI GPUs Such As GB300 (Wccftech)

Looks like Nvidia is reacting to the negative press around GeForce with a resounding, "screw you guys, I'm going home"

This might actually be the perfect opportunity for Intel to siphon more marketshare.
I considered posting this. Going by the machine translation, it's a Chinese rumor alleging 20-30% supply cuts for China in June. It may mean nothing for the US market, and it may not be true at all. Archive in case it gets pulled.

 
How do you incorporate physics into gameplay in an interesting way?
Simulation games. The physics *are* the gameplay.
This represents a few milliseconds of real time and takes a Ryzen 9900X about 12 minutes to run. You're not getting this kind of crash modeling in games because we won't ever have this kind of compute power in consumer devices, at least not as long as we're still building computers out of semiconductors.
No need for this level of simulation. Check out BeamNG.drive; that's the sweet spot. It's the best racing game on the market at the moment.
 
  • Agree
Reactions: geckogoy
I'm beginning to hate Type-C docking stations, as they only reliably handle one monitor and the port wears out quickly with use. At this point I could probably use a handheld PC with a Z2E CPU as my main computer, but not if I have to fuss with a docking station that doesn't connect to all of the monitors on every boot without lots of futzing.
 
I'm beginning to hate Type-C docking stations, as they only reliably handle one monitor and the port wears out quickly with use. At this point I could probably use a handheld PC with a Z2E CPU as my main computer, but not if I have to fuss with a docking station that doesn't connect to all of the monitors on every boot without lots of futzing.
You also have to read the fine print. I got my wife one that supported 4K and 60 Hz... but not at the same time. She had to drop her resolution down to 1080p. Some real bullshit right there.
 
Sure, and they already do this. It has its limits, though, because there aren't 5-10 places for a piece of sheet metal to bend or for a screen to crack; there are basically infinitely many, and humans are really good at noticing patterns and repetition. What you need for convincing crashes isn't a few predefined crunches; you need the car body to react to the thing it's impacting. See, for example, the slow-mo crashes in the latest Need for Speed.


None of those crashes look remotely realistic at all. Now what you could do is, rather than predefine 5-10 crashes, run 500-1000 crash simulations, use those simulations to train a surrogate model, and use that surrogate model in real time to deform your car mesh. This would be much cheaper than full physics computations, and despite technically being "AI/ML," wouldn't even need to run on the GPU.
1. You are using an arcade racer as an example when talking about realistic physics.
2. NFS Hot Pursuit is a 15-year-old game.
3. You are using a game with licensed car models, where contracts stipulate how much deformation is allowed.

Number 3 is the big one. There's a reason Burnout Paradise has better destruction physics than NFS Hot Pursuit despite releasing years earlier. It has nothing to do with computation.

 
Last edited:
  • Agree
Reactions: Rololowlo