GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

5800X vs. 5800X3D with the RTX 4090, in 53 games. The 5800X3D is running at a 200-300 MHz lower clock speed:


+18.5% in 1080p
+15.4% in 1440p
+6.8% in 4K, 5.8-7.5% with some different subsets of the games.

+30-50% FPS gains are seen in some games, while other games are +0%.

The big gains are, to the extent I looked up numbers, all in games that the 4090 can run at well over 100 fps at the given resolution. So for gaming, this is really not going to be worth the money for consumers who aren't trying to hit 200+ FPS.
 
What consumer workloads does cache affect? I know it does for what we do at work, but the applications we work with are memory bound, and we're finding massive gains going from DDR4 to DDR5, and similarly when AMD introduced the EPYC 7003X series CPUs. By contrast, if games aren't seeing much gain from bandwidth, do they see much gain from larger L3 cache?
Games often benefit massively from lower latency, so a larger cache and aggressive prefetching probably help with that. The benefit of lower latency can also be seen with RAM and games: the same kit with lower bandwidth and tighter timings can outperform itself running higher bandwidth with looser timings, and both can be hampered by odd read/write patterns and wtf data structures that won't align well in anything.
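If you want to put rough numbers on that timings trade-off, here's a back-of-the-envelope sketch. The two DDR4 profiles below are made-up examples for illustration, not kits from any review in this thread; first-word latency is roughly CAS latency divided by the memory clock, which works out to CL * 2000 / transfer rate in MT/s.

# Rough first-word latency for two hypothetical profiles of the same kit.
# latency_ns = CL * 2000 / transfer_rate_MTs, since the memory clock is half
# the transfer rate (DDR = two transfers per clock).
def first_word_latency_ns(cl, transfer_rate_mts):
    return cl * 2000 / transfer_rate_mts

profiles = {
    "DDR4-3200 CL14 (lower bandwidth, tighter timings)": (14, 3200),
    "DDR4-3600 CL18 (higher bandwidth, looser timings)": (18, 3600),
}
for name, (cl, rate) in profiles.items():
    print(f"{name}: {first_word_latency_ns(cl, rate):.2f} ns")
# -> 8.75 ns vs 10.00 ns: the "slower" profile actually answers first.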

I would like to see someone benchmark a bunch of older Unity games on the 5800X3D.
 
I would like to see someone benchmark a bunch of older Unity games on the 5800X3D.
Seeing yansim as a benchmark would be both horrifying and hilarious.

On the other hand, my Zotac 880G ITX finally died. I was running a Phenom II X6 in there; it was a nice heater while it lasted as well. Does anyone have a suggestion for a replacement board in the ITX segment that won't be stupidly "collectors grade" expensive, aside from another 880G?
 
Games often benefit massively from lower latency, so a larger cache and aggressive prefetching probably help with that.

The benchmarks don't show this, though. The benchmarks show pretty close to zero difference once you hit about 80 fps in most games. 1/80 of a second is clearly more than enough time to fetch the data you need to draw a frame without the benefit of an extra-huge cache making sure lots of data is already there from the last draw.

And it absolutely makes sense that a high-end CPU has so little tangible benefit. Games are designed to target 30 fps to 75 fps on low-to-midrange hardware (including game consoles) at resolutions from 1080p to 4K. Thus, any decently designed game can easily grab all the data it needs to render the frame and process it in 1/75 of a second or more. So of course faster fetches don't do anything for you until you're at frame rates most people just don't care about.
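To give a rough sense of scale for that "more than enough time" claim: the ~100 ns figure below is a typical ballpark for a DRAM access on a cache miss, not a number measured anywhere in this thread, and the frame rates are just the ones discussed above.

# How many fully serialized trips to DRAM fit in one frame's time budget?
# ~100 ns per miss is a rough ballpark assumption, not a measured figure.
MISS_NS = 100

for fps in (30, 60, 75, 80, 240):
    frame_budget_ns = 1e9 / fps
    print(f"{fps:>3} fps: {frame_budget_ns / 1e6:5.2f} ms per frame, "
          f"~{frame_budget_ns / MISS_NS:,.0f} serialized cache misses")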

The benefit of lower latency can also be seen with RAM and games: the same kit with lower bandwidth and tighter timings can outperform itself running higher bandwidth with looser timings, and both can be hampered by odd read/write patterns and wtf data structures that won't align well in anything.

You probably won't see a difference unless you have a 240 Hz monitor, and even then, you might not actually see the difference. Some of the games on the list that showed a big benefit hit over 240 fps already on a 3080, so any perceived improvement you get from 4090+5800X3D will just be a placebo.
 
Last edited:
By the time you get to the last piece of data you need to work on, the first piece has been cleared out of cache, so for the next frame, you will need to re-fetch it from RAM.
Sounds like a bigger cache could help with that.

No one is writing games with insane cache sizes in mind and even if it becomes more common on the PC it's not something current consoles have so any benefit of it will be for the 2025 generation of 480fps Overwatch faggots.
 
You asked if games see a benefit from larger L3 cache. It was shown that they do.

I'm sorry that a specific fps goal was not part of your original question. The fact remains that yes, games do benefit.
 
Sounds like a bigger cache could help with that.

Only if your FPS is too high for the CPU to keep up, which is what the benchmarks show. Except the benchmarks also show that threshold is around the range where lots of monitors and arguably even human eyes can't keep up. You don't play gprof. You play games.

You asked if games see a benefit from larger L3 cache. It was shown that they do.

I'm sorry that a specific fps goal was not part of your original question. The fact remains that yes, games do benefit.

I asked if they have "much gain," not "any gain at all," and the answer is "no." I'm sorry this is yet another piece of hardware that isn't for you, but the fact remains this isn't a gaming CPU. Looks like it's targeted more at professionals and prosumers doing video/audio editing who don't need or can't afford an EPYC or Threadripper.
 
Last edited:
I asked if they have "much gain," not "any gain at all," and the answer is "no." I'm sorry I threw water on your fanboy wankfest, but the fact remains this isn't a gaming CPU.
LMAO. Yup. Seeing 20+ percent gains in some games from literally swapping to another CPU in the same generation. Somehow not "much gain". You're the one who then decides later to say "well, anything over X fps doesn't matter".
 
Only if your FPS is too high for the CPU to keep up, which is what the benchmarks show. Except the benchmarks also show that threshold is around the range where lots of monitors and arguably even human eyes can't keep up. You don't play gprof. You play games.
That's because graphics work doesn't really touch the CPU other than assets going through the IMC to VRAM. Game logic and everything related to it runs on the CPU, and that means everything about the game except drawing the picture on the screen.
I can't find the benchmark now, but I saw the biggest gains for the X3D were at 640x480, and it seems to cut down on API/driver overhead in a curious way. Good game code is like feeding sausage into a wood chipper.
 
LMAO. Yup. Seeing 20+ percent gains in some games from literally swapping to another CPU in the same generation. Somehow not "much gain". You're the one who then decides later to say "well, anything over X fps doesn't matter".

When the way to realize a gain is to drive your resolution so low that your frame rate exceeds your monitor refresh rate, no, it's not much gain. I am not the one who decides this. I guess it was the Lord, since he didn't bless you with the ability to perceive frames your monitor didn't display.

That's because graphics work doesn't really touch the CPU other than assets going through the IMC to VRAM. Game logic and everything related to it runs on the CPU, and that means everything about the game except drawing the picture on the screen.
I can't find the benchmark now, but I saw the biggest gains for the X3D were at 640x480, and it seems to cut down on API/driver overhead in a curious way. Good game code is like feeding sausage into a wood chipper.

It's not because something changes about driver overhead. It's because the lower your resolution, the more frames your GPU can draw per second. Since the GPU and CPU run in parallel, the question is whether the CPU can get its work done before the GPU is ready to draw a new frame. If it can't, the GPU has to wait around for the CPU to get done. Just think about the simple example of a single core running at 3 GHz:

At 30 fps, it has 100 million clock cycles per frame
At 60 fps, it has 50 million cycles per frame
At 240 fps, it has 12.5 million cycles per frame

At 480 fps, that current-gen CPU core has roughly half as many cycles to do its work as the original Xbox's CPU did in a 60 FPS game. Of course, you have more cores, but you get the idea. When you have fewer clock cycles to throw around, you can't afford to waste them fetching data. That's why you see the 3D V-Cache having the most impact at low resolutions where the GPU can draw so fast that the CPU has to work at maximum efficiency to keep up. Lower resolution = less work per frame = higher frame rate = CPU eventually becomes the bottleneck.
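If anyone wants to sanity-check those budgets, it's just clock rate divided by frame rate. A quick sketch, assuming a single core at a flat 3 GHz and ignoring boost, SMT, and the fact that real games spread work across cores:

# Per-core cycle budget per frame at a flat 3 GHz clock (illustrative only).
CLOCK_HZ = 3_000_000_000

for fps in (30, 60, 240, 480):
    cycles = CLOCK_HZ / fps
    print(f"{fps:>3} fps -> {cycles / 1e6:,.2f} million cycles per frame")

# The original Xbox reference point: a 733 MHz CPU in a 60 fps game.
print(f"Xbox: {733_000_000 / 60 / 1e6:,.2f} million cycles per frame")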

FWIW, the context here is I do a lot of benchmarking on new-gen hardware, but it's all industrial workloads on high-end EPYC/Xeon/Ampere/Instinct stuff, so I legit did not know what the profile of gaming workloads looks like and what the right-sized hardware for a given use case is.
 
Last edited:
Xbox's CPU did in a 60 FPS game.
That's what's now called an APU, but like other consoles it had the benefit of true UMA; it can't really be compared with PC hardware or integrated graphics even if they appear the same.

Of course, you have more cores, but you get the idea. When you have fewer clock cycles to throw around, you can't afford to waste them fetching data. That's why you see the 3D V-Cache having the most impact at low resolutions where the GPU can draw so fast that the CPU has to work at maximum efficiency to keep up. Lower resolution = less work per frame = higher frame rate = CPU eventually becomes the bottleneck.
Yes, it absolutely sounds like the CPU is doing better work when it isn't bottlenecked by... other CPUs that don't perform as well at the same task.

The old trick for isolating game logic and related routines was to run them at a 1x1 pixel resolution, so the graphics API and hardware were involved but didn't have much to do, then spit out the logs. That's just for pure CPU throughput, as opposed to creating a synthetic test that only engages one component of the system, which is bullshit.
 
When the way to realize a gain is to drive your resolution so low that your frame rate exceeds your monitor refresh rate, no, it's not much gain. I am not the one who decides this. I guess it was the Lord, since he didn't bless you with the ability to perceive frames your monitor didn't display.
Then how about asking the question in relation to your preferences? Many buyers of the 5800X3D love its gaming uplift. They're specifically forgoing higher frequencies, more cores, and even the newer generation for its gaming performance. Don't be pedantic because your relatively vague question was answered in general terms.
 
X3D also raises 1% lows, and the benefit at 4K could be expected to rise in the future as faster GPUs are paired with it.
This is also a point less talked about. Less choppy games are always better.

Fact is the X3D is marketed, sold, and bought primarily as a gaming CPU.

Even the Raptor Lake slides had to insert a little red line, not a full bar, showing the X3D as beating it.
 
Even the Raptor Lake slides had to insert a little red line, not a full bar, showing the X3D as beating it.
That was wild:

[Attached image: Intel's Raptor Lake comparison slide]
 
That was wild:

View attachment 4019448
It's nice that Intel acknowledged it, but you can still taste the salt lol.
 
That's what's now called an APU, but like other consoles it had the benefit of true UMA; it can't really be compared with PC hardware or integrated graphics even if they appear the same.

The point is that the Xbox's CPU ran at 733 MHz, so at 60 fps, it had 12.2 million clock cycles per frame to work with. I use that as a reference point simply to underscore that 12 million clock cycles is not an extremely large number. Don't read too much into it; the only point is "12 million is not big."

Yes, it absolutely sounds like the CPU is doing better work when it isn't bottlenecked by... other CPUs that don't perform as well at the same task.

I don't know what you mean by a CPU bottlenecked by other CPUs. Do you mean some sort of hybrid cluster where you have multiple sockets with different CPUs in them? I worked with a machine like that once - it had, I think, 32 new (at the time) quad-core Xeons and 32 old single-core CPUs, and we found it was better to just not even try to use the slower CPUs at all.

X3D also raises 1% lows, and the benefit at 4K could be expected to rise in the future as faster GPUs are paired with it.

Just to use one example, one of the games showing no benefit at 4K is Doom Eternal. At maximum settings and 4K, the RTX 4090 runs Doom Eternal at an average 316 fps. So let's say NVIDIA comes out with a graphics card in a year or two that can hit the 400+ fps needed for the CPU to become the bottleneck, and buying a 5800X3D CPU gets you over that 500 fps hump, or whatever the threshold is. Who's the customer for this? Is it anybody other than YouTubers who like to post benchmarks to brag about how much they spent on their machine?
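Just to make the "who's the bottleneck" argument concrete, a toy model: the 316 fps GPU figure is the one quoted above, while the CPU-side ceilings are invented purely for illustration.

# Toy model: CPU and GPU prepare frames in parallel, so the delivered frame
# rate is capped by whichever side is slower per frame. Only the 316 fps GPU
# figure comes from the review above; the CPU-side numbers are made up.
def delivered_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

gpu_fps_4k = 316                                # Doom Eternal, 4K max, RTX 4090
cpu_ceiling = {"5800X": 430, "5800X3D": 520}    # hypothetical CPU-side caps

for name, cpu_fps in cpu_ceiling.items():
    print(f"{name}: {delivered_fps(cpu_fps, gpu_fps_4k)} fps delivered")
# Both print 316: until the GPU outruns the slower CPU's ceiling,
# the extra cache changes nothing you can actually see.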


I know I'm being autistic, but picking apart marketing performance numbers so people don't waste millions of dollars on parts that don't actually help them is part of what I do. People don't lie, but they leave out things that matter, and vendors just want you to buy the most expensive thing usually.
 
Last edited:
I know I'm being autistic, but picking apart marketing performance numbers so people don't waste millions of dollars on parts that don't actually help them is part of what I do. People don't lie, but they leave out things that matter, and vendors just want you to buy the most expensive thing usually.
You do know that plenty of regular people don't use top-end GPUs like a 4090 and still get good uplift from CPUs, right? Here's an older review using a 3080 Ti:

Some games don't show a benefit; others, like Microsoft Flight Simulator, show large increases even at 1440p and 1080p. More and more people are moving away from 60 fps, so these increases are very relevant.

Yes, you're being very autistic because for some reason you're focusing on a very narrow set of parameters for what you define as "useful". I'm sorry, the 3D L3 cache is not a gimmick for what it's aimed at.
 
So I haven't upgraded my PC since 2015 and it's really starting to show its age. Right now I have an i5 at 3.5 GHz and a GTX 1070 with 16 GB of DDR3 RAM, I think.

What's a good upgrade to bring my computer into contemporary times? I prefer to stay Intel/Nvidia. I don't have to be cheap but I'm also not looking for top shelf Titan shit.
 