So I have a friend who is claiming that my 3080 GPU will bottleneck a 7600 CPU, which I seriously doubt considering the 3080 is still pretty solid AFAIK, but what do you guys think? I do plan to upgrade the GPU next year, but I don't know.
Also, I am considering the 7600X, but would a 7800X3D be a better option for a nearly $150 premium? Or would a 7800X be a better option? I'm going to wait until Black Friday/Cyber Monday to buy on Amazon and see if I can get a deal.
(Sorry if this has already been discussed; if it has, I skimmed past it.)
It sounds like your friend is trying to sound smart. That CPU will have no problem keeping up with the frame rates that GPU is capable of delivering, nor will any of the other CPUs you listed. And on the flip side, that GPU will have no trouble delivering smooth frame rates and stunning visuals in any game out now or in the next five years. So you can't really go wrong. 3D V-Cache is not something I'd pay extra for unless your computer is also serving as a workstation.
GN says one thing, Ugly One another. I'm going to listen to whichever one has the better hair.
Also because my personal experience with Intel's heterogeneous architecture is that it's awful and not ready for desktop computers. Seriously, my 12900K ran better with the E-cores off than with them on; it was ridiculous. The E-cores put out ridiculous amounts of heat for what little actual work they bothered doing. I've heard that's mostly fixed now, but calling them Cinebench cores kind of aligns with my experience: looks good in benchmarks, and technologically impressive, but Linux/Windows are not ready for it. I'd rather have chiplets and core parking. Only need one CCD (i.e., office work or gaming)? Park the other three. Running engineering software/compiling? Power up all four CCDs and go at it. Idling? Run one CCD at minimal levels.
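(To make the core-parking idea concrete: the OS scheduler/power driver is what actually parks cores, but from userspace you can roughly approximate it by pinning your process to a single CCD. A minimal sketch in C, assuming logical CPUs 0-15 are CCD0; the real mapping is whatever /sys/devices/system/cpu/cpuN/topology/ says on your box.)
```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Assumed topology: logical CPUs 0-15 = CCD0 (8 cores + SMT).
       Verify under /sys/devices/system/cpu/cpuN/topology/ first. */
    cpu_set_t one_ccd;
    CPU_ZERO(&one_ccd);
    for (int cpu = 0; cpu < 16; cpu++)
        CPU_SET(cpu, &one_ccd);

    /* Pin the current process (pid 0 = self) to CCD0 only; the other
       CCDs get no work from us and are free to drop into idle states. */
    if (sched_setaffinity(0, sizeof(one_ccd), &one_ccd) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("restricted to CCD0; remaining CCDs left free to idle\n");
    return 0;
}
```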
Squeezing so many cores into so little die area is impressive, but given the choice I'd rather have a processor with four Zen 4c dies than one with a pile of Intel's benchmark accelerators that can't even do AVX-512 (not that I ever use AVX-512, and now that Intel has undermined it as an instruction set we should be able to expect processors to have, I probably never will).
So what annoys me about the GN video is that he says a bunch of stuff that makes no sense, like "the E-core gets in the way" or something. He also seems to think P-cores run demanding workloads "more efficiently" than E-cores. The fundamental problem is that you don't actually know what the problem is until you know what the software is doing. He doesn't figure out what the software is doing, so he doesn't have enough information to say any of the things he's trying to say. He provides enough information that, if I were in his shoes, I'd know the next things to look at before drawing conclusions (probably run it in VTune and track L2 data-cache misses), but he stops there. And that is what perpetually annoys me about all these reviewers: they assume the software is already doing the right thing, so how it performs on different hardware must come down to design successes or failures in the hardware.
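(For the record, you don't even need VTune for that kind of check; on Linux, perf_event_open will hand you cache-miss counts directly. A rough sketch below, using the generic L1D read-miss event as a stand-in since true L2 miss events are vendor-specific raw codes, and with a placeholder loop where the real workload would go.)
```c
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

/* Thin wrapper: glibc provides no perf_event_open() stub. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = PERF_TYPE_HW_CACHE;
    /* Generic L1D read-miss event as a stand-in; L2/L3 miss events
       are vendor-specific raw codes you'd look up per CPU. */
    attr.config = PERF_COUNT_HW_CACHE_L1D
                | (PERF_COUNT_HW_CACHE_OP_READ << 8)
                | (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = perf_event_open(&attr, 0, -1, -1, 0);  /* this process, any CPU */
    if (fd == -1) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* Placeholder workload -- in practice you'd wrap the code under test. */
    volatile long sink = 0;
    for (long i = 0; i < 50000000; i++) sink += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t misses = 0;
    read(fd, &misses, sizeof misses);
    printf("cache read misses: %llu\n", (unsigned long long)misses);
    close(fd);
    return 0;
}
```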
At the risk of writing another 5000-word spergout, a good example is how the AMD X3D chips perform much better in BGIII than just about anything else. The game hardware sites were jumping for joy, but the result sounded weird to me, since games are usually not memory bound. So I did a little digging and discovered BGIII has a memory leak. Even when the game is sitting there doing nothing, somewhere in the code the executable is constantly allocating objects and losing track of them without deallocating the memory. That explains why all that extra L3 cache improves performance: Larian's buggy software is constantly hitting main memory with unnecessary mallocs, those newly malloc'd objects eventually go unused, the cache evicts them back to RAM, and they float there in a zombie-like state until you close the game. Extra L3 means they can sit around a little longer before getting kicked out. But this doesn't tell you anything in general about "gaming performance."
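(If the leak pattern isn't obvious, here's a made-up illustration, not Larian's actual code: a per-frame update that allocates scratch objects, touches them once, and drops the pointer without freeing.)
```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-frame update showing the leak pattern:
   allocate, touch, then lose the pointer without free(). */
typedef struct { float pos[3]; float vel[3]; char pad[40]; } Scratch;

static void frame_update(int n_objects)
{
    for (int i = 0; i < n_objects; i++) {
        Scratch *s = malloc(sizeof *s);
        if (!s) return;
        memset(s, 0, sizeof *s);  /* touched once, so it pulls cache lines in */
        /* Pointer goes out of scope with no free(): leaked. Each frame adds
           more dead objects; they age out of L3 into RAM and sit there until
           the process exits. A bigger L3 just delays the evictions, which is
           why X3D "helps" a workload like this. */
    }
}

int main(void)
{
    /* Simulate a few thousand frames of the leaky update. */
    for (int frame = 0; frame < 5000; frame++)
        frame_update(100);
    return 0;
}
```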
Now, it's absolutely true that if you ship software that's full of memory leaks, sure, it will run better on machines with larger caches and more memory bandwidth. But that's not what advanced memory technology is for. AMD didn't develop 3D V-Cache so it could wipe irresponsible developers' asses. There are lots of legitimate needs for larger L3 cache, but "I didn't fix my memory bugs before shipping" isn't one of them. Larian's software is basically shitting itself constantly, and the game sites are evaluating hardware's general fitness for gaming based on how much Larian can shit on it before it chokes, instead of calling out Larian for shitting itself.
My i9-12900 (didn't get the K because 5 GHz is a meme) does very well under Windows 11 with a game in the foreground and alt-tabbing to the browser regularly (so I can be MATI here or look at game guides or Discord or whatever). However, my 8-core Ryzen does just fine with that as well... which is why my stance on gaming CPUs is "get whatever is in your price range, it will be fine, and ignore the memes and the YouTube spergs."
To be fair, a lot of the problems I was having were probably because the 12900K overdosed on E-cores in a vain attempt to compete with the 5950X. All those E-cores struggling at 120 W to write the system log in the background got the die really hot and forced the P-cores to downclock.
I wonder if that's due to the warping issue 12th gen has. I got about +15% performance from a $10 contact frame and significantly lower temps all around.
Power consumption (base / max turbo):
i9-12900: 65 W / 202 W
i9-12900K: 125 W / 241 W
So yeah.