GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

In general I'm super split on the X3D chips. If we see them drop down to like $300 when you're ready to build, definitely pick one up but you're probably not going to miss the extra cache if you decide to grab a non-X3D variant.

Popular YouTube guys make them seem like a big deal, but look at the charts: it's often maybe a 10 fps boost in the 1% lows (which are already well above 60). The other thing they usually don't talk much about is that the gains aren't uniform across games - some games get noticeable boosts, some get no benefit at all, some regress because of lower clock speeds, and most get maybe 5-10 fps extra (which then vanishes the second you go to 1440p or 4K).

Here's something to keep in mind - The 9800X3D is like $150 more than the 9700X. It gets you maybe 10-15 fps more. $150 is roughly the amount you'd need to go up a tier in GPUs which is going to give you a whole lot more gains.
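To put that in back-of-envelope terms, here's the dollars-per-frame math. The prices and CPU fps delta are the ballpark figures from this post; the ~25 fps gain for a GPU tier jump is my own assumption for illustration, not a benchmark:

```python
# Rough $-per-fps comparison. Prices and the CPU fps gain are the ballpark
# numbers from this post; the GPU-tier fps gain is an assumed figure.
def dollars_per_fps(price_delta, fps_delta):
    return price_delta / fps_delta

cpu_upgrade = dollars_per_fps(150, 12.5)  # 9700X -> 9800X3D, ~10-15 fps midpoint
gpu_upgrade = dollars_per_fps(150, 25.0)  # one GPU tier up (assumed ~25 fps)

print(f"X3D premium: ${cpu_upgrade:.0f}/fps, GPU tier jump: ${gpu_upgrade:.0f}/fps")
```

Under those assumptions the GPU tier jump buys each extra frame at half the price.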

tl;dr don't overpay for CPUs. Unless X3D is cheap, you're getting more value out of spending the difference on a GPU.
 
Here's something to keep in mind - The 9800X3D is like $150 more than the 9700X. It gets you maybe 10-15 fps more.
Baldur's Gate has a memory leak, so the 3D V-Cache on X3D helps real frame rates, not just the bullshit 200+ fps benchmarks you see a lot. Spending an extra hundred bucks on a processor because your game has a bug the woke dangerhairs can't be bothered to fix is cool.
 
Baldur's Gate has a memory leak, so the 3D V-Cache on X3D helps real frame rates, not just the bullshit 200+ fps benchmarks you see a lot. Spending an extra hundred bucks on a processor because your game has a bug the woke dangerhairs can't be bothered to fix is cool.
It’s a really good game though…

Anyway, the extra cache is also a huge boon for games like Stellaris and Civilization.
 
Yeah, if you main a game or genre where 3D v-cache helps then it's obviously something to factor in. 4X and Sim games tend to have outsized benefits from the X3D chips. But for a general normie gaming PC, I can't recommend it at the current price vs just spending that money on a better GPU.

And tbh I can't even really recommend Zen5 when Zen4 is so cheap. A 7700X is like $270. That's a $200 difference vs 9800X3D which can almost take you from a 4070 to a 4070 ti super.

If budget isn't a concern and you're already locked in to buy a high-end GPU regardless of how much your CPU costs, then I'd say pull the trigger whenever the 9800X3D comes back into stock. But most people who come to this thread asking for recs are usually working on a pretty constrained budget.
 
Popular YouTube guys make them seem like a big deal, but look at the charts: it's often maybe a 10 fps boost in the 1% lows (which are already well above 60). The other thing they usually don't talk much about is that the gains aren't uniform across games - some games get noticeable boosts, some get no benefit at all, some regress because of lower clock speeds, and most get maybe 5-10 fps extra (which then vanishes the second you go to 1440p or 4K).
It will look a bit better after the 5090 comes out and alleviates GPU bottlenecks. And that's the type of person who "needs" the 9800X3D, an RTX 4090/5090 owner. Most people are fine with much cheaper shit.
 
9800X3D looks nice in benchmarks, I wonder how Intel fanboys will cope now. Exciting times ahead, unless Intel fumbles the next generation after the 200 series.
I overclocked my RAM to 6200 MT/s 1:1 so I wouldn't feel left out next to the 7800X3D. Would be nice if the next generation brings a higher-bandwidth Infinity Fabric.
All I want now is for AMD to release a 16-core low-power monster for AM5, so that I could upgrade my mITX build in a couple of years.
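For reference, the theoretical peak at that memory speed pencils out like this (assuming a standard dual-channel desktop setup with 8 bytes per transfer per channel; sustained real-world throughput is lower):

```python
# Theoretical peak DDR5 bandwidth: transfers/s x channels x bytes per transfer.
# Assumes dual-channel, 64-bit (8-byte) channels; real throughput is lower.
def peak_bandwidth_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * channels * bytes_per_transfer / 1000  # decimal GB/s

print(f"6200 MT/s dual channel: {peak_bandwidth_gbs(6200):.1f} GB/s peak")
```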
 
I overclocked my RAM to 6200 MT/s 1:1 so I wouldn't feel left out next to the 7800X3D. Would be nice if the next generation brings a higher-bandwidth Infinity Fabric.
I'm betting that High Yield is right and Zen 6 will adopt Infinity Links for greater bandwidth, lower latency, and other benefits.

PC Gamer: AMD's Infinity Links is the unsung hero of RDNA 3 and chiplet gaming GPUs (archive)

Also, Kepler_L2 claims Zen 6 will be on AM5 and is coming late 2026 or early 2027. That is the entirety of the "leak".
All I want now is for AMD to release a 16-core low-power monster for AM5, so that I could upgrade my mITX build in a couple of years.
The Minisforum BD790i SE could be the perfect product for you.
 
I'm in search of a used RTX 3090 for ML. It's going to be in a rackmount case with plenty of airflow, and the max length is 305mm. It's between an MSI Ventus 3X OC, a Dell Alienware OEM, or an EVGA XC3.

Thoughts?
 
Yeah, if you main a game or genre where 3D v-cache helps then it's obviously something to factor in.

FWIW, there really isn't a "genre" with particular demands on cache. It comes down to how you design and manage the data structures in your game. If you did it stupidly, you will be brutalizing your L3 cache. If you did it smartly, you won't be. A lot of software is programmed stupidly. Two games in the same genre can be radically different under the hood.

This is in no way a criticism of AMD. Recognizing most software is shit and designing your chip to run shitty software fast was a revolutionary idea in the 1990s and has defined chip architecture for decades now. It's just the smart thing to do if you're a hardware company.
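To make the data-layout point concrete, here's a toy direct-mapped cache model (sizes are made up, not any real CPU's L3) showing how the same amount of data hits very differently depending on access pattern:

```python
# Toy direct-mapped cache model: illustrates why data layout / access
# pattern decides your hit rate. Sizes are hypothetical, not a real CPU.
LINE = 64    # bytes per cache line
LINES = 512  # number of lines -> 32 KiB toy cache

def hit_rate(addresses):
    """Fraction of byte-address accesses served from the toy cache."""
    tags = [None] * LINES
    hits = 0
    for addr in addresses:
        line = addr // LINE       # which cache line this address lives in
        slot = line % LINES       # direct-mapped: one possible slot per line
        if tags[slot] == line:
            hits += 1
        else:
            tags[slot] = line     # evict whatever was there
    return hits / len(addresses)

# Sequential walk over 1 MiB of 8-byte fields: each 64-byte line serves 8 hits.
seq = [i * 8 for i in range(131072)]
# Same number of accesses, but scattered with a 4 KiB stride
# (roughly what pointer-chasing through a fragmented heap looks like).
scattered = [(i * 4096) % (131072 * 8) for i in range(131072)]

print(f"sequential hit rate: {hit_rate(seq):.2f}")
print(f"scattered  hit rate: {hit_rate(scattered):.2f}")
```

The sequential layout hits ~88% of the time; the strided one thrashes the same few slots and hits never. Bigger caches (like 3D V-Cache) mostly help the programs sitting between those two extremes.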

9800X3D looks nice in benchmarks, I wonder how Intel fanboys will cope now.

I run games at whatever settings I need to maintain about 100 fps and cap at 90. So this is what the TH graph looks like for me (I don't have any of these CPUs, but still).

[attached chart screenshot]
 
Anyway, the extra cache is also a huge boon for games like Stellaris and Civilization.
I play a shit ton of Stellaris and other 4x games, so at least I have one good excuse.
I'm in search of a used RTX 3090 for ML. It's going to be in a rackmount case with plenty of airflow, and the max length is 305mm. It's between an MSI Ventus 3X OC, a Dell Alienware OEM, or an EVGA XC3.

Thoughts?
3090s are still insultingly expensive aren't they?
 
3090s are still insultingly expensive aren't they?
It's the VRAM - aside from the 4090, they're the only card on the market with 24GB at a consumer price point. I've looked into the P40, and its FP16 performance is garbage. Used 3090s go for $700+. I'm not going for the Ti because it would fry my PSU.
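The 24GB figure is easy to sanity-check against model sizes - weights alone at a given precision, ignoring KV cache and activation overhead. The parameter counts below are just illustrative:

```python
# Minimum VRAM for model weights alone: params x bytes per param.
# KV cache, activations, and framework overhead all come on top of this.
def weights_gb(params_billion, bytes_per_param):
    return params_billion * bytes_per_param  # 1e9 params * N bytes ~= N GB

print(f"7B  fp16: {weights_gb(7, 2):.0f} GB")   # fits in 24 GB with headroom
print(f"13B fp16: {weights_gb(13, 2):.0f} GB")  # does not fit unquantized
print(f"13B int8: {weights_gb(13, 1):.0f} GB")  # fits after quantization
```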
 
Intel Investigating Core Ultra 200S Shortcomings, Says Gaming Performance Fell Short of Expectations & Will Be Fixed

Is Arrow Lake one patch away from not being shit? Stay tuned.


7900 XTX: "Am I a joke to you?" "Yes."
I was setting up text-generation-webui on an AMD card and it wasn't pleasant. On Nvidia it just works™.
Let's just push more power into it.
Maybe it's the NT scheduler finally biting them in the dicks, or the microcode is unfinished.
 
Hey, how small could they make the pins or lands in a PGA/LGA chip, if you assume that chips would only be swapped out with specialty equipment by people with more training than the average PC builder?

Like, if a CPU is now being divided into chiplets, is there a way to make them somewhat replaceable, so that a trained tech with the right tools could add or remove chiplets with different features? Kind of like adding more calculation chips to a really old computer, but with all the chiplets on a daughter board that connects to the motherboard like a traditional CPU.
 
Hey, how small could they make the pins or lands in a PGA/LGA chip, if you assume that chips would only be swapped out with specialty equipment by people with more training than the average PC builder?

Like, if a CPU is now being divided into chiplets, is there a way to make them somewhat replaceable, so that a trained tech with the right tools could add or remove chiplets with different features? Kind of like adding more calculation chips to a really old computer, but with all the chiplets on a daughter board that connects to the motherboard like a traditional CPU.
The pins/contacts you see on a CPU aren't the actual connection points on the silicon - there are a variety of bonding techniques used to connect the silicon to the package and pretty much all of them require high-precision machinery.

For example, this is a classic wire bond:
[image: wire-bonded die]

Modern CPUs use stuff like TSVs or flip chip bonding to expose the pins on a package. Or put another way - contact pins on the actual CPU silicon are already incredibly small. We explicitly use packaging techniques to enlarge them so they can be safely installed by the end-user without specialized tools.
 