GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Sorry to interrupt, but I would like some help and opinions. I'm looking to get back into PC gaming but I can't for the life of me figure out CPUs. GPUs I can understand, they somewhat make sense, but CPUs are confusing.

What's the difference between the variations of i3/5/7/9 processors? Is an i3 10400 the same as an i9 5000 or an i7 8000?

Any help would be great. Thank you :)
 
Ignore the i number and just look at the model number. The first one or two digits are the generation, so the 2000s are 2nd generation, 10000 is 10th gen, and 13000 is the current gen.

The rest of the digits are the class within the generation, between 100 and 900, higher meaning more cores, power, and cost. So the current flagship is the 13900KF (K meaning unlocked for overclocking, F meaning no onboard GPU).
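If it helps to see that spelled out, here's a rough Python sketch of pulling those parts out of a model number. Purely illustrative - Intel doesn't hand you a parsing rule, this is just the scheme described above:

    def decode_intel(model: str) -> dict:
        """Split e.g. '13900KF' into generation, class tier, and suffix letters."""
        body = model.rstrip("ABCDEFGHIJKLMNOPQRSTUVWXYZ")  # drop trailing letters
        return {
            "generation": int(body[:-3]),  # leading digit(s): 2 = 2nd gen, 13 = 13th
            "class": int(body[-3:]),       # 100-900 tier within the generation
            "suffix": model[len(body):],   # K = unlocked, F = no onboard GPU
        }

    decode_intel("13900KF")  # {'generation': 13, 'class': 900, 'suffix': 'KF'}
    decode_intel("2600K")    # {'generation': 2, 'class': 600, 'suffix': 'K'}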

That said, don't buy Intel. AMD are just as fast and use way less power.
 
That said, don't buy Intel. AMD are just as fast and use way less power.
This is reasonable advice. Intel are clawing their way back up but I would still go AMD right now if I were getting back in.

And on that note I'll extend @Second Sun 's direct answer to your question to include AMD numbering, which you didn't ask about but probably should have. It's similar to Intel. The first number is the generation. The second number is, broadly speaking, the power category (not in the watts sense) of the chip. So a Ryzen 5600 is the older version of the Ryzen 7600, and a Ryzen 7700 is more powerful than a Ryzen 7600. "More powerful" usually means more cores, so there are some scenarios in which a more powerful chip won't actually be more powerful - e.g. very single-threaded tasks that don't benefit from more cores.

Newer isn't always better - the generations overlap. So a Ryzen 7600, which has six cores, is going to be a bit better than a Ryzen 5600, which also has six cores - clocked higher, more power efficient, possibly more IPC (instructions per clock), etc. - but it being newer isn't going to be enough to make it better than a 5900X, which has twelve cores. But again, there's nuance. The 7600 is actually clocked higher so there are scenarios in which it WILL be better.
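As a crude way to see that trade-off: multithreaded throughput scales roughly with cores x clock (x IPC, which I'm ignoring here), while a single-threaded task only sees clock (x IPC). The boost clocks below are ballpark figures from memory, not spec-sheet quotes, so treat the output as an illustration rather than a benchmark:

    # Back-of-envelope only; real results depend on IPC, memory, boost behaviour.
    chips = {  # name: (cores, approx. boost GHz - ballpark, check spec sheets)
        "Ryzen 5600": (6, 4.4),
        "Ryzen 7600": (6, 5.1),
        "Ryzen 5900X": (12, 4.8),
    }
    for name, (cores, ghz) in chips.items():
        print(f"{name}: single-thread ~{ghz}, all-core ~{cores * ghz:.1f}")
    # The 7600 tops the single-thread column; the older 5900X still wins all-core.

(Zen 4's higher IPC only widens the 7600's single-thread lead, which is the nuance above.)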

There are some sub-categories just to mildly complicate things. Some of the newer Ryzen chips have 3D Cache versions of the processor. That's the chip with some extra cache bolted on top, which makes it faster for tasks that benefit from it, gaming being chief amongst them. And just to add some confusion, AMD name their chips "Ryzen 5, Ryzen 7, Ryzen 9", but these are broad categories of how powerful they are for lay people. Really, just look at the long number, e.g. Ryzen 9 7950X.

You said you were interested in it for gaming. You need to avoid weak points in the build, e.g. you don't want a very old processor or a mechanical hard drive for your system, but allowing for that you don't necessarily get an equal return on your investment by over-spending on everything. A graphics card is more likely to be a bottleneck than a CPU so spending any extra budget on a better graphics card is more likely to get you a benefit in gaming than spending the same money on getting a better CPU. In general, I would say start with looking at mid-range Ryzen and work back from there to get the motherboard, RAM, SSD, etc. If you tell us what sort of gaming expectations you have (type of game, size of monitor, level of quality you insist upon) we can probably guide you to something that can achieve that without overspending.
 
And on that note I'll extend @Second Sun 's direct answer to your question to include AMD numbering, which you didn't ask about but probably should have. It's similar to Intel. The first number is the generation. The second number is, broadly speaking, the power category (not in the watts sense) of the chip. So a Ryzen 5600 is the older version of the Ryzen 7600, and a Ryzen 7700 is more powerful than a Ryzen 7600. "More powerful" usually means more cores, so there are some scenarios in which a more powerful chip won't actually be more powerful - e.g. very single-threaded tasks that don't benefit from more cores.
Both Intel and AMD have been making idiotic changes to their numbering schemes recently. Raptor Lake Refresh (14XXX) seems to be the last using the old system; now they are using Core Ultra XXXX for Meteor Lake onwards, seemingly with a different numbering. AMD's desktop chips still seem to use the first two digits to indicate generation, but their mobile chips are an absolute mess, with the 7000 series including Zen 2, Zen 3, Zen 3+ and Zen 4 based chips.
 
their mobile chips are an absolute mess, with the 7000 series including Zen 2, Zen 3, Zen 3+ and Zen 4 based chips.
Intel mobile chips also have unintelligible naming, but AMD really wins here. Looking at laptops in the same price class with chips labelled 7000-series, you could get a business laptop (meaning only integrated graphics and a sensible exterior) with a 7940U, or a gaming laptop with a 7520HS and an RTX 3050. But no matter what you plan to do with your computer, the business laptop is a far better choice, because the eight-core Zen 4 processor (despite being the underclocked U variant) will absolutely crush the four-core Zen 2 gamer (despite being the overclocked HS variant), and the RDNA3 iGPU the Zen 4 offers will outperform the dedicated GPU in the gaming laptop while also producing less heat and letting the battery last longer. The -50 tier of RTX cards doesn't even have the Nvidia features you'd normally consider paying more for that would make a lot of sense in laptops, such as DLSS2. The gaming laptop's four Zen 2 cores use more electricity than all eight Zen 4 cores! And the Zen 4 will boost a gigahertz higher! All the Zen 2 really has going for it is the same memory controller as the Zen 4 variant, so it's this really weird combination of modern 6400 MT/s LPDDR5 RAM with an old processor that can't make use of the bandwidth on offer.
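For what it's worth, the third digit in those mobile 7000 numbers is supposed to encode the architecture. Something like this, going by my reading of AMD's published decoder ring (double-check the actual spec sheet before buying, though):

    def mobile_zen(model: int) -> str:
        """Third digit of a Ryzen 7000 mobile part number -> Zen generation."""
        arch = {1: "Zen / Zen+", 2: "Zen 2", 3: "Zen 3 / Zen 3+", 4: "Zen 4"}
        return arch[(model // 10) % 10]

    mobile_zen(7520)  # 'Zen 2' - the "gaming" laptop chip above
    mobile_zen(7940)  # 'Zen 4' - the business laptop chip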
 
Chips and Cheese posted a new article (blog post?) recently on GPU utilization in Starfield. Interestingly, it seems that Starfield uses AMD's new dual-issue shaders along with its fat cache, which makes the 7900 XTX look good. They come to the conclusion that Starfield is utilizing an appropriate amount of hardware resources on RDNA 3, Ada, Ampere and Turing. They don't think there are obvious issues on that front.

Sorry to interrupt, but I would like some help and opinions. I'm looking to get back into PC gaming but I can't for the life of me figure out CPUs. GPUs I can understand, they somewhat make sense, but CPUs are confusing.

What's the difference between the variations of i3/5/7/9 processors? Is an i3 10400 the same as an i9 5000 or an i7 8000?

Any help would be great. Thank you :)
Intel have moved CPUs all over the marketing stack, effectively making the branding number a useless metric across generations. AMD and Nvidia have done the same with GPUs. Unfortunately, you'll have to find a breakdown table with all the CPUs; Intel's Ark is nice.

If you're looking for a budget-oriented CPU, I've found the i5 13500 to be the best positioned this year. You have the option of DDR4 if memory is expensive where you live, and the E-cores are a nice plus compared to the 7600.
 
Both Intel and AMD have been making idiotic changes to their numbering scheme recently. Raptor Lake Refresh (14XXX) is the last that seems to be using the old system, now they are using Core Ultra XXXX for Meteor Lake onwards and it seems with a different numbering. AMD seems to have desktop chips still going with the first two digits indicating generation, but their mobile chips are an absolute mess, with the 7000 series including Zen 2, Zen 3, Zen 3+ and Zen 4 based chips.
AMD was always pretty easy on desktop imho, just treat it like GPUs. Most people don't care about, or even need to know, the generations the number represents.

A graphics card is more likely to be a bottleneck than a CPU so spending any extra budget on a better graphics card is more likely to get you a benefit in gaming than spending the same money on getting a better CPU. In general, I would say start with looking at mid-range Ryzen and work back from there to get the motherboard, RAM, SSD, etc. If you tell us what sort of gaming expectations you have (type of game, size of monitor, level of quality you insist upon) we can probably guide you to something that can achieve that without overspending.
depends, GPUs usually get replaced sooner than CPUs, and unless you make big jumps like MUH 4K they should still be more than enough to feed GPUs a few years down the line.
I'd rather spend a bit more on the CPU so I don't have to rip half the machine apart just to replace it. besides, the 5x00 is the last AM4 gen anyway, just bump it up to a higher model then wait till AM5 matures and its features actually become necessary.
 
depends, GPUs usually get replaced sooner than CPUs, and unless you make big jumps like MUH 4K they should still be more than enough to feed GPUs a few years down the line.
I'd rather spend a bit more on the CPU so I don't have to rip half the machine apart just to replace it. besides, the 5x00 is the last AM4 gen anyway, just bump it up to a higher model then wait till AM5 matures and its features actually become necessary.
You write as if you're disagreeing with me but say nothing that changes anything I said. My point: if you have the budget, get an average CPU and a good GPU, not a good CPU and an average GPU. For gaming. Because the GPU is more likely to be the bottleneck in gaming performance than the CPU. All I said is: once you have a sufficient CPU, direct the excess towards the GPU.

A mate of mine still games on an FX-8350. Any mid-range current gen CPU is going to be good for many years. You're not going to have to "rip half the machine apart" (which actually means a couple of hours on a Sunday morning, btw).

Also, what are you blithering about with "wait until its features become necessary" and "just bump up to a higher model"? @Bog-standard Poster hasn't said anything about what they're currently on that I can see. If building a new system for gaming, there's no good reason other than a tight budget not to get on board with something like a 7800X3D. My point is if they have a couple of hundred extra, put that towards a better graphics card rather than, say, upgrading the CPU to a 7950X3D.
 
A mate of mine still games on an FX-8350. Any mid-range current gen CPU is going to be good for many years.
I'm pretty sure we can thank the PS4 and Xbone having extremely weak CPUs, and thus devs taking into account weak CPUs, for how long a lot of early 2010s CPUs lasted. Given current gen consoles are using Zen 2, I would assume anything at or above that will last quite a while.
 
Eh, I've seen a decent number of people really want the extra lanes of the Threadripper platform. There'll be people excited about a new one.

Edit - think power PC users.

Literally me.

What I'm saying is that if you cheap out on the 16-core version, you simply don't have enough cores to take advantage of the available bandwidth. You're basically paying a massive premium for a trivial performance gain over a much cheaper Ryzen-X machine, especially since the Zen 4 EPYC platform has 12 memory channels, not 8. Do the math.

EPYC/Threadripper: 12 channels * 8 bytes * 4800 MT/s / 1024 = 450 GB/s

Ryzen: 2 channels * 8 bytes * 4800 MT/s / 1024 = 75 GB/s

16 cores * 4 GHz * 1 FLOP/cycle = 64 GFLOPs for non-SIMD workloads, around 512 GFLOPs with 8-wide SIMD.

So on a 16-core Ryzen, you only need to stay over about 0.85 flops per byte to keep a non-SIMD workload fed with data, or about 6.8 flops per byte with SIMD. Threadripper Pro will push that floor down to about 0.14 flops/byte and 1.1, respectively, and this is before considering the effects of 3D V-Cache in something like the 7950X3D. What I have seen in the wild is that some of the most bandwidth-hungry, real-world applications don't have an arithmetic intensity low enough to truly need all that bandwidth unless you are well over 16 cores. My point is, if you're going to buy a Zen 4 Threadripper Pro, don't cheap out: you're already buying an $8000 workstation, so spend another grand or two to get the 2x-3x performance the memory system can actually deliver.
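If you want to sanity-check those numbers, here's the arithmetic in runnable form, using the same assumptions as above (8 bytes per channel per transfer, DDR5-4800, 16 cores at 4 GHz, 1 FLOP per core per cycle scalar, 8-wide SIMD):

    # Roofline-style back-of-envelope using the assumptions stated above.
    def bandwidth_gbs(channels, mts=4800, bytes_per_transfer=8):
        return channels * bytes_per_transfer * mts / 1024  # GB/s

    def gflops(cores=16, ghz=4.0, simd_width=1):
        return cores * ghz * simd_width  # GFLOP/s

    for name, ch in [("EPYC/TR (12ch)", 12), ("Ryzen (2ch)", 2)]:
        bw = bandwidth_gbs(ch)
        print(f"{name}: {bw:.0f} GB/s -> floor {gflops() / bw:.2f} flops/byte "
              f"scalar, {gflops(simd_width=8) / bw:.2f} with SIMD")
    # EPYC/TR (12ch): 450 GB/s -> floor 0.14 flops/byte scalar, 1.14 with SIMD
    # Ryzen (2ch): 75 GB/s -> floor 0.85 flops/byte scalar, 6.83 with SIMD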

Any help would be great. Thank you :)

Any midrange or high-end CPU from the last couple of years is more than enough for just about any consumer workload, and will be for a very long time... so i7-11XXX or i9-11XXX and up, or Ryzen 7 or 9 5XXX and up. The Intels run a bit hotter than the AMDs lately, but most of the heat in a gaming machine comes from the GPU. Frankly, if I were buying a new PC today, with my own money, for me, I'd get this:

But my love for mini-PCs is now well-documented in this thread.
 
I'm pretty sure we can thank the PS4 and Xbone having extremely weak CPUs, and thus devs taking into account weak CPUs, for how long a lot of early 2010s CPUs lasted. Given current gen consoles are using Zen 2, I would assume anything at or above that will last quite a while.
I disagree. We're talking games in this context, and exactly how much does the CPU contribute to modern gaming once a baseline of necessity has been reached? The majority of games aren't doing massive world simulations. Even RPGs like Baldur's Gate III, if you take out the graphics part of things, are doing fairly elementary mathematics on a fairly small number of elements. How much computational power does it take to say "I have 12 monsters, my algorithm for picking their target is some function of nearest PC, range of my attack / distance to the PC, some few other things the game designer decided, run through the maths, damage - armour, subtract HP, etc."? And then you look at things like first-person shooters or driving games... Pathfinding is the most complex thing most games need to do on the CPU, and that's basically a solved problem, not wildly taxing. The difference between Doom in 1993 and Doom in 2016 was basically graphics. Yes, there is more to it, but in that time processors have increased in power by an order of magnitude more than the demands placed on them. In the context of gaming.

You'll find exceptions, and a faster CPU has some effect on the overall performance of a game. It remains something of a bottleneck between storage and RAM, though even that is now being worked around (DirectStorage and the like). But that is far more the real reason than developers optimising for consoles. Gamers, for the most part, thirst for better graphics and faster frame rates. Exactly what do you add to a racing game that means you need a CPU 10x the power of one from a decade ago? But 4K graphics at >100 fps? Yeah, that's the real reason my mate can still game on an FX-8350 and be happy.

Caveat: Please don't try to take my argument to strawman extremes of "oh, you're saying the CPU is irrelevant". Literally my entire point to the person who asked for help was that once you've met your basic needs to a reasonable standard, prioritise any spare cash mostly towards the GPU. Why anybody feels the need to reply to that and quibble about it or rephrase it into anything else is beyond me.
 
I disagree. We're talking games in this context, and exactly how much does the CPU contribute to modern gaming once a baseline of necessity has been reached? The majority of games aren't doing massive world simulations. Even RPGs like Baldur's Gate III, if you take out the graphics part of things, are doing fairly elementary mathematics on a fairly small number of elements. How much computational power does it take to say "I have 12 monsters, my algorithm for picking their target is some function of nearest PC, range of my attack / distance to the PC, some few other things the game designer decided, run through the maths, damage - armour, subtract HP, etc."?
In practice, you'd be surprised. It really depends what you're playing. Many games have very poorly-optimized simulation code and will frequently hit CPU bottlenecks. For example, World of Warcraft has issues with freezing on weaker CPUs in fights that spawn too many (thirty!) enemies at once. Poorly-written custom UI scripts don't help with that either. Even most Vampire Survivors clones will start to slow down with "too many" (several hundred!) sprites on screen at once. You can say that it's really the developers' fault, but you still have to spec your CPU for the game you have.
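Part of why "several hundred" is enough: naive simulation loops often check every entity against every other one, so the work grows quadratically. A toy illustration with made-up entity counts, just to show the shape of the curve (real engines use spatial partitioning to dodge this):

    # Brute-force all-pairs interaction checks: n * (n - 1) / 2 per frame.
    for n in (30, 300, 3000):
        pairs = n * (n - 1) // 2
        print(f"{n:>5} entities -> {pairs:>9,} pair checks/frame, "
              f"{pairs * 60:,}/s at 60 fps")
    #    30 entities ->       435 pair checks/frame, 26,100/s at 60 fps
    #   300 entities ->    44,850 pair checks/frame, 2,691,000/s at 60 fps
    #  3000 entities -> 4,498,500 pair checks/frame, 269,910,000/s at 60 fps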
 
If you are building a new system specifically for gaming, an AMD 7800X3D with an AMD 7800 XT / 7900 XT(X) or Nvidia 4070 Ti is going to be the best bang for your buck for a relatively future-proof system (roughly 4-5 years, depending on whether anything crazy comes down the pipeline).

If I were to put something together in literally 1 minute without looking for discounts I'd build this if $1800 is in your budget:
No muss, no fuss, easy to build and super quick to push out with all high quality parts and lots of expansion capability.

edit: fixed retard mistake.
 
That said, don't buy Intel. AMD are just as fast and use way less power.
It depends on your usage, and for most people the opposite is probably true, since Intel's power draw is definitely lower in idle states thanks to E-cores and a monolithic die, versus AMD's chiplet design with its separate IO die.

If you're a power user and care about efficiency under load, then AMD has been the way to go since Alder Lake.
 
It depends on your usage, and for most people the opposite is probably true, since Intel's power draw is definitely lower in idle states thanks to E-cores and a monolithic die, versus AMD's chiplet design with its separate IO die.

If you're a power user and care about efficiency under load, then AMD has been the way to go since Alder Lake.
The internet says a 13900K idles around 20 W; my 7800X3D is around 25 W. So you're right, but it's not huge.
 
I'm pretty sure we can thank the PS4 and Xbone having extremely weak CPUs, and thus devs taking into account weak CPUs, for how long a lot of early 2010s CPUs lasted. Given current gen consoles are using Zen 2, I would assume anything at or above that will last quite a while.

The PS4 and XB1 did not have "extremely weak CPUs." They were quite powerful. What they did was go with cores over GHz, which delivers far more power at a given thermal load. Game devs initially cried bitter tears over this, same as how they initially cried over the Xbox 360 having in-order cores, because they were used to IPC uplift doing everything for them and not having to learn about things like thread safety or task scheduling. PC devs bitched especially hard, because they were used to writing single-threaded code for dual-core machines where one core ran their game, and the other core ran Windows.

At the end of the day, they had no choice but to learn how to write scalable, thread-balanced code if they wanted to do anything with those machines, and the result was discovering that CPUs had, by and large, more compute power than they really knew what to do with, especially since the advent of fully programmable GPUs means most geometry processing has been moved off the CPU.

Early on, people tended to write code that pinned specific tasks to specific threads, like this:

Thread 0: Game AI/pathfinding
Thread 1: Physics and other math
Thread 2: Networking
Thread 3: I/O and other tasks

The problem with this is twofold. One is that it gains absolutely nothing from more cores. The second is that if, for example, your networking thread is done and your AI/pathfinding thread is still busy, there is no way for the now-idle core to pick up any of the remaining work. You really need to code for multicore CPUs something like this (sketched in Python):

    from concurrent.futures import ThreadPoolExecutor

    with ThreadPoolExecutor() as pool:  # sized to the machine's core count by default
        for task in tasks:              # tasks = whatever callables your frame produces
            pool.submit(task)           # any idle worker picks up the next queued task

This model has been around for a very long time - I've been writing scalable multithreaded code since the 00s - but it's a completely different programming model than game devs were used to, and one the PS4 and XB1 forced them to learn.
 
If you are building a new system specifically for gaming, an AMD 7800X3D with an AMD 7800 XT / 7900 XT(X) or Nvidia 4070 Ti is going to be the best bang for your buck for a relatively future-proof system (roughly 4-5 years, depending on whether anything crazy comes down the pipeline).

If I were to put something together in literally 1 minute without looking for discounts I'd build this if $1800 is in your budget:
No muss, no fuss, easy to build and super quick to push out with all high quality parts and lots of expansion capability.

edit: fixed retard mistake.
This is a solid build. I like. Pricey, mind. If it's a little high for the poster, there are some corners that could be cut. (Not the PSU, though. The 7900 XT will want a good one.)
 
I recently bought an Alienware AW3821DW monitor. I'm pretty excited since it will be my first ultrawide. Anyone else have experience with this monitor? I didn't want to go OLED, mainly to avoid the text fringing and burn-in issues. Also, I don't like how reflective most OLED panels are.
 