GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

What is the modern power equivalent to the i9-9900K? I haven't really paid attention to the CPU market, so I'm curious where I stand these days.
Both Intel and AMD have changed their architectures rather dramatically since then; the 9900K is basically an improved Skylake chip. Those Coffee Lake parts used to have the highest single-core scores, but now they can't even touch the Alder Lake i3s, and Alder Lake isn't even the latest gen (that's Raptor Lake). In multicore the 9900K lands a bit above or around an i5-12500 (Alder Lake), but loses in single-core. The 5600X should be the rough equivalent from AMD, with the caveat that it has higher single-thread performance.
It all depends on what you use the CPU for most. If it's gaming with an older GPU, it's irrelevant; it's only around the RTX 3070 that you'll start to see some CPU limitations at 1080p. With a 4070 Ti and above, you'll be bottlenecked even at 4K in many titles with the 9900K. If it's for work (rendering/encoding etc.), the 9900K is outclassed by the new CPUs, which would save you a lot of time.
 
If it can do the things you need it to do, then it's good enough. Alder Lake was a big jump, and future iterations will widen the gap, so there's no rush.

The low-power efficiency cores in Alder Lake are comparable to Skylake: https://www.anandtech.com/show/1704...ybrid-performance-brings-hybrid-complexity/10

Anecdotally, there is not a single game I have played that pushes my Alder Lake CPU over 5% utilization, and most stay below 3%.

Unrelated: Intel is sticking to releasing Meteor Lake later this year. Another process shrink is inbound, so after just 2 generations on Intel 7, they're off to Intel 4.

 
Another process shrink is inbound, so after just 2 generations on Intel 7, they're off to Intel 4.
Intel 7 = Intel 10nm+++++ SuperDuperAwesomeFin
Intel 4 = Intel 7nm
It's not a die shrink; Intel's (real) 7nm process has been delayed since Skylake. They published a roadmap a while ago, it slipped (lol), and I expect it to slip again.

 
Intel 7: 80 MTr/mm²
Intel 4: 160 MTr/mm²

How's that not a process shrink?

src: https://www.tomshardware.com/news/i...0-higher-clocks-at-same-power-2x-area-scaling
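
Back-of-the-envelope, taking those figures at face value: doubling density (80 → 160 MTr/mm²) implies linear dimensions scale by 1/√2 ≈ 0.71x, which is exactly the scaling a traditional full-node shrink delivers.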
It's possible to increase transistor density without shrinking the feature size. While it was uncommon, both IBM and Intel refreshed a node in the '90s, getting higher density along with better yields and power consumption (can't remember which node it was off the top of my head). The returns have greatly diminished since, yet Intel and TSMC both keep tweaking their processes over the years.

Besides, I was referring to Intel's marketing around the move from Intel 10 to Intel 7, which clearly wasn't a shrink, as they're the same process. Yes, I did a poor job of communicating that.
 
It's possible to increase transistor density without shrinking the feature size.

A process shrink means you're making smaller features, regardless of how exactly you did it.


While it was uncommon, both IBM and Intel refreshed a node in the '90s, getting higher density along with better yields and power consumption (can't remember which node it was off the top of my head). The returns have greatly diminished since, yet Intel and TSMC both keep tweaking their processes over the years.

Besides, I was referring to Intel's marketing around the move from Intel 10 to Intel 7, which clearly wasn't a shrink, as they're the same process. Yes, I did a poor job of communicating that.

It was especially unclear because you were responding to my post about moving from Intel 7 to Intel 4.

I'm aware they've rebranded 10nm+++ twice now. The point is, Intel used 14nm for desktop generations 5, 6, 7, 8, 9, 10, and 11 because 10nm was so broken, and now that they've finally gotten 10nm (rebranded to Intel 7) working, it's only being used for 12th and 13th gen before they move on to Intel 4.

Also, Intel didn't claim that 7 was a process shrink from 10nm Enhanced SuperFin. They claimed that their 10nm process is equivalent to the competition's 7nm processes, so it makes sense to update their branding in line with everyone else's. They do have a valid point, particularly when Samsung brands a process as "8nm" that is larger in every measurable way than Intel's 10nm process, and the nanometer labels no longer really seem to mean anything at all.
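
(The commonly cited density figures bear that out: Intel's 10nm is usually quoted at around 100 MTr/mm², Samsung's 8nm at around 61 MTr/mm².)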
 
Does any OS actually use these "efficiency" cores properly these days? My last Linux encounter with ARM in big.LITTLE configurations was, let's say, less than stellar. I feel many schedulers still do fucktarded things (in relation to latency and power efficiency, mind you) if you dare to have more than two CPU cores. I honestly can't imagine the little cores being used properly by now, considering what a long-standing problem this was last time I cared. But hey, who knows.
 
Does any OS actually use these "efficiency" cores properly these days? My last Linux encounter with ARM in big.LITTLE configurations was, let's say, less than stellar. I feel many schedulers still do fucktarded things (in relation to latency and power efficiency, mind you) if you dare to have more than two CPU cores. I honestly can't imagine the little cores being used properly by now, considering what a long-standing problem this was last time I cared. But hey, who knows.
If you're talking about Intel's efficiency cores, Windows 11 is the only version of Windows that does. Any Linux kernel past 5.18 supports them as well.
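
And if the scheduler is still doing something dumb, you can always pin things yourself. A minimal Linux sketch, assuming a hybrid Intel chip on a reasonably recent kernel (the sysfs paths below only exist on hybrid parts):

[CODE=python]
# Pin the current process to the P-cores on a hybrid Intel CPU (Linux only).
import os

def parse_cpulist(text: str) -> set[int]:
    """Parse a kernel cpulist like '0-15,22' into a set of CPU ids."""
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# The kernel exposes each core type of a hybrid CPU as its own PMU device.
with open("/sys/devices/cpu_core/cpus") as f:
    p_cores = parse_cpulist(f.read())  # performance cores
with open("/sys/devices/cpu_atom/cpus") as f:
    e_cores = parse_cpulist(f.read())  # efficiency cores

print(f"P-cores: {sorted(p_cores)}  E-cores: {sorted(e_cores)}")
os.sched_setaffinity(0, p_cores)  # keep this process off the E-cores
[/CODE]

Same idea as taskset, just discovering the core types instead of hardcoding CPU numbers.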
 
The 7800X3D only hitting 5 GHz is pretty disappointing. They're clearly binning it low so they can ship it in volume. If I had infinite money, I'd get a 7950X3D and turn off one of the CCDs.
 
I'm bored. Here's how I would fix the 6500 XT:

7500 XT
- PCIe 4.0 x8 instead of x4.
- 6 GB of VRAM.
- Include the missing H.264/H.265 encode, and AV1 decode/encode.

7300
- Sold to consumers, not OEM-only.
- 3 GB of VRAM, 75% of the core count.
- Same video capabilities, maybe fewer PCIe lanes.
- No power cable, 75W maximum.
- Basically a modern GT 1030.

7400
- Don't launch it, or slot it between the other two with 4 or 6 GB of VRAM.

The 7800X3D only hitting 5 GHz is pretty disappointing. They're clearly binning it low so they can ship it in volume. If I had infinite money, I'd get a 7950X3D and turn off one of the CCDs.
No. IIRC it shares the 5800X3D's voltage limitation on the stacked cache chiplet, even though the allowed voltage has gone up this time, so it shouldn't be a matter of binning. The 7900X3D can contain worse chiplets since only 6 cores per CCD are enabled, yet it clocks higher. The higher clock speeds of the 7950X3D and 7900X3D are only achievable on the chiplet/CCD without 3D V-Cache; only one of the two CCDs has the extra cache, not sure if you knew.

The 7800X3D is +800 MHz base, +500 MHz boost over the 5800X3D, and +400 MHz base, -300 MHz boost compared to the 7700 non-X. AMD claims more overclocking features are enabled this time, like PBO (source: recent Gamers Nexus HW news video).
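
(For reference, assuming the commonly listed spec sheets those deltas come from: 5800X3D 3.4/4.5 GHz, 7800X3D 4.2/5.0 GHz, 7700 3.8/5.3 GHz base/boost.)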
 
No. IIRC it shares the 5800X3D's voltage limitation on the stacked cache chiplet, even though the allowed voltage has gone up this time, so it shouldn't be a matter of binning. The 7900X3D can contain worse chiplets since only 6 cores per CCD are enabled, yet it clocks higher. The higher clock speeds of the 7950X3D and 7900X3D are only achievable on the chiplet/CCD without 3D V-Cache.

The 7800X3D is +800 MHz base, +500 MHz boost over the 5800X3D, and +400 MHz base, -300 MHz boost compared to the 7700 non-X.
Ah, you're right, I didn't brain that the extra cache is only on one of the CCDs. That makes the higher-end ones way less appealing. I'll still wait for the benchmarks, but the plain 7700 is probs the way to go.
 
Ah, you're right, I didn't brain that the extra cache is only on one of the CCDs. That makes the higher-end ones way less appealing. I'll still wait for the benchmarks, but the plain 7700 is probs the way to go.
All Zen 4 CPUs are more than what most people need. The Zen 4 X3D parts will ostensibly be the best gaming CPUs of 2023, dethroning the i9-13900KS. Relevant to RTX 4090 owners like Dear Feeder. They should also deliver really good 1% lows, and absurd performance in specific titles like the single-core autism simulators.

It remains to be seen how well the 7900X3D and 7950X3D handle the differing cache and clocks on each chiplet. Windows 10 users will certainly be shafted, and Linux users might need a very new kernel.
 
All Zen 4 CPUs are more than what most people need. The Zen 4 X3D parts will ostensibly be the best gaming CPUs of 2023, dethroning the i9-13900KS. Relevant to RTX 4090 owners like Dear Feeder. They should also deliver really good 1% lows, and absurd performance in specific titles like the single-core autism simulators.

It baffles me that anybody's still writing single-threaded code in 2022. I guess you don't have to learn modern software design if you weren't writing games for console back when you had 8 cores at sub-2 GHz.
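
For anyone still on the fence, the basic pattern is not hard. A toy sketch (the workload and numbers are made up for illustration; it uses processes rather than threads to sidestep Python's GIL):

[CODE=python]
# Split an embarrassingly parallel workload across every core
# instead of grinding through it on one thread.
from concurrent.futures import ProcessPoolExecutor
import math
import os

def crunch(bounds: tuple[int, int]) -> float:
    lo, hi = bounds
    return sum(math.sqrt(i) for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = os.cpu_count() or 1
    step = n // workers
    # The last chunk absorbs the remainder so the whole range is covered.
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(crunch, chunks))
    print(f"sum of sqrt(0..{n - 1}) = {total:.1f}")
[/CODE]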
 
Yeah, I'm going full AMD this time: a 5600 and the 6700 XT. Also, for the first time, I'm ditching HDDs and going full SSD.
I'm going to mount everything on an MSI PRO B550M-VC WiFi ProSeries.
 
I'm running Win 10 LTSC with a USB keyboard, and it's developed a strange quirk. Every now and then (I'd say about once every two weeks) the keyboard dies and becomes completely non-responsive until I either power-cycle the computer or unplug the keyboard and plug it back in. Not the end of the world, but a minor annoyance, especially if I'm in the middle of a game and can't pause.

I'm guessing it's a driver problem, but I have no idea.

Yeah, I'm going full AMD this time: a 5600 and the 6700 XT. Also, for the first time, I'm ditching HDDs and going full SSD.
I'm going to mount everything on an MSI PRO B550M-VC WiFi ProSeries.
I have a similar build (a 6600 instead of a 6700 XT) and it's quite a nice computer.

My advice: if you're going to flash the motherboard's BIOS (I had to), stock up on brand-name, small-capacity USB sticks. It wouldn't flash from a 64 GB SanDisk drive, but worked fine from an 8 GB Kingston I happened to have lying around from years ago. (Likely culprit: most boards' flashing tools only read FAT32, and Windows won't format drives over 32 GB as FAT32, so big sticks usually come formatted as exFAT.)
 