GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

I have the same issue on my 9700X. I think it's normal, as I have an AIO that had no problems cooling an 11900K and keeping it under 85°C at peak loads. I think it's just down to how they report temperatures: when running the CPU in a benchmark it uses 150W and "hits" 95°C, but my 11900K would hit 200W and stay under that by 10°C. It could also be due to the high idle power of these CPUs.
I'm using a 7700X and the damn thing is a nightmare in terms of cooling. I had to set an aggressive fan curve with minimal fan smoothing, else it would go from 40°C at idle to 70-80°C in a matter of seconds.

It peaks at 70°C under full load at 5.4GHz, but I have to run my AIO at 40-50% RPM at idle, else I get temperature spikes when doing mundane shit like opening programs and watching YouTube videos.
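Sidenote for anyone fighting the same spikes: the fix is basically a curve with a high idle floor and little smoothing. A minimal sketch of that kind of curve (the breakpoints are illustrative assumptions, not tuned values from any real setup):

```python
# Sketch of a fan/pump curve with a high idle floor so short boost spikes
# get absorbed instead of triggering ramp-up lag. Breakpoints are
# illustrative assumptions, not tuned values.

CURVE = [(30, 45), (50, 55), (65, 75), (75, 100)]  # (temp in °C, duty %)

def duty_for(temp_c: float) -> int:
    """Linearly interpolate between breakpoints, clamped at both ends."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            frac = (temp_c - t0) / (t1 - t0)
            return round(d0 + frac * (d1 - d0))
    return CURVE[-1][1]

print(duty_for(40))  # 50 -- idle already sits around half duty
```

The point of the 45% floor is that a momentary jump from 40°C to 70°C only moves the duty from ~50% to ~88%, instead of the fans spinning up from near-zero every time you open a browser.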
 
  • Feels
Reactions: Another Char Clone
It's because the freaking lid is like an eighth of an inch thick; delidding is practically mandatory for higher-power AM5.
Yes and no. I delidded my 7950X for a lark, and it now sits at 82°C under load rather than 95°C, but the actual performance gain in Cinebench is just a few percent.
It may well be mandatory if you want to manually overclock and pin cores at max frequency or something, but AMD are right to say it’s not a real issue for normal users.
 
The Mac desktops do have some niche uses - the Mac Studio and Mac Pro are where you can find Threadripper-tier core counts at significantly lower power draw, giving you access to workstation-class performance without the heat or energy consumption. And the Mac Mini is just a generally solid desktop machine for normies that's absurdly cheap in its base configuration.
this is the woo im talking about man the laptops are primo but the mac mini costs $500. its like saying the mac studio is absurdly cheap at $2,999 because the old mac pros started at $4,999. you can always construct the perfect normie but they're in this weird price point where most people with a budget or specific goal dont seem likely to buy one

though i guess actual literal mac users, those crazy bastards, they're probably on board 100% so its a moot point
 
  • Like
Reactions: Brain Problems
I'm using a 7700X and the damn thing is a nightmare in terms of cooling. I had to set an aggressive fan curve with minimal fan smoothing, else it would go from 40°C at idle to 70-80°C in a matter of seconds.

It peaks at 70°C under full load at 5.4GHz, but I have to run my AIO at 40-50% RPM at idle, else I get temperature spikes when doing mundane shit like opening programs and watching YouTube videos.
This is why I've given up trying to get the temps low. As long as it isn't thermal throttling or crashing, IDRC at this point.
It's because the freaking lid is like an eighth of an inch thick; delidding is practically mandatory for higher-power AM5.
I've seen the 9950X suffer from the same thing with similar AIOs, so I'm sure it's normal behavior.
 
  • Feels
Reactions: Agent Abe Caprine
I don't think this mobile-first approach is going to stay Apple-exclusive. We're already seeing vendors like Minisforum offer mobile chipsets in desktop form factors because of the cost and energy savings they provide.
Intel & AMD's mobile chips are desktop chips scaled down to mobile. As such, they have many of the same design problems when it comes to power efficiency, particularly relying on high boost clocks rather than high bandwidth to achieve high performance.

Another major issue is Windows itself. There was a post around here somewhere where it turns out that clicking the Start button in Win 11 has almost as much overhead as launching a web browser.

Intel's share of the discrete GPU market drops to 0% as sales in the overall market increase

Intel Arc doesn't have any brand recognition among gaymers. It must be better than zero, but not by much. Tech enthusiasts who read too much news know they exist, and are exponentially more likely to try to rejuvenate an old CPU with a new GPU (GPU typically being the bottleneck for gaming).

Intel's been at 0% since 2019:


So I don't think failing to appeal to whatever vanishingly small percentage of tech-savvy hobbyists is putting new GPUs in old machines is the problem. 90% of the market probably doesn't think further than "Intel? Isn't that like shitty integrated graphics?" 9% of the market probably goes slightly deeper with "I heard they have driver problems and kind of suck." 1% has watched multiple YouTube videos and read TH articles to understand what the problems really are.

They can theoretically go the enthusiast -> normie route of success, but they just have to stop fucking up basic things. The fact that Battlemage has serious performance degradations on recent-gen Ryzens that are still going into new prebuilts is catastrophic. Wouldn't be surprised if it means quite a few OEMs have told Intel to pound sand.
 
Oh, that's what I thought. For games and crap like that read speed is what matters. Save files won't be 190GB write operations.
Yeah, but try downloading a game from Steam on one. I used to have an old DRAM-less NVMe drive and games would take forever. The disk activity would be this sawtooth pattern that would slow down to write speeds of like 10KB/s before the cache cleared, then jump back up to 100MB/s and drift back down. It got worse the more the drive filled up. It was worse than an HDD.
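The sawtooth makes sense if you model it as a fast pseudo-SLC cache sitting in front of slow native flash. A toy model (all sizes and speeds are made-up illustration numbers, not specs of any real drive):

```python
# Toy model of a DRAM-less SSD under sustained writes: fast while the
# pseudo-SLC cache has room, then the slow native speed once it's full.
# All numbers are made up for illustration.

CACHE_GB = 20      # pseudo-SLC cache capacity
FAST_MBS = 1500    # burst speed into the cache
SLOW_MBS = 40      # native speed once the cache is exhausted

def write_time_s(total_gb: float) -> float:
    """Seconds to write total_gb, ignoring the cache draining mid-transfer."""
    fast_gb = min(total_gb, CACHE_GB)
    slow_gb = total_gb - fast_gb
    return (fast_gb * 1024) / FAST_MBS + (slow_gb * 1024) / SLOW_MBS

# A 100 GB game install blows straight past the cache:
print(round(write_time_s(100) / 60))  # ~34 minutes, nearly all at 40 MB/s
```

That's also why it gets worse as the drive fills: less free flash means a smaller pseudo-SLC region, so the fast phase shrinks and you hit the slow floor sooner.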

in the future we'll be relying solely on NVMe for our SSD needs. But once again, you're getting very little improvement with the newest PCIe versions, so staying at 3.0 is perfectly fine. It's still fast, but it's cooler and cheaper.
I wish U.3 was standard on the consumer level and not just for servers. They are the same footprint as a 2.5 inch drive, but have incredible capabilities at PCIe speeds.
 
I've seen the 9950X suffer from the same thing with similar AIOs, so I'm sure it's normal behavior.
Only in the sense that the AM5 lid is immensely bad for thermals (circa 20°C on its own).

Jayz video (timestamped to 9:44):
The der8auer video he mentioned (timestamped to 16:50)
 
I have the same issue on my 9700X. I think it's normal, as I have an AIO that had no problems cooling an 11900K and keeping it under 85°C at peak loads. I think it's just down to how they report temperatures: when running the CPU in a benchmark it uses 150W and "hits" 95°C, but my 11900K would hit 200W and stay under that by 10°C. It could also be due to the high idle power of these CPUs.
I assumed I put too little thermal paste on even though I clearly didn't. Outside of my thermal OCD, it's fine. AMD just loves fucking with people like that.
 
Nonsequitur:

Another example of inferencing at native res being superior to TAA. Note this is not upscaling at all. Ship's radar in No Man's Sky using TAA (left) vs XeSS Native (right). You'll need to actually click the thumbnails to see the difference (most noticeable in the center).

[Attached thumbnails: 1750556681754.webp, 1750556708565.webp]
 
So the new Intel CPUs are rumored to be on yet another new socket and chipset. Meanwhile AMD has announced a new CPU for their AM4 socket that came out in 2016.
 
So the new Intel CPUs are rumored to be on yet another new socket and chipset. Meanwhile AMD has announced a new CPU for their AM4 socket that came out in 2016.
It's a "new" Latin American poor man's exclusive, derivative of the previous limited edition Micro Center exclusive 5600X3D. There may only be tens of thousands of either of them. I'd like to know more about AMD's AM4 strategy. How many new CPUs are being made? Are they spillover from chiplets being produced to satisfy Epyc Milan demand? Are new Cezanne dies being used to make the 5705G, or is it old inventory with a new label slapped on it?

If it's profitable to keep producing cheap CPUs made on old nodes, how about GPUs? (lolno)

Intel's LGA 1851 is unfortunate. Meteor Lake-S was planned for it but scrapped. Not that it would have been very interesting since it would have dropped the LPE cores and the good iGPU. Arrow Lake-S is like an enhanced Meteor Lake-S. Meteor Lake-PS appears to exist as an IoT/embedded socketed APU for LGA 1851 with the LPE cores and the full iGPU intact (same 7-8 Xe cores as in Meteor Lake-H). I haven't seen any discussion of that. It's not intended for consumers, but it could be considered an Intel desktop APU.

There may still be an Arrow Lake Refresh, but expectations are low. Maybe they could squeeze out another 3% with a 285KS or 290K. Rumors pointed to an 8+32 variant being considered but cancelled.

Socket longevity is nice to have, but if the price is right, it doesn't matter. I hope Intel keeps making tens of millions of CPUs that you can pick up in full systems for $100 a decade later.
 
It's a "new" Latin American poor man's exclusive, derivative of the previous limited edition Micro Center exclusive 5600X3D. There may only be tens of thousands of either of them. I'd like to know more about AMD's AM4 strategy. How many new CPUs are being made? Are they spillover from chiplets being produced to satisfy Epyc Milan demand? Are new Rembrandt dies being used to make the 5705G, or is it old inventory with a new label slapped on it?
I wouldn't be surprised if there were a ton more 5500X3D chips than 5600X3D. Sell the highest binned chips to a smaller, higher paying market, and stockpile the cheaper ones to sell in South America along with the 8GB 9060. My guess is that they have been stockpiling 5500X3Ds for years, and that they haven't been actively manufacturing consumer grade AM4 stuff for a while. But who knows.


It may be the thirdies X3D chip, but it's still cool that someone who bought a B350 and a 1600x eight and a half years ago can get an upgrade that sick.
 
AMD’s Next-Gen UDNA Architecture For Radeon GPUs, PlayStation-Next & Xbox-Next To Be 20% Faster In Raster, 2x In RayTracing & AI (archive)

Kepler_L2 leak of supposed estimated gains (per compute unit) over RDNA4. I think the doubling of raytracing tracks with previous rumors of AMD continuing to pursue it heavily after RDNA4's large uplift. Also says no to 3D cache for next-gen consoles (as if they need it to hit 60-120 FPS). If UDNA is being used in the consoles, I guess the CPU cores will be Zen 6, and I expect to see 12 of them.

There was some speculation over whether an "RDNA5" would precede "UDNA1" back when UDNA was announced. I think you can consider them one and the same at this point. Which is nice because we want the unification of consumer and pro capabilities to happen sooner rather than later.
 
AMD’s Next-Gen UDNA Architecture For Radeon GPUs, PlayStation-Next & Xbox-Next To Be 20% Faster In Raster, 2x In RayTracing & AI (archive)

Kepler_L2 leak of supposed estimated gains (per compute unit) over RDNA4. I think the doubling of raytracing tracks with previous rumors of AMD continuing to pursue it heavily after RDNA4's large uplift. Also says no to 3D cache for next-gen consoles (as if they need it to hit 60-120 FPS). If UDNA is being used in the consoles, I guess the CPU cores will be Zen 6, and I expect to see 12 of them.

There was some speculation over whether an "RDNA5" would precede "UDNA1" back when UDNA was announced. I think you can consider them one and the same at this point. Which is nice because we want the unification of consumer and pro capabilities to happen sooner rather than later.
Only 20% raster for next gen consoles? Is that normal or is that pretty bad for new gens now? I’m assuming they will also lean heavily on improved upscaling and frame gen, but ideally consoles can finally move past 30 fps “quality” modes and 60 fps with upscaling becomes the standard and then they can advertise higher fps with frame gen.
 
Only 20% raster for next gen consoles? Is that normal or is that pretty bad for new gens now? I’m assuming they will also lean heavily on improved upscaling and frame gen, but ideally consoles can finally move past 30 fps “quality” modes and 60 fps with upscaling becomes the standard and then they can advertise higher fps with frame gen.
+20% raster per compute unit. 64 CUs of RDNA4 (9070 XT) roughly matched 96 CUs of RDNA3 (7900 XTX). So you could get that same level of performance with ~54 CUs of UDNA1. This is shit math, of course.

PS4 = 18 CUs GCN
PS4 Pro = 36 CUs GCN
Xbox Series S = 20 CUs RDNA2ish
PS5 = 36 CUs RDNA2ish
Xbox Series X = 56 CUs RDNA2ish (clocked lower than PS5)
PS5 Pro = 60 CUs RDNA2/3/4 Frankenhybrid

The example of PS4 Pro to PS5 might mean CUs aren't moving far above 60, but you never know. I think PS5 Pro uses TSMC N4P, and the PS6 would probably use TSMC N2P. That would be going from a 5nm class node (4nm), skipping 3nm, and going down to 2nm class (with GAAFETs). The die sizes of the PS5 and PS5 Pro are apparently similar. So I think you could see more CUs in a PS6. I'll predict 72 CUs UDNA1 right now... which could put it around an RTX 4090.
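The shit math above, written out (the per-CU factors are leaked/rumored estimates, not anything official):

```python
# Napkin math from the leaks above: +20% raster per CU for UDNA1 over RDNA4,
# and 64 RDNA4 CUs (9070 XT) roughly matching 96 RDNA3 CUs (7900 XTX).
# Rumored estimates only.

RDNA4_PER_CU = 96 / 64   # RDNA3 -> RDNA4 per-CU uplift (~1.5x)
UDNA1_PER_CU = 1.20      # rumored RDNA4 -> UDNA1 per-CU uplift

# UDNA1 CUs needed to match a 64-CU RDNA4 part:
print(round(64 / UDNA1_PER_CU))     # ~53

# A predicted 72-CU UDNA1 console GPU, in RDNA4-CU equivalents:
print(round(72 * UDNA1_PER_CU, 1))  # 86.4
```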
 
  • Thunk-Provoking
Reactions: Fcret and Blackhole
Unreal Engine 5 is fixed?!
Unreal Engine 5.6 delivers smoother performance and higher FPS over UE5.4 in Paris Tech Demo (archive)
The new version of Unreal Engine clearly offers improvements in both framerate and, perhaps more importantly, reduced frame time spikes. In GPU-bound scenarios, performance is up to 25% better, while CPU-limited situations show at least a 35% framerate gain.

No way it's that big; at best it'll be 60 CUs. Otherwise the PS6 will be $900+ minimum.
(The die sizes and nodes are estimates and guesses; Soyny doesn't tell, AFAIK.)

Xbox Series X = 360mm^2, TSMC N7, 56 CUs
PS5 = 308mm^2, TSMC N7, 36 CUs
PS5 Refresh/Slim = 260mm^2, TSMC N6, 36 CUs
PS5 Pro = 279mm^2, TSMC N4P, 60 CUs

TSMC N4P to N2P is probably somewhere between +50-80% logic density, with the N3 node providing a bigger increase than N2. Getting over 60 CUs on this node at a similar size should be easy. The price of the base console will surely rise to $600 minimum, there's no escaping that.
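Rough check on the die budget, taking that density range at face value (logic-only scaling; cache and IO scale much worse, so treat it as an optimistic upper bound):

```python
# If N4P -> N2P brings +50% to +80% logic density, a PS5 Pro-sized die
# (~279 mm^2, 60 CUs) could notionally fit this many CU-equivalents.
# Ignores cache/IO scaling, so it's an optimistic upper bound.

base_cus = 60
low, high = 1.5, 1.8
print(round(base_cus * low), round(base_cus * high))  # 90 108
```

Even the pessimistic end leaves plenty of headroom for 72 CUs at a similar die size.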

A "PS6 handheld" could slash the CUs in half from 72 down to 36, matching the PS5 count for compatibility reasons.
 
My guess is that they have been stockpiling 5500X3Ds for years
I think it's that TSMC's N7 node is still live and getting very high yields, so there's still profit to be made fabricating chips on it, especially since there really isn't much new functionality from the last several years that matters in the consumer CPU space.
 
  • Like
Reactions: Vecr