GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

AMD confirms Radeon 800M (RDNA3.5) is 19% to 32% faster than Radeon 700M at 15W

RDNA3.5 in the Strix Point APU outperforms the 780M by 19-32% in Night Raid/Time Spy at 15 Watts according to AMD. They call it "Performance/Watt" but they didn't specify CU count, so it's probably 890M vs. 780M (890M = 16 CUs, 880M = 12 CUs, 780M = 12 CUs).

AMD Radeon 890M “RDNA 3.5” iGPU Tested: 16 Compute Units, Faster Than Several Entry-Level Discrete GPUs & 46% Ahead of 780M

Meanwhile, the 890M with LPDDR5-7467 (unknown TDP) is 38-46% faster than the 780M in Geekbench 6 OpenCL/Vulkan, also putting it ahead of some weak desktop cards like the 6400 XT, Arc A380, and RX 580, and getting near an RTX 3050 4GB (laptop dGPU, 60W).
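For anyone who wants to sanity-check claims like these, "X% faster" is just the ratio of two benchmark scores. A trivial sketch with made-up placeholder scores (not AMD's actual Geekbench numbers):

```python
# "X% faster" is just the ratio of the two benchmark scores minus one.
def uplift(new_score, old_score):
    """Percent by which new_score beats old_score."""
    return (new_score / old_score - 1) * 100

# HYPOTHETICAL scores for illustration only:
print(f"{uplift(40000, 29000):.0f}% faster")  # → 38% faster
```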
 
Somewhat unconfirmed, but there are reports of the 13700s and 14700s having above-average failure rates, albeit nowhere near as high as their bigger brothers.

And that the HandBrake devs have also been seeing reports of those CPUs failing.

Warframe devs report 80% of game crashes happen on Intel's overclockable Core i9 chips — Core i7 K-series CPUs also have high crash rates


[attached image]


I hate webp

I wonder if there's a correlation between DDR4 and DDR5 motherboards in the failure rates.
 
I wonder if there's a correlation between DDR4 and DDR5 motherboards in the failure rates
I could see this being an issue. It has to make the chip more complicated being able to use both memory types. But we go back to why 12th gen doesn't have that issue, since it's both DDR4 and DDR5 capable. Is it the combo of memory complexity and the rocket-level speed of 13th and 14th gen cores?
 
This is what happens when you run 14 generations of literal refreshes and deviations rather than creating new silicon die architectures. No matter how much Intel tries to mitigate this, it will result in the company's reputation being kaput.
 
I could see this being an issue. It has to make the chip more complicated being able to use both memory types. But we go back to why 12th gen doesn't have that issue, since it's both DDR4 and DDR5 capable. Is it the combo of memory complexity and the rocket level speed of 13th and 14th gen cores?
Things I want to know regarding the chips that are failing:

1. Are the users with failed chips overclocking them?
2. What cooling solutions are users with these chips running? Are they using the LGA 1700 contact frame or the stock ILM?
3. What are the voltage settings on the mobos of users with failed CPUs?

I'm leaning towards Intel's bad IHS design, compounded by continuing to feed their CPUs insane amounts of power to push high clock speeds, expediting the failure process for the higher-end CPUs, especially the ones drawing the most power.

That's why I'm really curious if there is any data comparing failure rates of 900-series chips that had the contact frame and/or are being direct-die cooled, both of which should result in pretty significant thermal reductions.
 
Things I want to know regarding the chips that are failing:

1. Are the users with failed chips overclocking them?
2. What cooling solutions are users with these chips running? Are they using the LGA 1700 contact frame or the stock ILM?
3. What are the voltage settings on the mobos of users with failed CPUs?

I'm leaning towards Intel's bad IHS design, compounded by continuing to feed their CPUs insane amounts of power to push high clock speeds, expediting the failure process for the higher-end CPUs, especially the ones drawing the most power.

That's why I'm really curious if there is any data comparing failure rates of 900-series chips that had the contact frame and/or are being direct-die cooled, both of which should result in pretty significant thermal reductions.
Thing is, it's happening in servers that are running stock. It has to be something in the design. Remember, they went from 10nm to 7nm. Something had to have happened in between that transition.
 
This is what happens when you run 14 generations of literal refreshes and deviations rather than creating new silicon die architectures. No matter how much Intel tries to mitigate this, it will result in the company's reputation being kaput.
12th gen Alder Lake appears unaffected. In his latest video, MLiD talks about a source's theory that something is wrong with how the cache connects to the cores. Cache is where most of the changes are between the two microarchitectures: 1.25 MB of L2 per P-core in Alder Lake increased to 2 MB per P-core in Raptor Lake, and the 2 MB of L2 per Gracemont cluster was doubled to 4 MB. An additional 6 MB of L3 cache is available on the top 8+16 Raptor Lake die.

He also claims that an Intel source told him: "I can confirm that Raptor Lake was created in record time. The people who worked on it used to brag at lunch that they managed to go from Drawings to Product in just 11 months, and it still beat Zen 4."

Basically, future product lines like Panther Lake were known before Raptor Lake, because Raptor Lake was an Alder Lake refresh with some actual changes created at the "last minute". But they did it too fast and it seems to be fucked up. It remains to be seen if Intel can suppress this disaster without needing to recall the CPUs, and if any of the more lucrative and numerous laptop CPUs are affected. At the very least, there are Raptor Lake-HX CPUs that shove the desktop chips into laptop BGA packages. For example, the Core i9-13980HX.
 
Basically, future product lines like Panther Lake were known before Raptor Lake, because Raptor Lake was an Alder Lake refresh created at the "last minute".

It sounds increasingly like the rumors are in fact true: Intel 4 was behind schedule, and then the yields were too poor for a desktop CPU, resulting in a panic-driven Alder Lake refresh.
 
I only care about the effects on the used/refurbished market. If businesses end up panic dumping 13th/14th gen (with the exception of the Alder Lake rebrands) and buying more of the new stuff, maybe that will be a good thing. But if Intel ends up throwing them in a landfill and offering refunds, that doesn't help the used market.

If something dramatic like slashing the clocks in half or effectively turning a "125W" 13900K into a "35W" 13900T mitigates any crashing or degradation issues, you still have a nice chip. I'd buy that for a dollar.
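On Linux, that kind of power cap is something you can actually apply from software through the powercap/RAPL sysfs interface. A minimal sketch, assuming the intel_rapl driver is loaded, that `intel-rapl:0` is the package domain (check its `name` file first), and that `constraint_0` is the long-term PL1 limit; needs root to actually write:

```python
# Sketch: cap the package long-term power limit (PL1) via Linux powercap/RAPL.
# ASSUMPTIONS: intel_rapl driver loaded, intel-rapl:0 is the package domain,
# constraint_0 is the long-term (PL1) constraint. Verify before writing!
RAPL = "/sys/class/powercap/intel-rapl:0"

def watts_to_uw(watts: int) -> str:
    """The powercap limit files take microwatts as plain decimal strings."""
    return str(watts * 1_000_000)

def set_pl1(watts: int) -> None:
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
        f.write(watts_to_uw(watts))

# set_pl1(35)  # "13900T mode"; uncomment on a real system, as root, at your own risk
```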
 
I only care about the effects on the used/refurbished market.
It really makes me wary of buying a used 13th/14th gen CPU. I was gonna look for deals when the new shit launches, but now I don't think it's worth the risk.

If something dramatic like slashing the clocks in half or effectively turning a "125W" 13900K into a "35W" 13900T mitigates any crashing or degradation issues, you still have a nice chip. I'd buy that for a dollar.
Buildzoid was speculating that this may be the only way to get this under control.
 
This is so retarded, if true.
Root cause is an incorrect value in a microcode algorithm associated with the eTVB feature.
Intel’s eTVB, or Enhanced Thermal Velocity Boost, is a technology developed specifically for Intel’s Raptor Lake S processors. This technology optimizes CPU performance by automatically overclocking the processor cores beyond the maximum turbo frequency, depending on the available thermal capacity and power consumption.
This feature is particularly useful for games and applications that require high burst speeds.
What the fuck kind of "applications" and "games" do these people run, if they need to overclock an already overclocked CPU running at 300+W? Do I really need to hear jet engine noises to get 666 FPS instead of 662 in yet another piece of pixel art indie slop? Just fucking chill with the power draw, use sane defaults, and leave the "maximum-overclock-giga-turbo-boost" for retards who'd brag to each other about getting 5% more score in some pointless synthetic benchmark.
 
What the fuck kind of "applications" and "games" do these people run, if they need to overclock an already overclocked CPU running at 300+W? Do I really need to hear jet engine noises to get 666 FPS instead of 662 in yet another piece of pixel art indie slop? Just fucking chill with the power draw, use sane defaults, and leave the "maximum-overclock-giga-turbo-boost" for retards who'd brag to each other about getting 5% more score in some pointless synthetic benchmark.
Intel already called that fake news a month ago:
Contrary to recent media reports, Intel has not confirmed root cause and is continuing, with its partners, to investigate user reports regarding instability issues on unlocked Intel Core 13th and 14th generation (K/KF/KS) desktop processors.

The microcode patch referenced in press reports fixes an eTVB bug discovered by Intel while investigating the instability reports. While this issue is potentially contributing to instability, it is not the root cause.
Wonder if they lied? Or is it just the tip of the iceberg?

It's for applications and games that respond well to single-thread performance. And the feature is that you get a whopping +100-200 MHz frequency on one core. Which is about +1.7-3.5% depending on the SKU.
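Those percentages check out if you do the division against plausible stock boost clocks (the clocks here are illustrative round numbers, not tied to any specific SKU):

```python
# How much a +100-200 MHz single-core boost actually buys you,
# relative to a ~5.7-5.8 GHz ceiling (illustrative clocks).
def gain_pct(extra_mhz, base_mhz):
    return extra_mhz / base_mhz * 100

print(f"{gain_pct(100, 5800):.1f}%")  # → 1.7%
print(f"{gain_pct(200, 5700):.1f}%")  # → 3.5%
```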

Thanks, but give me 3D cache instead. Even Intel has 3D cache in their pocket, "Adamantine".
 
Wonder if they lied? Or is it just the tip of the iceberg?
Probably a bit of both. If it is indeed a microcode problem (microcode at this point is complex enough to be really fucking buggy), they can't just point the finger at motherboard vendors like AMD did with the X3Ds.

While this issue is potentially contributing to instability, it is not the root cause.
Very sus, Intel. "Not the root cause", so you know what the issue is, and don't tell anyone?

It's for applications and games that respond well to single-thread performance.
Considering that most software is pretty much a cache-miss simulator, reducing memory latency/increasing L1/2/3 (or writing better code, lol) would be much more beneficial, but whatever they're using for CPU caches is expensive, and just putting more voltage through the thing is easier.
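The "cache-miss simulator" point is easy to demonstrate: traverse the same data sequentially, then in a shuffled order that defeats the prefetcher. A rough illustration only; the gap is far larger in C, but even pointer-chasing CPython usually shows it:

```python
# Same data, same total work -- only the access ORDER differs.
# Random order defeats the hardware prefetcher and thrashes the caches.
import random
import time

N = 1_000_000
data = list(range(N))
seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)

def walk(order):
    t0 = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return time.perf_counter() - t0, total

t_seq, s1 = walk(seq_order)
t_rand, s2 = walk(rand_order)
assert s1 == s2  # identical work either way
print(f"sequential: {t_seq:.3f}s  random: {t_rand:.3f}s")
```

On most machines the random walk comes out measurably slower, even though the sums are identical; exact timings vary by CPU and cache sizes, so no expected numbers are given.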
 
Considering that most software is pretty much a cache-miss simulator, reducing memory latency/increasing L1/2/3 (or writing better code, lol) would be much more beneficial, but whatever they're using for CPU caches is expensive, and just putting more voltage through the thing is easier.
Well, they did make cache changes, but increasing L3 as with 32 MiB -> 96 MiB X3D cache doesn't usually help in the greatest game/software of all... benchmarks.

Pushing a CPU hard to get +100-200 MHz on top of >5.5 GHz is ridiculous, but some enthusiasts wanted it. Not as many anymore, as AMD is doing great in DIY and everyone wants X3D despite lower clocks. I have little desire to overclock anything ever again. I'm thinking about underclocking/undervolting instead.

Arrow Lake is a clean break from Alder and Raptor Lake, with a chiplet ("tile")-based design for the first time in Intel desktop CPUs. That could solve a lot of Intel's problems, and while chiplets can cause their own problems, the design has been beta-tested in laptops with Meteor Lake. Arrow Lake is similar to Meteor Lake, with the same number of tiles and some parts, like the iGPU and NPU, looking exactly the same.

Adamantine was supposed to debut as an L4 cache in Meteor Lake, but it hasn't shown up anywhere, including Arrow Lake rumors. It could be really nice if it's not vaporware and they add a lot of it in the future. Lightweight operating systems like LibreELEC only need about 2 GiB, and as you go up to 4 GiB, 8 GiB, 16 GiB, etc. more use cases are satisfied without any need for external DRAM. Initial rumors suggested more like 128 MiB to 512 MiB. Broadwell already had 128 MiB of eDRAM a decade ago, let's see gigabytes this time plz.
 
I'm thinking about underclocking/undervolting instead.
I figure that's more or less required now, unless you *really* need the extra performance or are water cooling your CPU/GPU. The first thing I did after upgrading from (the now ancient) 6700K to a 7950X is put it in 105W "ECO" mode, because running it at 95°C just doesn't seem like a great idea. It's not as if they've started using Unobtainium that would allow the ever-smaller transistors to handle those kinds of temperatures. Although, the new "meta" seems to be "fuck safety margins, let's just push it as far as it goes" (hi, 12VHPWR).
The thing is, the complexity of games (and software in general, really) hasn't increased in a long time, so the "average user" wouldn't notice the extra 200 MHz (aside from the fan noise and increased room temp, of course) anyway. KeanuPunk2077 runs about as well as it did on an almost 10-year-old CPU, only now I get more stuff smeared on my screen, which is done on the GPU anyway.

About damn time. Still waiting until they can start playing nice with the GPU vendors and use the same memory pool for both, as that should bump performance for GPU heavy stuff quite a bit.
 
That still ticks me off. EPS12V already existed and did the job better. I certainly never heard about any power sockets melting on servers these past three decades, and some of those CPUs draw utterly absurd amounts of power.
 
That still ticks me off. EPS12V already existed and did the job better. I certainly never heard about any power sockets melting on servers these past three decades, and some of those CPUs draw utterly absurd amounts of power.
The new connector is unnecessary, and der8auer had a video talking about it. The 8-pin connectors have a larger safety margin than the 12VHPWR. I don't know why Nvidia doesn't admit defeat with this connector.


[attached image]
 
The new connector is unnecessary, and der8auer had a video talking about it. The 8-pin connectors have a larger safety margin than the 12VHPWR. I don't know why Nvidia doesn't admit defeat with this connector.


[attached image]
Because the shills are still claiming that everything is just "user error, plug it in better bro". Just ignore the instances of it not clicking, not seating all the way, etc.

With a fan base like that, you never need to admit to anything being wrong.
 