GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

I regret I went with LGA1700 instead of AM4. Now if I want to upgrade from my 12400 I'll have to get an AM5 board and DDR5 RAM. And there's no way I'm gonna get 64GB of DDR5 RAM for what I paid for these DDR4 sticks. By then I'll have three platforms to hoard.
 
Stress testing my potato PC as I've been running into BSoDs and games just shutting down at random

stressing the RAM does nothing
stressing the CPU does nothing
stressing both results in an immediate error and PC crash

View attachment 7408966

is there a specific test I could throw at it to pinpoint the cause?
Your CPU and RAM are both older than dirt. The time you spend trying to diagnose this is going to cost more than replacing both will. Whoever sold you DDR4-2366 (where on Earth did that speed come from?) was trolling you, since on Zen 2 your FCLK should be the same as your MCLK and you're far under reasonable FCLK for a 3600X.
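
For context, here's the napkin math as a quick Python sketch (the 1600-1800 MHz range is just the usual Zen 2 1:1 target with a normal kit, not something read off your system):

```python
# DDR transfer rate is double the memory clock (DDR = double data rate),
# and on Zen 2 you want FCLK = MCLK (1:1) to avoid a latency penalty.
def mclk_mhz(ddr_rate_mts: float) -> float:
    """Memory clock in MHz for a given DDR transfer rate in MT/s."""
    return ddr_rate_mts / 2

for kit in (2366, 3200, 3600):
    print(f"DDR4-{kit}: MCLK = FCLK = {mclk_mhz(kit):.0f} MHz")

# DDR4-2366 -> 1183 MHz FCLK, versus the ~1600-1800 MHz that
# basically every Zen 2 chip will do with a normal 3200/3600 kit.
```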

If you're trying to replace RAM, it's difficult to get good DDR4 RAM. Samsung b-die isn't as easy to find anymore.

If you're curious anyway:

I'm not aware of anyone in the RAM OC community using OCCT's RAM test for serious testing. The 1usmus configuration for TestMem5 will give you a diagnostic error code that you can try to interpret with this sheet on Google Docs. Karhu is also good, but it isn't free and doesn't provide a diagnostic code.

Note that some "RAM errors" are CPU-related, since the memory controller is part of the CPU IO die.

If the RAM test passes, y-cruncher (as suggested before) is pretty good at detecting either CPU or RAM issues. In my experience with a 9800X3D, the most sensitive tests were SFTv4 / SNT (CPU), FFTv4 (RAM), and VT3 (CPU SOC). BBP is also useful to test cooling, but in my experience it would always pass if SFTv4 and SNT passed.
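
If it helps to have that written down, here it is as a throwaway Python snippet (purely my experience above, not anything official from y-cruncher):

```python
# Which component to suspect first when a given y-cruncher stress test
# fails -- just codifying my own experience with a 9800X3D, treat it as
# a heuristic rather than gospel.
SUSPECT_BY_TEST = {
    "SFTv4": "CPU core (Vcore / curve optimizer)",
    "SNT":   "CPU core (Vcore / curve optimizer)",
    "FFTv4": "RAM (timings / VDIMM)",
    "VT3":   "CPU SOC / memory controller (vSOC)",
    "BBP":   "cooling / sustained load (usually passes if SFTv4 + SNT pass)",
}

def suspects(failed_tests):
    return {t: SUSPECT_BY_TEST.get(t, "unknown test") for t in failed_tests}

print(suspects(["FFTv4", "VT3"]))
```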

e: You can also get an error from Vcore fluctuations when alternating between two tests, as in a combined CPU / RAM test. y-cruncher won't catch that since it runs at a fairly constant Vcore. However, if that's the case, it tends to only appear after a prolonged time, and your failure was at 5 seconds.
 
I regret I went with LGA1700 instead of AM4. Now if I want to upgrade from my 12400 I'll have to get an AM5 board and DDR5 RAM. And there's no way I'm gonna get 64GB of DDR5 RAM for what I paid for these DDR4 sticks. By then I'll have three platforms to hoard.
Are you actually doing anything bandwidth-bound?
 
I regret I went with LGA1700 instead of AM4. Now if I want to upgrade from my 12400 I'll have to get an AM5 board and DDR5 RAM. And there's no way I'm gonna get 64GB of DDR5 RAM for what I paid for these DDR4 sticks. By then I'll have three platforms to hoard.
Does it really matter? You can just slap in a 12700 or something and you'll be set until AM6 comes out in 2028. No games right now need anything faster than DDR4 for main system RAM, and you don't strike me as someone who's going to be shelling out the big bucks for retardedly high framerates.

Socket longevity is kind of a meme imo. AM4 benefited a lot because Zen was immature and you could get a lot from going from Zen 1 to Zen 3. Zen 4 to Zen 6 will probably not be so drastic a leap.
 
people rightly shart on DLSS for things like framegen but DLAA looks incredibly crisp on a 4K monitor and I think some people are missing out just because of their DLSS BAD kneejerk reaction.
 
people rightly shart on DLSS for things like framegen
Frame generation is a good technology, just not in the use case Jensen props it up as: "free frames for zero effort". You still need good, stable FPS for frame gen to work. ML upscaling helps get more raw frames by rendering the game at a lower resolution and then upscaling it with Tensor cores at essentially negative cost (GPU usage went down on my 3090 when enabling DLSS in Witcher 3), and frame generation will be necessary to fill in the gaps for high refresh rate, high resolution monitors, as even a 5090 will struggle to output 240FPS@4K in many optimized titles.

In short, ML upscaling and frame gen are good for taking an already well-running, but not necessarily good-looking, game and making it look and perform better. The hate they get is because lazy AAA game devs treat them as an excuse not to optimize their games, which makes the tech work like shit, and because DLSS is exclusive to Nvidia hardware and only select AAA games. If Nvidia, AMD and Intel bothered to introduce these features at a driver level so that they could work on older titles, much like Lossless Scaling but without the overhead, I believe the tech would have a better reputation. Obviously a generic upscaler/frame generator wouldn't be as good as one implemented in the game, but Lossless Scaling shows that it can be good enough. Nvidia could call it "DLSS Lite" or whatever to make clear it's the inferior version that's not meant to represent what the tech is truly capable of.
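
To put some rough numbers on the "more raw frames" part, a quick Python sketch (the scale factors are the commonly published per-axis DLSS presets, and the latency line is the idealized no-overhead case):

```python
# How many pixels actually get rendered at each DLSS preset for a 4K
# output, plus why frame gen still needs a decent base frame rate.
PRESETS = {"DLAA": 1.0, "Quality": 2 / 3, "Balanced": 0.58,
           "Performance": 0.5, "Ultra Performance": 1 / 3}

out_w, out_h = 3840, 2160  # 4K output

for name, s in PRESETS.items():
    w, h = round(out_w * s), round(out_h * s)
    print(f"{name:>17}: renders {w}x{h} "
          f"({w * h / (out_w * out_h):.0%} of the output pixels)")

# 2x frame gen doubles *displayed* FPS, but input is still sampled at the
# rendered frame rate -- which is why a shaky base FPS still feels bad.
base_fps = 60
print(f"{base_fps} rendered FPS -> ~{2 * base_fps} displayed FPS, "
      f"input sampled every ~{1000 / base_fps:.1f} ms")
```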
 
AssCrack update. People using cheap shit like A620 motherboards, with no PBO and no water cooling (which would have let the CPU be pushed harder), were more likely to be safe. (Never overclock)
Wccftech: ASRock Claims AMD Ryzen 9000 Series “Failure Fiasco” Is Linked to PBO Configurations, But the Actual Issue Might Run Much Deeper (archive)

This is mostly just reporting on the Tech Yes City video I posted, but it includes this damning tweet from Ian Cutress:

ryzenmaster.webp
 
Frame generation is a good technology, just not in the use case Jensen props it up as: "free frames for zero effort". You still need good, stable FPS for frame gen to work. ML upscaling helps get more raw frames by rendering the game at a lower resolution and then upscaling it with Tensor cores at essentially negative cost (GPU usage went down on my 3090 when enabling DLSS in Witcher 3), and frame generation will be necessary to fill in the gaps for high refresh rate, high resolution monitors, as even a 5090 will struggle to output 240FPS@4K in many optimized titles.

In short, ML upscaling and frame gen are good for taking an already well-running, but not necessarily good-looking, game and making it look and perform better. The hate they get is because lazy AAA game devs treat them as an excuse not to optimize their games, which makes the tech work like shit, and because DLSS is exclusive to Nvidia hardware and only select AAA games. If Nvidia, AMD and Intel bothered to introduce these features at a driver level so that they could work on older titles, much like Lossless Scaling but without the overhead, I believe the tech would have a better reputation. Obviously a generic upscaler/frame generator wouldn't be as good as one implemented in the game, but Lossless Scaling shows that it can be good enough. Nvidia could call it "DLSS Lite" or whatever to make clear it's the inferior version that's not meant to represent what the tech is truly capable of.
AMD didn't have the hardware AI for upscaling until the 9000 series (here), but agreed, AI upscaling should be a DX/Vulkan standard.
 
Wccftech: ASRock Claims AMD Ryzen 9000 Series “Failure Fiasco” Is Linked to PBO Configurations, But the Actual Issue Might Run Much Deeper (archive)

This is mostly just reporting on the Tech Yes City video I posted, but it includes this damning tweet from Ian Cutress:

View attachment 7421691
I think ASRock's explanation doesn't make much sense. 9800X3Ds require significant effort to convince them to draw more than around 110A TDC/120A EDC (stock current limits on my board are 120A/180A respectively) without ECLK overclocking, even with PBO on and maxed frequency offset. The only application I have with current draws that high is y-cruncher. Even other CPU benchmarks like Cinebench won't get close.
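
Back-of-the-envelope, with an assumed load Vcore (the 1.10 V below is my guess, not a measured number), that current ceiling is already a lot of power for an 8-core part:

```python
# Very rough: core power ~= Vcore * current, ignoring the SOC rail and
# VRM losses. 1.10 V is an assumed all-core load voltage, not measured.
vcore = 1.10
for amps in (110, 120, 180):
    print(f"{amps:>3} A x {vcore:.2f} V ~= {amps * vcore:.0f} W of core power")
# 110 A ~= 121 W, 120 A ~= 132 W, 180 A ~= 198 W -- and outside of
# y-cruncher, almost nothing will actually push a 9800X3D that hard.
```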

The previous explanation about vSOC fluctuations is also lacking. That's a power-saving feature; ASRock is just the only manufacturer that runs with it enabled by default on desktops. Other motherboard vendors use static vSOC by default because most EXPO profiles will require some increase in vSOC over stock, it's easier to tune that increase with static vSOC, and you will only save around 2W of power anyway. You could incinerate the chip if you had a very large overshoot from a bad VRM, but that would be a VRM issue, not a BIOS issue.

All that being said, Ryzen Master is bad software with a significant chance of rendering your PC unbootable without CMOS clear, and you shouldn't use it if you have any other option. You sometimes don't have other options since ZenTimings won't read certain settings (mostly ProcODTs) properly on certain boards.
 
Frame generation is a good technology, just not in the use case Jensen props it up as: "free frames for zero effort". You still need good, stable FPS for frame gen to work. ML upscaling helps get more raw frames by rendering the game at a lower resolution and then upscaling it with Tensor cores at essentially negative cost (GPU usage went down on my 3090 when enabling DLSS in Witcher 3), and frame generation will be necessary to fill in the gaps for high refresh rate, high resolution monitors, as even a 5090 will struggle to output 240FPS@4K in many optimized titles.

It's best to compare against what we actually had access to in the past. Back then, if you had to run at a resolution below your monitor's native res, the image was upscaled using primitive nearest-neighbor or bilinear interpolation, which looks significantly worse than DLSS. If the game couldn't quiiiite handle a smooth frame rate, the go-to on console was smearing everything with motion blur, while on PC you were basically told to eat shit: either keep reducing settings until the frame rate smooths out, or buy a new GPU, peasant. Also, DLSS replaces TAA and looks better. Even if the game runs fine at native res, DLSS Native beats TAA for image quality.
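
For anyone who's forgotten what "primitive" actually meant, here's a toy NumPy sketch of nearest-neighbor vs bilinear scaling (illustration only, this obviously has nothing to do with how DLSS itself works):

```python
import numpy as np

def nearest_neighbor(img, out_h, out_w):
    """Old-school scaling: each output pixel copies the closest source pixel."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys[:, None], xs[None, :]]

def bilinear(img, out_h, out_w):
    """Slightly less ugly: blend the four surrounding source pixels."""
    h, w = img.shape[:2]
    y = np.linspace(0, h - 1, out_h)
    x = np.linspace(0, w - 1, out_w)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (y - y0)[:, None], (x - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# stretch a fake 1080p frame to 4K with both methods
frame = np.random.rand(1080, 1920)
print(nearest_neighbor(frame, 2160, 3840).shape, bilinear(frame, 2160, 3840).shape)
```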

TAA:
MW2-TAA.webp

DLSS:
MW2-DLSS.webp


If Nvidia, AMD and Intel bothered to introduce these features at a driver level so that they could work on older titles much like Lossless Scaling but without the overhead

NVIDIA and AMD both have driver-level upscaling, but it's a simpler kind of upscaling that doesn't use motion vectors (still better than bilinear). As far as I know, motion vectors can't be extracted from the raw data drivers have access to, or Lossless Scaling would have FSR 2, not just FSR 1, as both are open-source.

AMD didn't have the hardware AI for upscaling until the 9000 series (here), but agreed, AI upscaling should be a DX/Vulkan standard.

XeSS runs on AMD GPUs and uses AI inferencing with a lower-fidelity version of the model it uses on native Intel hardware; it's still better than FSR in quality. There is no technical reason you can't inference on older GPUs; this is likely a strategic decision to drive sales of the 9000 series.
 
TAA:
MW2-TAA.webp

DLSS:
MW2-DLSS.webp
I don't think this example shows dlss being better than taa.

I do agree tho, going back and watching comparisons, taa and dlss/fsr (even fsr2/3) are more stable than msaa and other solutions in certain conditions, with the caveat of a good implementation. From what I've seen msaa falls apart with finer details in the distance, and static objects are much less stable. I think this video shows what I mean. https://www.youtube.com/watch?v=EUYQKHuRMJM

Where I think the temporal options fail is in motion: the upscalers/taa might look good in a screenshot, but as soon as the camera or the player moves, the image gets grainy/blurry. Even fsr4/dlss doesn't solve this completely. Generally, I think most people don't care and aren't paying attention to these things, so it never gets talked about. And when the native implementations of taa in most of these games suck and most games don't give you different aa options, it's easy to see why upscalers won in the end.

There's also the fact that many older 3D (late 2000s - 2010s) games look very sharp, especially on high resolution displays, with or without aa on. So whenever you go back to play them, they look sharp and don't have the taa blur that we are used to today. I went back to play Sonic Adventure 2 and at 60fps the game looks and feels fast, whereas while I'm playing the RE4 remake at 120fps, just moving the camera has a visual sluggishness that betrays what you do in that game. Obviously it's a slower-paced game due to its movement, but the shooting shouldn't feel slow when the mouse sensitivity is high.

Ultimately I think what we want is a solution that both looks good and feels good to play with, and I don't think the current solutions do that. Yes, they might look nice in a screenshot, but in motion it looks weird and blurry.
 
I don't think this example shows dlss being better than taa.

DLSS (really DLAA when you do it at native res) doesn't blur out the fine detail the way TAA does.

DLAA left, TAA right (taken from the source image, since KF webp are compressed):

1748481190542.webp 1748481295579.webp

Pretty much anywhere you look, the DLAA screenshot has fine detail more crisply resolved than the TAA screenshot.
 
DLSS (really DLAA when you do it at native res) doesn't blur out the fine detail the way TAA does.

DLAA left, TAA right (taken from the source image, since KF webp are compressed):

View attachment 7424399 View attachment 7424402

Pretty much anywhere you look, the DLAA screenshot has fine detail more crisply resolved than the TAA screenshot.
I guess at first glance they look the same, but when you inspect them the differences are more obvious.
 
Well, to be fair, complaining about graphics in the last 10 years feels like I'm complaining that there aren't enough flecks of gold in my caviar.
Yeah, honestly, aside from egregious examples, graphics aren't really the problem with gaming right now. Medium settings on most games look fine. The problem I have is a lack of gameplay innovation in general, aside from control schemes improving and qol being better in some ways. Gameplay has not evolved much imo from the ps3/360 era, and the stories are shittier too, with few exceptions. And the abuse of remakes.
 
Yeah, honestly, aside from egregious examples, graphics aren't really the problem with gaming right now. Medium settings on most games look fine. The problem I have is a lack of gameplay innovation in general, aside from control schemes improving and qol being better in some ways. Gameplay has not evolved much imo from the ps3/360 era, and the stories are shittier too, with few exceptions. And the abuse of remakes.
Well we're still stuck with most studios making games with barebone physics and not many dynamic systems due to seventh and eighth gen consoles having such weak CPUs. And even now when consoles have decent CPUs, the entire AAA industry is built around making gay cinematic games.

Personally, I'm excited about the possibilities of modern LLMs in games. Obviously an entire game run by an LLM would be schizophrenic, but it'd be a big improvement over stuff like Oblivion's Radiant AI.
 
Well we're still stuck with most studios making games with barebone physics and not many dynamic systems due to seventh and eighth gen consoles having such weak CPUs. And even now when consoles have decent CPUs, the entire AAA industry is built around making gay cinematic games.
Physics in games kinda ended up dying because CPUs can't run instructions fast enough to simulate real-time physics. It's a hardware limitation: going much above 4GHz gets them really, really hot, and we don't have a way to cool CPUs down fast enough for it to be feasible. Having the GPU do the physics for you isn't performant enough, and the only other option is faking the physics through other means. Hence the pivot to making graphics the seller in games, which clearly worked.
 