GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Luddite behavior. Technology advances and so do techniques. It's not like ray tracing is taking over tomorrow. I remember watching some behind-the-scenes footage about the Halo 2 demo at E3: it was ray traced, but they had to scrap it because it wasn't viable on the Xbox hardware. Now it actually kinda is. What's happened in just around 20 years is fucking awesome.

I don't think it's unreasonable to need to upgrade your hardware every 7-8 years. The 20 series GeForce is now 6 years old and is entirely capable of running games like Cyberpunk with RT on. Even the 2060 can handle it at over 60 fps at 720p native res. The 6000 series Radeon is four years old and inferior to the 20 series GeForce at raytracing (not to mention upscaling), but it handles partial raytracing (i.e. just shadows and reflections) just fine, and the 6700 XT can muddle through full scene raytracing at 1080p tolerably if you insist. It can't handle path tracing at all, so don't try.

At this point, the reason so many games still use cube-mapped reflections and stenciled shadow maps is likely because they still have to release on the PS4. The PS5 is now 5 years old and is still sharing many of its major releases with an 11-year-old system.

Sounds like AMD is really pushing Intel's shit in. The granularity of the AVX10 spec is quite retarded; maybe it would be fine if Intel were still leading in innovation.

AMD hasn't really made a significant impact on the ISA since x86-64, and that was 25 years ago. AVX is Intel's bailiwick, and it's a gigantic fucking mess that everyone's been complaining about for at least 10 years now. It's their mess to fix, and it looks like they're finally fixing it. Of course, given the longevity of hardware these days, the software situation won't be fully unfucked until 2035.
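If you want to see the fragmentation for yourself, here's a quick sketch using the py-cpuinfo package (assumed installed; flag names follow Linux /proc/cpuinfo conventions and may differ on other OSes). It just counts how many separate AVX-512 sub-features your CPU does or doesn't report, which is exactly the mess AVX10's version levels are supposed to replace.

Python:
# Rough sketch: enumerate AVX-512 sub-feature flags the OS reports.
# Assumes the third-party "py-cpuinfo" package is installed (pip install py-cpuinfo).
import cpuinfo

# A handful of the many separate AVX-512 feature flags (not exhaustive).
AVX512_SUBSETS = [
    "avx512f", "avx512dq", "avx512cd", "avx512bw", "avx512vl",
    "avx512ifma", "avx512vbmi", "avx512_vbmi2", "avx512_vnni",
    "avx512_bitalg", "avx512_vpopcntdq", "avx512_bf16", "avx512_fp16",
]

flags = set(cpuinfo.get_cpu_info().get("flags", []))

present = [f for f in AVX512_SUBSETS if f in flags]
missing = [f for f in AVX512_SUBSETS if f not in flags]

print(f"AVX-512 sub-features present: {len(present)}  {present}")
print(f"AVX-512 sub-features missing: {len(missing)}  {missing}")
# Software today has to test each of these individually; AVX10 collapses them
# into a single version number plus vector width, granularity complaints aside.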
 
Plenty of VRAM to play with.
Exactly. Plenty of core power for games, but also plenty of VRAM for various workloads. I can even load Flux.1 all in VRAM. Can't use frame generation though, shame on me for being poor and not buying more.

Too bad that, thanks to this schmuck, if you don't want to deal with burning power connectors and asinine pricing but still want that 24GB of VRAM for ML/productivity workloads, used 3090s are the only good option on the market. Mine cost less than a brand-new 4070 Ti.
He could sell you 24GB GPUs that won't burn, won't be missing ROPs, and won't cost you an arm and a leg, but he won't. Because he's a cunt.
 
I have yet to find anyone who can justify why people should be this mad about raytracing.

"I don't like raytracing, the games don't look good and the performance sucks."
Then don't play raytraced games. There are three entire decades of PC games at your fingertips that you can play right now that don't require it. Hell, there are still new AAA games coming out that don't use raytracing. Why is this something to get upset about? Are you really that bothered that you can't play modern slop like Alan Woke at 100+ fps? By the time any of this matters enough to impact a game you'll actually want to play, 4090-level raytracing will be available on $200 cards.

I don't understand why there's this persistent pool of people who seethe uncontrollably at every RTX demo.
What about games like the latest Indiana slop that don't run on cards without RT?




I found this video, which does a pretty good job showing the differences:
I'm the 100 fps loss for enabling any of that useless shit.
 
You don't need VRAM, guys.
Half-Life 2 RTX Is a Great Showcase of Sampler Feedback Technology, Significantly Reducing VRAM Usage

NVIDIA reportedly pushes GeForce RTX 5060 Ti back to mid-April, RTX 5060 to mid-May

Samsung, SK hynix and Micron present next-gen HBM4 memory - I believe they need to get to 16-Hi 64 GB for Nvidia's upcoming 1 TB monstrosity.

Sounds like AMD is really pushing Intel's shit in. The granularity of the AVX10 spec is quite retarded; maybe it would be fine if Intel were still leading in innovation.
The only advantage on desktop I can see is a better memory controller; Infinity Fabric is currently hitting a wall on dual-channel setups.
AMD should be leading on x86 now. They kinda did by making AVX-512 work without overheating, but maybe they should dictate the future of x86 like they did with x86-64 25 years ago. On the other hand, letting Intel take all the blame might be the better strategy.

Zen 6 should be the generation to improve the memory controller, since Zen 4 and 5 used the same I/O die. The Zen 6 desktop I/O die has been leaked as a relatively large 155 mm² chiplet on TSMC N4C. Faster memory speeds are probably needed to help support up to 24 cores. I'll lock in a prediction of 4 CUs of RDNA 3.5, up from 2 CUs of RDNA 2, to compete with the improved Arrow Lake iGPU. Then there's probably a big NPU in there bloating the die area, maybe XDNA2 at 50 TOPS.
 
What's the likelihood we could ever see Nvidia or AMD make a high-end card with HBM again? Vega and Radeon VII used HBM2, but I'm curious what a more modern card's performance could look like with the newest-gen HBM.
Using GDDR7 to boost bandwidth by ~33% didn't do much for RTX 5000. The lower power consumption of HBM is another potential advantage though. It's also more compact.
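For a sense of scale, peak bandwidth is just bus width times per-pin data rate. Quick comparison below; the pin rates are ballpark figures I'm assuming for illustration, not spec quotes.

Python:
# Peak memory bandwidth ~= (bus width in bits / 8) * per-pin data rate in Gbps.
# Pin rates below are ballpark assumptions for illustration.
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits / 8 * pin_rate_gbps

configs = {
    "RTX 4080 (256-bit GDDR6X @ ~22.4 Gbps)": (256, 22.4),
    "RTX 5080 (256-bit GDDR7  @ ~30 Gbps)":   (256, 30.0),
    "One HBM3 stack (1024-bit @ ~6.4 Gbps)":  (1024, 6.4),
    "One HBM3e stack (1024-bit @ ~9.6 Gbps)": (1024, 9.6),
}

for name, (width, rate) in configs.items():
    print(f"{name}: ~{bandwidth_gbs(width, rate):.0f} GB/s")
# GDDR7 at the same bus width is roughly the +33% mentioned above; a single
# HBM stack is close to a whole GDDR card's worth of bandwidth, and big
# datacenter GPUs carry several stacks.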

HBM needs to be a lot cheaper and not in heavy demand by enterprise for AI. There were plans to make a low cost variant of HBM for consoomers but those evaporated a long time ago. China is starting to make HBM2 memory, so if the AI bubble pops, maybe it will become a cheap commodity. It will still be preferred for enterprise GPUs and CPUs (like Xeon Max with 64 GB), but put 4-8 gigabytes on a consumer CPU and you would have a great L4 cache or iGPU VRAM. Check back in 10 years.

Blast from the past:
Intel to Create new 8th Generation CPUs with AMD Radeon Graphics with HBM2 using EMIB (archive)
 
What's the likelihood we could ever see Nvidia or AMD make a high-end card with HBM again? Vega and Radeon VII used HBM2, but I'm curious what a more modern card's performance could look like with the newest-gen HBM.
The Nvidia H100 PCIe 96 GB uses HBM3.
It also has NVLink.
And it retails for a measly $28,000. A single one, mind you. You want that 192GB of VRAM? That's $56,000 minimum. Keep in mind that Nvidia killed off NVLink on consumer GPUs with the 40 series, and they have graciously given us a measly 32GB of VRAM on the 5090, while datacenters usually get 80GB minimum on a single card, plus entire clusters of them connected with NVLink.

VRAM is cheap, and giving us more of it in consumer GPUs for local workloads wouldn't undercut Nvidia's AI business. They're just greedy, and they don't give a shit since they have a monopoly.
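To put rough numbers on it: the H100 price and capacity are from above, while the 5090 MSRP and the used-3090 price are my own assumptions.

Python:
# Rough $ per GB of VRAM you end up paying per card.
# H100 price/capacity from the post above; 5090 MSRP and used-3090 price are assumptions.
cards = {
    "H100 PCIe (96 GB, ~$28,000)": (28_000, 96),
    "RTX 5090 (32 GB, ~$2,000 assumed MSRP)": (2_000, 32),
    "Used RTX 3090 (24 GB, ~$700 assumed)": (700, 24),
}

for name, (price_usd, vram_gb) in cards.items():
    print(f"{name}: ~${price_usd / vram_gb:,.0f} per GB of VRAM")
# ~$292/GB vs ~$63/GB vs ~$29/GB. The silicon isn't the difference;
# the market segmentation is.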

At the same time, it doesn't seem like AMD is planning on making prosumer Instinct GPUs for home users any time soon; Instinct is the second major chunk of the AI space taking up all of TSMC's capacity. So the only way we're gonna see GPUs that are actually worth the money is if the AI gold rush ends and demand for enterprise AI chips suddenly dwindles: Nvidia and AMD stop producing so many Hopper and Instinct datacenter chips, TSMC capacity loosens up, and it's time to cater to the consumer market a bit more again. Good luck seeing that future happen, though. I also doubt we'll be seeing home HBM GPUs because of that; all HBM goes to AI datacenter chips now.
 
I wonder how much power that 10% overclock is going to burn. The 5070 is the hottest 70-class GPU yet, but surprisingly, it's not a whole lot hotter than GPUs were 15 years ago.
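Back-of-the-envelope: dynamic power scales roughly with frequency times voltage squared, so if that 10% clock bump needs, say, 5% more voltage (my assumption), you're looking at about 20% more power.

Python:
# Dynamic power approximation: P ~ C * V^2 * f (capacitance C cancels in the ratio).
# The +5% voltage needed for a +10% overclock is an assumption for illustration.
base_power_w = 250          # assumed stock board power for the example
freq_scale = 1.10           # +10% core clock
volt_scale = 1.05           # assumed +5% voltage to keep it stable

power_scale = freq_scale * volt_scale ** 2
print(f"Power scales by ~{power_scale:.2f}x -> ~{base_power_w * power_scale:.0f} W")
# ~1.21x, i.e. roughly 50 extra watts for 10% more clock, before any extra
# losses from running hotter.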

 
The biggest issue with current Nvidia GPUs is that Nvidia sets a very aggressive boost curve by default that draws excessive power for no real performance improvement. My 3090 would keep bumping up to ~1800MHz and then back down to ~1600MHz all the time during 3DMark and Superposition at >900mV; after setting it to 1710MHz at 800mV it would bob between 1710 and 1725MHz and I'd get a slightly higher result. Basically zero performance or stability impact, but better thermals and noise. I wouldn't be surprised if this constant back and forth of "HIT THE HIGHEST CLOCK" and "okay, that's too high, bump it down below the base boost clock" is actually detrimental to performance compared to just keeping it steady at the boost clock, which you have to accomplish manually with undervolting.

I assume the 40 and 50 series are the same. They're set to be overly aggressive to hit the advertised boost clocks, which leads to higher power draw, higher thermal output and higher noise output. Undervolting is a must with these. Obviously Nvidia won't do this out of the factory, and 99% of people are deathly afraid of touching MSI Afterburner for fear they'll destroy their card, so most just eat shit.
 
They are actually. They could be set at two-thirds of the power and still keep 95% of the factory performance. Of course, if you're buying a $1K+ GPU, you should probably be comfortable with Afterburner. The excessive power consumption applies to all modern computer parts these days; everything needs to be undervolted.
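For anyone who'd rather script it than click around in Afterburner: NVML doesn't expose the voltage/frequency curve itself (that part really is Afterburner territory), but you can get most of the same effect by capping the power limit and locking the clock. A rough sketch with the pynvml package (assumed installed; needs admin rights, and the 1710 MHz / two-thirds-power numbers are just the examples from this thread):

Python:
# Rough sketch: cap power and lock clocks via NVML instead of a V/F curve offset.
# Requires the "pynvml" package (pip install nvidia-ml-py) and admin/root rights.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Cap the power limit to ~2/3 of the current limit (value is in milliwatts).
current_limit_mw = pynvml.nvmlDeviceGetPowerManagementLimit(gpu)
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, int(current_limit_mw * 2 / 3))

# Pin the core clock to a fixed target instead of letting boost bounce around.
pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 1710, 1710)  # min MHz, max MHz

# ... run your benchmark / game session here ...

# Undo everything and hand control back to the driver's boost algorithm.
pynvml.nvmlDeviceResetGpuLockedClocks(gpu)
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, current_limit_mw)
pynvml.nvmlShutdown()

It's not a true undervolt (the card still runs whatever stock voltage it wants at that clock), but it kills the boost ping-ponging described above.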
 

  • Opacity micromaps significantly optimize alpha-tested geometry, delivering up to 2.3x performance improvement in path-traced games. By efficiently managing opacity data, OMM reduces shader invocations and greatly enhances rendering efficiency without compromising visual quality.
  • Shader execution reordering offers a major leap forward in rendering performance — up to 2x faster in some scenarios — by intelligently grouping shader execution to enhance GPU efficiency, reduce divergence, and boost frame rates, making raytraced titles smoother and more immersive than ever. This feature paves the way for more path-traced games in the future.

Sounds like optimization to me.
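The shader execution reordering bullet boils down to "sort your rays so neighbouring lanes run the same shader." Here's a toy illustration of the idea in plain Python; this has nothing to do with the actual DXR or driver API, it just shows why reordering helps.

Python:
# Toy model of shader execution reordering: count how many SIMD-width batches
# contain more than one material (i.e. would diverge) before and after sorting
# ray hits by the shader they need. Not the real API, just the concept.
import random
from itertools import islice

SIMD_WIDTH = 32
NUM_RAYS = 4096
NUM_MATERIALS = 16

random.seed(0)
hits = [random.randrange(NUM_MATERIALS) for _ in range(NUM_RAYS)]  # material id per ray

def divergent_batches(material_ids):
    it = iter(material_ids)
    count = 0
    while batch := list(islice(it, SIMD_WIDTH)):
        if len(set(batch)) > 1:   # lanes want different shaders -> divergence
            count += 1
    return count

print("divergent batches, unsorted: ", divergent_batches(hits))
print("divergent batches, reordered:", divergent_batches(sorted(hits)))
# Reordering makes almost every batch coherent, which is exactly the
# divergence reduction SER is exploiting.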


Is using inferencing to decompress textures a morally objectionable "fake texel" technique?

  • Real-time path tracing can be enhanced by neural supersampling and denoising, combining two of the most cutting-edge graphics innovations to provide realistic visuals at practical performance levels.

Even more optimization!

Microsoft will also be delivering updates to WARP (Windows Advanced Rasterization Platform). WARP, the CPU-based software renderer for DirectX, will get enhanced capabilities and performance updates.

Finally, the true sequel to Unreal can be made.
 