GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

They sell desktop versions of the 5050? I thought those were for laptops only
Yes, at the same price as the better 9060 XT 8GB (which is 30-50% faster than the 5050). It's an objectively shit product. A 50-tier GPU's MSRP should not be 250 dollars, but here we are today.

Somebody managed to get a 17% fps uplift with their 5050 after they jury-rigged a CPU cooler onto it while trying to beat 1080 Ti performance. They managed to hit a 3300 MHz GPU clock with the 5050. The 50 series lineup is really good at overclocking.
It's crazy how the 1080 Ti still holds up against entry-level cards, and is still relevant 8 years after release.
 
Last edited:
Yes, at the same price as the better 9060 XT 8GB (which is 30-50% faster than the 5050). It's an objectively shit product. A 50-tier GPU's MSRP should not be 250 dollars, but here we are today.

Somebody managed to get a 17% fps uplift with their 5050 after they jury-rigged a CPU cooler onto it while trying to beat 1080 Ti performance. They managed to hit a 3300 MHz GPU clock with the 5050. The 50 series lineup is really good at overclocking.
It's crazy how the 1080 Ti is still relevant 8 years after release, and the entry-level cards are STILL ON PAR with it.
A modern GPU needs an external modification to beat an 8-year-old card, what a fucking joke.
 
They sell desktop versions of the 5050? I thought those were for laptops only
Stealth-launched $250 card. The desktop version uses GDDR6 to cut costs, while the laptop version uses GDDR7 for better power efficiency.

It's an objectively shit product. Why is a 50-tier graphics card's MSRP 250 dollars? Crazy.
What gets me is that the 5050 (GB207) is 149 mm² for 2,560 CUDA cores, while the 5060 Ti (GB206) is 181 mm² for 4,608 cores. That's 80% more cores for 21.5% more die area.

Either it's not using the TSMC "4N" node like the others (unlikely), or it's a consequence of both dies using 32 MiB of L2 cache, which could be dominating the die area. If it's not on a cheaper node, the GB207 die is objectively shit. If it is, then it's even more overpriced than we thought.
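Quick back-of-the-envelope in Python if anyone wants to check those ratios (nothing new here, just the core counts and die sizes quoted above):

```python
# Sanity check of GB207 (RTX 5050) vs GB206 (RTX 5060 Ti) scaling, figures as quoted above.
gb207_cores, gb207_mm2 = 2560, 149.0
gb206_cores, gb206_mm2 = 4608, 181.0

print(f"extra cores: {gb206_cores / gb207_cores - 1:.0%}")   # ~80%
print(f"extra area:  {gb206_mm2 / gb207_mm2 - 1:.1%}")       # ~21.5%
print(f"cores per mm^2: GB207 {gb207_cores / gb207_mm2:.1f}, "
      f"GB206 {gb206_cores / gb206_mm2:.1f}")                 # ~17.2 vs ~25.5
```

So GB206 packs roughly 48% more cores per mm², which is exactly why the fixed L2 overhead (or a different node) is the only sane explanation.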
 
Either it's not using the TSMC "4N" node like the others (unlikely), or it's a consequence of both dies using 32 MiB of L2 cache, which could be dominating the die area. If it's not on a cheaper node, the GB207 die is objectively shit. If it is, then it's even more overpriced than we thought.
The funniest part about the 5050 is that it can't even beat the 4060 :story:. You're paying the same price for an inferior product. It still needs an 8-pin PCIe cable, so you can't even add it to a shitty Optiplex build as an upgrade, because IT HAS A HIGHER TDP THAN THE 4060! Well done, Jensen.

If the 5050 were in the $100-150 MSRP range, it would be one of the best 50 series cards they've made. As the saying goes, there are no bad products, just bad prices.

 
Last edited:
This is wrong as an absolute statement. There are lots of operations and whole applications that are bandwidth-bound. All kinds of operations on large arrays of data, from copying to solving large sparse linear systems, are bandwidth-bound. It's pretty easy to saturate bandwidth with graphics operations, too. The 9070 XT's 644.6 GB/s sounds like a lot. But if you want to run at your monitor's refresh rate of 180 Hz, you're limited to a total of 3.58 GB of data processed per scene, and that includes anything that has to be looked up multiple times (although larger caches mitigate this). 3.58 GB goes real fast in an era where textures can be up to 64 MB. Even if you use compression, it goes fast.
Refresh rate is generally not bound by VRAM speed but by the speed of the RAMDAC (or the PHY in its modern equivalent) and its frame buffer.
If the image is not prepared in time, it will just resend the previous frame.
A 4K frame is 33 MB, so even at 180 Hz that's 6 GB/s without any compression. Nvidia has patented integrated bus/VRAM compression, where the compression is used as an encode/decode scheme, so you increase effective bus width as well as reducing VRAM requirements. Modern renderers only render what is seen, so the rest of the scene behind you can sit in VRAM compressed.
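For what it's worth, the scanout numbers are easy to sanity-check; a quick sketch assuming a plain 32 bpp framebuffer with no DSC or framebuffer compression:

```python
# Back-of-the-envelope for uncompressed 4K scanout (assumes 32 bpp, no DSC).
width, height, bytes_per_px = 3840, 2160, 4
refresh_hz = 180

frame_bytes = width * height * bytes_per_px     # ~33.2 MB per frame
scanout_per_s = frame_bytes * refresh_hz         # ~6.0 GB/s at 180 Hz

print(f"frame: {frame_bytes / 1e6:.1f} MB, scanout: {scanout_per_s / 1e9:.2f} GB/s")
```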
All this time, my dumb ass wasn't applying enough force for these two little, evidently present bumps to latch onto the connector, which is why it didn't work and why it kept constantly falling out at the slightest movement.
These connectors are absolutely dumb, and they also require an insane amount of force. When you're bending the motherboard to insert it, something's not right.
 
AYANEO NEXT 2 announced: Ryzen AI MAX+ 395 “Strix Halo” APU, two fans, and built-in battery

AYANEO-NEXT2-1200x624.webp

A second Strix Halo handheld has appeared.
 
looking forward to turning the old one vertical
I highly recommend getting some North Bayou monitor arms; they improve ergonomics and free up desk space. Ideally avoid the ones with a straight pipe, since there's no tightening on those and they like to move around.
 
Refresh rate is generally not bound by VRAM speed but by the speed of the RAMDAC (or the PHY in its modern equivalent) and its frame buffer.
If the image is not prepared in time, it will just resend the previous frame.
A 4K frame is 33 MB, so even at 180 Hz that's 6 GB/s without any compression. Nvidia has patented integrated bus/VRAM compression, where the compression is used as an encode/decode scheme, so you increase effective bus width as well as reducing VRAM requirements. Modern renderers only render what is seen, so the rest of the scene behind you can sit in VRAM compressed.
You are overthinking this. If you want to run a game at any given frame rate, the max total data that can be processed in that frame, reads and writes combined, all data in aggregate (from vertices to the framebuffer), is just your memory bandwidth divided by the frame rate. This is simply a theoretical upper bound; it is physically impossible for the machine to go faster. Real-world performance will always be lower. This is just true of software in general. How fast do you want your software to run? How much memory bandwidth do you have? Well, that gives you the simple formula for the max amount of data your software can process; it doesn't matter if you're running a database, an FEA crash simulation, or a video game.

In the case of the 9070 XT vs the 5070 Ti, the 9070 XT actually has a more powerful processor. It can do more math, churn more texels, and write more pixels than the 5070 Ti. The only place the 5070 Ti has an advantage is memory bandwidth, where it's got 40% more. And what do we see in actual games? The 5070 Ti either is equal to or outperforms the 9070 XT. The reality is in modern games, ultra textures and whatnot gobble bandwidth like nobody's business, and you run out of lanes before you run out of compute.

To think of it concretely, 4K @ 32bpp @ 120fps will eat 4 GB/s of bandwidth all on its own. Now just work out how much shit can potentially be on screen in a game. If we start talking about things like 2K textures, post effects, RT samples, and whatever, that bandwidth is gone before you know it.
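Napkin math version, if anyone wants to poke at it (the 644.6 GB/s is just the 9070 XT figure from earlier in the thread; the rest is the same 4K @ 32bpp assumption as above):

```python
# Theoretical per-frame memory traffic ceiling: bandwidth / frame rate.
# This is an upper bound, not a performance prediction.

def frame_budget_gb(bandwidth_gb_s: float, fps: float) -> float:
    """Max data (all reads + writes combined) the GPU can move in one frame."""
    return bandwidth_gb_s / fps

print(frame_budget_gb(644.6, 180))   # 9070 XT at 180 Hz -> ~3.58 GB per frame

# Just writing out a finished 4K / 32bpp framebuffer at 120 fps:
fb_writes_gb_s = 3840 * 2160 * 4 * 120 / 1e9
print(fb_writes_gb_s)                # ~3.98 GB/s before overdraw, textures, RT, etc.
```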
 
  • Agree
Reactions: Fcret
Nvidia has always had more efficient internal infrastructure. It's not just bandwidth.
The 3060 Ti has enough processing power to run 3x 4K @ 60 Hz, and that's really a bottom-end card now.
Bandwidth costs money, so anything that's not top of the range on Nvidia gets a 256-bit or similar bus.
AMD's Vega couldn't be saved even with HBM. Memory is a lot, but it isn't everything.
 
If I can make my ASUS laptop last longer with better software, I'm all ears. I have a Ryzen 7 6800U that refuses to run below 2 GHz most of the time unless I limit it in the ASUS software. Even then, I've never gotten over 90 minutes.
I don't know if it would help or be what you're looking for, but on my PC I made 2 power plans, with shortcuts to switch to each one, to limit the max CPU frequency (scriptable version below):
1. Ultimate Performance for anything intensive
2. 35% max CPU frequency (3 GHz instead of 5.5-6); HWM showed about half the core voltage (0.7 VID instead of 1.4-1.5)
Untitled.webp
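For anyone who'd rather script the switch than dig through the power options UI, a rough sketch using powercfg (the plan GUIDs are placeholders; pull your real ones from powercfg /list):

```python
# Sketch: toggle Windows power plans and cap "maximum processor state" via powercfg.
# The GUIDs below are placeholders; get the real ones from `powercfg /list`.
import subprocess

PERF_GUID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # e.g. your Ultimate Performance plan
QUIET_GUID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"  # your capped plan

def cap_max_cpu_state(guid: str, percent: int) -> None:
    # "Maximum processor state" while plugged in; use /setdcvalueindex for battery.
    subprocess.run(["powercfg", "/setacvalueindex", guid,
                    "SUB_PROCESSOR", "PROCTHROTTLEMAX", str(percent)], check=True)

def activate(guid: str) -> None:
    subprocess.run(["powercfg", "/setactive", guid], check=True)

cap_max_cpu_state(QUIET_GUID, 35)   # roughly plan 2 above
activate(QUIET_GUID)                # switch to it; activate(PERF_GUID) to go back
```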
 
Last edited:
AMD's Vega couldn't be saved even with HBM. Memory is a lot, but it isn't everything.
AMD did an Intel with HBM, though, by just mehing and dropping the fucking thing head-first onto some sharp metal spikes.
The 4096-bit bus looks amazing, and even when people pick these cards to test they can see that it helps, but the lack of refinement on it is the killing blow.
Like, why bother picking an R9 or a Radeon VII, which is unexplored territory that you basically have to beta test, when you can just fork over money for a 2070 Ti instead and have reliable speeds and stability?
 
  • Like
Reactions: Dawdler
Like, why bother picking an R9 or a Radeon VII, which is unexplored territory that you basically have to beta test, when you can just fork over money for a 2070 Ti instead and have reliable speeds and stability?
Nvidia's reputation for reliability is overblown and outdated.

NVIDIA Investigates GeForce RTX 50 Series "Blackwell" Black Screen and BSOD Issues
NVIDIA GPUs continue to face driver woes as RTX 5000 launch is called 'absolutely abhorrent'
Latest Nvidia drivers boost synthetics by 6-8% on RTX 50 GPUs — Users still report stability woes

Probably because gaming is a side gig for them now.

But at least you can super size it:

GeForce RTX 50 SUPER Series on track for a holiday 2025 release
 
So far my 50 series card has been more stable than my AMD iGPU laptop... Kernel 6.12 and/or new AMD firmware made it all go to hell.
Finally got it stabilized, at least.

I look forward to not being able to buy a 5080 Super under 200% of MSRP.
 
Is it just me, or is it absolutely insane that you can get 8c/16t Ryzen mini PCs with a 780M on AliExpress for around $300?

Granted it’s without RAM or storage.
 
Last edited:
Nvidia's reputation for reliability is overblown and outdated.
Seeing as HBM2 and the 2070 are way over 5 years old, and in no way, shape, or form did I mention the fake-frame cards... yeah, it's outdated.
Though AMD ditching HBM can be considered a historic L moment, because it was good shit; the 4096-bit bus made things extremely smooth, and that's something even Budget Builds mentions in his R9 Nano video. But... fucking hell, retardation due to cost, something everyone that has tasted from the HBM cup seems to still crave today.
1753741427727.webp
This is from the Radeon VII video.
Is it just me, or is it absolutely insane that you can get 8c/16t Ryzen mini PCs with a 780M on AliExpress for around $300?
Chinesium is always a 50% chance of either being some heavily modified, inferior hardware dressed up to look like (and priced like) the thing it's advertised as, or actually being the advertised thing, and you got lucky with the price.

No in-between and no refunds, whitto piggu.
 
Though AMD ditching HBM can be considered a historic L moment, because it was good shit; the 4096-bit bus made things extremely smooth, and that's something even Budget Builds mentions in his R9 Nano video. But... fucking hell, retardation due to cost, something everyone that has tasted from the HBM cup seems to still crave today.
Consumer HBM was derailed by it being expensive (maybe 5x the price per GB of GDDR?) and in extremely high demand from enterprise, especially after AI took off.

There were plans almost a decade ago to make a gimped, "low cost" version of HBM2 for consumers, but it never happened:

HBM3: Cheaper, up to 64GB on-package, and terabytes-per-second bandwidth
53536_02_low-cost-hbm-way-hit-mass-market-soon_full-640x360.webp

It might come back to consumers some day, but you get a lot of bandwidth from GDDR7 and presumably whatever comes next. HBM could help in improving efficiency and bringing more dense memory to compact products, and would be a good choice for CPU L4 cache or something. But it won't come until demand wanes and prices drop. China's CXMT is reportedly mass producing its own version of HBM2, which might help eventually, but not as long as they are using it all for AI. I also read about some competitor recently, not the failed Hybrid Memory Cube, but anything else is vaporware until proven otherwise.
 