GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

is the 5050 going to be a laptop GPU or a desktop?
It seems to be going into both, presumably with a full 2560 cores and 8 GB GDDR6. 130W TDP on desktop.

‘Cheaper’ Nvidia RTX 5050 laptops leaked via retail listings

The RTX 4050 Laptop has 2560 cores and 6 GB on 96-bit (192 GB/s bandwidth), so they could easily improve on that. The desktop RTX 5050 should at least improve on the RTX 3050 (8 GB variant with 2560 cores), but probably not reach the RTX 4060 (3072 cores, 20% more).

Nvidia should disable some cores and make an RTX 5040 SFF with 8 GB, to improve on the RTX 3050 6 GB which still sells for $200 these days. But that's hopium, not based on any leaks.
 
Nvidia should disable some cores and make an RTX 5040 SFF with 8 GB, to improve on the RTX 3050 6 GB which still sells for $200 these days.
What they should do is to offer better performance at the same price bracket, instead of keeping it stagnant just to make people keep buying the same shit over and over thinking that they're getting something better because the lead number is bigger. But I digress, that's Nvidia we're talking about.
 
What they should do is to offer better performance at the same price bracket, instead of keeping it stagnant just to make people keep buying the same shit over and over thinking that they're getting something better because the lead number is bigger. But I digress, that's Nvidia we're talking about.
Even if it weren't Ngreedia at the wheel, the "Suggested" is doing heavy lifting in "MSRP" these days, and they either decided not to or couldn't move to a better node this generation, muting any performance/$ impact they could possibly have. This gen is doomed to suck, while the RTX 60-series *should* have a node shrink and 12 GB at the low end.

With the dies they've decided to make, the peasants can have an RTX 5050 that is faster than the 3050 but slower than the 4060, at a lower price than the 4060/5060.

The RTX 5060 ($299) has 50% more cores and higher bandwidth than the 5050. So a $249 MSRP would suck for the 5050. $199 would be good. The 3050 6 GB still sells for around that.
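Those percentages fall straight out of the rumored core counts; here's a quick sanity check (core counts as quoted in this thread, the 5050's figure being a leak, not confirmed):

```python
# Core counts as quoted in the thread; the RTX 5050 number is a rumor.
cores = {
    "RTX 5050": 2560,
    "RTX 4060": 3072,
    "RTX 5060": 3840,
}

def pct_more(a, b):
    """Percent more cores card `a` has than card `b`."""
    return round((cores[a] / cores[b] - 1) * 100)

print(pct_more("RTX 4060", "RTX 5050"))  # 20
print(pct_more("RTX 5060", "RTX 5050"))  # 50
```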
 
Did Nvidia ever make an attempt at replacing the 1030 with anything newer? Right now I don't think there are any new cards in the atom weight category of "It works. $100"
 
What they should do is to offer better performance at the same price bracket, instead of keeping it stagnant just to make people keep buying the same shit over and over thinking that they're getting something better because the lead number is bigger. But I digress, that's Nvidia we're talking about.
Nobody is forced to buy a new GPU every time NVIDIA releases a new product. If you just blindly buy a new card every year even when the improvement is marginal, you're just not very smart and don't deserve any sympathy.
 
Did Nvidia ever make an attempt at replacing the 1030 with anything newer? Right now I don't think there are any new cards in the atom weight category of "It works. $100"
I suspect there's no interest in producing anything in that price range (or even twice as much); Nvidia especially is trying to squeeze every bit of silicon available into server GPUs, with massively higher margins and demand.
 
Did Nvidia ever make an attempt at replacing the 1030 with anything newer? Right now I don't think there are any new cards in the atom weight category of "It works. $100"
Consider the die area. GT 1030 is based on GP108, a 74mm^2 die. The tiny "MX" laptop GPUs used the same die. They don't make those MX GPUs anymore as they aren't lucrative or very competitive with integrated graphics.

GTX 1630 uses a 200mm^2 die. RTX 3050 (GA107) uses a 200mm^2 die. These at least work at 75W for adding some display outputs. RTX 4060 (AD107) brings die size down to 146mm^2. RTX 5060 is 181mm^2, and the 5050 will be smaller. I guess it could end up smaller than the 4060's die and be their smallest since GP108.

Navi 44 is 199mm^2. Intel's Alchemist DG2-128 is 157mm^2, but the A310 based on that die did have a $110 MSRP, and the A380 was $139. I think those prices are long gone since I see A310 for $150+ on ebay, and nowhere on Newegg. There hasn't been any buzz about a Battlemage-based B380. Rather, there's renewed speculation about a "high-end" B770.

MATROX launches LUMA Pro A380 with two Alchemist GPUs, 12GB memory and 8 mini-DisplayPort 2.0 connectors
 
Did Nvidia ever make an attempt at replacing the 1030 with anything newer? Right now I don't think there are any new cards in the atom weight category of "It works. $100"
iGPUs have gotten so much better that there's no reason for anything that small to exist. My laptop has a 3050 Ti Mobile and a Ryzen CPU with a 680M iGPU. The iGPU is roughly half as capable as the GeForce (reminder that 720p is about half as many pixels as 1080p), so anything significantly lighter weight wouldn't be worth the silicon.
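For the pixel-count aside: 720p is exactly 4/9 the pixels of 1080p, so "about half" is in the right ballpark (slightly under):

```python
# 1280x720 vs 1920x1080 pixel counts
px_720 = 1280 * 720    # 921,600
px_1080 = 1920 * 1080  # 2,073,600
ratio = px_720 / px_1080
print(round(ratio, 3))  # 0.444, a bit under half
```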
 
I suspect there's no interest in producing anything in that price range (or even twice as much); Nvidia especially is trying to squeeze every bit of silicon available into server GPUs, with massively higher margins and demand.
There's not a ton of reason to bother with that price range when old-stock dGPUs and APUs are plentiful. In past eras it made sense, since low-end new-gen GPUs would give you access to some newer features, but a budget 2000/3000-series card has basically all the features of the 4000/5000 series, and even the 900 and 1000 series are still viable if you don't need RT or DLSS.

The near future is probably going to be dGPUs starting at $500 for true enthusiasts and iGPUs for everyone else. The consoles have already moved to this model and I think it's likely that AM6 and whatever Intel is doing is going to involve making APUs much more powerful on the desktop so casual/midcore gamers can just use those.
 
I think it's likely that AM6 and whatever Intel is doing is going to involve making APUs much more powerful on the desktop so casual/midcore gamers can just use those.
Some improvements in the desktop CPUs (not graphics-focused APUs) could really shut down low-end dGPUs. Arrow Lake's full iGPU is already about twice as fast as the slow Ryzen 7000/9000 desktop CPU iGPU (2 CUs RDNA2, equivalent to Radeon 610M). Nova Lake may skip from Xe1 to Xe3 graphics.

For Zen 6 "Olympic Ridge" desktop CPUs, AMD might go with 8 CUs of RDNA4 instead of 2 CUs of RDNA2. That could make it competitive with the 8600G/8700G. Even if they disable some CUs for low-end 8/10-core SKUs, it could still reach the 8600G level (8 CUs RDNA3) if it's high-clocking RDNA4.
 
Nobody is forced to buy a new GPU every time NVIDIA releases a new product. If you just blindly buy a new card every year even when the improvement is marginal, you're just not very smart and don't deserve any sympathy.
I think the same thing whenever I hear people whining about "CPU socket longevity" and "upgradability": how often are you really going to upgrade your computer? You could buy a late-model AM4 processor and use it just fine for 7 years, until AM6 is well established, if you're only using the computer for personal use and vidya.

For as much as sperging about computer parts is enjoyable, the vast majority of people only "need" to upgrade about once every 5-7 years, depending on how high-end the parts they buy are.
 
Being able to plop a new, later-gen CPU into an existing motherboard is largely a convenience if you ask me: no need to reinstall Windows and deal with everything else that comes with switching a motherboard.
It was a nice feeling to go from the 3000 to 5000 Ryzen series without needing to change everything else out. Just a BIOS flash and done in minutes.
 
Some improvements in the desktop CPUs (not graphics-focused APUs) could really shut down low-end dGPUs. Arrow Lake's full iGPU is already about twice as fast as the slow Ryzen 7000/9000 desktop CPU iGPU (2 CUs RDNA2, equivalent to Radeon 610M). Nova Lake may skip from Xe1 to Xe3 graphics.

For Zen 6 "Olympic Ridge" desktop CPUs, AMD might go with 8 CUs of RDNA4 instead of 2 CUs of RDNA2. That could make it competitive with the 8600G/8700G. Even if they disable some CUs for low-end 8/10-core SKUs, it could still reach the 8600G level (8 CUs RDNA3) if it's high-clocking RDNA4.
The biggest problem with iGPUs is the lack of dedicated VRAM, so they have to cut into your system RAM, and it's obviously a lot slower. Not a problem for most people, but it's a bottleneck for professional or vidya use. But I guess neither is really the target audience for low-end hardware these days anyway.
 
I think the same thing whenever I hear people whining about "CPU socket longevity" and "upgradability": how often are you really going to upgrade your computer? You could buy a late-model AM4 processor and use it just fine for 7 years, until AM6 is well established, if you're only using the computer for personal use and vidya.

Willing to bet it's under 0.1% of users. I got my first PC in 1989. Since then, I have put a new CPU into an old motherboard zero times.

The biggest problem with iGPUs is the lack of dedicated VRAM, so they have to cut into your system RAM, and it's obviously a lot slower.

Two channels of DDR5 is plenty for 1080p@60fps. You can find lots of videos of iGPUs running modern games that fast at low-ish settings on YouTube.
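Back-of-the-envelope peak bandwidth, assuming a common dual-channel DDR5-6000 setup (the exact speed is my assumption, not from the thread), next to the RTX 4050 Laptop's 96-bit GDDR6 figure quoted earlier:

```python
def peak_gbps(transfer_rate_mt, bus_width_bits):
    # theoretical peak = transfer rate (MT/s) * bus width in bytes, in GB/s
    return transfer_rate_mt * (bus_width_bits / 8) / 1000

print(peak_gbps(6000, 128))   # dual-channel DDR5-6000 -> 96.0 GB/s
print(peak_gbps(16000, 96))   # RTX 4050 Laptop GDDR6  -> 192.0 GB/s
```

So the iGPU has half the bandwidth of a low-end laptop dGPU before you even account for the CPU sharing it, which is why it's fine at 1080p but not much past it.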
 