GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

4090 = 16,384 CUDA cores, 1008 GB/s bandwidth, 24 GB
4070 Ti Super = 8,448 cores, 672 GB/s, 16 GB
4070 Ti = 7,680 cores, 504 GB/s, 12 GB
5070 = 6,144 cores, 672 GB/s, 12 GB
4070 = 5,888 cores, 504 GB/s, 12 GB

Maybe 5th gen tensor cores are amazing, but the debate should be about whether the 5070 can even beat the 4070 Ti or 4070 Ti Super in gaming, not the 4090.
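
For a rough sanity check of those numbers, here's a quick Python snippet that just takes ratios against the 5070. It deliberately ignores clocks, cache, and architectural changes, so treat it as what the raw specs suggest, not a performance prediction:

    specs = {  # name: (CUDA cores, bandwidth GB/s, VRAM GB), from the list above
        "4090":          (16384, 1008, 24),
        "4070 Ti Super": ( 8448,  672, 16),
        "4070 Ti":       ( 7680,  504, 12),
        "5070":          ( 6144,  672, 12),
        "4070":          ( 5888,  504, 12),
    }
    base_cores, base_bw, _ = specs["5070"]
    for name, (cores, bw, vram) in specs.items():
        print(f"{name:>14}: {cores / base_cores:.2f}x cores, "
              f"{bw / base_bw:.2f}x bandwidth, {vram} GB")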

Just in raw core count, the 5070 doesn't beat the 4070 Super according to Gamers Nexus. 7168 cores vs the 5070's 6144 cores. I'm not upgrading. Maybe the higher bandwidth gives it an edge. I also know, at least at 1080p 60 FPS, every game I throw at my 4070 Super combined with my i9 12900K runs natively without DLSS or any of the other shit. Seriously Nvidia, the fuck?
 
Just in raw core count, the 5070 doesn't beat the 4070 super according to Gamers Nexus. 7168 cores vs the 5070's 6144 cores. I'm not upgrading. Maybe the higher bandwidth gives it an edge. I also know, at least at 1080p 60 FPS, every game I throw at my 4070 Super combined with my i9 12900K runs natively without DLSS or any of the other shit. Seriously Nvidia, the fuck?
It's useless to count cores and compare like for like before the cards release; just wait for benchmarks and reviews to inform your decisions.
 
The compute comparisons are done on fp8 for the 4xxx series and fp4 for the 5xxx series. What does this mean exactly? Is this just for AI training? I do compute for work (archviz) and I am not sure if this matters to me at all.

$2,000 for the 5090 is absolutely fucking bonkers.
I am curious but skeptical about all their other tech. The latest slop games are already blurry and shimmery as it is.
 
Interesting slide from Micron, Nvidia's VRAM supplier.
 
32 GB of VRAM on a single card is pretty good, especially if the price ends up similar to the 4090's. If the use case allows for multi-card setups, it would finally be more cost effective to just buy two 5090s, assuming they can fit in a case or open frame. The issue is that most consumer CPUs these days only have 24 PCIe lanes, with most setups only having a single x16 slot capable of pushing 16 lanes.

You'll need a server CPU, and it's likely NVIDIA is soft-locking the cards from being used that way so they can sell high-margin server products.
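
If anyone wants to check what their cards actually negotiated, here's a minimal sketch (Linux only, reads sysfs; adjust for your own setup):

    # Print negotiated PCIe link width/speed for display controllers (class 0x03).
    # On a 24-lane consumer CPU, two x16 cards typically end up at x8/x8,
    # or worse if one slot hangs off the chipset.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        if not (dev / "class").read_text().startswith("0x03"):
            continue
        width_file = dev / "current_link_width"
        speed_file = dev / "current_link_speed"
        if width_file.exists() and speed_file.exists():
            print(dev.name,
                  "x" + width_file.read_text().strip(),
                  speed_file.read_text().strip())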
 
No idea about Archviz. What did previous benchmarks show?
The compute comparisons are done on fp8 for the 4xxx series and fp4 for the 5xxx series.
This is the graph they show on their website; forgot to add it to my earlier post.

I rely on Blender Open Data for my estimates of compute performance. The top performers are about what you would expect, but there are some outliers (the 4070 Ti being significantly better than the 3090, for example, and the 4070 being a close equivalent to the 3090).
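
On the fp8 vs fp4 thing quoted above: if the 50-series figure is dense FP4 throughput and the 40-series figure is FP8 (my assumption about how the slide was put together, not something Nvidia states), the headline number roughly doubles from the precision change alone. Back-of-the-envelope sketch:

    # Toy normalization of mixed-precision marketing numbers. ASSUMPTION: dense
    # FP4 throughput is ~2x FP8 on the same tensor cores, so an FP4-quoted figure
    # has to be halved before comparing it to an FP8-quoted one.
    def to_fp8_equivalent(tops, precision):
        return tops / {"fp8": 1.0, "fp4": 2.0}[precision]

    # Made-up numbers purely for illustration, not taken from the slide:
    old_card = to_fp8_equivalent(700, "fp8")    # 40-series card, quoted at FP8
    new_card = to_fp8_equivalent(1400, "fp4")   # 50-series card, quoted at FP4
    print(f"apparent: {1400 / 700:.1f}x  normalized: {new_card / old_card:.1f}x")
    # apparent: 2.0x  normalized: 1.0x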
 
I think we are hitting the moment where there are no gains to be had without gimmicks like frame generation, and you have to choose between:

You play AAA slop? 5080. You don't play slop? 5070.

The 5090 was priced like that because Nvidia probably believes it will deter scalpers, but even their own benchmarks show that the gains compared with the 4090 rely on gimmicks; without the gimmicks they are showing 20% gains at best.

AMD needs to start reconsidering their strategy, because at this rate there is no reason to get their cards unless they go for like 400 bucks tops.
 
I'd wager Nvidia is including DLSS 4 frame generation in the "beating the 4090" metric. We won't really know until third-party benchmarks come out.
It certainly is part of the shilling material. I'm seeing an awful lot of BRO 4090 PERFORMANCE FOR $500, conveniently not mentioning the 4x frame gen. But hey, the graph is higher, that's all that matters.

AI heuristics predicting your input and acting on it ahead of time. I'm only half-joking; I'm pretty sure this has been explored.
Fake input for fake frames. At some point people will pay $3,000 to watch a game play itself instead of watching a stream...
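
To put numbers on the "graph is higher" thing, here's a toy calculation with completely made-up figures, just to show how multi-frame generation pads a bar chart:

    # Each rendered frame is followed by N generated frames, so the displayed
    # FPS on the chart is rendered_fps * (1 + N). Hypothetical numbers only.
    def displayed_fps(rendered_fps, generated_per_rendered):
        return rendered_fps * (1 + generated_per_rendered)

    print(displayed_fps(80, 1))  # 160: hypothetical 4090 with 2x frame gen
    print(displayed_fps(40, 3))  # 160: hypothetical 5070 with 4x multi frame gen
    # identical bars, even though one card renders half as many real frames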
 
Hot take: APU-based handheld PCs are advancing rapidly enough to make the 60-series and below pointless.
That's not that much of a hot take. I think the wider market would agree, since it seems like a lot of companies are trying to get into handhelds.
 
I think we are hitting the moment where there are no gains to be had without gimmicks like frame generation, and you have to choose between:
It was already known that Blackwell wasn't going to be as big of a step as Lovelace, since both of them are fabbed on roughly the same process node. The performance uplifts are coming from refinement of the underlying architecture and faster VRAM. Nvidia and AMD have been reluctant to use TSMC's 2nm process because of costs, but apparently Nvidia is considering going to Samsung, so we'll likely see some big changes for the 6000-series.

The 5090 was priced like that because Nvidia probably believes it will deter scalpers, but even their own benchmarks show that the gains compared with the 4090 rely on gimmicks; without the gimmicks they are showing 20% gains at best.
The 5090 is priced that way because the 4090 already effectively sells for $2,000, and there are none left on the market since production stopped.

AMD needs to start reconsidering their strategy, because at this rate there is no reason to get their cards unless they go for like 400 bucks tops.
Correct. The 5070 looks like it might trade blows with the 4080, which puts AMD in a weird situation. At the very least, they cannot charge more than $549 for their flagship.
 
There's no point to having a 90 series for the average consumer or even the enthusiast at this point.

I think the 90 series' new market share is people and small businesses who have a standard ATX/ITX desktop/workstation/tower server and want to do mid- to high-level ML inference/rendering at a lower cost of entry than an enterprise card. It's the best tool for the job for an independent contractor or small business that needs GPU compute. For a small business, $2k for an important tool is negligible if it can bring in more revenue.

It doesn't make sense as a server card because the coolers make them take up three slots, even with the FE, compared to their enterprise cards, which fit in two slots with passive cooling in rack servers.
 
No issues with the 9800X3D AFAIK; are you referring to the 7950X3D?
The 7950X3D, yes. Having the V-Cache on only one CCD let some games run on the non-V-Cache set of cores. I think a lot of the problems were fixed, but since the 9950X3D does the same thing, who knows if those problems will pop up again.
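
For reference, the manual workaround is just core affinity. A minimal sketch (Linux; the core numbering here is an assumption, check your own topology with lscpu before copying it):

    # Pin a game's process to one CCD so it stays on the V-Cache cores instead
    # of being scheduled onto the other CCD. ASSUMPTION: cores 0-7 plus their
    # SMT siblings 16-23 are the V-Cache CCD on this particular system.
    import os

    VCACHE_CPUS = set(range(0, 8)) | set(range(16, 24))

    def pin_to_vcache(pid: int) -> None:
        os.sched_setaffinity(pid, VCACHE_CPUS)

    # usage: pin_to_vcache(<game pid>)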
 
Just generating even more frames is extremely underwhelming. AMD recently unfucked AFMF in Adrenalin, so I could experiment with it in Diablo 4 and Modern Warfare a bit. My conclusion is that it's fine enough if your base frame rate is 60 fps, but Lossless Scaling's LSFG 1, which doesn't use any fancy motion vectors at all, just purely what it can get from the pixels on screen, looks just as good.

I think the reason is that if you're running fast enough for frame generation to actually work, frames are flipping by you too fast for your eye to notice much difference between one algorithm and another. This is very different than upscaling, where naive interpolation (like FSR1 or bilinear) looks like shit compared to inferencing. So this is a case where I think NVIDIA's overshot the value that inferencing actually provides.
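
For anyone wondering what "purely what it can get from the pixels on screen" means at the dumbest end of the spectrum (a toy illustration, not what Lossless Scaling actually does): the most naive generated frame is just a blend of the frames on either side.

    import numpy as np

    def naive_midframe(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
        # halfway blend of two frames; anything moving fast just ghosts,
        # because nothing is warped along motion vectors
        avg = (prev_frame.astype(np.uint16) + next_frame.astype(np.uint16)) // 2
        return avg.astype(np.uint8)

    prev_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)      # black frame
    next_frame = np.full((1080, 1920, 3), 255, dtype=np.uint8)  # white frame
    mid = naive_midframe(prev_frame, next_frame)                # uniform gray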

You play AAA slop? 5080. You don't play slop? 5070.

If you play AAA slop, its main target is going to be the PS5. That's a 6650 XT, more or less. Anything better will be fine. People vastly overestimate how much processing power they need to be fine.
 