GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Nvidia's claim of twice the power of the previous gen was based on ray tracing, and going forward RT will be important, so I wouldn't be surprised if AMD leaned into it even if it won't be required/essential on PC for another two years; that's my prediction. Rasterization is without a doubt the most important thing right now, but people benchmark the stupidest shit.
Whatever they release, 1080p won't be a problem, and if the price is right and the RT performance is decent then they have some solid cards to sell. Nvidia might be forced to announce an RTX 3050 at a xx50 price point, and that's where it gets interesting.
I would like to see an RTX 3050 low profile.
 
I for one have no interest whatsoever in an RTX 3090.

It's too big in every way, and the only thing that can justify it would be if you planned to max things in 8K. For which you will need a screen costing several thousand pounds. Or a projector costing tens of thousands of pounds.

I think the RTX 3090 is very much a Veblen good. They show it off to get the consoomers all fluffed nicely, before they settle for a third-party RTX 3070.

As for Big Navi, I'm skeptical. If it comes in at 95% of an RTX 3080 but for about £100 less, it'll be a win. Especially since Ampere has been such vaporware.
 
A Linux update may have let slip AMD Big Navi's mammoth core specs
"According to the document Big Navi will have a 256-bit memory bus, while Navy Flounder appears to have a 192-bit memory bus."

I know I sperg about the bus width, but it is important; bandwidth has been a bottleneck for AMD/ATI, and they always need a bit more to achieve what Nvidia does with less.
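
Back-of-the-envelope math for anyone wondering why I keep harping on bus width. The data rates below are just generic GDDR6/GDDR6X figures for illustration, not anything leaked:

```python
# Peak memory bandwidth: bus width (bits) x effective data rate (Gbps per pin) / 8 bits per byte.
# Data rates here are generic GDDR6/GDDR6X figures, not leaked Big Navi specs.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

for bus, rate in [(192, 14.0), (256, 14.0), (256, 16.0), (320, 19.0), (384, 19.5)]:
    print(f"{bus:>3}-bit @ {rate} Gbps -> {bandwidth_gbs(bus, rate):.0f} GB/s")
```

Which is why a 256-bit card needs faster (more expensive) memory, or something else entirely, to keep pace with a 320-bit or 384-bit one.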
 
Honestly, this could just be them fucking around again. I'd imagine the driver development team and the actual engineers are close, if not the same, and both understand not to leak information. They must have known this would leak information if they did it. So I wonder whether the bus width is actually real.

Aside from that, it's still too early to speculate. This could be an entirely revolutionary architecture that does away with high VRAM speeds in favor of on-die cache hierarchies. Who knows?
 
I would like to see an RTX 3050 low profile.
As long as the TDP is 75W or lower, it's likely that at minimum MSI will make one. They made low-profile 1650s, 1050s, 1050 Tis, 750 Tis, RX 550s, and even a low-profile RX 560 with 1024 cores.

I'm also looking forward to said card; my TV PC is a low-profile tiny boi and needs a good upgrade.
 
Their rumored 12GB card would be using 12 chips for a 384-bit interface instead of 192-bit, and the 20GB card could have a 512-bit interface instead of 256-bit. AMD have been pretty extreme with memory bandwidth from time to time (with HBM), but they haven't used anything above a 256-bit interface since the R9 390. The upcoming consoles, on the other hand... They have spent a couple of years creating a powerful and cost-efficient product, so there's always that.
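
For reference, this is how chip count maps to bus width and capacity with standard GDDR6/GDDR6X packages, assuming one chip per 32-bit channel and no clamshell; the configurations are illustrative, not confirmed SKUs:

```python
# Each GDDR6/GDDR6X package exposes a 32-bit interface, so chip count sets the bus width.
# Assumes one chip per channel (clamshell mode doubles capacity without widening the bus).
# Chip counts and per-chip densities below are illustrative, not confirmed SKUs.

def bus_and_capacity(num_chips, gb_per_chip):
    return num_chips * 32, num_chips * gb_per_chip

for chips, gb in [(6, 2), (12, 1), (8, 1), (10, 1)]:
    bus, cap = bus_and_capacity(chips, gb)
    print(f"{chips} chips x {gb} GB -> {bus}-bit bus, {cap} GB total")
```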

Red Tech Gaming is saying that while the bus is narrow, this is offset by Big Navi having an absolutely monstrous cache. Less than a month to go to find out.
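
If the monster-cache rumor is true, the way it would offset a narrower bus is roughly this. A toy model with made-up hit rates, ignoring latency and the cache's own bandwidth limits:

```python
# Toy model: if a fraction `hit_rate` of memory requests are served from an on-die cache,
# only (1 - hit_rate) of the traffic reaches DRAM, so the same physical bus can feed
# 1 / (1 - hit_rate) times the request rate. Hit rates are made up for illustration.

def effective_bandwidth_gbs(raw_gbs, hit_rate):
    return raw_gbs / (1.0 - hit_rate)

raw = 512.0  # e.g. a 256-bit bus at 16 Gbps
for hit in (0.0, 0.3, 0.5, 0.7):
    print(f"hit rate {hit:.0%}: ~{effective_bandwidth_gbs(raw, hit):.0f} GB/s effective")
```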

I still only care about rasterized performance. I hope not a single % of that was sacrificed for any amount of RT performance at these prices.

I really like the effect of Ray Tracing in the demos I've seen. Haven't seen it in a game I've played yet, but I think it adds a lot if framerate is still above acceptable thresholds.


Honestly, this could just be them fucking around again. I'd imagine the driver development team and the actual engineers are close, if not the same, and both understand not to leak information. They must have known this would leak information if they did it. So I wonder whether the bus width is actually real.

At some point, you have to release the changes necessary to support your cards so that they can begin to work their way through the pipeline towards the end user. It's not like GNU/Linux users will buy a card and then go to the AMD website to download the source and compile it (well, Gentoo users probably would). It has to be tested, merged, and then work its way out into distros like Ubuntu. I don't know how long all that takes, but if they're launching in October, especially if it's not a paper launch, I could see them having to do that around now. I'm not an expert; I just know there's a reason other than leaking that they need to do this.
 
I really like the effect of Ray Tracing in the demos I've seen. Haven't seen it in a game I've played yet, but I think it adds a lot if framerate is still above acceptable thresholds.

Raytracing will be great when hardware adoption is so widespread that it can become an actual part of the gameplay instead of just graphical bells and whistles. Like real-time shadows before it: that was a neat graphical effect, but it could be turned off to run on weaker hardware or hardware without SM2.0 (or whatever it ran on), so for a couple of years developers couldn't rely on it to create something new. HDR for a while just made everything bloomy; now you can't see well into the dark unless you actually go into the dark, so tension and atmosphere can be built that way.
For the next year or two I suspect that RT will be mostly limited to making helmets shiny and beefing up the lighting model, but those things can be turned off and the game will be the same. The release of an RTX xx50 or equivalent AMD model will be a good indicator that it's time to start upgrading from pure rasterizers.
 
Lol at the Gamers Nexus 3090 review. A tl;dr of 'Get fucked, Nvidia marketing'.

 
Lol, so apparently some 3080 models cheaped out on capacitors and it's causing quite a few issues at max clock speeds.

MSI and Zotac, I've heard.

This doesn't surprise me. Zotac are a budget brand, but usually more of the "unspectacular but reliable" variety rather than the sketchy Chinese brands like COLORFUL and Yeston or the Wish / Aliexpress scams. And MSI are the masters of the Mug's Eyeful, a term Alan Sugar used to describe his hi-fi gear: from a distance, or to the uninformed, it looked like a nice stacker system, but inside it was mostly shit tier. So too is MSI gear usually covered in "gamer" aesthetic angular crap, belt-hitching names, RGB, and other things that appeal to teenagers and those with the maturity of same, but underneath it's cost-down electronics.

EVGA say they did use a similar capacitor type but delayed deliveries to rework it. All I'm gonna say is that I'm going to run mine without any overclocking when it turns up thanks to this.
 
I wonder how/if these crashes are affected by power supplies. Like, let's say that you are experiencing crashing with one of the shitty-capacitor MSI/Zotac cards and you're running it with some cheapo 80-plus-not-even-bronze PSU. Could you ameliorate the GPU's stability by switching to a higher-quality PSU (such as one that's 80-plus-gold/titanium certified)?

Maybe the reason these cards made it to consumers is that AIB partners only tested on "good" PSUs that output "cleaner" power?
 

It shouldn't matter how much ripple there is on the power supply so long as it is within the ATX specification. Which all but the most utterly bargain basement shit tier Chinese knockoff fire hazard supplies are these days. And if you have one of those, you have bigger problems than a bit of ripple.
 
Ah, I miss those JohnnyGuru "gutless wonder" reviews.

But yeah, any decent power supply nowadays is just fine. Shitty components or overrated GPU clocks are gonna fuck up regardless.

Just amazing that you can spend $700+tax minimum on a GPU and still get shit. Then again, Space Invaders RTX Edition shows that even spending well over a grand doesn't buy you properly tested components. What a shame.
 
It shouldn't matter how much ripple there is on the power supply so long as it is within the ATX specification. Which all but the most utterly bargain basement shit tier Chinese knockoff fire hazard supplies are these days. And if you have one of those, you have bigger problems than a bit of ripple.

Ok, so like I said, I don't understand this issue very well, but I think what I said applies even to PSUs that fall within ATX specification. Let me elaborate on what I think and if you or anyone else can point out where/if I'm wrong, please do.

My understanding is that PSU "ripple" refers to variation in the PSU's voltage output (I read this Gamers Nexus article to double-check). Different components in a PC need different voltages (either 12v, 5v, or 3.3v), so the PSU receives electricity from the wall (say, 120v @ 60 Hz) and uses that input to output at the required voltages.

Page 15 of Intel's ATX Specification document states that a PSU's voltage output should be within ±5% of those numbers. So for example, the "3.3v" output can be anywhere from 3.14v to 3.47v.
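
Doing the arithmetic on that ±5%, just as a sanity check (the post above rounds the endpoints):

```python
# +/-5% regulation window around each nominal ATX rail voltage.
for nominal in (12.0, 5.0, 3.3):
    lo, hi = nominal * 0.95, nominal * 1.05
    print(f"{nominal}V rail: {lo:.3f}V to {hi:.3f}V")
```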

So let's say that PSU 1 produces "ripply" output that spans the full range allowed by the ATX specification, and PSU 2 produces output that is much more consistent (its "3.3v" output in practice is between 3.26v and 3.34v).

The function of the capacitors that MSI/Zotac cheaped out on is to further refine the voltage before it reaches the GPU.

Graphics card A (Asus TUF RTX 3080, for example) has the expensive capacitors that are better at refining voltage. It can operate without crashing using electricity from either PSU (e.g., voltage anywhere from 3.14v to 3.47v).

Graphics card M (MSI Trio RTX 3080) has the cheap capacitors. It can operate without crashing using electricity from PSU 2, but may crash when using electricity from PSU 1. The problem is that the actual GPU chip is quite sensitive (i.e., it requires voltage input that falls within a narrow range around 3.3v or else it will crash), and Nvidia's boost algorithm assumes the capacitors are good enough to accept voltage input within a wide range from the PSU (i.e., anything within ATX spec) and refine it so that it can power the GPU without crashing.

So we see the following result from each combination:
  1. PSU 1 + GPU A: No crashes
  2. PSU 1 + GPU M: Crashes
  3. PSU 2 + GPU A: No crashes
  4. PSU 2 + GPU M: No crashes
In past generations (Turing, Pascal, etc.), the GPU was not as sensitive, and the cheapo capacitors would have sufficed well enough and most consumers would never know or care which capacitors were used. In this generation, the GPU is more sensitive than the AIB partners expected, and that's why we see these crashes.
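
To make my mental model concrete, here's a toy version of that table. Every number is invented purely to illustrate the reasoning; real cards regulate the 12v input down to around 1v for the core, so don't read anything literal into these values:

```python
# Toy model of the PSU ripple / capacitor filtering / GPU tolerance reasoning above.
# All numbers are invented for illustration; they are not real measurements or specs.
from dataclasses import dataclass

@dataclass
class PSU:
    nominal: float   # nominal rail voltage (volts)
    ripple: float    # +/- deviation actually delivered (volts)

@dataclass
class Card:
    filter_factor: float  # fraction of incoming ripple the caps filter out (0..1)
    gpu_tolerance: float  # +/- deviation the die tolerates before crashing (volts)

def crashes(psu, card):
    residual_ripple = psu.ripple * (1.0 - card.filter_factor)
    return residual_ripple > card.gpu_tolerance

psu_1 = PSU(nominal=3.3, ripple=0.165)  # "ripply": swings across the full +/-5% window
psu_2 = PSU(nominal=3.3, ripple=0.040)  # much tighter regulation
gpu_a = Card(filter_factor=0.9, gpu_tolerance=0.05)  # expensive caps
gpu_m = Card(filter_factor=0.5, gpu_tolerance=0.05)  # cheap caps

for psu_name, psu in [("PSU 1", psu_1), ("PSU 2", psu_2)]:
    for gpu_name, gpu in [("GPU A", gpu_a), ("GPU M", gpu_m)]:
        verdict = "Crashes" if crashes(psu, gpu) else "No crashes"
        print(f"{psu_name} + {gpu_name}: {verdict}")
```

With those made-up values, only PSU 1 + GPU M crashes, which reproduces the four outcomes listed above.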
 
Pretty much exactly what's happening, except it seems these chips are affected by even wireless signals, etc. A Wi-Fi board may not be able to keep these chips stable; Bluetooth controllers and the like may disrupt the chip enough to cause a CTD.

Correct me if I'm wrong, but Nvidia could also have hit the limit of what is possible on the architecture itself. Are they using 11nm+? Have they used the weaker chips for these "baseline" 30xx cards, and are they cherry-picking golden samples for FE and/or "Super/Ti" models?

Either way, it is a bad omen that these AIB cards have deviated from spec. Nvidia is probably shitting all over these companies internally. To fix these cards right, the partners would need to retool every PCB they've made so far, and they won't, as Tech Jesus said.

AMD's board partners should be taking very good notes from this. Don't cheap out on important parts and then charge consumers more than the OG spec.

ETA: this is probably the reason for such a paper launch. Only FE models and those that follow that spec actually work properly.
 
I'm surprised that they would fuck it up this badly. You're paying hundreds of dollars for this GPU, and what you get is something with cheap capacitors? Like, what the fuck? At most they'd spend a few cents more per card on higher-quality capacitors, and then shit like this wouldn't happen: their reputation gets tarnished, and ultimately they have to spend more money fixing returned units and modifying future units to use the higher-quality capacitors.

What happened to the engineers? Validation? No one saw the design using the shitty capacitors and thought it was a bad idea? Was it even tested? Did they run prototype GPUs at maximum utilization for a week non-stop to make sure it was stable?

And Nvidia went even further with their scumminess and locked down FP16 and FP64 even more. It's only slightly better than the 2080 Ti, despite being capable of twice the performance if they had kept the ratio the same. They're forcing people to dish out thousands of dollars more for their Quadro and Tesla lines.

Want to do some home scientific computing on the cheap? TOO FUCKING BAD PAY UP $5000. I'm really glad the Radeon VII exists.
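
Rough numbers on the FP64 point, using commonly cited FP32 throughput and FP64:FP32 ratios from memory, so treat them as approximate:

```python
# Approximate FP32 throughput and commonly cited FP64:FP32 ratios (from memory, not exact specs).
cards = {
    "RTX 2080 Ti (Turing)": {"fp32_tflops": 13.4, "fp64_ratio": 1 / 32},
    "RTX 3090 (Ampere)":    {"fp32_tflops": 35.6, "fp64_ratio": 1 / 64},
    "Radeon VII (Vega 20)": {"fp32_tflops": 13.4, "fp64_ratio": 1 / 4},
}

for name, c in cards.items():
    fp64_tflops = c["fp32_tflops"] * c["fp64_ratio"]
    print(f"{name}: ~{c['fp32_tflops']} TFLOPS FP32 -> ~{fp64_tflops:.2f} TFLOPS FP64")
```

If those ratios are right, the 3090 barely moves the needle over the 2080 Ti for double precision, while the Radeon VII runs circles around both.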

Really hoping "Big Navi" ends up completely BTFOing Nvidia.
 
Idk, they probably assume l33t gaemerz will gobble up whatever they jizz out. It seems to be true more often than not.
 