GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

I've actually seen some people wonder how AMD can suddenly have a card 2x (and then some) faster than the last 'flagship'. Easy, when they decide to make a full card with a better arch.

I've been wondering the same thing. Vega and Radeon VII weren't meh levels of effort, they weren't sandbagging the competition; they were hugely expensive and underdelivered in many ways. But during all this time they have been funded by and worked closely with both Sony and Microsoft on the next-gen consoles, and that's not nothing.

The supposed benchmarks showing that they're ahead in rasterization while way behind in raytracing are a bit surprising though, because that's a huuuuge leap in performance. I thought RT would be their (marketing) strength, having worked with MS, which not only wants it for its console but is also the arbiter of the standard on Windows/D3D.
 
  • Thunk-Provoking
Reactions: Just Some Other Guy
I've been wondering the same thing, Vega and Radeon VII weren't meh levels of effort […]
TBF, the ray tracing benchmarks used were developed for Nvidia's implementation, since they were the only game in town at the time. I think actual game performance will be better, though honestly I literally only care about raster atm.

Edit - As for the 1st part, I wasn't factoring in GCN comparisons, just comparing to 5000 series Navi. It should be pretty easy to more than double a 5700 XT when making a full card.
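Rough napkin math on how "more than double" falls out; every number below is an illustrative assumption, not a spec or a leak:

```python
# Back-of-the-napkin scaling: how a full 80 CU RDNA 2 part could land at 2x+
# a 40 CU 5700 XT. All figures here are assumptions for illustration.

cu_navi10 = 40            # CUs in the 5700 XT
cu_big_navi = 80          # assumed full-die CU count for Big Navi

clk_navi10 = 1.9          # assumed average game clock, GHz
clk_big_navi = 2.1        # assumed clock bump on the refined 7nm process

perf_per_cu_gain = 1.10   # assumed ~10% per-CU gain from arch tweaks

scaling = (cu_big_navi / cu_navi10) * (clk_big_navi / clk_navi10) * perf_per_cu_gain
print(f"Naive estimate: {scaling:.2f}x a 5700 XT")
# -> about 2.4x on paper; games never scale perfectly with CU count,
#    so "2x and then some" is a perfectly plausible landing spot.
```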
 
Last edited:
It would be very strange if all of this sort of mirrors the Nvidia FX era, where ATI (now AMD) were top dog at a great price. It would be strange because the 3000 series doesn't suck like the FX cards did, but it being unavailable is like the phantom flagship 5950 (I think?) that mostly existed on paper. It was there if you could buy it, and you couldn't buy it.

I wonder why they went with Samsung. I think TSMC's process is built on/licensed from Samsung's stuff and it works. Personally I've been suspecting GDDR6X manufacturing not being up to snuff, but that wouldn't stop them from putting out 3070s unless they're waiting to see what AMD actually has.

I think they are using 7nm, but only have enough for the professional cards. Also, Samsung apparently gave them a great deal on 8nm, but it's still inferior silicon and it shows.

Edit - I'm not too surprised if AMD knocks it out of the park. Despite driver issues, the 5700 XT showed promising performance. Some forget it was only a 40 CU card.

I've actually seen some people wonder how AMD can suddenly have a card 2x (and then some) faster than the last 'flagship' […]
The story goes that Nvidia couldn't get the wafer count they wanted at the price they wanted from TSMC, both because TSMC has no real competition and because AMD has been vacuuming up 7nm orders for some time now. So Nvidia put out claims that they were also looking at Samsung's 8nm node, because it offered "comparable" performance for a much lower price, in an attempt to get lower pricing from TSMC.

TSMC called Nvidia's bluff.

Now Nvidia is stuck making gargantuan dies on this garbage node. Of course 8nm isn't actually a BAD node per se; at sub-1900 MHz it's quite efficient, but its die fault rate is just high enough to make a big die like GA102 very expensive/hard to make, and the push to get Ampere to clock as high as possible is obliterating power usage. Part of this may be due to optimization: Samsung's and TSMC's nodes are not identical, and the decision to go with Samsung came too late in Ampere's life to optimize the arch for the node, so if Nvidia sticks with it for a second generation (say, a delay in 5nm) we could see a Fermi 2.0 style optimization. But for now they have left themselves open for a serious licking.
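For anyone who wants to see why a big die hurts so much, here's the textbook Poisson yield model with a made-up defect density (the real Samsung 8nm figure isn't public, so treat the output as shape-of-the-curve only):

```python
import math

# Classic Poisson yield model: yield = exp(-D0 * A), with D0 the defect density
# (defects per mm^2) and A the die area (mm^2). D0 below is an assumed
# illustrative value; foundries don't publish the real number.

def poisson_yield(area_mm2: float, d0_per_mm2: float = 0.001) -> float:
    """Fraction of dies that come out with zero defects."""
    return math.exp(-d0_per_mm2 * area_mm2)

for name, area in [("GA102-class die (~630 mm^2)", 630),
                   ("Navi 21-class die (~520 mm^2)", 520),
                   ("Navi 10 (~250 mm^2)", 250)]:
    print(f"{name}: ~{poisson_yield(area):.0%} clean dies")
# The bigger the die, the faster the share of clean dies drops off, which is
# exactly why a gargantuan die on a leakier node gets expensive in a hurry.
```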
I've been wondering the same thing, Vega and Radeon VII weren't meh levels of effort […]
RT is still a meme for the most part. Even with Ampere the performance hit is not worth it for most consumers, and that is on 3080/3090 cards. Most of the market is 3050/3060 level, and those eventual cards won't be able to RT worth a damn. It will likely only be used sparingly with this generation of consoles due to the huge performance hit, and AMD decided to chase rasterization to corner Nvidia in outright performance with cheaper dies. We can see the results here, with the possibility of a ~500 mm² die with a 256-bit GDDR6 bus beating a ~700 mm² die with a 320-bit GDDR6X bus.
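The bandwidth gap is easy to put rough numbers on; the 16 and 19 Gbps data rates below are assumptions based on shipping GDDR6/GDDR6X parts, not confirmed specs for either card:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

navi21 = bandwidth_gb_s(256, 16)   # assumed 256-bit GDDR6 at 16 Gbps
ga102 = bandwidth_gb_s(320, 19)    # 320-bit GDDR6X at 19 Gbps on the 3080

print(f"256-bit GDDR6  @ 16 Gbps: {navi21:.0f} GB/s")
print(f"320-bit GDDR6X @ 19 Gbps: {ga102:.0f} GB/s")
# That's ~512 vs ~760 GB/s, roughly a one-third raw bandwidth deficit, so AMD
# would need to be doing something clever on the memory side for those raster
# numbers to hold up.
```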

Radeon VII was a complete fucking joke. It was an AI card repurposed for gaming, its drivers were trash, and to this day they remain trash. Instinct was never meant to be a Radeon card. Vega 64 was a decent idea, but it focused too much on things like FP32 that don't help games; much like the VII, Vega 64 was much better as a rendering/computational card. AMD has had this issue for a while: GCN as an arch isn't bad, but the lack of specialization for different tasks leaves it an inefficient mess. AMD certainly didn't have the money to revamp it, and Raja was a total idiot for pursuing a 3DFX strategy with Vega instead of just making a bigger Polaris in the first place. The RTG group in the last few years has done some major optimization to Vega for the APUs, and if Vega 64 had had the 2019 optimizations in 2016 instead of just being repurposed, it would have been a legitimate 1080 Ti competitor.

The other thing is that RDNA could have easily been a 2080 Ti competitor. The 5700 XT at stock power settings only sits around 225-240W for a 40 CU card, and it is squarely between the 2070 Super and 2070. They could have done a 56-60 CU card with a 12GB 384-bit bus for bandwidth and gotten the 50% more hardware needed to hit the 2080 Ti. AMD just... didn't bother to do it. For some reason. So it's not so much that AMD is suddenly improving; they WERE sandbagging with the RX 5000 series, and with RDNA 2 they're just getting the core counts they should have had last year.
 
The story goes that Nvidia couldn't get the wafer count they wanted at the price they wanted from TSMC […]
There we go, that's the stuff I've had in my head but been too distracted by work to actually type out :D

5700 XT is actually a wonderful card, assuming it works for you. My wife's does just as well as my 1080 Ti, all for a midrange card. Nobody should be too surprised about what a full-size RDNA 2 version can do.

I think it has more to do with being surprised that AMD Radeon is finally starting to wake up.
 
  • Like
Reactions: Allakazam223
The story goes that Nvidia couldn't get the wafer count they wanted at the price they wanted from TSMC […]

Great post!

RT is a meme until consoles do it; that's what will drive it. I think on the PC side it will mirror the Xbox and shaders; I've probably made a sperg post about it going from a graphical feature to part of gameplay.

AMD could have gone 384-bit, but that would make routing the board complicated. Adding more CUs would also require more power going to the GPU, and that would make things even harder unless they dropped the frequency. I think the 5700 XT was their most reasonable product at the time, unlike (looping back to my previous post) the flagship Nvidia FX card that probably coined the term paper launch. AMD's overall brand these days has been being the sensible rather than the sensational option, but they still pushed things with the 5700 XT.

I don't have a horse in this race; I've just been around graphics hardware for a long time, so it's football to me: reading the scores and arguing about it.
 
  • Informative
Reactions: Allakazam223
Is AMD going to go dual-core for this XTX 6900 model? They have been using the terminology of Infinity Fabric, which is what Ryzen uses to communicate between CCXs, right?

What if AMD has finally found a way to use CPU research and apply it to a GPU? Could we see a true fusion of ATI and AMD? Multi-core chips that 'just werk.'

Dual core GPU with a bank of HBM for each core with the tech behind Ryzen CCX?


:optimistic:
 
Is AMD going to go dual-core for this XTX 6900 model? […]

No dual-GPU for RX 6000, but the entire industry could move to multi-chip modules in the following generation, with AMD RDNA 3, Nvidia Hopper, and Intel Xe2/4 or whatever it's called. As long as any scaling issues are resolved, that will be the future of GPUs, since they can get better yields with smaller, cheaper dies. Maybe the total die area for big consumer GPUs can go up too.

They can have more than one bank of HBM per "core". The Radeon R9 Fury had four HBM stacks, for example.
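To put the "better yields with smaller, cheaper dies" point in rough numbers, here's the same Poisson yield sketch applied to a hypothetical chiplet split (the defect density and die sizes are assumed, purely for illustration):

```python
import math

# Poisson yield model again: yield = exp(-D0 * A). D0 is an assumed
# illustrative defect density, not a published figure.

def poisson_yield(area_mm2: float, d0_per_mm2: float = 0.001) -> float:
    return math.exp(-d0_per_mm2 * area_mm2)

big_die = 600    # one hypothetical monolithic 600 mm^2 GPU
chiplet = 300    # two 300 mm^2 chiplets delivering the same total silicon

# Silicon spent per good GPU: chiplets are tested individually, so only the
# defective piece gets thrown away instead of the whole big die.
mm2_per_good_monolithic = big_die / poisson_yield(big_die)
mm2_per_good_mcm = 2 * chiplet / poisson_yield(chiplet)

print(f"Monolithic: ~{mm2_per_good_monolithic:.0f} mm^2 of wafer per good GPU")
print(f"2x chiplet: ~{mm2_per_good_mcm:.0f} mm^2 of wafer per good GPU")
# Roughly 1090 vs 810 mm^2 here, i.e. the MCM route wastes about a quarter less
# silicon per sellable GPU, before even counting easier binning of partial dies.
```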
 
Last edited:
  • Informative
Reactions: Allakazam223
3070 reviews are coming out. Apparently it's pretty much what was claimed: a 2080 Ti for $500.

My only issue is I saw that 3080 launch: zero stock, and almost nobody was able to get the nearly fabled $700 version. So they'll have to excuse my skepticism about these FE 3070 reviews actually mattering.
 
  • Like
Reactions: Allakazam223
What my brain saw during the press conference:
[attached image]
 
And by extension, the 3070.

Edit - r/AMD already bitching about prices. Again.

I forgot that AMD is supposed to give you the same (or better) performance for 2/3 the price.
They seem to think AMD is still the cheap alternative to the big boys.

They are the big boys now.
 
Big Navi release is on.

6800 XT trades blows with 3080

$650

6900 XT trades blows with 3090

$1000

If true, and take this with a grain of salt because they almost certainly cherry-picked the test games to be the ones more favourable to their cards, then unless Nvidia can get RTX 3080s to the public in large numbers in the next three weeks they are fucked like a whore on payday.

As for me, I'm not cancelling my RTX 3080 order just yet. I'm going to wait for reviews, and then probably ring up Scan just before release day to confirm how much stock is expected, then try to get my hands on a Radeon 6800 XT, then, if I succeed and have a confirmed delivery date, cancel my RTX 3080 order.

Also, I like how they've left a gap for a Radeon 6900 vanilla, so that when the expected RTX 3080 Ti vaporware appears (which will probably be a RTX 3080 20GB as rumoured) they'll be able to counter it. Then they can come up with a Radeon 6700 XT to hoover up all the midrange buyers.

Also also, SFF users will be pleased to note that this uses 20W less power than Ampere. That's important because the largest SFX power supply you can get is an 850W Silverstone brick.
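For the SFF point, the headroom math is simple enough; every component figure below is an assumption for a typical high-end build, not a measurement:

```python
# Rough SFF power budget sketch. All component numbers are assumptions.

psu_watts = 850            # the big Silverstone SFX unit mentioned above
gpu_board_power = 300      # assumed Big Navi board power (vs ~320 W on a 3080)
transient_multiplier = 1.4 # assumed worst-case transient spike over board power
cpu_watts = 150            # assumed high-end CPU under gaming load
rest_of_system = 75        # assumed fans, drives, RAM, motherboard

worst_case = gpu_board_power * transient_multiplier + cpu_watts + rest_of_system
print(f"Worst-case draw: ~{worst_case:.0f} W against a {psu_watts} W PSU")
# ~645 W leaves decent headroom on 850 W; run the same numbers with a 320 W card
# spiking past 400 W and the margin gets thin fast on smaller 600-650 W SFX units.
```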
 
  • Like
Reactions: Smaug's Smokey Hole
If true, and take this with a grain of salt […]
I just don't see any room for a Ti anything outside of just more RAM or the rumored 7nm refresh.

Those massive 400W power spikes and the higher-frequency instabilities that the 3080 had show that it's pushed up against the wall. Unless Nvidia sandbagged the current Ampere cards and has actual better silicon waiting in reserve, I don't see the reasoning of "all Nvidia has to do is release a 3080 Ti".
 
What's the disadvantage to waiting a few months for benchmarks and drivers and competition? What do I meaningfully miss out on by waiting a few months to a year?

Double post, but I can't seem to figure out how to add a quote into an edit quickly.

Nothing really. I usually end up waiting for waterblocks anyways, so it never matters to me :D

Unless you need the performance now, you're always better off waiting anyways.
 