GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

I'm really tempted to buy one of AMD's new 7900 XTs. I know the price/performance point is pitched at just the level to make you step up to the 7900 XTX, but I think I can resist that in my case, because I'm really comparing it to what I have, which is an old 480 with 8 GB. I'm not a big gamer, and I want something with lots of VRAM for some AI noodling about. Nvidia has the edge with AI, but I'm gambling that will change, and I think 20 GB of VRAM will offset it.
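For a rough sense of what the extra VRAM actually buys for that kind of noodling, here's a minimal back-of-envelope sketch (my own assumed numbers, weights only, ignoring activations and KV cache, so treat it as a lower bound):

```python
# Rough sketch: VRAM needed just to hold model weights at various precisions.
# Real usage is higher (activations, KV cache, framework overhead).
def weights_gib(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 2**30

for params in (3, 7, 13):
    for dtype, nbytes in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        print(f"{params}B params @ {dtype}: ~{weights_gib(params, nbytes):.1f} GiB")
```

On that math, a 7B-parameter model at fp16 is roughly 13 GiB of weights, which squeaks into 20 GB of VRAM but not into the 8 GB I have now.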

The main thing holding me back is that I expect very limited supply at the actual MSRP, with most of them heavily marked up. Basically, I could wait until late January, but that would miss Christmas, which is when I actually have the most free time to play around with this. After New Year's it's going to be hectic again. Fuck - why did AMD have to wait and wait and wait to release these bloody things? It's like they want me to buy an Nvidia card.

I doubt AMD's gunning for a serious piece of the AI market. AI/ML requires the ability to churn through enormous amounts of tensor arithmetic. NVIDIA is miles ahead of AMD on this front, both in hardware and in the software stack. However, this comes at a cost, because Tensor Cores eat a lot of die space and can't really be used for anything else. The architecture of the Instinct datacenter GPUs suggests that AMD's going to head off in an orthogonal direction. Yes, they can do AI/ML, but their real strength is general compute, being able to churn through large arrays of FP32 and FP64 data much, much faster than NVIDIA's cards can. So what they're going for looks like "best in class at everything other than AI/ML, but still adequate at AI/ML."


But still, for playing around, 20 GB of VRAM is a lot, and the card should be fine.
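To make that FP32/FP64 vs. tensor-math distinction a bit more concrete, here's a minimal microbenchmark sketch, assuming a CUDA- or ROCm-enabled PyTorch install. The matrix size and iteration count are arbitrary and the absolute numbers depend entirely on the card; it just shows what "general compute throughput" means in practice.

```python
# Minimal sketch: compare plain FP32 vs FP64 matmul throughput on whatever GPU
# is present. Tensor cores / matrix units only kick in for lower-precision or
# TF32 paths, which is exactly the split being discussed above.
import time
import torch

assert torch.cuda.is_available(), "needs a CUDA- or ROCm-enabled PyTorch build"

def matmul_tflops(dtype: torch.dtype, n: int = 4096, iters: int = 20) -> float:
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return 2 * n**3 * iters / elapsed / 1e12  # ~2*n^3 FLOPs per matmul

for dt in (torch.float32, torch.float64):
    print(dt, f"~{matmul_tflops(dt):.1f} TFLOP/s")
```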
 
I doubt AMD's gunning for a serious piece of the AI market. AI/ML requires the ability to churn through enormous amounts of tensor arithmetic. NVIDIA is miles ahead of AMD on this front, both in hardware and in the software stack. However, this comes at a cost, because Tensor Cores eat a lot of die space and can't really be used for anything else. The architecture of the Instinct datacenter GPUs suggests that AMD's going to head off in an orthogonal direction. Yes, they can do AI/ML, but their real strength is general compute, being able to churn through large arrays of FP32 and FP64 data much, much faster than NVIDIA's cards can. So what they're going for looks like "best in class at everything other than AI/ML, but still adequate at AI/ML."


But still, for playing around, 20 GB of VRAM is a lot, and the card should be fine.
I don't think the tensor cores give Nvidia a major AI/ML advantage over AMD in any segment. It's the software and the lack of support that continue to hobble AMD. This is their latest plan to address it, AFAIK.

One would hope that AMD doesn't let Nvidia trample all over them forever once ordinary people start getting more interested in high-end GPUs for ML than for gaming... since 4K/120 is getting easy.
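As a concrete example of what that software gap looks like from the user side: PyTorch's ROCm builds expose the same torch.cuda API (HIP is mapped underneath), so a sanity check like the sketch below runs unchanged on either vendor's card. Whether everything layered on top of it is actually well supported on ROCm is the real question.

```python
# Sketch: the same script works on a CUDA build (NVIDIA) or a ROCm build (AMD),
# because PyTorch maps torch.cuda onto HIP for ROCm builds.
import torch

print("GPU available:", torch.cuda.is_available())
print("ROCm/HIP build:", torch.version.hip is not None)
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK, result shape:", (x @ x).shape)
```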
 
Looks like RIP to all the drooling idiots who couldn't wait a couple of weeks and coughed up $1,400+ for a 4080.

https://www.techpowerup.com/review/amd-radeon-rx-7900-xtx/

 
I don't think the tensor cores give Nvidia a major AI/ML advantage over AMD in any segment. It's the software and the lack of support that continue to hobble AMD. This is their latest plan to address it, AFAIK.

One would hope that AMD doesn't let Nvidia trample all over them forever once ordinary people start getting more interested in high-end GPUs for ML than for gaming... since 4K/120 is getting easy.

The H100 can crunch through matrices 5x-10x faster than an MI250X at 8- to 32-bit precision. But despite being a year older and having 25% fewer transistors, the AMD card handily beats the NVIDIA card in both scalar and tensor FP64 performance, which is what you need for a lot of physics and engineering computations.

I believe AMD is betting that NVIDIA's cards are overspecialized. More companies are doing AI/ML, but these companies are also doing and will continue to do lots of traditional HPC. So it will be easier to convince them to spend millions of dollars on a massive computer that is pretty good at AI/ML, but also really good at lots of traditional workloads. The NVIDIA cards, despite the hype, aren't providing a compelling price/performance benefit over EPYC and Xeon for the kinds of workloads a lot of these companies want to do (think traditional auto/aero/marine/engine/etc companies who want to use AI/ML for design optimization, but aren't going to stop doing math-heavy simulations).
 
lmao, the 7900 is literally a cheaper 4080 that fights it equally in every regard except ray tracing, but nobody cares about memetracing, so AMD gets the win on this one. Imagine being on the Nvidia technical team that was forced to build a shitcoin miner rig that could also game, only for the crash to kill it.

Practically speaking, if you're on a 3090/3080 you don't get the super boost that's advertised unless you fork out for the 4090, but if you were a Radeon 6000-series user you actually get an easy-to-discern boost. If you don't care about blazing-fast speed and want something solid that won't explode, the 7900 is the best value if you're doing generational upgrades.
 
lmao, the 7900 is literally a cheaper 4080 that fights it equally in every regard except ray tracing, but nobody cares about memetracing, so AMD gets the win on this one. Imagine being on the Nvidia technical team that was forced to build a shitcoin miner rig that could also game, only for the crash to kill it.

Practically speaking, if you're on a 3090/3080 you don't get the super boost that's advertised unless you fork out for the 4090, but if you were a Radeon 6000-series user you actually get an easy-to-discern boost. If you don't care about blazing-fast speed and want something solid that won't explode, the 7900 is the best value if you're doing generational upgrades.
Haven't looked at the reviews yet, sounds like what we expected.

People will care about memetracing eventually, since it will be in every new console and PC game at some point. The devs' implementation determines whether it's a gimmick or good. Maybe AMD will do better with RDNA 4. Also, RDNA 3+ is confirmed to be coming to mobile, which might show up in a desktop refresh next year.
 
Haven't looked at the reviews yet, sounds like what we expected.

People will care about memetracing eventually, since it will be in every new console and PC game at some point. The devs' implementation determines whether it's a gimmick or good. Maybe AMD will do better with RDNA 4. Also, RDNA 3+ is confirmed to be coming to mobile, which might show up in a desktop refresh next year.
Most of the Nvidia paid shills like Linus had to shit on the ray tracing category just to find anything to complain about, but that falls flat when you're using CP2077 at 4K as the way to test it. They're also memeing that "professionals" will find it unappealing because it doesn't do things like Blender super fast, omitting that actual professionals don't use normal cards for that but specialized ones like the Quadro.

Also, tons of reviewers love to say "compared to the 4090 it's not that great," again omitting the fact that the 7900 was supposed to compete not with the 4090 but with the 4060-4080 tier, and in that regard Nvidia lost badly. The card already fights toe to toe with the 4080 on performance and beats it on price, and it also beats any theoretical 4070, which Nvidia would have to sell at a loss, because the 4080 is barely an improvement over the 3090 and gimping the 4080 any further would only give you a 3090 again.

So Nvidia is now in a weird spot where releasing a 4070 is pointless, because the competition is already shitting on them in that segment.
 
I doubt AMD's gunning for a serious piece of the AI market. AI/ML requires the ability to churn through enormous amounts of tensor arithmetic. NVIDIA is miles ahead of AMD on this front, both in hardware and in the software stack. However, this comes at a cost, because Tensor Cores eat a lot of die space and can't really be used for anything else. The architecture of the Instinct datacenter GPUs suggests that AMD's going to head off in an orthogonal direction. Yes, they can do AI/ML, but their real strength is general compute, being able to churn through large arrays of FP32 and FP64 data much, much faster than NVIDIA's cards can. So what they're going for looks like "best in class at everything other than AI/ML, but still adequate at AI/ML."


But still, for playing around, 20 GB of VRAM is a lot, and the card should be fine.

The H100 can crunch through matrices 5x-10x faster than an MI250X at 8- to 32-bit precision. But despite being a year older and having 25% fewer transistors, the AMD card handily beats the NVIDIA card in both scalar and tensor FP64 performance, which is what you need for a lot of physics and engineering computations.

I believe AMD is betting that NVIDIA's cards are overspecialized. More companies are doing AI/ML, but these companies are also doing and will continue to do lots of traditional HPC. So it will be easier to convince them to spend millions of dollars on a massive computer that is pretty good at AI/ML, but also really good at lots of traditional workloads. The NVIDIA cards, despite the hype, aren't providing a compelling price/performance benefit over EPYC and Xeon for the kinds of workloads a lot of these companies want to do (think traditional auto/aero/marine/engine/etc companies who want to use AI/ML for design optimization, but aren't going to stop doing math-heavy simulations).
I don't think the tensor cores give Nvidia a major AI/ML advantage over AMD in any segment. It's the software and the lack of support that continue to hobble AMD. This is their latest plan to address it, AFAIK.

One would hope that AMD doesn't let Nvidia trample all over them forever once ordinary people start getting more interested in high-end GPUs for ML than for gaming... since 4K/120 is getting easy.
Thanks. You both know a lot more about this than I do.

I feel like this situation must change, because there are so many AMD cards out there now, and they are so capable in every other way (except ray tracing, in a relative sense), that support for AI/ML work must grow. Ditto 3D rendering software. I'm therefore tempted to gamble on the 7900, as I know it is "good enough" for now, and I think it will become more level with Nvidia over time. Though I've been wrong before.

Off to read some reviews and try to ferret out some pricing / availability information.
 
They're also memeing that "professionals" will find it unappealing because it doesn't do things like Blender super fast, omitting that actual professionals don't use normal cards for that but specialized ones like the Quadro.
Personally, I bought a used 2070 specifically for Blender workloads. It's not the main render machine (there is a proper render machine in the office), but it's enough to work on projects reliably, provided you're careful with the scene settings and all.
Using an AMD card limits your renderer options, as Cycles is pretty much your only choice (not that it's a bad renderer, it's actually very solid), and OptiX will save you a lot of time, which matters when time spent waiting on a test render is money.
I've probably said this before, but AMD really needs to get their shit together on compute. I don't particularly like Nvidia and prefer AMD's openness, but Nvidia GPUs are the only choice if you do compute-based workloads.
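To put the renderer point in concrete terms, this is a sketch of how you'd pick Cycles' GPU backend from Blender's Python console. OPTIX is NVIDIA-only, so on an AMD card you're down to HIP (or CPU); the exact property names are from recent Blender releases and may differ in older ones.

```python
# Sketch: enable GPU rendering in Cycles and choose the backend.
# "OPTIX" / "CUDA" are NVIDIA paths, "HIP" is the AMD path, "NONE" falls back to CPU.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # swap to "HIP" on a Radeon card
prefs.get_devices()                   # refresh the detected device list
for dev in prefs.devices:
    dev.use = True                    # tick every detected device

bpy.context.scene.cycles.device = "GPU"
```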

Regarding the latest card drop, we (consumers) are still losing. Both AMD's 7000 series and Nvidia's 4000 series are horrible value; I don't know anybody who would spend more than $1k on a GPU, regardless of whether it's the best or not.
Other mid-range cards are usually overpriced to shit (at least locally), and the barrier to entry for a respectable gaming PC is higher because of it.
I'm still fond of the day I grabbed an RX 580 for 250 dollars. That humble card managed to run most things just fine.
 
They're also memeing that "professionals" will find it unappealing because it doesn't do things like Blender super fast, omitting that actual professionals don't use normal cards for that but specialized ones like the Quadro.
Lots of pros use gaming cards. They are far more than serviceable for most tasks.

The only ones I know of that actually use Quadros are firms and colleges/universities, and in all likelihood not because the gaming cards are insufficient, but because that's what the workstation suppliers gave them. Corporate-tier margins are where it's at.

Also, I dunno how much has changed in the recent years but for the most part, just about every sizable company uses CPU rendering.
 
The only ones I know of that actually use Quadros are firms and colleges/universities, and in all likelihood not because the gaming cards are insufficient, but because that's what the workstation suppliers gave them. Corporate-tier margins are where it's at.
CAD software companies probably drive a lot of Quadro adoption as well. There's one of the big players I know of where if you call them for support, their first question is what kind of graphics card you're using. If you say GeForce or Radeon, they immediately hang up on you.

That never stopped the engineering shop I used to work at many years ago, you just had to know how to fix your own problems.
 
CAD software companies probably drive a lot of Quadro adoption as well. There's one of the big players I know of where if you call them for support, their first question is what kind of graphics card you're using. If you say GeForce or Radeon, they immediately hang up on you.

That never stopped the engineering shop I used to work at many years ago, you just had to know how to fix your own problems.
I forgot to mention this as well. Being on the supported hardware list is also the thing that pushes Quadro sales.

Apart from that, things like 10-bit color support are also limited to pro GPUs. Although pretty much everyone I know in the design space is using a MacBook Pro with a glossy laptop display.

The last true bastion of being anal about color accuracy is grading, but they use dedicated playback and capture cards to eliminate any fuckery the OS might do to the footage. And of course, who can forget the old-school post-prod guys who MUST have everything on tape.
 
Turns out I was wrong :( AMD won't be taking the efficiency crown this generation, at least not with chiplets. AIBs may end up delivering the "3 GHz" that AMD promised; TPU overclocked an ASUS card above 3 GHz and it turned out to be interesting:
[attached: TPU overclocking results chart]


A deeper pipeline, double the shader engines, shared resources between shaders, "built for 3GHz"...

:thinking:
 
Potentially dumb question: I'm eyeballing a socket 1700 motherboard and starting with an i3 because I don't have much $ to go around for a new PC.

If I upgrade to an i7 or i9 later, does my existing RAM need to change as well?
 
Potentially dumb question: I'm eyeballing a socket 1700 motherboard and starting with an i3 because I don't have much $ to go around for a new PC.

If I upgrade to an i7 or i9 later, does my existing RAM need to change as well?
Unless Intel has done something screwy with the 12th and 13th generations, no - you'll be able to re-use what you have.
 
Potentially dumb question: I'm eyeballing a socket 1700 motherboard and starting with an i3 because I don't have much $ to go around for a new PC.

If I upgrade to an i7 or i9 later, does my existing RAM need to change as well?
Just be mindful that LGA 1700 motherboards come in two memory types, DDR4 or DDR5. They aren't interchangeable, so you're going to have to pick a flavor and stick with it.
 
Looks like the 7900 XTX sold out in record time, to the dismay of Nvidia fanbois. No, the supposed "destroying" RT lead of the 4080 does not matter nearly as much as they think it does.

I'm NOT happy that $1k GPUs instantly sell out, but consoomers only have themselves to blame.
Meanwhile the 4080s are gathering dust in stores. I'm expecting Nvidia to take the L and do a price cut, or not, and just keep believing that people have 1,200 bucks for their overpriced cards.

AMD should just iron out their drivers to get the memetracing complaints out of the way, so the paid shills would have to find other reasons to call it shit, like muh coil whine.
 
Meanwhile the 4080s are gathering dust in stores. I'm expecting Nvidia to take the L and do a price cut, or not, and just keep believing that people have 1,200 bucks for their overpriced cards.

AMD should just iron out their drivers to get the memetracing complaints out of the way, so the paid shills would have to find other reasons to call it shit, like muh coil whine.
It's funny how people consider the AMD RT results shit. So apparently that means ray tracing has actually been shit this whole time on Ampere, right? Because the games haven't changed at all. It also means the 4080's RT is shit, because let's be honest, 15% is never "destroying" when it comes to GPU performance.
 
It's funny how people consider the AMD RT results shit. So apparently that means ray tracing has actually been shit this whole time on Ampere, right? Because the games haven't changed at all. It also means the 4080's RT is shit, because let's be honest, 15% is never "destroying" when it comes to GPU performance.
The problem with ray tracing is twofold: one, the devs have to actually care to implement it, and two, you're wasting resources on fancy lights that only a handful of rich people (and consoles, but eh) can use. So people keep whining about muh raytracing when, a year in, only AAA games give enough of a crap to implement it; even Linus said it was a non-issue, because even he doesn't use it or particularly care about it.

AMD should just keep ironing out those drivers and finish their AI thingy implementation, just to shit in Nvidia's bed even more. There are theories in the air that Nvidia will just cut the VRAM on the 4080, ship it as the 4070, and call it a day.
 