GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Does the “newer gen performs one tier up” adage still hold? My 6900XT runs Starfield just fine at 4K Ultra, so if it does hold, the 7800XT should perform the same and be perfectly fine for high-end 2023 gaming.

Now, true, it is probably using FSR2 to do this, but I’ve looked for the signs of upscaling and seen nothing, and this is on a 48” monitor I’m sitting quite close to, so like, does it matter if it is cheating when I can’t even tell?
Nah, my 7900XT is more or less the same as a 6950XT, maybe 10% faster in 4K gaming? Not enough to justify the markup (I spent $750 and got it with Starfield, so it was a deal for me). The 6900XT actually performs BETTER in AI-related tasks right now due to AMD dragging their feet on ROCm (less gay CUDA) support.
 
This is where we are.
[Attachments: starfieldhigh.JPG, starfieldlow.JPG, starfieldall.JPG]
 
So just how much do Radeons suck at AI? Because I tend to find them much cheaper and only slightly used, but every SD guy says to run away from them.

I fucking hate Nvidia, but they seem to be the only game in town.
 
So just how much do Radeons suck at AI? Because I tend to find them much cheaper and only slightly used, but every SD guy says to run away from them.

I fucking hate Nvidia, but they seem to be the only game in town.
Idk. It's really up to you to decide how serious you are about the stuff. People do AI stuff on AMD cards. It won't be as fast or efficient as Nvidia, but it's not like it just won't do it.

If you need the bleeding edge, then yeah. It's Nvidia. If you're like "well I'm gonna tinker with it here and there but I really don't want Nvidia" then I'm sure an AMD card is serviceable.
 
So just how much do Radeons suck at AI? Because I tend to find them much cheaper and only slightly used, but every SD guy says to run away from them.

I fucking hate Nvidia, but they seem to be the only game in town.
Just a couple of weeks ago AMD announced that they've made huge improvements for SD.
[Image: AMD's Stable Diffusion performance announcement slide]

I haven't seen any real benchmarks on it, this just popped up in my newsfeed one day.
 
So just how much do Radeons suck at AI? Because I tend to find them much cheaper and only slightly used, but every SD guy says to run away from them.

I fucking hate Nvidia, but they seem to be the only game in town.
I’ve done Stable Diffusion on my 6900XT since release and it’s worked fine. It’s slower than a comparable Nvidia card, but it works and has a decent amount of VRAM. I’d still recommend getting Nvidia if this is what you’re after, because AMD support is kind of shaky: I have to run a special ROCm build of PyTorch in Docker just to load the AI, which is a lot more hassle than the “it basically just works” of Nvidia.
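For anyone curious what that hassle actually looks like, it's roughly this (the image tag, mount path and ROCm version are just examples from memory, check AMD's and PyTorch's docs for whatever is current):

# run AMD's ROCm PyTorch container with the GPU passed through
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video \
  --ipc=host --shm-size 8G -v ~/stable-diffusion:/workspace rocm/pytorch:latest

# or, outside Docker, install the ROCm build of PyTorch from its own wheel index
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6

The Docker route is the one AMD's docs tend to push because it sidesteps version mismatches between the kernel driver, ROCm and PyTorch.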
Microsoft Olive
By the name, that’s probably a Windows exclusive. Typical. 😩
 
7800XTs actually seem to be selling very well and running out of stock at places like Newegg.
L1 Techs have a few reviews on different models and a general opinion piece on it:

Their take is that it's a pretty good new standard card. I'm not surprised it's selling well. All the people who have wanted to upgrade but couldn't or wouldn't go crazy and buy something like a 7900XT or 4080 now have something which isn't cheap but does have decent price-performance.

I doubt many 6800 owners will be upgrading, but for those who sat out the last generation (or two) it's a good prospect. Particularly as it has newer features, like AV1 encoding if you want to stream.
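And if you want to poke at the hardware AV1 encoder outside of a streaming app, a recent ffmpeg build can drive it through AMD's AMF encoder; something like this (av1_amf is the AMF encoder name, the bitrate and filenames are just placeholders):

# hardware AV1 encode on an RDNA3 card via AMD AMF
ffmpeg -i gameplay.mkv -c:v av1_amf -b:v 8M -c:a copy gameplay_av1.mkv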

So just how much do Radeons suck at AI? Because I tend to find them much cheaper and only slightly used, but every SD guy says to run away from them.

I fucking hate Nvidia, but they seem to be the only game in town.

Nvidia are still the standard but I would say AMD now have their teeth in Nvidia's heel. The improvement even just in the past six months is remarkable. They're doing what I've been saying they should for a while, which is to hire a bunch of people and get them working hard on AI. My advice depends on how serious you are. If you want to get into actual AI research, or something that absolutely needs CUDA, then it's Nvidia. If you want to dabble in AI art generation, or even more so if you want to game and do AI art generation in between, AMD is now okay for that.

Case in point:
I’ve done Stable Diffusion on my 6900XT since release and it’s worked fine. It’s slower than a comparable Nvidia card, but it works and has a decent amount of VRAM. I’d still recommend getting Nvidia if this is what you’re after, because AMD support is kind of shaky: I have to run a special ROCm build of PyTorch in Docker just to load the AI, which is a lot more hassle than the “it basically just works” of Nvidia.
This is what I was doing. In fact, @snov helped me get started. But now I'm just using DirectML (specifically I'm using ComfyUI) and producing images pretty quickly. Whilst my 7900XT probably isn't quite as fast as a like-for-like Nvidia card, it has 20GB of VRAM, which I couldn't afford with an Nvidia card, is excellent for gaming, and the software side is getting better all the time. Just a few months ago I was jumping through all manner of technical hoops to get Stable Diffusion working on Windows with AMD. A couple of weeks ago I literally just downloaded ComfyUI from its repo and typed python main.py --directml into the Windows terminal and I was up and running. I guess I already had Python installed, if you want to count that, but you'd need that with Nvidia as well.
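Roughly the whole setup, for anyone who wants to try it (this is from memory, check the ComfyUI README for the current AMD-on-Windows steps; the checkpoint folder is the standard ComfyUI one):

# grab ComfyUI and its dependencies
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt

# DirectML backend so it runs on an AMD card under Windows
pip install torch-directml

# put your SDXL checkpoint in models/checkpoints, then launch
python main.py --directml

The web UI then comes up on localhost and you build your workflow graph from there.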

I guess Nvidia is still better and it is the pro choice. But AMD are acceptable. Get a few different opinions, fwiw, but this is what I'm doing.

EDIT: I made this with the prompt 'a robot kiwi bird' and it took 34 seconds from start to finish. That's a 1024x1024 image without upscaling, and with a second-pass refinement stage. SDXL model. For me, that's fast enough.
[Attached image: robot_kiwi.png]
 
Just a couple of weeks ago AMD announced that they've made huge improvements for SD.
[Image: AMD's Stable Diffusion performance announcement slide]

I haven't seen any real benchmarks on it, this just popped up in my newsfeed one day.
Does this apply to all cards or just the 7xxx series?
I’ve done Stable Diffusion on my 6900XT since release and it’s worked fine. It’s slower than a comparable Nvidia card, but it works and has a decent amount of VRAM. I’d still recommend getting Nvidia if this is what you’re after, because AMD support is kind of shaky: I have to run a special ROCm build of PyTorch in Docker just to load the AI, which is a lot more hassle than the “it basically just works” of Nvidia.

By the name, that’s probably a Windows exclusive. Typical. 😩
Yeah, that's what concerns me. I've already seen this happen in the IoT sector with the Raspberry Pi, where those SBCs basically became the standard, so everything is tailored to them, and all the other boards, even if better priced with better specs, are either incompatible with those solutions or barely run at all with too many bugs to be considered reliable. And if it's for industrial operations you just can't risk it.
Nvidia are still the standard but I would say AMD now have their teeth in Nvidia's heel. The improvement even just in the past six months is remarkable. They're doing what I've been saying they should for a while, which is to hire a bunch of people and get them working hard on AI. My advice depends on how serious you are. If you want to get into actual AI research, or something that absolutely needs CUDA, then it's Nvidia. If you want to dabble in AI art generation, or even more so if you want to game and do AI art generation in between, AMD is now okay for that.
As fun as SD is to use, I'd rather move on to other stuff with more professional potential, like GPT alternatives such as LLaMA. Though I was just reading some stuff and it turns out LLaMA is more CPU- than GPU-dependent? Now I don't know if I'm better off with my Ryzen or should've gone with one of those used $100 Xeons, which have more cores than a redhead kid has freckles, even if the Ryzen wipes the floor with it in single-core perf.
 
Their take is that it's a pretty good new standard card. I'm not surprised it's selling well.
This is kinda baffling to me, that there is 0% performance uplift and 0% value improvement. What is even the point of releasing a new card at that point?
 
This is kinda baffling to me, that there is 0% performance uplift and 0% value improvement. What is even the point of releasing a new card at that point?
The 7700 XT was the first near-acceptable bump in new-card price/performance since the 3060 Ti almost 3 years ago, and the 7800 XT is a bit better than that.
 
As a 6800 XT owner I really don't like the 7800 XT naming. OTOH, for $500 it offers solid hardware specs that don't rely on spottily implemented software compensations to deliver.

Nvidia is just about mocking people who want VRAM by putting 16GB on the shit 4060 Ti at $500.

AMD offers 16GB on a real GPU for $500.

If gamers choose a 4060ti over the 7800xt, all hope is lost and they deserve to get bent over and anally raped with the biggest dildo Jensen can find, wrapped in a leather jacket.
 
This is kinda baffling to me, that there is 0% performance uplift and 0% value improvement. What is even the point of releasing a new card at that point?
It's probably more like a 5% uplift (I think the 6800 XT was going for $480, so almost the same value). It could have been intended to be more, but RDNA3 is a flawed architecture that did not hit its original performance targets.

Consider it a refresh, and one that can potentially become cheaper over time because the silicon seems cheaper (compare the die size of Navi 21 to the die sizes of the GCD and MCDs in Navi 32).

It was said that these cards or dies had been sitting around in warehouses for months. AMD didn't want to release them while they could continue to sell off RX 6000 cards, and might have felt forced to by Nvidia launching new products with similar problems.

These companies will launch new products even if they're almost exactly the same. I don't think the world needed an i3-1315U that is 100 MHz faster than the i3-1215U on the same silicon, or a 14600K, or AMD refreshing Cezanne mobile chips twice (5800U = 5825U = 7730U), and so on.

7700 XT is an improvement, just not worth it priced so closely to the 7800 XT.

[Chart: Tom's Hardware 1440p benchmark averages, 7700 XT vs. 6700 XT]

Here is Tom's showing the 7700 XT as 34% faster than the 6700 XT in their sample of games at 1440p, and also hitting a 60 FPS average instead of 45. The 6700 XT is going for about $330, so the 7700 XT is 36% more expensive at its $450 MSRP. So if that price drops relative to the 7800 XT, it could become a decent option. I just saw the 6800 at $400 - slightly less performance but more VRAM.
 
Imagine if the 7800XT was just named the 7700XT. Same MSRP. 56% increase in average 1440p FPS. 33% more VRAM. This is vs a 6700 XT.

Now that sounds smokin'. You could even claim more value because the corpo dick suckers love saying shit like "but muh inflation means my dommy mommy corpo needs more of my money".

Hence a $500 MSRP now is even better vs the same price a few years ago.
 
As fun as SD is to use, I'd rather move on to other stuff with more professional potential, like GPT alternatives such as LLaMA. Though I was just reading some stuff and it turns out LLaMA is more CPU- than GPU-dependent? Now I don't know if I'm better off with my Ryzen or should've gone with one of those used $100 Xeons, which have more cores than a redhead kid has freckles, even if the Ryzen wipes the floor with it in single-core perf.
I haven't played around with LLMs yet, though I hope to. But my early investigations suggest they want a lot of RAM, i.e. 30GB+. I think only a few of them can you hope to run on a normal domestic machine. Training might be out of the question. Happy to be corrected though - I believe there are some newer ones that can be run on high-spec home machines...
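The CPU-heavy thing you read about is probably llama.cpp, which runs quantized LLaMA-family models entirely on the CPU. A rough sketch of getting it going - the model filename and thread count are just examples, and a 4-bit quantized 7B model only needs somewhere around 4-6GB of RAM rather than 30GB+:

# build llama.cpp and run a quantized model on the CPU
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# -m model file, -p prompt, -n tokens to generate, -t CPU threads
./main -m models/llama-2-7b.Q4_K_M.gguf -p "Ryzen or Xeon for local LLMs?" -n 128 -t 8

It scales with threads up to a point, but memory bandwidth is usually the real bottleneck rather than raw core count, which is worth keeping in mind for the Ryzen vs old Xeon question.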

This is kinda baffling to me, that there is 0% performance uplift and 0% value improvement. What is even the point of releasing a new card at that point?
I thought I was clear. It's a big jump forward for anybody on a 5700, 580 or 480. Most people don't buy cards every year, and given the insane prices during the mining craze and the fact they hadn't sunk that much after it, there's a large number of people holding off. They hold off because there is uncertainty - will the new cards be wildly better, will they be similar, will they be cheaper, on and on. Lots of reasons to hold and wait. Now the waiting is over - people can see what the new cards are like, the pricing, etc. They have certainty - so all the people who were putting off an upgrade can now buy without feeling they're about to find out they screwed themselves by not waiting. And when they upgrade, what are they going to buy - an older generation or the newer one with a little better performance and nice new features? The latter, generally.

The thing is selling out and not because they're artificially constraining supply to get some headlines. The 7800XT hit Amazon's best seller list and that requires volume. So I don't know why you're baffled - AMD released this thing because they thought it would sell and they were right.
 
I thought I was clear. It's a big jump forward for anybody on a 5700, 580 or 480
I was on a 5700xt until just recently, and it feels underwhelming that the 7000 series doesn't beat the price/perf value the 6000s have offered for the last 6 months or so. I'm glad I didn't wait. Say hello to the new card, same as the old card.
So I don't know why you're baffled - AMD released this thing because they thought it would sell and they were right.
I have no doubt it will sell; the 6800 offers the same value proposition at present and it sells just fine. I guess I just have this old-fashioned expectation of generational improvement.
 