GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Do these work with TPM chips on board the motherboard or in the CPU? And what protections does GNU/Linux have on the bootstack?
TPM needs to be set up manually on most distros. The Gentoo wiki has good articles on it if you like non-systemd distros.
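To give an idea of what "manually" looks like in practice, here's a minimal sketch of binding a LUKS volume to the TPM with clevis, assuming a Debian-family distro and /dev/sda2 as the encrypted partition (both illustrative):

```
# install clevis plus its LUKS/TPM2/initramfs plugins
sudo apt install clevis clevis-luks clevis-tpm2 clevis-initramfs

# seal the LUKS passphrase to the TPM, bound to PCR 7 (Secure Boot state)
sudo clevis luks bind -d /dev/sda2 tpm2 '{"pcr_ids":"7"}'

# rebuild the initramfs so the automatic unlock runs at boot
sudo update-initramfs -u
```

If someone tampers with the boot chain, PCR 7 changes and the TPM refuses to unseal, so you fall back to typing the passphrase by hand.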
I have been thinking about going back to my Linux roots (pre-systemd days, so I'm sure I'm useless on it now)
OpenRC, runit, s6, Upstart, BusyBox init (rescue CDs), Epoch, and heavily modified sysvinit all work, depending on the distro and the amount of headbanging involved. Personal anecdote: OpenRC and systemd seem to be the most common, aside from Solaris being special.
I can't and do not want to speak for the early btrfs days though.
You gotta work around the quirks of the fs - that's the most important part. Like how JFS can lose files due to journal writes being postponed until another action triggers them. I think btrfs still has some RAID mode issues that aren't sorted (I don't use RAID so...)
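If you're on btrfs and want to stay ahead of those quirks, a periodic scrub plus a glance at the device stats catches most silent problems. A quick sketch, with /mnt/pool as an illustrative mount point:

```
# walk every block, verify checksums, and repair from redundant copies where possible
sudo btrfs scrub start -B /mnt/pool

# per-device error counters (read/write/corruption); anything non-zero deserves a look
sudo btrfs device stats /mnt/pool
```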
 
Hell yeah, I guess.

"Drive two 8K displays or up to four high-resolutions displays or projectors — up to 5K60 — from a single Matrox LUMA Series graphics card."

It's Intel Arc GPUs, but still.
 
It's honestly crazy to me that right now you can get a 480GB SSD for under $25 and if you spend roughly $10-15 more you can get a 1TB SSD even.
I bought a 1 TB NVMe drive the other week - fast drive from a big brand, $60. Threw the biggest Amiga software collection on there I could find, ~45 GB compressed. On a 1 TB drive, that barely matters. I won't live long enough to try all of that software. I used to have Amiga floppies when they were still the thing to have - drawers and drawers and drawers of them, plus many of those floppy disk boxes. The data contained on them was a fraction of that collection. I also had a 200 MB hard drive then that cost something like $400 in modern money. It felt gigantic and impossible to fill up.
 
Was watching Paul's Hardware and he mentioned a 6950xt was going for $600 these days. I thought that can't be right.

I did some quick searching and while I didn't find one for that price, they seem to have dropped well below the $1000 msrp. Maybe yanks will have better luck, because while $600 is still a lot of money, if you can get a nearly top of the line card for that, it might be worth the investment?

I bought a 1 TB NVMe drive the other week - fast drive from a big brand, $60. Threw the biggest Amiga software collection on there I could find, ~45 GB compressed. On a 1 TB drive, that barely matters.
I've been tempted to do something similar with a PS1, Dreamcast, or PS2 collection. I think the PS2 set runs a few TB for the whole thing. Would want it external though.
 
Maybe yanks will have better luck, because while $600 is still a lot of money, if you can get a nearly top of the line card for that, it might be worth the investment?
Newegg has one for $599.99 (ASRock Phantom Gaming OC RX 6950 XT 16 GB). Most expensive one is $1200+.

EDIT: That means we'd pay $599.99 plus tax unless you reside in a state with no sales tax (Oregon/Montana/Delaware/New Hampshire); everyone else pays 5%-9% sales tax, so it's $599.99 in select states only (tm).
 
Would you be willing to give me a few pointers to get me started on SD with an AMD card? I thought I'd made a good choice with a 20GB VRAM Radeon card, and then I realised how difficult it was to do anything AI with it. I've recently been contemplating selling it (at a loss!) to buy an Nvidia card purely so I can do SD and related things. The 4080 has much less VRAM and is a rip-off. It would be frustrating to have to swap.

I have tried. I found a tutorial which was supposed to "convert" models somehow; I don't recall it involving Docker at all. And whilst I sort of got it working, the results were greatly disappointing, and I then broke it with some sort of version issue.

I'm not a technical novice, but I don't really know how to get this working and just a few good links or the concepts would help. The tutorial I read didn't really explain everything and assumed you understood how SD and models all work. So when something went wrong, I couldn't figure it out.
Unless you are doing training to fine-tune models, you really don't need that much VRAM. The best price per GB of VRAM is a 3060 12GB if you are doing training, but you take a hit in raw it/s. A 3080 is cutting it close and it's what I use, but it's committed solely to SD and not to any monitors.

As for running it on your machine, you have two options. You can run it through a VM in Docker, or you can let it access the GPU directly via your OS. Not sure if Docker uses direct IO, so you may take a performance hit.

For Docker, all you have to do is download Docker and git, then clone
https://github.com/AbdBarho/stable-diffusion-webui-docker and run it.
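In case anyone wants the whole Docker route spelled out, it's roughly this (a sketch based on that repo's compose profiles; check its README for the current ones):

```
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker
cd stable-diffusion-webui-docker

# one-time step: pull the models/weights into the shared data volume
docker compose --profile download up --build

# launch the AUTOMATIC1111 UI profile, then browse to http://localhost:7860
docker compose --profile auto up --build
```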

For local, follow the steps in a webui repo like AUTOMATIC1111's.
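On Linux that boils down to something like the following (a sketch; Windows users run webui-user.bat from the same repo instead):

```
# needs git and a recent Python 3 with venv support
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# first run creates a venv, pulls torch and the other dependencies, then serves the UI on port 7860
./webui.sh
```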


Additional information can be found in the stable diffusion thread on kiwi farms.


Edit: late and gay, but I don't think anyone has linked the main thread for some reason.

I'd recommend ZFS, it's my favourite file system in the world.

I have a sneaking suspicion that you are running TrueNAS Scale instead of just Ubuntu or Gentoo. Am I correct?
 
Sorry, I've no idea how Shark works. You'll have to read up on it on your own.
So, my adventures in Shark. I can't emphasise enough how helpful you and others here have been in this thread. This is a bit of an overlap with the Stable Diffusion thread, but I think the focus here is still mostly on brand-vs-brand GPU, which this merely feeds into, so I'll keep it here. I'm probably nearly done on this tangent anyway.

I confirmed earlier that the RDNA3 cards aren't supported by ROCm yet, though more reading has revealed that markers for them have been showing up in the open-source repo since January. So perhaps it's plausible that it won't take so long this time to add support. @The Ugly One 's post was extremely informative on the situation though. I think I'm likely jumping ship to Nvidia, but I'm getting ahead of myself.

Shark, once understood, is relatively easy to install. In fact, it's largely a case of downloading a .exe and running it; you don't even install it beyond that. There is an install guide about downloading particular AMD drivers, but I skipped that and was fine - the regular ones now have everything needed and you don't have to go searching for betas. There are two versions of Shark: a CLI-only one, and one which incorporates either the SD UI or a clone of it (it looks slightly different, but maybe that's just configuration).

The first run is a long process. It insists on downloading the SD 2.whichever model from HuggingFace by itself, and for quite a while I could see that it was doing something by the way my CPU cores were all ramping up, but I couldn't tell what. All I know is that there is a LOT of searching for manifests, downloading JSON files, and generating all sorts of arcane ML stuff. Even on my Threadripper it was taking a quarter of an hour. Once done, it did a test run automatically and made a little cowboy picture. Blink and you miss it - it was only because I had Performance Monitor open that I saw the sudden brief GPU spike go by.

So it worked in getting my GPU to produce the pictures. I didn't take formal benchmarks but it was pretty quick. IIRC, 10-12 seconds. A lot faster than the 8-10 minutes doing it on CPU.

Once the initial building, downloading, and whatever else it is doing is done, it's quick. My first prompt on a model took around 19-20 minutes; my second, about ten seconds. The first problem is that this preparation stage applies to every new model I download (don't know about training models, if that's even possible, as I haven't got that far). So if I wanted to try out lots of different models, I'd have to go through this long waiting period for each one. I don't know if that's true for every model, or whether, when one is based on another I've already used, it remembers what it needs and the new one is onboarded faster - I'm not that far yet. Even so, if I wanted to batch-produce more than a few images from one model, Shark would still be faster than doing it on CPU. So, some limitations (I doubt they're the author's fault!) but really your only option if you're on RDNA3.

The second problem is that some of the models I tried to use just errored out. I don't know why, or whether it's a hard limit or just requires some poking around. But anyway, the gist of my experiences (and I'm happy to provide answers if someone else is trying to use AMD for SD) is that if SD is important to you, I would say buy an Nvidia card rather than AMD. You'll have a significantly easier time of it and you won't require @snov levels of knowledge to get it all working. If you DO go AMD, make sure it's not RDNA3 (unless you're reading this in the future and it has more support when you are).

So what am I going to do? Well... despite Shark being a fair attempt at a solution, I am likely to try and sell my 7900XT and get an Nvidia card. The caveat is that I don't have a lot of time to get deep into this right now, so I'll likely hold off a bit before I do that. Who knows - in a month or two, when I'm ready, AMD might have announced RDNA3 support is right around the corner and I'll just hang on. The RDNA3 cards are very good for gaming and certainly a hell of a lot better value proposition than Nvidia, imo. That's why I kept this in this thread rather than moving it over to the SD one: it's less about detailing the specifics of how I went about this and more about the outcomes for people deciding between cards based on their needs. The 7900XT/X are powerful cards, and if you want to dip your toe in the waters of SD then Shark is, imo, the way to go and will let you do so. But if you're serious about it, they're not good. You need Nvidia, or as a fallback the RDNA2 generation, not RDNA3. That's my take on it all, anyway. Hope that's useful to somebody.


Linux would be LUKS (Linux Unified Key Setup). You have to enable the kernel options for it alongside an initramfs. LVM (Logical Volume Management) is required if you have /boot on the ESP or the bootloader does not support decryption. There is TrueCrypt/VeraCrypt/tcplay compatibility support, but I do not know how good it is. Just remember that the LUKS metadata shows your cipher type (a choice of any cipher in the kernel crypto API*, or write your own), making it possible to brute-force your password if it's shit.

*AES, Anubis, CAST5/6, Twofish, Serpent, Camellia, Blowfish, DES, ShangMi (1/2), IDEA, GOST (read-only), Elliptic Curve Diffie-Hellman, EC and its variants, RSA, ElGamal (I think), DSA, etc.
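To make the metadata point concrete, here's a minimal sketch of creating a volume and then dumping what anyone holding the disk can read off the header, with /dev/sdX2 as an illustrative partition:

```
# format with an explicit cipher from the kernel crypto API
sudo cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/sdX2

# map the decrypted volume to /dev/mapper/cryptdata
sudo cryptsetup open /dev/sdX2 cryptdata

# the header is plaintext: cipher, hash, and key-slot parameters are all visible
sudo cryptsetup luksDump /dev/sdX2
```

That luksDump output is exactly why a weak passphrase is the weak link: the attacker knows precisely what to brute-force.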
You are obviously overflowing with knowledge on this stuff. Grant me your wisdom, oh beneficent one - if I wanted to store the keys on a separate device and require it for starting up my planned home-built NAS (but obviously not leave it plugged in the whole time, or it's pointless in the case of theft), what should I be looking at? Just a regular old USB drive, or something more sophisticated? I believe there are YubiKeys and presumably competitors. They'd let me lock the drive the keys are on, plug it in on the occasions I need to reboot the NAS, and hide it in a sock the rest of the time, right?

It's honestly crazy to me that right now you can get a 480GB SSD for under $25 and if you spend roughly $10-15 more you can get a 1TB SSD even.
I alternate between your viewpoint, and looking at decent drives for my planned NAS where a quality 14TB HD costs me £450. :(

Was watching Paul's Hardware and he mentioned a 6950xt was going for $600 these days. I thought that can't be right.
It's part of AMD's new marketing. They're pushing the fact that you can get 16GB for less than $600, whilst to get 16GB from Nvidia you're pushing $1100.
[attached image: amd.jpg]
And whilst yes, the 6950XT is last gen, it's still a heck of a beefy card. They're establishing the idea that 16GB is the new baseline for GPUs. Which is a bit cheeky, as a lot of people game at 1080p and don't need it. But at the same time they're not entirely wrong - 8GB is cutting it fine these days, and the intermediate 10GB and 12GB only offer a little more headroom. This is a bed of Nvidia's own making, really, for charging such crazy amounts for higher VRAM. AMD have found a number they can compete on very well and so they're trumpeting the difference. TBH, maybe I've just been frog-boiled, but compared to what else is out there, a 6950XT for $600 is looking pretty good right now. I wouldn't really want to get a new GPU with < 10GB VRAM, and 12GB is really the baseline for a new card that you expect to last.

I have a sneaking suspicion that you are running TrueNAS Scale instead of just Ubuntu or Gentoo. Am I correct?
Wrong, yet weirdly close. Gentoo used to be my distro of choice back in the day and I've considered going back to it recently, though I suspect the distro scene has evolved a lot and I might be better off picking something else these days. I have less time to evolve than I used to. And I am spec'ing up a NAS right now that I had intended to put TrueNAS on, though that was going to use one of AMD's recent APUs and not have a graphics card - I'm not planning to run Stable Diffusion on it.
 
I have a sneaking suspicion that you are running TrueNAS Scale instead of just Ubuntu or Gentoo. Am I correct?
Nah, all my computers are either macOS or NixOS.
You'll have a significantly easier time of it and you won't require @snov levels of knowledge to get it all working. If you DO go AMD, make sure it's not RDNA3 (unless you're reading this in the future and it has more support when you are).
I just want to clarify that I don't actually know all that much about these things. I'm a hobbyist who got it to work after spending some time figuring it out, that's all. People who actually know computers could do it quicker and better; my Docker setup is actually kind of janky.
 
Well... despite Shark being a fair attempt at a solution, I am likely to try and sell my 7900XT and get an Nvidia card.
Just curious, have you tried the DirectML fork of stable-diffusion-webui? That's what I messed around with on my AMD desktop before I picked up a cheap 2060 to play with. My 5700xt was several times faster than CPU, though still much slower than the 2060.
 
An option I don't think is mentioned enough is that you really don't need to buy a GPU. The novelty of Stable Diffusion runs out for most people after a week. If you are selling your GPU at a loss anyway, maybe you should just give a Lambda or AWS GPU instance a try. You would only be out about $1.50 an hour.

Installing AUTOMATIC1111's SD webui only requires a few dependencies and cloning a repo from git. You can have it up and running in 20 minutes on most machines, then just nuke the instance when you are done. Once you get it configured right the first time, keep a backup of your webui-user.sh locally and reuse it whenever you want to go back. Plus you have a lot more VRAM for training.
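The nuke-and-restore loop looks something like this (a sketch assuming an Ubuntu GPU instance; the backup path is illustrative):

```
# fresh instance: clone the webui and drop in your saved launch config
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cp ~/backups/webui-user.sh stable-diffusion-webui/

# --listen binds to 0.0.0.0 so you can reach the UI from your own machine
cd stable-diffusion-webui && ./webui.sh --listen

# when you're done, destroy the instance; only webui-user.sh needs to survive
```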

The 1x NVIDIA RTX A6000 package seems the best bang for the buck at $0.80 an hour. I'm really wondering if I fucked up buying a 3080.

 
And whilst yes, the 6950XT is last gen, it's still a heck of a beefy card. They're establishing the idea that 16GB is the new baseline for GPUs. Which is a bit cheeky, as a lot of people game at 1080p and don't need it. But at the same time they're not entirely wrong - 8GB is cutting it fine these days, and the intermediate 10GB and 12GB only offer a little more headroom. This is a bed of Nvidia's own making, really, for charging such crazy amounts for higher VRAM.

There's a big difference between the average new video card having 16 GB of VRAM and the average gamer having 16 GB of VRAM. As video card sales continue falling and the longevity of hardware keeps going up, what's new and what's actually in use will have less and less relationship. The higher your requirements, the smaller your potential market.

[attached chart: 1682947846940.png]
 
Just curious, have you tried the DirectML fork of stable-diffusion-webui? That's what I messed around with on my AMD desktop before I picked up a cheap 2060 to play with. My 5700xt was several times faster than CPU, though still much slower than the 2060.
I did come across it. I actually thought that's what Shark was, though - or is Vulkan different to DirectML? Ah, it is different, but the same principle. So there's an approach out there I still haven't tried yet. I would imagine it may be similar to Shark, so I'm less interested in it from a performance point of view and more in whether it works with the models that threw errors on Shark. Thanks, will give it a go - probably next weekend now.

I went through so many different combinations until the concepts started to click and I began to understand what PyTorch was, what ROCm was, what models were... It was easy to miss something in the Cartesian product of language versions, repos, and how-tos. Cheers.

An option I don't think is mentioned enough is that you really don't need to buy a GPU. The novelty of Stable Diffusion runs out for most people after a week. If you are selling your GPU at a loss anyway, maybe you should just give a Lambda or AWS GPU instance a try. You would only be out about $1.50 an hour.
This is a stonkingly good piece of advice. I'm not making lewds or anything dodgy, so I guess I can overcome my privacy obsession and do this in the cloud. Thanks for the specific recommendation on which package. If it's really going to be around $1.50 an hour (don't worry, I'll test that first), then for the price of electricity plus the difference between selling the old card and buying a new one, that's probably more than I'll need.

I think you're right about the novelty wearing off. I do have a couple of specific goals in mind though for a project once I find my way around the software. Depending on how well I can get it to respond to specific prompts, anyway.


There's a big difference between the average new video card having 16 GB of VRAM and the average gamer having 16 GB of VRAM. As video card sales continue falling and the longevity of hardware keeps going up, what's new and what's actually in use will have less and less relationship. The higher your requirements, the smaller your potential market.

There is a big difference between the two, but if anything I think that reinforces my point. The longer term you're in it for, the more you want to start off with a decent card that will last. Between a minor difference in speed and going from 10GB to 16GB of VRAM, I'd honestly rather have the VRAM in terms of longevity.
 
@Overly Serious Quotes/replies not working for me atm. For your last point, that has always been the case. It's only recently that I've seen people treat extra VRAM as "not worth anything" or even "fluffed-up stats"... which is really weird behavior. I'm assuming it's just cope from people who massively overspent on a slew of 8-12GB cards during the past 2 years.

VRAM has always been a major factor in GPU longevity. No way would the 1080 Ti have held on for so long if it had 8GB instead of 11GB. I'll always take beefier hardware over software solutions that are sometimes not implemented.
 
@Overly Serious Quotes/replies not working for me atm. For your last point, that has always been the case.
Yes, it has. From both of the replies to me on this, I wonder if perhaps people are reading into my posts something I don't intend. I'm not being partisan here. I'm saying AMD have a valid point in their marketing on this, even if GPU prices are still too high all round.
 
I think the problem with Nvidia/AMD is that they are each trying to make the best card they can to beat the other. The issue is that requirements aren't keeping up with performance, so cards get more expensive and more powerful for no reason - to the point where people are fine buying used cards from 8 years ago and still getting decent frames at 1080p, which is the most common resolution. There is going to come a time when Nvidia and AMD have a bloodbath with their new line-ups, and we are seeing that right now with the 4000 series on the Nvidia side. I'm sure AMD isn't far behind. $200-300 is the ideal range for a midrange graphics card at 2K or a high-end one at 1080p, and eventually the market is going to reflect that.
 
I haven't shown interest in any recently released games other than Atomic Heart and the RE4 remake. Maybe I'm just out of the loop, but it seems like the games industry is in a rut.

I just don't feel any new release justifies the expense. 4K is also still fairly expensive, and it's always a compounded expense: to game at 4K you need a good GPU, but you also need an expensive monitor, and if you have a big GPU you likely need a big PSU as well, especially with these poly-slot beasts coming out. Factor in inflation and the (IMO) uninteresting releases, and you've got all sorts of pressure to move away from gaming.

I used to be very active as a teenager. I've gone back to that, and I'll double down on it when my 2060 croaks.
 
I think the problem with Nvidia/AMD is that they are each trying to make the best card they can to beat the other. The issue is that requirements aren't keeping up with performance, so cards get more expensive and more powerful for no reason - to the point where people are fine buying used cards from 8 years ago and still getting decent frames at 1080p, which is the most common resolution. There is going to come a time when Nvidia and AMD have a bloodbath with their new line-ups, and we are seeing that right now with the 4000 series on the Nvidia side. I'm sure AMD isn't far behind. $200-300 is the ideal range for a midrange graphics card at 2K or a high-end one at 1080p, and eventually the market is going to reflect that.
This is me right now, and the fact of the matter is I have no space/interest to game in 4k, so even a 16GB card is highly overkill for my use case of light internet/retro gaming/occasional AAA at 1080p.
 
This is me right now, and the fact of the matter is I have no space/interest to game in 4k, so even a 16GB card is highly overkill for my use case of light internet/retro gaming/occasional AAA at 1080p.
I haven't even started on my Steam backlog. It's barely over 10 years old at this point, but the number of games you accumulate over time is unbelievable, considering I rarely buy stuff outside of the 2 major sales.

I also haven't parted with my 1050ti.
 