GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Same with AMD: people only screech about "muh competition" with Intel so they can buy Nvidia for cheaper.
A friend of a friend just built a new rig with some help from another guy who's more into this scene, and as far as I can tell there was zero consideration of an AMD card whatsoever. It wasn't "what type of GPU is good for you", it was "what type of nVidia card is good for you". And look, like I've said before, I think most people in this thread and the outer world are beyond the slap-fights we were formed in thanks to scarcity and price, but I'm still surprised when the market is treated as a monopoly versus the duopoly it actually is.
 
If they're actually nuking their dGPU effort they will also start to wind down the team updating the drivers in maybe a year. That's really bad for the consumer.
Intel will supposedly triple or quadruple the size of their mobile iGPUs within the next 3 years or so with Arrow Lake, putting the iGPU near the size of the A580. That is for laptops, but they could make a desktop APU competitor if they wanted to, and even put a big cache on it to act as VRAM. Just use the Foveros technology already demonstrated with Lakefield.

Intel can have some relevance in graphics. Or they can continue to fumble everything.
 
Intel will supposedly triple or quadruple the size of their mobile iGPUs within the next 3 years or so with Arrow Lake, putting the iGPU near the size of the A580. That is for laptops, but they could make a desktop APU competitor if they wanted to, and even put a big cache on it to act as VRAM. Just use the Foveros technology already demonstrated with Lakefield.

Intel can have some relevance in graphics. Or they can continue to fumble everything.
I'm not disagreeing with you, but the iGPU/APU is different from the dGPU, as seen by AMD's APU woes. That's why, in my scenario, if they already plan to nuke their dGPU efforts, there will soon-ish be a skeleton crew using hand-me-downs from the iGPU team to maintain the drivers. Maybe they could face a class action lawsuit if they did nothing, idk.

My point is that drivers killed S3, Matrox, PowerVR, Rendition, 3DLabs and everyone else that entered the gaming space.
 
Also not helping: people using AMD as a scapegoat to buy nVidia cards at reduced prices.

If AMD announced RDNA3 pricing that aims to undercut the 40 series, nVidia might get scared into reducing prices, whereupon customers buy nVidia GPUs. Those people never intended to buy AMD cards no matter how hard AMD undercut.

I think it's something more fundamental than that though.

You CAN set up a gaming rig with a 3050 Ti and an Intel i3 processor, play many games, and browse the Internet without much issue, but why are you cheaping out on your computer components? Surely something that you use every day (you think) deserves higher-priced components...
Maybe I'm the extreme outlier. I will never consider buying any Nvidia product except an x86 CPU if they ever make one. Not only is Nvidia ass on Linux/BSD, but I hate the way they do business as well.
 
Has anyone seen the Arc cards tested with the Windows version of DXVK (or even WineD3D)? Given that a lot of what people are complaining about seems to be bugs in the Microsoft compatibility layer they're relying on for DX10 and older, it'd be interesting to see if things might run better in CSGO and the like with the community-built compatibility layers versus Microsoft's own translation layer.
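If anyone wants to try that themselves, DXVK on Windows is basically just dropping its replacement DLLs next to the game's executable (plus having a Vulkan-capable driver installed). A rough sketch of the idea in Python; the paths and the d3d9-only choice are assumptions for a DX9 title like CSGO, so adjust for your own install:

```python
# Rough sketch: place DXVK's replacement DLLs beside a game's .exe on Windows.
# The paths below are made up -- point them at wherever you extracted the DXVK
# release and at the folder containing the game binary (x32 vs x64 must match).
import shutil
from pathlib import Path

DXVK_DIR = Path(r"C:\tools\dxvk\x64")      # hypothetical DXVK release folder
GAME_DIR = Path(r"C:\Games\SomeDX9Game")   # hypothetical game install folder

# For a D3D9 game only d3d9.dll matters; D3D10/11 titles would want
# dxgi.dll and d3d11.dll instead.
for dll in ("d3d9.dll",):
    src = DXVK_DIR / dll
    if src.is_file():
        shutil.copy2(src, GAME_DIR / dll)
        print(f"copied {dll} -> {GAME_DIR}")
    else:
        print(f"{dll} not found in {DXVK_DIR}")
```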
 
I will never consider buying any Nvidia product except an x86 CPU if they ever make one. Not only is Nvidia ass on Linux/BSD, but I hate the way they do business as well.
People sometimes contest this and say times have changed, and in a way it's true: Nvidia drivers work better than they ever did before, at least on Linux. But invariably you will run into something that doesn't work, and it'll never get fixed because Nvidia. Doesn't. Care. If you can live with that, Nvidia is quite viable on Linux nowadays, and their proprietary blob drivers aren't some stark contrast to AMD's stuff, since while AMD has open drivers, you still get *a lot* of firmware blobs. AMD may cause you fewer problems in this space, but neither is really what I'd consider "open source". Still, Nvidia might make you run straight into a wall. You've been warned, and all that.

They're pretty much the only option, though, if you're interested in machine learning. AMD's support there is utter garbage. I feel this might become more important in the future, when end-user software starts taking advantage of the GPU for AI.
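For what it's worth, the ROCm builds of PyTorch reuse the torch.cuda API, so a sanity check looks identical on either vendor; a minimal sketch, assuming PyTorch is installed:

```python
# Check which GPU backend PyTorch actually sees. The torch.cuda namespace is
# shared by CUDA and ROCm builds, so this runs unchanged on either.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("CUDA version:", torch.version.cuda)     # None on ROCm builds
    print("HIP/ROCm version:", torch.version.hip)  # None on CUDA builds
else:
    print("No usable GPU backend, CPU only.")
```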
 
Most people who are building a home PC aren't, despite being so quick to spout out "muh CUDA! Muh ML!" when questioned on their devotion to Nvidia.
The problem is that for a lot of ML you need an obnoxious amount of RAM, or it'll be either impossible or violently slow. VRAM requirements of 48-64 GB are not unheard of, which you will not find in any single consumer card for a long time; consumer cards are basically useless here. A sensible person just rents a GPU (or several) from one of the various farms these days, and those aren't consumer-line GPUs anyway.
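The arithmetic is brutal: weights alone are parameter count times bytes per parameter, before activations, KV caches, or optimizer state even enter the picture. A back-of-the-envelope sketch with made-up model sizes:

```python
# Back-of-the-envelope VRAM estimate: weights only. Activations, optimizer
# state, etc. can easily double or triple the real footprint.
def weights_vram_gib(params_billion: float, bytes_per_param: int) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for params, precision, nbytes in [(6, "fp16", 2), (13, "fp16", 2), (30, "fp16", 2)]:
    gib = weights_vram_gib(params, nbytes)
    print(f"{params}B params @ {precision}: ~{gib:.0f} GiB for weights alone")
# Around 13B params in fp16 you've already spent a 24 GB card's entire budget
# on weights, which is why 48-64 GB class hardware gets quoted so often.
```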

That said, there are things like Stable Diffusion now which run just fine even on mid-level cards, and which I could totally see being built into future video games and applications like Krita/GIMP. As things are now, it's Nvidia or bust there.
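For reference, running it locally really is only a few lines with Hugging Face's diffusers library; a sketch assuming the runwayml/stable-diffusion-v1-5 checkpoint, an Nvidia card, and fp16 to keep mid-range VRAM happy:

```python
# Minimal Stable Diffusion run via the diffusers library.
# Assumes an Nvidia GPU; fp16 keeps the v1.5 checkpoint within roughly
# 4-6 GB of VRAM, i.e. mid-range card territory.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("oil painting of an old-school computer shop").images[0]
image.save("out.png")
```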
 
People sometimes contest this and say times have changed, and in a way it's true: Nvidia drivers work better than they ever did before, at least on Linux. But invariably you will run into something that doesn't work, and it'll never get fixed because Nvidia. Doesn't. Care. If you can live with that, Nvidia is quite viable on Linux nowadays, and their proprietary blob drivers aren't some stark contrast to AMD's stuff, since while AMD has open drivers, you still get *a lot* of firmware blobs. AMD may cause you fewer problems in this space, but neither is really what I'd consider "open source". Still, Nvidia might make you run straight into a wall. You've been warned, and all that.

They're pretty much the only option, though, if you're interested in machine learning. AMD's support there is utter garbage. I feel this might become more important in the future, when end-user software starts taking advantage of the GPU for AI.
Thankfully I have zero interest in machine learning. ML is a bit of a joke to be honest. All those stable diffusion and GPT threads on 4chan cemented my opinion that ML is largely just fraud. I am not impressed that the most sophisticated ML models are only able to copy Wikipedia articles in response to a query.
 

The Ryzen 7000 iGPU can be overclocked to 3 GHz. Being on 6nm instead of 7nm probably helps a little, and it's just 2 CUs.

If Zen 4's clock speeds are anything to go by, RDNA 3 on 5nm could reach very high clocks. 4 GHz has been mentioned.

The problem is that for a lot of ML you need an obnoxious amount of RAM, or it'll be either impossible or violently slow. VRAM requirements of 48-64 GB are not unheard of, which you will not find in any single consumer card for a long time; consumer cards are basically useless here. A sensible person just rents a GPU (or several) from one of the various farms these days, and those aren't consumer-line GPUs anyway.

That said, there are things like Stable Diffusion now which run just fine even on mid-level cards, and which I could totally see being built into future video games and applications like Krita/GIMP. As things are now, it's Nvidia or bust there.
48 GB could come sooner rather than later if Nvidia makes a Titan and doubles the 24 GB of the 3090/4090. We'll see if AMD gets off their ass and puts 32 GB in a card. I think they are essentially betting on CUDA translation to make themselves relevant in compute/ML.
 
48 GB could come sooner rather than later if Nvidia makes a Titan and doubles the 24 GB of the 3090/4090. We'll see if AMD gets off their ass and puts 32 GB in a card. I think they are essentially betting on CUDA translation to make themselves relevant in compute/ML.
Betting on CUDA translation is pretty risky when the superior hardware already exists. I've been loosely following AMD's ROCm project, which essentially lets you use AMD GPU acceleration for ML and rendering tasks on Linux, and while it has made significant headway, it still isn't ready for prime time. Look at these Blender render benchmarks of CUDA vs ROCm.


[rocm.jpg: Blender render benchmark chart, CUDA vs. ROCm]
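For anyone who wants to reproduce numbers like that, Cycles can be pointed at either backend from a headless script; a sketch of the idea, assuming Blender 3.x's Python API and something like blender --background scene.blend --python bench.py:

```python
# Force Cycles onto a specific GPU backend for a headless render benchmark.
# Assumes Blender 3.x; swap "HIP" (ROCm) for "CUDA" to compare vendors.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"   # or "CUDA" on an Nvidia card
prefs.get_devices()                 # refresh the detected device list
for dev in prefs.devices:
    dev.use = True                  # enable every detected device

scene = bpy.context.scene
scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)  # render the loaded scene to its output path
```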
 
Don't know what this guy's deal is but I like the cut of his jib and hope no one will pop into the thread posting about how he's a fake and a liar and a shill. I just want to enjoy the idea that this long-haired oldschool computer shop dude is on the level.

Also, love him all but calling nVidia a bunch of cocksuckers for releasing the Big Boi before the lower models and then wrapping up by saying, hey, check out the older lines of cards, they're cheaper now...
 
Don't know what this guy's deal is but I like the cut of his jib and hope no one will pop into the thread posting about how he's a fake and a liar and a shill. I just want to enjoy the idea that this long-haired oldschool computer shop dude is on the level.

Also, love him all but calling nVidia a bunch of cocksuckers for releasing the Big Boi before the lower models and then wrapping up by saying, hey, check out the older lines of cards, they're cheaper now...
Tbf Nvidia are a bunch of cocksuckers.
 
What's the best current PSU? I've seen the Corsair RM1000x recommended a lot, but reviews complain about how stiff the cables are, with some even calling them cheap.
 
What's the best current PSU? I've seen the Corsair RM1000x recommended a lot, but reviews complain about how stiff the cables are, with some even calling them cheap.
You can refer to https://cultists.network/140/psu-tier-list/ generally for "is this PSU good?" type questions. Then just look for something that's the appropriate wattage for whatever you're putting together.
 