I asked this in the No Stupid Questions thread but I was wondering if I could get a straight answer here.
I have an Nvidia graphics card in my build. It works really well, but some games aren't well optimized for it. REmake 2, for example, is completely broken on Nvidia unless you roll back to graphics drivers from 2018, which isn't something I want to do for just one game. At first I was planning on putting together an AMD build at some point in the future, but then I wondered if I could just buy an AMD card for my current machine and switch between the two, so I can cover my bases without spending another $1500.
My question is: is it possible/okay to have two GPUs in one build and use them individually? I'm not looking for CrossFire/SLI; I'm just wondering whether it's a good idea to have both in the machine so I can switch on the fly in case a game isn't optimized for one of them.
I think it could be done, but it would be a royal pain in the ass. I posted a reply in No Stupid Questions a minute ago, so I've had a little time to think about it: it is possible, and it should be easier than ever. I would recommend buying a cheap $50-70 AMD card and trying it out alongside the Nvidia card so you have an idea of what you're getting into.
Can someone explain what makes a good GPU vs a bad one, in terms of cores, RAM, clock speeds, and anything else that sets them apart from CPUs?
I sort of understand what they are - parallel multi-core processors optimized for simpler operations than a CPU - but I was a console gamer growing up and never needed to learn.
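That "same simple operation across lots of data" idea is the core of it. Here's a minimal Python sketch (illustrative only, not real GPU code - the function names and the multiply-add operation are made up for the example) contrasting a CPU-style sequential loop with the GPU-style model of one instruction applied to every element in lockstep:

```python
# GPU-style data parallelism in miniature: the same simple operation
# (a multiply-add, like adjusting pixel brightness) applied to every
# element, versus a CPU-style loop handling one element at a time.

def cpu_style(pixels, gain, bias):
    # One "core" walking the data sequentially.
    out = []
    for p in pixels:
        out.append(p * gain + bias)
    return out

def gpu_style(pixels, gain, bias):
    # Conceptually, thousands of tiny cores each run the SAME
    # instruction on a different element. map() stands in for that
    # lockstep, branch-free execution model.
    return list(map(lambda p: p * gain + bias, pixels))

pixels = list(range(8))
assert cpu_style(pixels, 2, 1) == gpu_style(pixels, 2, 1)
```

Both produce identical results; the point is that the GPU-style version never makes per-element decisions, which is exactly the assumption GPU hardware is allowed to bake in.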
They have similarities with CPUs, but a CPU can't make the very broad assumptions about what it will be processing and outputting that a GPU can, and be designed and optimized accordingly. It's a good question, and you've got me thinking about the decisions and progress that have differentiated GPUs over the last 20+ years; I'll write a sperg post later.

One thing that affects the ENTIRE GPU market, including Qualcomm and Apple, is the patent situation. When Apple broke with Imagination and announced they were going to make their own GPU, Imagination responded with "we don't think you can." That wasn't calling Apple's engineers morons or incapable; they were saying Apple would run into patent issues, not just with Imagination but with everyone else. That's something that differentiates GPUs from each other: if one company patents the solution to 1+1, the other guy is forced to go with 0.5*4 to get the same result, or they'll get fucked by lawyers. It's not literally that crude, but vendors have to differentiate their hardware, so the equivalent of the same hardware function or processing element in an AMD and an Nvidia GPU will perform differently in different workloads. That's why they can't be directly compared using raw numbers.
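The "1+1 vs 0.5*4" point can be shown literally: two routines that reach the same answer through different operations, so the per-step cost (the "circuit") differs even though the results always match. This is a toy sketch with made-up function names, not anything from an actual GPU:

```python
# Two ways to double an integer: a direct multiply vs a bit shift.
# A vendor blocked from one path by a patent can take the other;
# the answers are identical, but the hardware doing the work (and
# its speed and power cost) would be different.

def double_multiply(x):
    return x * 2        # the "1 + 1" route

def double_shift(x):
    return x << 1       # the "0.5 * 4" route: same answer, different op

for x in range(10):
    assert double_multiply(x) == double_shift(x)
```

Scale that idea up to texture filtering, rasterization, or compression blocks and you get two chips that do "the same thing" with genuinely different silicon, which is why spec-sheet numbers don't line up across vendors.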