GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

I asked this in the No Stupid Questions thread but I was wondering if I could get a straight answer here.

I have an Nvidia graphics card in my build. It works really well, but some games aren't well optimized for it. REmake 2, for example, is completely broken with Nvidia unless you use older graphics drivers from 2018, which isn't something I want to do for just one game. At first I was planning on putting together an AMD build at some point in the future, but then I wondered if I could just buy an AMD card for my current build and switch between the two, so that I can cover my bases and not spend another $1500.

My question is, is it possible/okay to have two GPUs in one build and have them work individually? I'm not looking for Crossfire/SLI, I'm just wondering if it's a good idea to have both in the machine so that I can switch on the fly in case a game isn't optimized for one of them.

I think it can be done, but it could be a royal pain in the ass. I posted a reply in No Stupid Questions a minute ago, so I've had plenty of time to think about it: it is possible, and it should be easier than ever. I would recommend buying a cheap $50-70 AMD card and trying it out alongside the Nvidia card so that you have an idea of what you're getting into.
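If you do try running both cards in one machine, a quick sanity check is just confirming Windows enumerates both adapters and which driver version each one picked up. A minimal sketch, assuming a Windows 10 box that still ships the (deprecated) wmic tool:

```python
# Minimal sketch: list every display adapter Windows currently sees,
# along with the driver version it loaded. Assumes wmic is on PATH.
import subprocess

def list_gpus():
    out = subprocess.run(
        ["wmic", "path", "win32_VideoController", "get", "Name,DriverVersion"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Drop the header row and blank lines; one entry per adapter remains.
    return [line.strip() for line in out.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    for gpu in list_gpus():
        print(gpu)
```

If both the Nvidia and the AMD card show up with their own drivers, switching per game is mostly a matter of plugging the monitor into the card you want, and on newer Windows 10 builds you can also try setting a preferred GPU per application in the graphics settings.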

Can someone explain what makes a good GPU vs. a bad one? In terms of cores, RAM, clock speeds, and anything else that sets them apart from CPUs.

I sort of understand what they are - parallel multi-core processors that are optimized for simpler operations than a CPU - but I was a console fag growing up and never needed to learn.

They have similarities with CPUs, but a CPU can't make the very, very, very broad assumptions about what it will be processing and outputting that a GPU can, and be designed and optimized accordingly. It's a very good question, and it's got me thinking about the decisions and progress that differentiated GPUs over the last 20+ years and how it has changed; I'll write a sperg post later. One thing that affects ALL of the GPU market, including Qualcomm and Apple, is the patent situation. When Apple broke with Imagination and announced they were going to make their own GPU, Imagination responded with "we don't think you can" - they weren't calling Apple's engineers morons or incapable, they were saying Apple would run into patent issues not only with Imagination but with everyone else. That is something that differentiates GPUs from each other: if one company patents the solution to 1+1, the other guy is forced to go with 0.5*4 to get the same result, or they will get fucked by lawyers. It's not really that literal, but they do have to differentiate their hardware, so the equivalent hardware function or processing element in an AMD and an Nvidia GPU will perform differently. That's why they can't be directly compared using raw numbers.
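To make the "parallel multi-core processors optimized for simpler operations" part of the question above concrete, here's a rough sketch of the workload shape GPUs are built around: the same trivial operation applied across millions of elements at once. It's plain CPU-side Python/NumPy (assumed installed), not actually running on a GPU - it just contrasts one-at-a-time processing with bulk, throughput-style processing:

```python
# Rough illustration of the data-parallel workload GPUs are designed around.
# Runs on the CPU via NumPy; the point is the shape of the work, not the speed.
import time
import numpy as np

N = 10_000_000
a = np.random.rand(N).astype(np.float32)
b = np.random.rand(N).astype(np.float32)

# Latency-oriented style: one multiply-add per loop iteration.
t0 = time.perf_counter()
out_loop = [a[i] * b[i] + 1.0 for i in range(100_000)]  # only a slice, or it takes forever
t1 = time.perf_counter()

# Throughput-oriented style: the same simple op over the whole array in one call.
t2 = time.perf_counter()
out_vec = a * b + 1.0
t3 = time.perf_counter()

print(f"loop  : {t1 - t0:.4f} s for 100,000 elements")
print(f"vector: {t3 - t2:.4f} s for {N:,} elements")
```

A GPU pushes that idea much further: thousands of simple cores chew through the array in parallel, which is why core counts, VRAM and memory bandwidth matter more for a GPU than the single-thread cleverness a CPU needs.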
 
They have similarities with CPUs, but a CPU can't make the very, very, very broad assumptions about what it will be processing and outputting that a GPU can, and be designed and optimized accordingly. It's a very good question, and it's got me thinking about the decisions and progress that differentiated GPUs over the last 20+ years and how it has changed; I'll write a sperg post later. One thing that affects ALL of the GPU market, including Qualcomm and Apple, is the patent situation. When Apple broke with Imagination and announced they were going to make their own GPU, Imagination responded with "we don't think you can" - they weren't calling Apple's engineers morons or incapable, they were saying Apple would run into patent issues not only with Imagination but with everyone else. That is something that differentiates GPUs from each other: if one company patents the solution to 1+1, the other guy is forced to go with 0.5*4 to get the same result, or they will get fucked by lawyers. It's not really that literal, but they do have to differentiate their hardware, so the equivalent hardware function or processing element in an AMD and an Nvidia GPU will perform differently. That's why they can't be directly compared using raw numbers.
... so corporations who have the most resources and get there first reap the benefits, while 2nd place may need to use less optimal solutions, and 3rd place even less optimal, and so on and so forth?
 
... so corporations who have the most resources and get there first reap the benefits, while 2nd place may need to use less optimal solutions, and 3rd place even less optimal, and so on and so forth?

Pretty much. For example, Qualcomm's Snapdragon uses the Adreno GPU - an anagram of Radeon - which they bought from ATI (now part of AMD). ATI sold off its Radeon mobile group, which included patents/tech bought from (among others, I'm biased) Bitboys, a company that had focused on developing (and patenting) GPU technologies for mobile devices/phones years before the first iPhone came out and made mobile GPUs relevant. Qualcomm might not have used anything of what they bought; what they paid for might have been legal armor.
 
  • Informative
Reactions: ???
Can someone explain what makes a good GPU vs. a bad one? In terms of cores, RAM, clock speeds, and anything else that sets them apart from CPUs.

I sort of understand what they are - parallel multi-core processors that are optimized for simpler operations than a CPU - but I was a console fag growing up and never needed to learn.
Sometimes it's not even the hardware that makes or breaks a GPU. Drivers play a large role in what experience you'll have. My 5700 XT, for example, is currently plagued by a couple of driver issues that make it a tad too annoying to use. That's really the biggest issue of PC components vs. consoles: your experience can (and will) be inconsistent with that of others using the same hardware as you.
 
Sometimes it's not even the hardware that makes or breaks a GPU. Drivers play a large role in what experience you'll have. My 5700 XT, for example, is currently plagued by a couple of driver issues that make it a tad too annoying to use. That's really the biggest issue of PC components vs. consoles: your experience can (and will) be inconsistent with that of others using the same hardware as you.
PC has come a long way, but "just put it in and play" is still better on console, even if day-one patches have muddied the waters the last few years.
 
  • Agree
Reactions: Smaug's Smokey Hole
Sometimes it's not even the hardware that makes or breaks a GPU. Drivers play a large role in what experience you'll have. My 5700 XT, for example, is currently plagued by a couple of driver issues that make it a tad too annoying to use. That's really the biggest issue of PC components vs. consoles: your experience can (and will) be inconsistent with that of others using the same hardware as you.
You got that right. I found out recently that RE2make is totally broken on current Nvidia drivers. If you play it with updated drivers, the game's lighting breaks: all the lights appear at first but then gradually dim until you're in total darkness. That is, unless you play with drivers from 2018 - then it works fine.

PCs are far more finicky to work with than consoles. Even stuff that should realistically run on your system might not work at all, like RE2make. My system's powerful enough to handle it on medium to high settings, but that's a moot point when I can't even see anything.
 
  • Agree
Reactions: Smaug's Smokey Hole
This should be the right thread for this.

Upgraded my GPU and changed vendors from Nvidia to AMD, and I noticed something strange that might be a difference in the Nvidia/AMD drivers at some layer: certain Windows HW-accelerated GUI functions are slower to start drawing. Not by a lot - tens of milliseconds - but it is still there. It is not a caching or buffering issue, it happens every time, and it only seems to affect the time between the call and the actual start of the drawing. Weird and annoying.
 
This should be the right thread for this.

Upgraded my GPU and changed vendors from Nvidia to AMD, and I noticed something strange that might be a difference in the Nvidia/AMD drivers at some layer: certain Windows HW-accelerated GUI functions are slower to start drawing. Not by a lot - tens of milliseconds - but it is still there. It is not a caching or buffering issue, it happens every time, and it only seems to affect the time between the call and the actual start of the drawing. Weird and annoying.
I know AMD has stated that they are working on a solution to that, but basically that feature (hardware-accelerated GPU scheduling) isn't there yet on their side, so Windows is still doing the scheduling on the CPU instead of the GPU with AMD cards.
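If anyone wants to check whether hardware-accelerated GPU scheduling is actually on for their machine, here's a rough sketch - as far as I know the toggle is the HwSchMode value under the GraphicsDrivers key, but treat those names as an assumption rather than gospel:

```python
# Sketch: read the Windows hardware-accelerated GPU scheduling toggle.
# Assumption: the setting is the DWORD "HwSchMode" under
# HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers (2 = on, 1 = off).
import winreg

def hags_state():
    key_path = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "HwSchMode")
        except FileNotFoundError:
            return None  # value absent: the OS/driver doesn't expose the toggle
    return value == 2

if __name__ == "__main__":
    state = hags_state()
    print({True: "HAGS enabled", False: "HAGS disabled", None: "HAGS not exposed"}[state])
```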
 
  • Thunk-Provoking
Reactions: Smaug's Smokey Hole
Supposedly Nvidia's upcoming RTX 3090 gets 5248 CUDA cores and 24 gigs of VRAM, while the RTX 3080 gets 4352 CUDA cores and 10 gigs of VRAM.

10 gigs on the 3080 seems a bit low, while 24 gigs on the 3090 seems insanely high. Not that there's really any game that would use 10GB right now, much less 24GB, so who the fuck knows. The 3090 seems like a Titan or Quadro right out of the gate.
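For what it's worth, those capacities fall straight out of bus width and chip density: current GDDR6X comes in 8Gb (1GB) chips, each hanging off a 32-bit channel, with an optional "clamshell" mode putting two chips on a channel. A quick sketch of that arithmetic, using the rumored 320-bit/384-bit bus widths as assumptions:

```python
# Sketch: VRAM capacities implied by bus width and GDDR6X chip density.
# Assumptions: 1GB (8Gb) chips, one 32-bit channel per chip,
# clamshell mode = two chips per channel.
def capacities_gb(bus_width_bits: int, chip_gb: int = 1):
    channels = bus_width_bits // 32
    return channels * chip_gb, channels * chip_gb * 2  # normal, clamshell

for name, bus in [("rumored 3080", 320), ("rumored 3090", 384)]:
    normal, clamshell = capacities_gb(bus)
    print(f"{name}: {bus}-bit bus -> {normal}GB normal / {clamshell}GB clamshell")
```

Which works out to exactly 10GB for a 320-bit card and 24GB for a 384-bit card in clamshell, so the leak is at least internally consistent.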
 
  • Like
Reactions: Allakazam223
Supposedly Nvidia's upcoming RTX 3090 gets 5248 CUDA cores and 24 gigs of VRAM, while the RTX 3080 gets 4352 CUDA cores and 10 gigs of VRAM.

10 gigs on the 3080 seems a bit low, while 24 gigs on the 3090 seems insanely high. Not that there's really any game that would use 10GB right now, much less 24GB, so who the fuck knows. The 3090 seems like a Titan or Quadro right out of the gate.
I have a hard time believing that bullshit with how the 2000 series was such a small jump. We should feel lucky they weren't pulling this shit years ago. A top-of-the-line Nvidia card from 4.5 years ago is still in the top 10 graphics cards on the market; you couldn't say that back in 2015. By the time the 3000 series comes out it will have been 5 years since the 1080, and only then will it be slightly obsolete.
 
I have a hard time believing that bullshit with how the 2000 series was such a small jump. We should feel lucky they weren't pulling this shit years ago. A top-of-the-line Nvidia card from 4.5 years ago is still in the top 10 graphics cards on the market; you couldn't say that back in 2015. By the time the 3000 series comes out it will have been 5 years since the 1080, and only then will it be slightly obsolete.

I've said it in several threads: 1080p@60 is a solved problem, for now, in the same way audio is solved. No one cares about audio anymore; it's not something anyone has to be concerned about like it used to be.

Games are so tightly tied to consoles and what they can deliver that they need to come up with new shit, so they've been pushing 144fps and, separately from that, 4K at ultra for a while. Ultra is often a scam: run the game on low and it still looks pretty good; textures and detail will suffer, but the dynamic effects a game relies on, like light and shadow, will still be there. I think this will be repeated with raytracing down the line.
In my )))expert((( opinion, 2023-2024, a couple of years into next-gen, will be a fantastic time to upgrade using low-to-mid-range ($250-350, hopefully) RT-capable GPUs. The 1080/Ti will last for a couple of extra years on PC running at least 1080p - largely dictated by the market (look at Steam surveys of what hardware is used and the price of used hardware, then project that forward).

It's now two years per generation/architecture; it used to be 6-9 months. Power is an issue - they're really pushing the envelope on that - and what they release now on the high end will be more powerful than the consoles before they even launch, while the consoles will dictate what games actually do. High-end for PC gaming is fucking stupid.
 
  • Like
Reactions: Aberforth
Supposedly Nvidia's upcoming RTX 3090 gets 5248 CUDA cores and 24 gigs of VRAM, while the RTX 3080 gets 4352 CUDA cores and 10 gigs of VRAM.

10 gigs on the 3080 seems a bit low, while 24 gigs on the 3090 seems insanely high. Not that there's really any game that would use 10GB right now, much less 24GB, so who the fuck knows. The 3090 seems like a Titan or Quadro right out of the gate.
Red Dead 2 at 4K will happily use more than 8GB, but I agree that the 3090 might be a Titan card. Has Nvidia used an x90 moniker before? I know AMD seems to reserve those highest numbers (x95?) for dual-GPU cards.
 
I have not seen something get devalued this hard since the bolivar super strong was announced

(attached image: Eg19OSrXkAAHRQb.jpg)
 
  • Thunk-Provoking
Reactions: Smaug's Smokey Hole
I have no idea what I'm doing when it comes to GPU passthrough. My setup goes like this:

1x GeForce GTX 750 Ti (from my old build)
1x Radeon RX 580 (my current card)

Should I pass the 580 through to the VM and keep the 750 Ti for the Linux host, or vice versa?

I also know AMD cards had a kernel bug, but a search suggests it got fixed - or at least I think it did. Worst case, I know I could apply a kernel patch for it.
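Before committing either card to the VM, the first thing worth checking is how the two cards land in IOMMU groups, since whatever you pass through has to sit in a cleanly isolatable group. A rough sketch of that check (Linux host only, assuming the IOMMU is enabled so /sys/kernel/iommu_groups is populated):

```python
# Sketch: print each IOMMU group and the PCI devices in it, so you can see
# whether the GPU you want to pass through is isolated in its own group.
# Assumes a Linux host with IOMMU on (intel_iommu=on / amd_iommu=on) and lspci installed.
from pathlib import Path
import subprocess

def iommu_groups():
    root = Path("/sys/kernel/iommu_groups")
    for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
        yield group.name, [d.name for d in (group / "devices").iterdir()]

if __name__ == "__main__":
    for group, devices in iommu_groups():
        print(f"IOMMU group {group}:")
        for dev in devices:
            desc = subprocess.run(
                ["lspci", "-s", dev], capture_output=True, text=True
            ).stdout.strip()
            print(f"  {desc or dev}")
```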
 
I have not seen something get devalued this hard since the bolivar super strong was announced

View attachment 1564300

Jesus fucking christ. If half of their performance claims are even mostly true in 3/4 of the applicable areas, then we're entering one of those golden PC eras again.

@GHTD what are you doing with it and what's the problem? The large discrepancy in VRAM could mess with things - that's just me speculating, and it's probably something for the Questions thread.
 
  • Like
Reactions: Allakazam223
Jesus fucking christ. If half of their performance claims are even mostly true in 3/4 of the applicable areas, then we're entering one of those golden PC eras again.

The most surprising thing is the price cut. The xx80 series was always super expensive; now you can get twice the power of a 2080 for half the price. The 2080 Ti is dropping fast on the second-hand market, EVGA is already showing their models, and that insane promise of 360Hz gaming is making some people sweat high ray-traced sweat.
 
The most surprising thing is the price cut. The xx80 series was always super expensive; now you can get twice the power of a 2080 for half the price. The 2080 Ti is dropping fast on the second-hand market, EVGA is already showing their models, and that insane promise of 360Hz gaming is making some people sweat high ray-traced sweat.

It's too good to be true, so it's probably not true, but these things have happened in the past. I don't believe their numbers straight up - they're probably leaning heavily on RT/Tensor performance - but like I said, it's pretty amazing if they can put out a high-end card at... a 2009 price.

What interests me more is what will be available under $400 and $300. Ray tracing isn't something for sub-$200 yet, and hitting that price point means wide, soon-to-be universal adoption of it in games. 2023 is my guess for that golden egg: a sub-$200 card that is able to both support and adequately run RT-only games - console ports, in other words. Consoles will 100% drive RT adoption, and AMD's solution will likely not be all that, so low-to-mid-range PCs - those with GPUs in the xx50 to xx60 price range - will catch up, and the true low end will be able to limp behind in performance not long after.

Honestly, in 3 or so years I think many new and nice-looking games using raytracing will run on sub-$100 cards, at 1600x900 up to 1080p. Something that doesn't change is that the chips improve greatly - the GPU computes more, the VRAM is faster - but price is always an indicator of the bus width between the GPU and VRAM.

With that said, the 10GB 3080 definitely has a 320-bit bus width, and that's a nice step above mid-range; in my opinion the mid-range is usually 256-bit.
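To put numbers on that: peak VRAM bandwidth is roughly (bus width in bits / 8) x the per-pin data rate. A quick sketch, using the commonly reported 19 Gbps GDDR6X figure for the 3080 as an assumption:

```python
# Sketch: peak VRAM bandwidth from bus width and per-pin data rate.
# bandwidth (GB/s) = (bus_width_bits / 8) * data_rate_gbps
def vram_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# Assumed/reported figures, not official confirmations:
cards = {
    "RTX 3080 (320-bit, 19 Gbps GDDR6X)": (320, 19.0),
    "RTX 2080 (256-bit, 14 Gbps GDDR6)":  (256, 14.0),
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {vram_bandwidth_gbs(bus, rate):.0f} GB/s")
```

That works out to roughly 760 GB/s vs. 448 GB/s, which is why the 320-bit bus matters more than the raw 10GB number.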
 
It's too good to be true, so it's probably not true, but these things have happened in the past. I don't believe their numbers straight up - they're probably leaning heavily on RT/Tensor performance - but like I said, it's pretty amazing if they can put out a high-end card at... a 2009 price.

What interests me more is what will be available under $400 and $300. Ray tracing isn't something for sub-$200 yet, and hitting that price point means wide, soon-to-be universal adoption of it in games. 2023 is my guess for that golden egg: a sub-$200 card that is able to both support and adequately run RT-only games - console ports, in other words. Consoles will 100% drive RT adoption, and AMD's solution will likely not be all that, so low-to-mid-range PCs - those with GPUs in the xx50 to xx60 price range - will catch up, and the true low end will be able to limp behind in performance not long after.

Honestly, in 3 or so years I think many new and nice-looking games using raytracing will run on sub-$100 cards, at 1600x900 up to 1080p. Something that doesn't change is that the chips improve greatly - the GPU computes more, the VRAM is faster - but price is always an indicator of the bus width between the GPU and VRAM.

With that said, the 10GB 3080 definitely has a 320-bit bus width, and that's a nice step above mid-range; in my opinion the mid-range is usually 256-bit.

Yep, the price cut makes it too weird. Digital Foundry already has the first units and is benchmarking; waiting for Linus and the peeps of PCMR.


80% gain on performance they say...
 
Yep, the price cut makes it too weird. Digital Foundry already has the first units and is benchmarking; waiting for Linus and the peeps of PCMR.


80% gain on performance they say...

I have to watch that; it's absolutely a restricted NDA preview, though. I'm waiting for Steve Burke to present more statistics than I know what to do with.

Something I saw with the shot of the 3080 in the first few seconds of the DF video: only four outputs and no USB-C for VR.
 