GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Discrete GPUs will never go away, there's just too many orders of magnitude difference in performance. But igpus are increasingly 'good enough' for many games.

Nintendo Switch: 393.2 GFLOPS
Intel UHD 770 iGPU: 793.6 GFLOPS
Intel Iris Xe iGPU: 2.2 TFLOPS
AMD 8700G iGPU: 4.5 TFLOPS
PS5: 10.3 TFLOPS
Nvidia 3060: 12.7 TFLOPS
Nvidia 4080: 48.7 TFLOPS
You say this looking at existing technology. There was a period when we thought integrated MMUs and FPUs and sound cards were never going to match the performance of discrete solutions, but those also just got absorbed. In the long term, deeper integration between the CPU, the GPU, and system RAM is probably going to pay a lot of dividends in performance, to the point where having a discrete GPU will not be the optimal choice. We actually saw this with FPUs - the original x87 discrete FPUs were quite slow because moving stuff between the FPU registers and the x86 registers incurred a performance penalty. Moving FPU stuff onto the main CPU not only improved x87-level floating point shit but also enabled future tech like MMX.

PCIe is fast, but it's still well over an order of magnitude behind what AMD, Apple, and even Intel are cooking up long-term for integrated graphics. Our biggest issues right now are the added cost of making larger dies and thermal considerations, but the materials science for solving a lot of that is already in the pipeline, and I wouldn't be surprised if the discrete GPU as we understand it is gone by the early 2030s.
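To put rough numbers on that gap (back-of-the-envelope theoretical peaks, not benchmarks; the unified-memory figure is Apple's published M1 Max number rather than anything AMD or Intel has shipped):

# Rough comparison of theoretical peak bandwidth: a dGPU talking over PCIe vs.
# an iGPU sharing a wide on-package memory bus. Figures are per-direction link
# maximums / vendor-published numbers, before protocol overhead.

links_gbps = {
    "PCIe 4.0 x16":          16 * 1.969,   # ~2 GB/s per lane
    "PCIe 5.0 x16":          16 * 3.938,
    "M1 Max unified memory": 400.0,        # 512-bit LPDDR5-6400, Apple's figure
}

for name, bandwidth in links_gbps.items():
    print(f"{name:24s} ~{bandwidth:6.0f} GB/s")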
 
Last edited:
The thing with iGPUs, especially the modern ones, is that they LOVE having fast RAM to play with. Even early with the M1s, the 16 GB models could give 10-15% more performance than 8 GB, and part of the M2's performance boost was also due to the faster RAM, again by about 10%.

An interesting example is the recent MateBook, which this year has the 80EU Intel Xe G7, compared to the 96EU part in the 2021 MateBook. Despite being a slower GPU on paper, it actually gives a slight step up in performance, though only by about 2 to 15%. Some of that is probably down to the processor itself (i5-1240P vs i7-1165G7), but it's quite likely it's also enjoying the faster RAM (DDR4-3733 vs DDR4-3200).

Granted, you could easily argue that 15% isn't exactly fantastic in 3 years, but it's an interesting thing to see.

I'm certainly not going to say that iGPUs are going to beat a 40 or 50 series, but... they've come a pretty bloody decent way in a short period of time.

Hell, we've got iGPUs capable of hardware ray tracing now. 5 years ago, that would have been unthinkable.
 
The thing with iGPUs, especially the modern ones, is that they LOVE having fast RAM to play with. Even early with the M1s, the 16 GB models could give 10-15% more performance than 8 GB, and part of the M2's performance boost was also due to the faster RAM, again by about 10%.

An interesting example is the recent MateBook, which this year has the 80EU Intel Xe G7, compared to the 96EU part in the 2021 MateBook. Despite being a slower GPU on paper, it actually gives a slight step up in performance, though only by about 2 to 15%. Some of that is probably down to the processor itself (i5-1240P vs i7-1165G7), but it's quite likely it's also enjoying the faster RAM (DDR4-3733 vs DDR4-3200).

Granted, you could easily argue that 15% isn't exactly fantastic in 3 years, but it's an interesting thing to see.

I'm certainly not going to say that iGPUs are going to beat a 40 or 50 series, but... they've come a pretty bloody decent way in a short period of time.

Hell, we've got iGPUs capable of hardware ray tracing now. 5 years ago, that would have been unthinkable.
I will admit I chose somewhat slower RAM (DDR4-3200, 32 GB), but I'd be surprised if my 12th-gen i7 couldn't achieve some type of ray tracing. The tech is actually fairly old; I still remember them talking about the Halo 2 demo in the bonus features disc of the Halo 2 collector's edition. It used ray tracing, but it had to be scrapped because the OG Xbox couldn't handle it.

It's been a long time since that hardware came out, over 20 years. If far faster and exponentially more powerful CPUs and RAM can't handle at least a basic form of ray tracing, when it was possible back then to at least make a pre-rendered version on what were big computers at the time, I'll be shocked.
 
  • Like
Reactions: seri0us
MMUs and FPUs
Discrete solutions here were always slower because you needed additional cycles for the CPU to "talk" to them. With this old tech there were also pretty hard limits on how fast that talking could happen before all the ICs involved would just see noise; then you start needing more infrastructure, have to be careful about noise, etc. That's why discrete solutions can never be as fast as having everything on the same die. IIRC there was some clever engineering with FPUs, at least on x86, running them asynchronously to the CPU with a dedicated clock, and in 68020 designs you could use several FPUs in the same system, but you only saw such stuff in high-end custom equipment, e.g. for scientific use, if even there.

This tech was firmly planted in the 80s and didn't really exist long enough for there to be tons of examples of it; it all disappeared into the CPU fairly quickly once mass adoption of personal computers happened. The oldest FPU I own myself is from 1981 and from AMD (NMOS, needs both 5V and 12V). Sure, it existed, but you barely saw that thing in the wild outside of custom-designed hardware for specific purposes.

People still get discrete sound cards even today because they see a fancy heatsink or tons of fancy-looking capacitors and think "yeah, that's high quality". I blame the audio voodoo movement. Sound is a solved problem. To be fair, early integrated audio really was kinda noisy and crappy, though that was almost exclusively the fault of poor mainboard design.
The thing with iGPUs, especially the modern ones, is that they LOVE having fast RAM to play with.
Agreed. A lot of lower-end CPUs perform almost uselessly in cheap notebooks because they get paired with shitty RAM, often single-channel even if the SoC supports more. If you pair iGPUs with the fastest RAM you can find, they can perform surprisingly well, even low-end ones. It's all about the bandwidth with them. That's why you often can't really compare low-end craptops with the high-end "business-line" ones performance-wise, even if they might have similar SoCs.
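To put numbers on the bandwidth point, here's a minimal sketch of theoretical peak DRAM bandwidth, ignoring real-world efficiency; the configurations are generic examples, not any particular laptop:

# Theoretical peak DRAM bandwidth = channels * (bus width in bytes) * transfer rate.

def dram_bandwidth_gbs(channels: int, bus_bits: int, mega_transfers: int) -> float:
    """Peak bandwidth in GB/s for a given channel count, bus width and MT/s."""
    return channels * (bus_bits / 8) * mega_transfers / 1000

configs = {
    "single-channel DDR4-3200": (1, 64, 3200),  # what cheap notebooks often ship with
    "dual-channel DDR4-3200":   (2, 64, 3200),
    "dual-channel DDR5-5600":   (2, 64, 5600),
}

for name, cfg in configs.items():
    print(f"{name:26s} ~{dram_bandwidth_gbs(*cfg):5.1f} GB/s")
# single DDR4-3200: ~25.6, dual: ~51.2, dual DDR5-5600: ~89.6 GB/s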
 
Last edited:
People still get discrete sound cards even today because they see a fancy heatsink or tons of fancy-looking capacitors and think "yeah, that's high quality". I blame the audio voodoo movement. Sound is a solved problem. To be fair, early integrated audio really was kinda noisy and crappy, though that was almost exclusively the fault of poor mainboard design.
Nowadays enthusiasts get USB DAC/amps, since motherboard sound, while good enough for most people, won't really drive high-impedance headphones, doesn't have a top-tier DAC, and isn't completely free of electrical interference. But it's certainly not like the old days when you needed a Creative Sound Blaster.
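Rough math on the impedance point, a sketch using P = V^2 / R; 1 Vrms is an assumed ballpark for onboard audio output, and actual loudness also depends on each headphone's sensitivity:

# Power delivered into a headphone load at a fixed RMS output voltage: P = V^2 / R.
# 1 Vrms is an assumed ballpark for onboard audio; real outputs and sensitivities vary.

def power_mw(v_rms: float, impedance_ohms: float) -> float:
    return (v_rms ** 2) / impedance_ohms * 1000  # milliwatts

for ohms in (32, 150, 300, 600):
    print(f"{ohms:3d} ohm load at 1 Vrms: {power_mw(1.0, ohms):5.2f} mW")
# 32 ohm gets ~31 mW, 300 ohm only ~3.3 mW: high-impedance cans want more voltage swing.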
 
Discrete GPUs will never go away, there's just too many orders of magnitude difference in performance. But igpus are increasingly 'good enough' for many games.

Nintendo Switch: 393.2 GFLOPS
Intel UHD 770 iGPU: 793.6 GFLOPS
Intel Iris Xe iGPU: 2.2 TFLOPS
AMD 8700G iGPU: 4.5 TFLOPS
PS5: 10.3 TFLOPS
Nvidia 3060: 12.7 TFLOPS
Nvidia 4080: 48.7 TFLOPS

Consumers don't care about TFLOPS. They care about whether the toy they are buying makes pretty enough pictures to entertain them and isn't absurdly expensive. That's why the $450 PS5 outsells the $1,200 4080 by orders of magnitude. There was a time when you could not play any games at all on an iGPU. Now, all the mainstream consoles have them, and you can play most current games at 1080p on a mix of medium and low settings on an iGPU.
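For anyone wondering where headline figures like the ones quoted above come from: they're theoretical peaks, shader count x boost clock x 2 FLOPs per fused multiply-add. A quick sanity check against the RTX 3060's public specs:

# Theoretical peak FP32 throughput = shaders * boost clock (GHz) * 2 FLOPs per FMA.

def peak_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * boost_ghz * 2 / 1000

# RTX 3060: 3584 CUDA cores, ~1.78 GHz boost clock
print(f"RTX 3060 ~{peak_tflops(3584, 1.78):.1f} TFLOPS")  # ~12.8, in line with the list above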

The trend in dGPUs is for fewer and fewer to be sold every year. NVIDIA has managed to achieve revenue growth by raising prices faster than unit sales are falling, but eventually that strategy amounts to squeezing blood from a stone.
 
I will admit I chose somewhat slower RAM (DDR4-3200, 32 GB), but I'd be surprised if my 12th-gen i7 couldn't achieve some type of ray tracing. The tech is actually fairly old; I still remember them talking about the Halo 2 demo in the bonus features disc of the Halo 2 collector's edition. It used ray tracing, but it had to be scrapped because the OG Xbox couldn't handle it.

It's been a long time since that hardware came out, over 20 years. If far faster and exponentially more powerful CPUs and RAM can't handle at least a basic form of ray tracing, when it was possible back then to at least make a pre-rendered version on what were big computers at the time, I'll be shocked.
A fair point. I should say real time ray tracing.

Granted, pre-rendered's a different kettle of fish, since that's not being generated by your system. Ray tracing's been around for ages with movies, but a cinema or DVD player's not actually making it from scratch, it's showing a pre-made product.
 
Discrete GPUs will never go away, there's just too many orders of magnitude difference in performance. But igpus are increasingly 'good enough' for many games.

Nintendo Switch: 393.2 GFLOPS
Intel UHD 770 iGPU: 793.6 GFLOPS
Intel Iris Xe iGPU: 2.2 TFLOPS
AMD 8700G iGPU: 4.5 TFLOPS
PS5: 10.3 TFLOPS
Nvidia 3060: 12.7 TFLOPS
Nvidia 4080: 48.7 TFLOPS
Nvidia (because AMD is a follower, not the leader) has to hype up computationally expensive features and new targets to keep APUs and older dGPUs from catching up. Ray tracing, path tracing, 8K resolution, frame gen to get to 1000 FPS... Raster performance is already too good and will stop being the focus within a few years.

But the mainstream APUs are already in a good place, since most gamers are still using 1080p, and console gaming is locked to the Xbox Series S as the baseline for years to come. It won't stay that way forever, since the market is flooded with cheap 4K displays, but APUs can also benefit from the same upscaling techniques as dGPUs, as long as the tech isn't proprietary.

With the 8700G at $330 on AM5, it will be easy for cheaper CPU+dGPU combos to outperform it. However, that price can drop, and you are at least getting a faster CPU out of it.

The thing with iGPUs, especially the modern ones, is that they LOVE having fast RAM to play with. Even early with the M1s, the 16 GB models could give 10-15% more performance than 8 GB, and part of the M2's performance boost was also due to the faster RAM, again by about 10%.
It appears that:
  • AMD's Strix Halo will put a quad-channel APU in the hands of consumers. It will be expensive enough not to be a budget dGPU killer and can't be socketed on AM5, but it's hypothetically cheaper than a CPU+dGPU combo in gaming laptops, and it could tackle 1440p/4K.
  • Based on current leaks, AMD has no plans to bring larger L3 cache accessible to the iGPU (Infinity Cache) to mainstream APUs like Strix Point, or even Halo. Infinity Cache allowed RDNA2 dGPUs to perform well even with smaller memory buses. The iGPU can't even access the L3 cache in APUs like Phoenix (16 MiB) or Strix Point (24 MiB, maybe 16+8 split). It's for the CPU cores only.
  • Intel will introduce Adamantine, a stacked L4 cache, at some point. It was reported to be coming to Meteor Lake but that obviously never happened. The iGPU should be able to use Adamantine like VRAM, presumably with higher bandwidth than the external DRAM.
The future looks bright for APUs, but amazing advancements will be delayed for years due to cost/market concerns. They could make "mega APUs" tomorrow but nah.
 
A fair point. I should say real time ray tracing.

Granted, pre-rendered's a different kettle of fish, since that's not being generated by your system. Ray tracing's been around for ages with movies, but a cinema or DVD player's not actually making it from scratch, it's showing a pre-made product.
Agreed. Again, that Halo 2 E3 demo WAS pre-rendered by computers significantly more powerful than an OG Xbox. I was just making the point that the tech and theory behind ray tracing is old as fuck.
 
Nvidia (because AMD is a follower, not the leader) has to hype up computationally expensive features and new targets to keep APUs and older dGPUs from catching up. Ray tracing, path tracing, 8K resolution, frame gen to get to 1000 FPS... Raster performance is already too good and will stop being the focus within a few years.

Ironically, the two technologies that I expect to accelerate the decline of dGPUs, frame generation and smart upscaling, were invented by NVIDIA. DLSS isn't particularly useful on a 4090, but it's extremely useful on a 3050.
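A rough sketch of why upscaling matters more the weaker the GPU is: at the usual presets, DLSS/FSR render each axis at roughly 2/3, 0.58x, or 1/2 of the output resolution, so the card only shades a fraction of the output pixels. The scale factors below are the commonly published approximate values:

# Internal render resolution vs. output resolution for typical upscaler presets.
# Per-axis scale factors are the commonly published approximate values.

presets = {
    "Quality":     2 / 3,
    "Balanced":    0.58,
    "Performance": 0.5,
}

out_w, out_h = 2560, 1440  # 1440p output as an example

for name, scale in presets.items():
    w, h = int(out_w * scale), int(out_h * scale)
    share = (w * h) / (out_w * out_h)
    print(f"{name:12s} renders {w}x{h}  (~{share:.0%} of the output pixels)")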
 
Ironically, the two technologies that I expect to accelerate the decline of dGPUs, frame generation and smart upscaling, were invented by NVIDIA. DLSS isn't particularly useful on a 4090, but it's extremely useful on a 3050.
That's kind of why I still have a 3060 on my list. Sure, it's weaker, but DLSS can at least somewhat make up for the lack of raw processing power. Same with the 4060.
 

There's now a water cooler for the Raspberry Pi
 
Finally ordered my Wi-Fi chip after having canceled it before. Comes in February. Also got a 2 TB portable HDD. Debating whether to use it on my laptop instead of the PS4 I bought it for. Probably gonna use it on the PS4 and save for an SSD for my laptop.
 
Something worth thinking about regarding NVIDIA's GPUs being a bit stingy with VRAM... I got a Radeon 6700 XT 12 GB in part because of how much memory it had, and I believed 8 GB cards wouldn't be able to use the full might of a modern chip. I have found, pretty consistently, that nearly any game that uses over about 5 GB of VRAM with high-quality textures runs like shit. The ones I've tried are Diablo IV, Darktide, and MWII, and with the highest texture setting they have sporadic juddering and stuttering. It's pretty clear that 384 GB/s is not enough bandwidth for 12 GB to even be useful. Even at 1440p with upscaling (FSR or XeSS, doesn't matter), it's the same deal: the fps shit the bed all the time. All of NVIDIA's 30 series GPUs from the 3060 Ti up have higher bandwidth.
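For reference, that 384 GB/s figure falls straight out of the memory configuration (bus width x per-pin data rate; the 192-bit bus and 16 Gbps GDDR6 are the 6700 XT's published specs):

# VRAM bandwidth = (bus width in bytes) * effective data rate per pin.

def vram_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return (bus_bits / 8) * gbps_per_pin

# Radeon RX 6700 XT: 192-bit bus, 16 Gbps GDDR6
print(f"RX 6700 XT ~{vram_bandwidth_gbs(192, 16.0):.0f} GB/s")  # 384 GB/s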

Overall, this is a pretty disappointing piece of hardware.
 
Last edited:
It's honestly a shame that we haven't gotten a good-value APU since the 2200G. They're capable, sure; it's just that at MSRP most of them have been bad value, because for the same price (or less) you could still buy a CPU and dGPU combo and gain way more performance. The 2200G was pretty much the only Ryzen APU that actually had zero competition at its price point. There were also the Athlon APUs for under $60, but even for the time, two CPU cores just weren't enough. Maybe the story would be different if the 5300G and 8300G weren't limited to OEMs only.
 
  • Like
Reactions: Vecr
"good value" as in low end? the 5700G basically made the 5700X/5800X irrelevant for gaming and the difference isn't that big for programs either

https://www.techspot.com/review/2293-amd-ryzen-5700g/
About 10% slower than the 5800X at 1080p ultra with an RTX 3090; the review doesn't mention what memory kit was used.

I think you're missing my point. Sure, the 5700G is great value when paired with a dGPU, but I'm talking about running on the integrated graphics alone, which is supposed to be the main appeal of these APUs.
 
You always pay more for less in this space. An iGPU lets you skip the dGPU, and for some people that is exactly the draw. dGPUs are also absurdly awful at power consumption, sometimes drawing more power just displaying a simple desktop than the entire rest of the system consumes.

It'll be interesting to see how mini PCs establish themselves, though from what I've seen of them I wasn't really impressed. Cool idea, poor engineering. The cooling solutions often land somewhere between "refurbished fan from a 2008 notebook" and "lol, let thermal throttling figure that one out".
 
"good value" as in low end? the 5700G basically made the 5700X/5800X irrelevant for gaming and the difference isn't that big for programs either

https://www.techspot.com/review/2293-amd-ryzen-5700g/
About 10% slower than the 5800X at 1080p ultra with an RTX 3090; the review doesn't mention what memory kit was used.
The 5700X/5800X were already irrelevant: if you want a Zen 3 gaming PC, get a 5600(X), and if you're doing real work or need the absolute best performance, get more cores.

The processing power would be lower if AMD were actually targeting the "gaming PC as cheap as possible" demographic with these APUs. Which I assume they're not; prebuilts that can be sold as "gaming capable" are the main purpose.
 
  • Agree
Reactions: Leaded Gasoline