GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

VR is definitely still in the gimmick range now. Gaming is still just these short games you play for 30 minutes and then don't touch again. Porn has a long way to go to really be viable beyond a cool thing to try. Other than driving games, it's just not there yet.
 
I think the reason PC manufacturers just make a giant hotbox is it's cheap.
To be fair, a lot of the build-your-PC market depends on aesthetics, and most clever cooling solutions would hide the components people apparently wanna look at, besides generally just being ugly. The funnels I suggested also wouldn't really be one-size-fits-all, since mainboards differ somewhat. But yeah, there could be smarter solutions. It's the same as with notebooks and smartphones: manufacturers aren't interested in lasting, good (and yes, expensive) solutions, they're interested in shovelware and the customer buying the new shiny next year, and it shows. I could also imagine some of them panicking, since PC components are much more of a long-term investment than they used to be. Maybe we'll start seeing planned obsolescence in components like mainboards. All these performance-intensive-to-fix CPU vulnerabilities already are that, in a way.

I knocked off about 10-15C from my desktop CPU by cutting some foam. Since PCs usually sit in rooms where people are comfortable anyway, air cooling them isn't hard. ~75C under prolonged full load with a low-profile heatsink+fan on a 65W TDP SoC, and the case has no fans otherwise. At idle I even manage to drop below 30C if the room is cool enough. All you gotta do is make sure air touches the heatsink only once and then give it no way to go except out of the case, so the heatsink/fan doesn't keep recycling the hot air. That's literally all you gotta do.

I put my antique A4-5000 ITX board into my NAS system today and all is well. I only need very little performance in that particular computer (no fancy transcoding, frontends or VMs for me; all I need is nfs, wireguard, ssh and "run that torrent client", "store that file", "send that file over the network"). Using that old thing feels weird in times where the N100 exists, it's a 28nm chip from 2013 after all. Then again I did the math and, in the most optimistic scenario of numbers I could actually find, buying an N100-based system with all the fixings would take about 7 years of 24/7 usage to break even on the initial investment and actually start saving money via electricity, so it simply isn't worth it to buy anything. That also assumes the new mainboard actually uses less power at idle after all is said and done, which isn't guaranteed; once we're talking ranges of ~+/-5W, it comes down to the voltage regulators, the other components used, and the firmware programming, and things stop being straightforward. The worst offenders are the mechanical drives anyway, and a new SoC is not gonna fix that.
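If anyone wants to run their own numbers, the break-even math is just a few lines. A minimal sketch in Python, where the hardware cost, the idle-power saving and the electricity price are placeholder assumptions, not measurements:

```python
# Rough break-even estimate for replacing an old board with an N100 build.
# All inputs are illustrative assumptions - plug in your own numbers.

hardware_cost_eur = 220.0   # assumed cost of an N100 board + RAM etc.
power_saving_w = 10.0       # assumed average power saved at the wall (optimistic)
price_per_kwh_eur = 0.35    # assumed electricity price

hours_per_year = 24 * 365
kwh_saved_per_year = power_saving_w * hours_per_year / 1000
savings_per_year_eur = kwh_saved_per_year * price_per_kwh_eur

print(f"Savings per year: {savings_per_year_eur:.2f} EUR")
print(f"Break-even after: {hardware_cost_eur / savings_per_year_eur:.1f} years")
```

With those assumptions it lands around 7 years, and any extra watt the new board burns at idle pushes it out further.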
 
VR is definitely still in the gimmick range now. Gaming is still just these short games you play for 30 minutes and then don't touch again. Porn has a long way to go to really be viable beyond a cool thing to try. Other than driving games, it's just not there yet.
I feel like it won't become prevalent unless it's augmented reality stuff with a very lightweight headset. Like having massive virtual monitors for your computer or 3d elements added to your workspace.

Full VR probably won't catch on unless we figure out full immersion systems like Sword Art Online and that will be expensive.
 
Mm. Apple Vision Pro's an interesting step forward for VR/AR, or to use their term 'spatial computing', but I'm not exactly a believer.

I've been hearing rumours of Demeo coming to AVP, which could be interesting. It had a release on Mac in December, with a quickly edited post which said it was coming to other upcoming Apple platforms.
 
Honestly the cool factor is why I want the Noctua. It seems by all accounts to handle the heat well, AND it looks like a sci-fi radiator. I really can't complain too much.
I like Noctua, and I even have the D15, but as of late, they seem to be falling behind in raw performance.

Maybe in the next release cycle, they'll be the top dog again.

It is great that they support their products for a long time. According to some users, they could get new mounting kits even for the D14.
To be fair, a lot of the build-your-PC market depends on aesthetics, and most clever cooling solutions would hide the components people apparently wanna look at, besides generally just being ugly.
I hate this shit so much. I remember when RAM heatspreaders were just starting. The most unnecessary shit ever, affecting important stuff like cooling.

Then side windows became a thing which evolved into the unholy glass side panel.

Then they decided to mess with front fans like a bunch of mongoloids.

Then RGB. FUCK THIS FUCKING SHIT.
 
I like Noctua, and I even have the D15, but as of late, they seem to be falling behind in raw performance.

Maybe in the next release cycle, they'll be the top dog again.

It is great that they support their products for a long time. According to some users, they could get new mounting kits even for the D14.
I will say that their support is impressive; I can fit that cooler (which I have now) on multiple types of AMD and Intel rigs. I'm pretty satisfied with it honestly, it's built like a monster. Even if their performance is a bit behind, nothing indicates it won't work for my CPU.

Side note, my Intel 12700k was built in Illinois oddly enough. Found that cool.
 
I forgot the 12700K core count briefly, it's 8P + 4E.

It made me think of this recent leak that claims we'll see a 6P + 16E Arrow Lake die:

  • 8 P-Cores + 16 E-Cores (24 cores)
  • 6 P-Cores + 16 E-Cores (22 cores)
  • 6 P-Cores + 8 E-Cores (14 cores)
Weird if those are all desktop dies. But 6P + 8E at the low end could mean we finally see a dirt-cheap 4P + 4E i3.

I feel clairvoyant. I expected Intel to go bigger on E-cores following Alder Lake for a couple reasons. One is that any threaded application that can profitably make use of more than about 6 cores scales well enough that it will benefit more from increasing the number of threads than increasing clock speed and instruction parallelism. For an efficiently threaded application on Alder Lake, four E-cores outperform a single P-Core by about 1.4x. The second is that the trend in computing is to have more and more background processes all the time, which is specifically what E-cores were originally designed for.
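To put rough numbers on that, here's a quick sketch that just takes the "four E-cores outperform one P-core by ~1.4x" figure above at face value (so one E-core ~ 0.35 of a P-core) and compares aggregate throughput for a few hypothetical configurations on a well-threaded workload:

```python
# Toy aggregate-throughput comparison for hypothetical core configurations.
# Assumes an efficiently threaded workload and 4 E-cores ~= 1.4x one P-core.

P_CORE = 1.0
E_CORE = 1.4 / 4  # ~0.35 P-core equivalents per E-core (assumption from above)

configs = {
    "8P + 8E":  (8, 8),
    "8P + 16E": (8, 16),
    "6P + 16E": (6, 16),
}

for name, (p_cores, e_cores) in configs.items():
    throughput = p_cores * P_CORE + e_cores * E_CORE
    print(f"{name}: ~{throughput:.1f} P-core equivalents")
```

Under that assumption, each extra E-core cluster adds more multi-threaded throughput than a couple of extra P-cores would, which is exactly why I expected Intel to keep piling them on.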

However, purely opinion of course, I think Intel should stop at 8 E-cores, and the largest configuration should be 8+8. Past that, they should be increasing the size of the GPU. Now that they've largely shaken the kinks out of the Arc drivers & architecture, they could be delivering APUs that make NVIDIA's low-end GPUs irrelevant and drive them upmarket, eventually squeezing them out of the desktop space. AMD should do the same.
 
However, purely opinion of course, I think Intel should stop at 8 E-cores, and the largest configuration should be 8+8. Past that, they should be increasing the size of the GPU. Now that they've largely shaken the kinks out of the Arc drivers & architecture, they could be delivering APUs that make NVIDIA's low-end GPUs irrelevant and drive them upmarket, eventually squeezing them out of the desktop space. AMD should do the same.
Intel is likely going to go for 8 P-cores + 32 E-cores, possibly as soon as Arrow Lake Refresh. Actually: 34+ E-cores if they end up also including "LP E-cores" on desktop.

They haven't shown interest in going big on the desktop iGPU. They could make desktop APUs the same way that AMD does, putting their best mobile chip on a socket, but they haven't done that either.

I like your strategy since people may remain wary of buying low-end/mid-range Intel Arc discrete GPUs, but many will buy an Intel CPU. Perhaps Intel will reconsider the "F" models lacking an iGPU now that their desktop CPUs will be using chiplets/tiles instead of monolithic dies, and AMD is including graphics in almost all AM5 SKUs (with the exception of the Ryzen 5 7500F). They could either omit the GPU tile on some models, or use a smaller tile that is also partially disabled to provide the equivalent of low-end Intel UHD Graphics 710.

Interestingly, if Arrow Lake is similar to Meteor Lake, video decode/encode will live on the "SoC tile", not the graphics tile. All desktop CPUs could enjoy Intel Quick Sync Video regardless of iGPU presence.

If I'm not mistaken, laptops are outselling desktops about 3-to-1. That's where Intel and AMD are actually giving a damn about integrated graphics, with Meteor Lake performing more or less as well as Phoenix.

AMD's Strix Halo is now officially confirmed by a ROCm update. That chip would definitely be capable of forcing Nvidia dGPUs out of some premium laptops, assuming it gets any adoption; it needs new laptop designs built around it. Outside of laptops, Strix Halo needs to come in cheap enough to make for interesting mini PCs (competing against conventional CPU/APU + dGPU options that are around $600-1000).
 
They haven't shown interest in going big on the desktop iGPU. They could make desktop APUs the same way that AMD does, putting their best mobile chip on a socket, but they haven't done that either.

I like your strategy since people may remain wary of buying low-end/mid-range Intel Arc discrete GPUs, but many will buy an Intel CPU. Perhaps Intel will reconsider the "F" models lacking an iGPU now that their desktop CPUs will be using chiplets/tiles instead of monolithic dies, and AMD is including graphics in almost all AM5 SKUs (with the exception of the Ryzen 5 7500F). They could either omit the GPU tile on some models, or use a smaller tile that is also partially disabled to provide the equivalent of low-end Intel UHD Graphics 710.

Notably, the only growth industry in the desktop space is all-in-ones and minis, which tend to use laptop parts...which is exactly where, as you pointed out, both AMD and Intel are putting a lot more investment into iGPUs. I think the desktop space is worthwhile not so much because of the money, but because hastening the secular decline in dGPU sales will hurt NVIDIA and make them scramble more to find other sources of revenue. Basically, imagine GamersNexus doing videos like, "NVIDIA's 5060...a pointless product???" with his screwball expression on the splash image because building a PC with a 5060 costs $300 more than just getting an APU, and you're only going to see maybe 10% better frame rates.
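To make that hypothetical concrete with toy numbers (everything here is an assumed placeholder, not a benchmark or a real price):

```python
# Toy cost-per-frame comparison: hypothetical APU-only build vs. the same
# build plus a low-end dGPU. All numbers are illustrative assumptions.

apu_build_cost = 500.0      # assumed price of an APU-based PC
dgpu_premium = 300.0        # the "costs $300 more" scenario from above
apu_fps = 60.0              # assumed 1080p frame rate on the APU
dgpu_fps = apu_fps * 1.10   # "maybe 10% better frame rates"

print(f"APU build:  {apu_build_cost / apu_fps:.2f} $/fps")
print(f"dGPU build: {(apu_build_cost + dgpu_premium) / dgpu_fps:.2f} $/fps")
```

In that scenario the dGPU build costs roughly 45% more per frame, which is the kind of chart that writes the video thumbnail by itself.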
 
Would these Arrow Lake CPUs be better than an i3-12100 for a Debian media server in any way?
 
Farms and YouTube crash my graphics driver. Ctrl+Win+Shift+B fixes it, but why the fuck is it crashing on those sites specifically?

Nvidia GTX 1650.
 
Would these Arrow Lake CPUs be better than an i3-12100 for a Debian media server in any way?
I think most CPUs are probably overkill for that, so there's no point waiting until late 2024 and an expensive new DDR5-only socket to get Arrow Lake.

There was a leak that showed the Lunar Lake mobile chips getting H.266/VVC hardware decode. I haven't seen anything like that for Arrow Lake yet.

I'm predicting the low-end will be better, such as a 4+4 i3.
 
Notably, the only growth industry in the desktop space is all-in-ones and minis, which tend to use laptop parts...which is exactly where, as you pointed out, both AMD and Intel are putting a lot more investment into iGPUs. I think the desktop space is worthwhile not so much because of the money, but because hastening the secular decline in dGPU sales will hurt NVIDIA and make them scramble more to find other sources of revenue. Basically, imagine GamersNexus doing videos like, "NVIDIA's 5060...a pointless product???" with his screwball expression on the splash image because building a PC with a 5060 costs $300 more than just getting an APU, and you're only going to see maybe 10% better frame rates.
You're not wrong about iGPUs getting good. My laptop, while it may get hot, doesn't really have problems rendering games and such, and it's only an i5 with 4 cores.
 
I've lived off the iGPU in the 4650G Pro for the longest time. It wasn't quite enough for GPU-heavy AAA titles, but for most other things it was more than enough. You could tell the poorly written games by the fact that a 2D pixel-art game would suddenly gobble up 6 GB of system RAM for whatever atrocity it'd normally commit to the VRAM (16384x16384 pixel art downscaled on-the-fly to 32x32 would be my guess). Video game programming is just the bottom of the barrel these days.

I could very well imagine that dGPUs are dead for most users one or two gens down the road. I feel companies like Nvidia will shift towards making dedicated AI hardware, even more than they already do. That's something Nvidia basically has a monopoly on right now. AMD has recently woken up to the developments there (still flailing around like retards, but at least flailing). I don't know what the situation with Intel or Apple is in that regard. There's *a ton* of money being poured into AI right now.

My incredibly hot take is that AR (augmented reality) will become big, in combination with AI, and not for entertainment purposes either. The future will have jobs where you put on your AR headset and an AI watches and guides you through what to do for the next 6-8 hours, turning almost unskilled labor that might not even speak the language of the country into somewhat skilled labor on the fly. No skills necessary, everyone truly exchangeable, since nobody knows anything important anyway. You'll also be able to perfectly surveil your workers every minute of the job and sidestep a lot of privacy concerns, because it's just "computer software" and no human ever needs to see the recordings if you don't want them to. If I had to wager a guess, I think that's Meta's et al. long-term goal in investing in that technology. It will also be a lot cheaper than building humanoid robots, which are technologically much farther off from ever becoming viable. Admittedly, it's all still a ways off.
 
You're not wrong about iGPUs getting good. My laptop, while it may get hot, doesn't really have problems rendering games and such, and it's only an i5 with 4 cores.

The last two generations of consoles have iGPUs; they're just much larger than what you find on PCs. DDR5 finally has the bandwidth to sustain 1080p without much fuss, so I'm speculating that putting an iGPU of this caliber in a PC could drop the bottom out of the dGPU market.


I don't know what the situation with Intel or Apple is in that regard. There's *a ton* of money poured into AI right now.

Ponte Vecchio has been met with a big yawn, a day late and a dollar short. Intel's Gaudi accelerators have been generating more interest. They're technically not GPUs, so they've sold quite a few to the Chinese.

Apple has never shown any interest in selling HPC datacenter hardware.
 
The last two generations of consoles have iGPUs; they're just much larger than what you find on PCs. DDR5 finally has the bandwidth to sustain 1080p without much fuss, so I'm speculating that putting an iGPU of this caliber in a PC could drop the bottom out of the dGPU market
Yeah, I guess the last few consoles have had cards integrated into the board, haven't they? Ones fully capable of 4k gaming.

Edit: I have basically everything now to get my rig ready. There are still a few parts left, but they aren't necessary for boot-up.
 
Yeah, I guess the last few consoles have had cards integrated into the board, haven't they? Ones fully capable of 4k gaming.

They haven't had cards at all. The CPU and GPU are on the same chip, making the PS5 a SoC. Most people don't realize that GPUs have gotten much larger than CPUs. A 4070 die is about 3x larger than a 7800 X3D. This is the PS5 chip:


[Image: PS5 SoC die shot]

The PS5's GPU is the same architecture as the Radeon 6000 series and about 10% smaller than the same-generation 6700 XT. Now, you couldn't practically go this big for a PC SoC, because DDR5 doesn't have the bandwidth to keep a GPU that big busy. Note those GDDR6 memory interfaces: console makers know that standard PC memory isn't fast enough to keep both a high-end CPU and a midrange GPU busy. But you could go smaller - the 6600 XT is 25% smaller than a PS5, and maybe if you cut just a little more weight, you're now producing an entry-level gaming PC for $500.
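The bandwidth gap is easy to sanity-check. A minimal sketch, assuming DDR5-6000 in dual channel as a typical desktop setup and the PS5's advertised 14 Gbps GDDR6 on a 256-bit bus:

```python
# Peak memory bandwidth: dual-channel desktop DDR5 vs. the PS5's GDDR6 setup.

def peak_bandwidth_gb_s(mt_per_s, bus_width_bits):
    """Peak bandwidth in GB/s: transfers per second * bytes moved per transfer."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

ddr5_dual_channel = peak_bandwidth_gb_s(6000, 128)   # 2 x 64-bit channels
ps5_gddr6 = peak_bandwidth_gb_s(14000, 256)          # 14 Gbps on a 256-bit bus

print(f"DDR5-6000 dual channel: ~{ddr5_dual_channel:.0f} GB/s")  # ~96 GB/s
print(f"PS5 GDDR6:              ~{ps5_gddr6:.0f} GB/s")          # ~448 GB/s
```

Roughly 96 GB/s vs. 448 GB/s, which is why a PS5-sized GPU fed by plain DDR5 would spend most of its time waiting on memory.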
 
They haven't had cards at all. The CPU and GPU are on the same chip, making the PS5 a SoC. Most people don't realize that GPUs have gotten much larger than CPUs. A 4070 die is about 3x larger than a 7800 X3D. This is the PS5 chip:


[Image: PS5 SoC die shot]

The PS5's GPU is the same architecture as the Radeon 6000 series and about 10% smaller than the same-generation 6700 XT. Now, you couldn't practically go this big for a PC SoC, because DDR5 doesn't have the bandwidth to keep a GPU that big busy. Note those GDDR6 memory interfaces: console makers know that standard PC memory isn't fast enough to keep both a high-end CPU and a midrange GPU busy. But you could go smaller - the 6600 XT is 25% smaller than a PS5, and maybe if you cut just a little more weight, you're now producing an entry-level gaming PC for $500.
Wow. That's a big indicator of where the GPU market is going, then. If the iGPU can actually do 4K, that'll honestly satisfy most of the consumer market. I guess the remaining use for cards will be AI and things that really need that extra computational horsepower.
 
I guess the remaining use for cards will be AI and things that really need that extra computational horsepower

Consumer-level AI runs on the CPU. You can't assume the presence of a dGPU for a mass-market application, which is why AMD, Intel, Apple, and Qualcomm are all putting NPUs on their CPUs.

There is a fundamental rule of computer architecture that it is always more efficient to put everything on the same die if you can. In the 1980s, your floating-point unit, if you had one, was on a separate die from your CPU. Back in the day, if you wanted to do floating-point math (i.e. calculate with decimals), you had to buy a separate accelerator like this. It cost over $1000 in today's money. Today, this is on the CPU die.

[Image: math coprocessor]

If you wanted sound, you needed a card like this. Today, virtually every motherboard has integral sound. It's not on the CPU, but integral to the mobo is the next best thing.

[Image: sound card]

Any graphical output at all required a video card. This is a CGA video card, and the graphics it could output. For years now, mainstream CPUs have had integral GPUs capable of far better graphics.

[Image: CGA video card]
[Image: example of CGA graphics output]

If you wanted to use the internet, you needed a card like this. Today, pretty much every motherboard has integral ethernet that's zillions of times faster than this 14.4 modem.

[Image: 14.4k modem card]

You want a quad-core machine? Be prepared to shell out to put four Pentium CPUs on the same board. Multiple cores on the same die? Now that's madman talk!
[Image: quad-socket Pentium motherboard]

The reason I expect 3D accelerator cards to die is every single other accelerator card is dead or relegated to a meaningless niche. At some point, the tech is good enough that paying big money for an expensive add-on is just not something consumers want. The most logical place for the GPU to end up is on the CPU rather than the motherboard.
 