GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

So I have a friend who is claiming that my 3080 GPU will bottleneck a 7600 CPU, which I seriously doubt considering the 3080 is still pretty solid AFAIK, but what do you guys think? I do plan to upgrade the GPU next year, but I don't know.

Also, I am considering the 7600X, but would a 7800X3D be a better option for a nearly $150 premium? Or would a 7800X be a better option? Going to wait until Black Friday/Cyber Monday to buy on Amazon and see if I can get a deal.
(Sorry if this has already been discussed; if it has, I skimmed past it.)

It sounds like your friend is trying to sound smart. That CPU will have no problem keeping up with the frame rates that GPU is capable of delivering, nor will any of the other CPUs you listed. And on the flip side, that GPU will have no trouble delivering smooth frame rates and stunning visuals in any game out now or in the next five years. So you can't really go wrong. 3D V-Cache is not something I'd pay extra for unless your computer is also serving as a workstation.

GN says one thing, Ugly One another. I'm going to listen to whichever one has the better hair.

Also because my personal experience with Intel's heterogeneous architecture was that it's awful and not ready for desktop computers. Seriously, my 12900K ran better with the e-cores off than with them on; it was ridiculous. The e-cores put out ridiculous amounts of heat for what little actual work they bothered doing. I've heard that's mostly fixed now, but calling them Cinebench cores kind of aligns with my experience. Looks good in benchmarks, and technologically impressive, but Linux/Windows are not ready for it. I'd rather have chiplets and core parking. Only need one CCD (i.e., office work or gaming)? Park the other three. Running engineering software/compiling? Power up all four CCDs and go at it. Idling? Run one CCD at minimal levels.
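Not quite the same as letting the OS park cores dynamically, but you can approximate the "only wake one CCD" idea by hand with an affinity mask. A minimal Linux sketch, assuming CCD0 maps to logical CPUs 0-15 (that mapping is an assumption; check lscpu or hwloc for your actual topology):

```c
/* Sketch: confine the current process to CCD0 so the other CCDs can idle.
 * ASSUMPTION: logical CPUs 0-15 belong to CCD0; verify with lscpu/hwloc. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 16; cpu++)
        CPU_SET(cpu, &mask);                 /* CCD0's logical CPUs (assumed) */

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {  /* pid 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d restricted to CCD0\n", getpid());
    /* launch the game/office app from here; children inherit the mask */
    return 0;
}
```

The same thing from a shell is just `taskset -c 0-15 <program>`; proper core parking would let the scheduler do this on its own instead.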
Squeezing so many cores into so little die area is impressive, but given the choice I'd rather have a processor with four Zen 4c dies than one with lots of Intel's benchmark accelerators that can't even do AVX-512 (not that I ever use AVX-512, and now that Intel has undermined its status as an instruction set we can expect processors to have, I probably never will).

So what annoys me about the GN video is he says a bunch of stuff that makes no sense, like "the E-Core gets in the way" or something. He also seems to think P-cores run demanding workloads "more efficiently" than E-cores. The fundamental problem is you don't actually know what the problem is until you know what the software is doing. He doesn't figure out what the software is doing, so he doesn't have enough information to say any of the things he's trying to say. He provides enough information that, if I were in his shoes, I'd know the next things to look at to draw conclusions (probably run it in VTune and track L2 data cache misses), but he stops there. And that is what perpetually annoys me about all these reviewers - it's like they assume the software is already doing the right thing, and that how it performs on different hardware comes down to design successes or failures in the hardware.

At the risk of writing another 5000-word spergout, a good example is how the AMD X3D chips perform much better in BGIII than just about anything else. Now, the game hardware sites were jumping for joy, but this result sounded weird to me, since games are usually not memory bound, so I did a little digging and discovered BGIII has a memory leak. Even when it's sitting there doing nothing, somewhere in the code the executable is constantly allocating objects and losing track of them without deallocating the memory. So this explains why all that extra L3 cache improves performance. Larian's buggy software is constantly hitting main memory and calling unnecessary mallocs; those newly malloc'd objects eventually go unused, so they get shoved back out to RAM, where they float in a zombie-like state until you close the game. Extra L3 means they can sit around a little longer in cache before getting kicked out. But this doesn't tell you anything general about "gaming performance."
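For anyone who wants to picture it, the pattern being described is basically this (a contrived sketch, obviously not Larian's actual code):

```c
/* Contrived allocate-and-forget loop: every "frame" mallocs scratch objects,
 * frees most of them, and silently drops the rest. The leaked ones are never
 * touched again, so they get evicted from cache to RAM and sit there until
 * the process exits; more L3 just lets the churn hurt a little less. */
#include <stdlib.h>
#include <string.h>

typedef struct { char payload[256]; } Scratch;

static void simulate_frame(void) {
    for (int i = 0; i < 64; i++) {
        Scratch *s = malloc(sizeof *s);
        if (!s) return;
        memset(s, 1, sizeof *s);   /* touch it once */
        if (i % 4 != 0)
            free(s);               /* most get cleaned up... */
        /* ...every 4th one is simply forgotten: a leak */
    }
}

int main(void) {
    for (long frame = 0; frame < 1000000; frame++)
        simulate_frame();          /* leaks ~16 objects (4 KB) per frame */
    return 0;
}
```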

Now, it's absolutely true that if you ship software that's full of memory leaks, sure, it will run better on machines with larger caches and more memory bandwidth. But that's not what advanced memory technology is for. AMD didn't develop 3D V-Cache so it could wipe irresponsible developers' asses. There are lots of legitimate needs for larger L3 cache, but "I didn't fix my memory bugs before shipping" isn't one of them. Larian's software is basically shitting itself constantly, and the game sites are evaluating hardware's general fitness for gaming based on how much Larian can shit on it before it chokes, instead of calling out Larian for shitting itself.

My i9-12900 (didn't get the k because 5 GHz is a meme) does very well under Windows 11 with a game in the foreground and alt-tabbing to the browser regularly (so I can be MATI here or look at game guides or Discord or whatever). However, my 8-core Ryzen does just fine with that as well...which is why my stance on gaming CPUs is "get whatever is in your price range, it will be fine, ignore the memes and the YouTube spergs."

To be fair a lot of the problems I was having were probably because the 12900K overdosed on e-cores in a vain attempt to compete with the 5950X. All the e-cores struggling at 120W to write the system log in the background got the die really hot indeed and forced the p-cores to downclock.

I wonder if that's due to the warping issue 12th gen has. I got about +15% performance from a $10 contact frame and significantly lower temps all around.

Power consumption (base / max turbo):
i9-12900: 65 W / 202 W
i9-12900K: 125 W / 241 W

So yeah.
 
He also seems to think P-cores run demanding workloads "more efficiently" than E-cores. The fundamental problem is you don't actually know what the problem is until you know what the software is doing. He doesn't figure out what the software is doing, so he doesn't have enough information to say any of the things he's trying to say.
He could be referring to hyperthreading and tighter pipelining? "P-cores run more efficiently" seems like a pretty safe assumption to make when you look into why x86 (CISC) designs moved in that direction to begin with, so I'm sure he's guessing along the same lines I am.
He provides enough information that, if I were in his shoes, I'd know the next things to look at to draw conclusions (probably run it in VTune and track L2 data cache misses), but he stops there. And that is what perpetually annoys me about all these reviewers - it's like they assume the software is already doing the right thing, and that how it performs on different hardware comes down to design successes or failures in the hardware.
He’s a YouTube personality, you’re an engineer. I’m pretty sure he’s pointed out in the past that he moved into the “tech news” world from just being a video game enthusiast, and that he’ll ask actual engineers for help whenever he needs someone to actually understand the subject beyond the marketing blurb he got with the box. He might just not know where to turn to begin investigating cache tuning. You should bear in mind all of these tech YouTubers are more on my level than they are on yours. I’ve recently taken a few programming courses, and one thing they kept droning on about was how the compiler is smarter than we are, and as long as the code isn’t super dumb, the compiler will make it run efficiently. So I 100% buy that a guy who doesn’t even have a university degree, let alone any programming experience, would trust the software over his own knowledge, because he doesn’t have any.
What he does have is contacts at Intel, which is where he's getting this information from. It's diluted through his ignorance, but Intel's engineers are clever (duh, they're Jewish) and what they say is probably correct, even before you look at how the heterogeneous architecture isn't performing as well as it ought to in benchmarks, even recently.
My i9-12900 (didn't get the k because 5 GHz is a meme) does very well under Windows 11 with a game in the foreground and alt-tabbing to the browser regularly (so I can be MATI here or look at game guides or Discord or whatever). However, my 8-core Ryzen does just fine with that as well...which is why my stance on gaming CPUs is "get whatever is in your price range, it will be fine, ignore the memes and the YouTube spergs."
Yes, for games CPUs barely matter. You want eight cores, but beyond that almost everything games do depends on the GPU. As long as you can match the consoles in core count and clock speeds, you're good. And that's quite easy to do: consoles are essentially AMD laptop chips with beefy iGPUs.
I wonder if that's due to the warping issue 12th gen has. I got about +15% performance from a $10 contact frame and significantly lower temps all around.
Could be, but I wasn't about to pay some German idiot twenty euros to machine me an aftermarket CPU mount. It's Intel's problem and I'm blaming every bit of it on them, just as I blame AMD for the ridiculously thick AM5 lids, whether they're compatible with old coolers or not (and just to be clear, almost every single AM4 cooler used a custom backplate; the only coolers AM5 was really backwards compatible with were AMD's stock coolers, which is a silly thing to sacrifice so much thermal headroom for).
 
What are your recommended hardware requirements if someone wants to run a Llama 70B large language model in their own home, or even just a 7B model?
 
What CPUs have the most powerful integrated graphics right now?
The 7840HS, an eight-core AMD laptop processor. I have one; it's pretty sweet. Make sure you get it with 6400 MT/s memory: iGPUs are super dependent on fast memory, and the 4800 MT/s junk most laptop manufacturers use cripples it.
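For anyone curious why that matters so much, here's the rough peak-bandwidth math, assuming a standard dual-channel, 64-bit-per-channel DDR5 laptop setup:

```c
/* Peak DRAM bandwidth = transfer rate (MT/s) x bytes per transfer x channels */
#include <stdio.h>

int main(void) {
    const double bytes_per_transfer = 8.0;   /* 64-bit channel */
    const int    channels = 2;               /* typical dual-channel laptop (assumed) */
    const int    rates_mts[] = {4800, 6400};

    for (int i = 0; i < 2; i++) {
        double gb_per_s = (double)rates_mts[i] * 1e6 * bytes_per_transfer * channels / 1e9;
        printf("DDR5-%d: ~%.1f GB/s peak\n", rates_mts[i], gb_per_s);
    }
    /* ~76.8 GB/s vs ~102.4 GB/s: about a third more bandwidth for the same iGPU */
    return 0;
}
```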
 
Since we're dabbing on tech review YouTubers, what's everyone's favorite vapid catchphrase that space likes to use?

Mine is, "You're leaving performance on the table."
 
Amazon has the 12600k for $153 and the 12600kf for $140 if anyone is looking for a deal on an older Intel processor. I'm going to be slapping together a new TrueNAS or Unraid build soon, and this looks perfect.


The i7-12700K is cheap too, and so is the i5-13600KF. Almost went for one of the above, but I'd have to buy a cooler, and there aren't many deals on motherboards I like for throwing one in. By the time I get around to building it will be summer, and maybe Ryzen 7xxx CPUs and boards will get a bit softer by then.
 
He could be referring to hyperthreading and tighter pipelining? "P-cores run more efficiently" seems like a pretty safe assumption to make when you look into why x86 (CISC) designs moved in that direction to begin with, so I'm sure he's guessing along the same lines I am.

It seems to me he's jumping to conclusions about architecture based on how Intel advertises the cores, nothing more than that.

He’s a YouTube personality, you’re an engineer. I’m pretty sure he’s pointed out in the past that he moved into the “tech news” world from just being a video game enthusiast, and that he’ll ask actual engineers for help whenever he needs someone to actually understand the subject beyond the marketing blurb he got with the box. He might just not know where to turn to begin investigating cache tuning. You should bear in mind all of these tech YouTubers are more on my level than they are on yours.

The problem is that his and other reviewers' entire gig is running games at settings no sane developer would ever invest in optimizing for, and drawing all kinds of conclusions based on how games run at that extreme edge - which is precisely where optimization can matter a lot. In this case, no, Intel did not advertise Alder Lake as automatically delivering the best results when you just spawn as many threads as you have logical cores. However, many games do exactly that, even though Intel's Alder Lake dev docs explicitly warn you not to. There are also ample tools to optimize thread affinity if you need to. And why do devs largely ignore Intel's advice? Because they don't need to put any effort in to hit 100+ fps in games. They target 30-60 fps on the PS5, and the fact that it hits 100+ fps on PC with nearly zero effort from them is just gravy.
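For reference, the "spawn as many threads as you have logical cores" pattern looks roughly like this (a minimal sketch, not any particular engine's code):

```c
/* The naive "one worker per logical CPU" job-system startup. Nothing here
 * knows whether a logical CPU is a P-core, an E-core, or an SMT sibling,
 * which is exactly the situation the dev docs warn about. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *worker(void *arg) {
    /* a real engine would pull jobs off a queue here */
    (void)arg;
    return NULL;
}

int main(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);  /* logical CPUs: P + E + SMT */
    pthread_t *t = malloc(sizeof(pthread_t) * (size_t)n);

    for (long i = 0; i < n; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (long i = 0; i < n; i++)
        pthread_join(t[i], NULL);

    printf("spawned %ld workers, one per logical CPU\n", n);
    free(t);
    return 0;
}
```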

But this brings us full circle - the reality is these frame rates are already so high that it would be a massive waste of time for the game devs to optimize their code even further. The fact that Rainbow Six Siege can't hit 600 fps on the latest CPU unless thread affinity is hand-tuned by Intel just shows...Ubisoft spent zero effort hand-tuning thread affinity themselves. It doesn't show E-Cores are somehow a "failure," nor does it show that devs are "lazy," which are the two conclusions you see online the most often.

Which is why I said at the beginning that APO (Intel's Application Optimization) is a marketing gimmick designed to win pissing matches on YouTube. GN doesn't seem to see through the gimmick, though.

What CPUs have the most powerful integrated graphics right now?

Apple. On the x86 side, AMD (and it's not close).
 
Yeah, for iGPU, it's Apple, then AMD.

There have been some early benchmarks of Intel's latest iGPU catching up to the M2 and Radeon 780M, but they were being compared using two different sets of data.

In short, I think the M2 is about 10-15% faster than the 780M, and both are about 40-60% faster than Intel's current offerings.

(The M3 is still being benchmarked, and is showing some odd results. It's faster than the M2, but it's starting to hit the point where port quality affects fps more than the hardware itself.)
 
What's the most hashing power you can run on a standard 15 amp, 120V circuit without throwing the breaker?

I'm flailing like a fish out of water trying to figure out which GPUs to get and how many. It's not just about hash power per GPU; it's also how much power each individual card draws and how many dedicated power cables they need from the PSU.
 
What's the most hashing power you can run on a standard 15 amp, 120V circuit without throwing the breaker?

I'm flailing like a fish out of water trying to figure out which GPUs to get and how many. It's not just about hash power per GPU; it's also how much power each individual card draws and how many dedicated power cables they need from the PSU.
It really depends on the power efficiency of your GPUs. If you're using really old 65nm chips it will be a lot lower than if you're using 4nm chips.
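On the circuit side, the budget math itself is simple; a rough sketch (the 80% continuous-load derating, the 200 W system overhead, and the 250 W per-card figure are all assumptions to swap for your real numbers):

```c
/* Power budget for one 15 A / 120 V branch circuit. */
#include <stdio.h>

int main(void) {
    const double circuit_w    = 120.0 * 15.0;      /* 1800 W nameplate             */
    const double continuous_w = circuit_w * 0.80;  /* ~1440 W continuous (derated) */
    const double overhead_w   = 200.0;             /* CPU/board/fans/PSU loss: assumption */
    const double per_card_w   = 250.0;             /* power-limited GPU: assumption       */

    int cards = (int)((continuous_w - overhead_w) / per_card_w);
    printf("%.0f W usable -> about %d cards at %.0f W each\n",
           continuous_w - overhead_w, cards, per_card_w);
    return 0;
}
```

Whatever cards you pick, it's watts per card that the circuit caps, so undervolting or power-limiting buys you more headroom than the specific model does.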
 
(The M3 is still being benchmarked, and is showing some odd results. It's faster than the M2, but it's starting to hit the point where port quality affects fps more than the hardware itself.)
Apple also has its own local retardation of deprecating OpenGL and not supporting Vulkan in favor of its Metal API. Metal isn't bad (a few shortcomings due to it being mobile-first, but nothing really major), but almost no games support it natively other than stuff from Mac-friendly studios like Larian, so pretty much every game is bottlenecked by MoltenVK or the extremely experimental D3D-to-Metal layer Apple has in its Game Porting Toolkit.

All this talk about Apple getting serious about gaming is bullshit imo until they properly support Vulkan (and preferably also bring back OpenGL). Hell, even the pro-Apple troon marcan has called them out on this.
 
Meteor Lake's performance is disappointing to OEMs, allegedly:

Intel contact that interfaces with OEMs a lot:
Just saw your livestream, and I'd like to confirm that at least in some scenarios Meteor Lake is definitely performing below what our partners expected.

I am not sure what went wrong, it's not my department, but in the past month my days have been filled with meetings with angry OEMs asking us why Meteor Lake is barely better than Raptor Lake in their testing.
OEM partner of Intel:
In some form factors Meteor Lake IS better than Raptor Lake, but it isn't always better...and that's just not good enough. We were planning for an Alder Lake moment for 2023 Back To School sales, and instead we seem to be getting an Ice Lake moment that misses the holidays.

Your viewers need to understand why that's a problem for OEMs. Many of us don't have a stockpile of AMD CPUs ready to use as alternatives. We have to sell Intel, and we've been waiting YEARS for something exciting and new that isn't just another refresh of Alder Lake.

Now we're going to have to try to convince consumers they need some expensive "AI Laptop" this January that effectively does nothing new with tiles besides cost more than what they could buy 2 years ago.
Person from "major OEM":
In our final testing there's no appreciable CPU performance uplift with Meteor Lake over Raptor Lake, and thus you're going to see us lean into AI, Graphics, and Perf/Watt for marketing.

Look, I don't know what the exact percentage performance expectations were a year ago, but I can promise you that everyone I personally work with (including me) expected a general performance uplift to happen in addition to major efficiency improvements.

Actually, on that note, AI was only ever going to be a bonus selling point of MTL, and it's only in the past few months Intel suddenly told us to focus on that as the main selling point...
Major OEM very negative:
Meteor Lake? Don't make me roll my eyes, and don't waste your time talking about it anymore. It's a waste of sand.

Or, let's put it this way - we won't be supporting Meteor Lake in our products above what's necessary to keep a satisfactory relationship with Intel, and we will be utilizing Hawk Point and Strix Point as quickly as possible next year with as much volume as they will give us...

We see the writing on the wall, it's time to go 50/50 Intel/AMD in laptop as quickly as possible - we don't want to get screwed over like this ever again.

Swear to fuck, if either model of the Sapphire 7900 XTX doesn't get an actual price drop this weekend I will 9/11 3.0. I have been on a 5700 XT for four years waiting for GPU prices to go back to sanity, and instead all I see right now is $1000 for this generation's top of the line (more than double what I paid for the Nitro+ 5700 XT in 2019) and a blatantly obvious push to clear inventories of 6000 series cards no one wants to buy now because the 7000 series is out. It was already painful enough knowing AMD teased a 5900X3D at the start of the year only to go "lol piss off" and make 3D versions of far lesser parts instead of giving me a decent trade-in path for the 5900X I'm using now, and now I've got this hovering overhead too.
Rumor: AMD will skip competing at the high end with the RDNA4 generation, and instead bring around 7900 XT performance to the $400-600 price level, with 16 GB VRAM.

It's too bad that the 5900/5950X3D prototypes with dual cache stacks never materialized, and instead AMD might be flinging 5500X3D and 5700X3D CPUs into the third world. Learn to love your 5900X because it's probably fine.

What CPUs have the most powerful integrated graphics right now?
Outside of Apple, and on desktop sockets specifically, the Ryzen 7 5700G. Much better 8000G Phoenix desktop APUs will likely be announced at CES in January:

November 10, 2023 – GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of motherboards, graphics cards, and hardware solutions, released the latest AGESA 1.1.0.0 beta BIOS for AM5 next-gen APU support on X670, B650, and A620 motherboards. The forthcoming AM5 next-gen APU will be launched at the end of January 2024.
 
What are your recommended hardware requirements if someone wants to run a Llama 70B large language model in their own home, or even just a 7B model?
I'm looking into this myself at the moment, so I'll post back the results of my research. But my preliminary reading is that actually training an LLM is prohibitively expensive - think 96GB of VRAM as a starting point. AMD have just launched the new generation of Threadripper which, if you were going to do something like this at home, would be an excellent platform to start from:

Anyway, to be helpful rather than just answer your question (not always the same thing), I'd strongly recommend getting an account on something like https://www.runpod.io/, which is very easy to do, and spending $50-100 to rent some time on really powerful hardware. The cost will be multiple orders of magnitude lower than actually buying such hardware, and it will give you a chance to learn and accurately figure out what sort of hardware you need to do this at home. With the kind of money you'd be spending, $50 or so to make sure you do it right is well worth the investment.

EDIT: Oh, you said you just want to run it. Maybe you could get away with a 3090. In an ideal world you'd bump it up to something with even more VRAM, but I think you could get away with it. My recommendation to just try stuff out on Runpod for a fraction of the cost still stands, though. You can actually rent 3090, 4090, or bigger setups on there. If it works, get your 3090. If you need more, you'll know.
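For rough sizing, the first-order estimate for running (not training) a model is parameter count times bytes per weight, plus some overhead for the KV cache and runtime. A back-of-the-envelope sketch (the 20% overhead factor is a hand-wavy assumption, and real quantized file sizes vary):

```c
/* Back-of-the-envelope memory estimate for LLM inference. */
#include <stdio.h>

static double est_gb(double params_billion, double bits_per_weight) {
    double weights_gb = params_billion * (bits_per_weight / 8.0); /* 1B params ~ 1 GB at 8-bit */
    return weights_gb * 1.2;  /* +20% for KV cache and runtime: rough assumption */
}

int main(void) {
    printf("70B @ fp16 : ~%3.0f GB\n", est_gb(70, 16)); /* ~168 GB: multi-GPU or server territory */
    printf("70B @ 4-bit: ~%3.0f GB\n", est_gb(70, 4));  /* ~42 GB: two 24 GB cards, or RAM offload */
    printf(" 7B @ 4-bit: ~%3.0f GB\n", est_gb(7, 4));   /* ~4 GB: fits almost any recent GPU */
    return 0;
}
```

Which lines up with the "a 3090 might just about do it" advice: a 4-bit 70B wants roughly two 24 GB cards or partial offload to system RAM, while a 7B is trivial.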

idk whatever is the latest apu by amd i guess (i dont know what cpus with integrated graphics amd has been releasing for the past 3 years)
AMD (or Apple, if you can stomach macOS, which I can't). But either way, AMD have a new line of APUs right around the corner. @The Mass Shooter Ron Soye just squeaked in before me on that. Personally I'm fine with the integrated graphics on the current-gen AMD chips, but if you want to actually do gaming on them and you can wait, give it a few months. They're going to be pretty competitive. I really believe the days of the discrete consumer GPU for anything other than prosumer and high-end gaming are nearing their end.
 
Amazon has the 12600k for $153 and the 12600kf for $140 if anyone is looking for a deal on an older Intel processor. I'm going to be slapping together a new TrueNAS or Unraid build soon, and this looks perfect.
Below.
The i7-12700K is cheap too, and so is the i5-13600KF. Almost went for one of the above, but I'd have to buy a cooler, and there aren't many deals on motherboards I like for throwing one in. By the time I get around to building it will be summer, and maybe Ryzen 7xxx CPUs and boards will get a bit softer by then.
Honestly looking at that i7. Sure it's 12th gen and it gets hot, but if worst comes to worst, I'll get a liquid cooler and be done with it. It's actually in my price range, and still fucking fast. Considering I really just want to play games, which it can do out of the box with its iGPU, it'll be sufficient for now.
 
Below.

Honestly looking at that i7. Sure it's 12th gen and it gets hot, but if worst comes to worst, I'll get a liquid cooler and be done with it. It's actually in my price range, and still fucking fast. Considering I really just want to play games, which it can do out of the box with its iGPU, it'll be sufficient for now.


Should be fine as long as the VRMs on the motherboard are well made and under a good heatsink. Lots of rebates and sales right now on unpopular low-end and mid-range 12th/13th-gen-capable boards. But the not-so-good ones will take out your CPU at the very least if the VRMs go.

Overnight the i5-13600K is $20 off, which is, well, not the steep discount I was hoping for. Some Ryzen 7 bundles are dropping locally this morning, but nothing so far that will make me run around like a mad dog to get one.
 
Should be fine as long as the VRMs on the motherboard are well made and under a good heatsink. Lots of rebates and sales right now on unpopular low-end and mid-range 12th/13th-gen-capable boards. But the not-so-good ones will take out your CPU at the very least if the VRMs go.

Overnight the i5-13600K is $20 off, which is, well, not the steep discount I was hoping for. Some Ryzen 7 bundles are dropping locally this morning, but nothing so far that will make me run around like a mad dog to get one.
I've never had too many troubles with motherboards; I've stuck with ASRock for years, pretty solid. Probably will just get a Noctua air cooler and call it there. I saw the Ryzen 7 discounts too. Not bad. Price-wise they're still not beating that i7, and for me, putting together a basic rig, eh, money talks.
 
Looks like I'm moving back to team green: I was able to get a great deal on a 4090 ($1,200 USD). I'll admit I've been wanting to for a while; the 6900 XT is a great card, but with the reset bug and the issues Stable Diffusion gives me every time I try to update, it's actually pretty annoying to work with. Hopefully having the card installed won't break Wayland the way my 3080s used to (which was why I moved to AMD in the first place), but if it does I guess I could keep the Radeon and the RTX installed at the same time. I don't think that will happen, since the monitor will still be plugged into my iGPU and the Nvidia won't have any connected outputs, but you never know with Nvidia's drivers.

Anyway I now need to source a water block, because there's no way I'll be able to fit a card this long and thick into my case.
 