GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

So stupid question: I bought a 550W PSU but the 6700XT says the minimum requirement is 650W. PCpartbuilder says I'm good but should I still get a bigger PSU?
 
So stupid question: I bought a 550W PSU but the 6700XT says the minimum requirement is 650W. PCpartbuilder says I'm good but should I still get a bigger PSU?
Yep. Chances are your computer will average around 300W during use, but brief transient spikes can surge well past that, sometimes north of 1000W on high-end cards. PSUs are designed to handle these surges, but bigger PSUs have more of a buffer, which is why the recommendations are generally much higher than the sustained requirements.

What can happen with an undersized PSU is that a spike draws more current than the capacitors in the power supply can buffer, voltage sags, and you get a brownout, wreaking all sorts of havoc. If you're lucky the system just crashes. If you're unlucky, data in flight gets corrupted on its way to disk and you can trash the whole install, forcing a reinstall.

Power supply efficiency also follows a curve: the PSU is most efficient around the mid-point of its rated capacity, and drawing too close to max power uses more electricity and dumps the difference as waste heat.
Kilowatt PSUs like the be quiet! Pure Power are cheap enough now that I would basically never go smaller. More capacity than you strictly need means the system will be more stable, even with graphics cards that have ridiculous transient surges (cough Nvidia cough), and cooling (and therefore noise) will also be better.
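If you want to sanity-check the numbers yourself, here's a rough back-of-the-envelope sketch. The 230W figure is the 6700 XT's board power; the CPU and "everything else" numbers are assumptions picked for illustration, so plug in your own parts' specs:

```c
/* Back-of-the-envelope PSU sizing. All figures are illustrative
 * assumptions, not measurements -- substitute your own parts' specs. */
#include <stdio.h>

int main(void) {
    double gpu_tbp = 230.0;  /* 6700 XT total board power */
    double cpu_tdp = 125.0;  /* assumed mid-range CPU at full load */
    double rest    = 75.0;   /* rough guess: board, RAM, drives, fans */
    double sustained = gpu_tbp + cpu_tdp + rest;

    /* GPU transient spikes can briefly exceed sustained draw; a 1.5-2x
     * multiplier is a common rule of thumb for modern cards. */
    double worst_case = gpu_tbp * 1.8 + cpu_tdp + rest;

    printf("sustained draw : %4.0f W\n", sustained);
    printf("worst-case peak: %4.0f W\n", worst_case);
    /* Landing sustained draw near half the PSU's rating keeps you in
     * the efficiency sweet spot and leaves room for the spikes. */
    printf("suggested PSU  : %4.0f W\n", sustained * 2.0);
    return 0;
}
```

With those numbers you get about 430W sustained and a worst case north of 600W, which is why the 650W minimum on the box isn't crazy even though the average draw is far lower.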
 
So stupid question: I bought a 550W PSU but the 6700XT says the minimum requirement is 650W. PCpartbuilder says I'm good but should I still get a bigger PSU?

FWIW I had a 500W supply in my PC when I got my 6700 XT. It didn't do anything terrible, but not long afterward, it developed a really noisy inductor whine. I replaced it with a 750W PSU and then everything was fine.

Electricity is magic to me, so see the above post for science facts.
 
Running a game at 1080p on a high-end GPU + high-end CPU to drive it to 600 fps and proclaiming it the "fastest gaming performance available" is like loading various vehicles with bricks until their suspensions give out, dividing the payload by the weight of an average child, and declaring the F-350 Super Duty the best vehicle for driving your children to soccer practice because it can "carry 20% more children than the next best vehicle."
The ultimate school bus, able to carry and deliver many tons of children! And convenient to park, just put it anywhere!

[Attached image: BelAZ 75710 mining dump truck]
 
Actually the best vehicle is the most versatile one. Ability to traverse land, sea, marsh, snow, and even road. Also superb at parallel parking. They are the ideal vehicles for every situation and all other vehicles should be replaced by them.
[Attached image: ZIL-29061 screw-propelled vehicle during a search-and-rescue exercise to locate and evacuate a spacecraft crew]
[Attached image: SVP Kaiman-10 hovercraft]
 

So stupid question: I bought a 550W PSU but the 6700XT says the minimum requirement is 650W. PCpartbuilder says I'm good but should I still get a bigger PSU?
I would in general, for the reasons snov gave. Ideally, you want some headroom with your PSU.

But if you have a moderate CPU alongside the 230W 6700 XT, you're not doing much overclocking, and that 550W isn't some trash-tier unit, you'll probably be fine.
 
holy fuck this guy is a retard, how does he even manage to breathe unaided

If the E-Cores were "actively hurting the efficiency," the way you would show that is by disabling the E-Cores in the BIOS, which you can do, and demonstrating that you get better overall performance per watt with them disabled across a broad range of workloads. Except if he did that, he'd discover that both peak performance and efficiency dropped.

The reason he doesn't do this is he doesn't actually understand what CPUs do.
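For the curious, here's roughly how you'd actually run that experiment on Linux. This is a sketch under assumptions: an Intel CPU exposing RAPL counters under /sys/class/powercap, root access, and "./benchmark" as a hypothetical stand-in for whatever workload you care about. Run it once with E-cores enabled and once with them disabled in the BIOS, then compare:

```c
/* Crude perf-per-watt measurement using Intel RAPL energy counters. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static long long read_energy_uj(void) {
    /* Package 0 energy counter, in microjoules. Needs root on recent kernels. */
    FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
    long long uj;
    if (!f || fscanf(f, "%lld", &uj) != 1) { perror("rapl"); exit(1); }
    fclose(f);
    return uj;
}

int main(void) {
    struct timespec t0, t1;
    long long e0 = read_energy_uj();
    clock_gettime(CLOCK_MONOTONIC, &t0);

    if (system("./benchmark") != 0)   /* hypothetical workload binary */
        return 1;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    long long e1 = read_energy_uj();

    /* energy_uj wraps at max_energy_range_uj; fine to ignore for short runs. */
    double secs   = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double joules = (double)(e1 - e0) / 1e6;
    printf("%.1f s, %.1f J, %.1f W average package power\n",
           secs, joules, joules / secs);
    return 0;
}
```

If work-per-joule actually went up with the E-cores off across a broad range of workloads, that would be the evidence for the claim. Nobody produced it.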
You're an absolute fucking retard, holy shit.
 
You're an absolute fucking retard, holy shit.

Nah. While I overreacted to GN's usual baiting headline, he clearly doesn't know what he's talking about and talks around the issue, since he never collected the data he would have needed to figure out what the original problem was and how APO solved it.

Intel distributes the tool he would need for free, btw.
 
Nah. While I overreacted to GN's usual baiting headline, he clearly doesn't know what he's talking about and talks around the issue, since he never collected the data he would have needed to figure out what the original problem was and how APO solved it.

Intel distributes the tool he would need for free, btw.
GN says one thing, Ugly One another. I'm going to listen to whichever one has the better hair.

Also because my personal experience with Intel's heterogeneous architecture was that it's awful and not ready for desktop computers. Seriously, my 12900K ran better with the e-cores off than with them on, it was ridiculous. The e-cores put out absurd amounts of heat for what little actual work they bothered doing. I've heard that's mostly fixed now, but calling them Cinebench cores aligns with my experience: looks good in benchmarks, and technologically impressive, but Linux and Windows are not ready for it.

I'd rather have chiplets and core parking. Only need one CCD (i.e. office work or gaming)? Park the other three. Running engineering software or compiling? Power up all four CCDs and go at it. Idling? Run one CCD at minimal levels.
Squeezing so many cores into so little die area is impressive, but given the choice I'd rather have a processor with four Zen 4c dies than one with lots of Intel's benchmark accelerators that can't even do AVX-512 (not that I ever use AVX-512, and now that Intel has undermined it as an instruction set we can expect processors to have, I probably never will).
 
One thing I really don't understand is Intel boasting the efficiency cores can turbo boost for heavy workloads.

...That's what the PERFORMANCE cores are for. They're the big bastards that do the work. Efficiency cores can help out, sure, but having them turbo boost is like supercharging a Volkswagen Beetle.
 
Mm. And removing 32-bit support in Catalina.

I'm genuinely in two minds about it. In one sense, I can understand. It makes things simpler just by going "FUCK IT. IT'S DEAD. USE THE NEW SHIT."

But sometimes software can't be updated, whether because of how it was made, or because the dev loses interest, the company goes bust, etc. Software preservation is important.
wrt 32-bit vs 64-bit, my hot take is that we've long since passed the point where 32-bit applications need to be set apart from the standard OS and run in their own special tardbox. macOS famously discontinued 32-bit support a few years ago, and most Linux distributions have been 64-bit-only by default for years now, with you having to opt in to special repositories and/or packages to re-enable support for 32-bit applications.

The 32-bit x86 ABIs are not good. They're built around having only eight general-purpose registers and passing arguments on the stack, so they can't use the extra registers x86-64 added. Supporting them natively made sense back in the mid-2010s, but we're at the point now where stuff that hasn't been rebuilt for 64-bit needs to be put on life support. I wouldn't be surprised if Windows 12 or 13 starts moving this way.
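If you want to see the register starvation concretely, here's a toy comparison (assumes gcc with 32-bit multilib support installed):

```c
/* regs.c -- toy function for comparing 32- vs 64-bit x86 codegen.
 * Build both ways and diff the assembly:
 *   gcc -O2 -m32 -S regs.c -o regs32.s
 *   gcc -O2 -m64 -S regs.c -o regs64.s
 * The -m32 (cdecl) build loads all four arguments from the stack;
 * the -m64 (SysV) build receives them in edi, esi, edx and ecx and
 * never touches memory at all. */
int sum4(int a, int b, int c, int d) {
    return a + b + c + d;
}
```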
 
wrt 32-bit vs 64-bit, my hot take is that we've long since passed the point where 32-bit applications need to be set apart from the standard OS and run in their own special tardbox. macOS famously discontinued 32-bit support a few years ago, and most Linux distributions have been 64-bit-only by default for years now, with you having to opt in to special repositories and/or packages to re-enable support for 32-bit applications.

The 32-bit x86 ABIs are not good. They're built around having only eight general-purpose registers and passing arguments on the stack, so they can't use the extra registers x86-64 added. Supporting them natively made sense back in the mid-2010s, but we're at the point now where stuff that hasn't been rebuilt for 64-bit needs to be put on life support. I wouldn't be surprised if Windows 12 or 13 starts moving this way.
There is one thing Microsoft does really, really well in Windows, and that's backwards compatibility. You can theoretically still run 16-bit Win3 executables in 11; the super old APIs that draw those GUI elements are still present, and if the compatibility shims aren't enough there's a built-in virtual machine.
[Screenshot: the Win3 file picker in Win11]

So no, Microsoft probably aren't going to drop 32-bit any time soon. 16-bit has been irrelevant for a decade and a half longer than 32-bit and remains supported, even though basically the only use case is super old industrial and military computers that run DOS anyway.
Where Windows does lose compatibility with older software is drivers, which Microsoft can't really do anything about. If Nvidia decides to drop support for whatever ancient 3dfx Glide or WinG API some pioneering early-90s game used, that game isn't going to run. That's where Linux suddenly becomes more compatible: Wine never drops its translation layers, and a modern CPU can bang out software rendering a million times faster than GPUs of that era could manage.
 
GN says one thing, Ugly One another. I'm going to listen to whichever one has the better hair.

Also because my personal experience with Intel's heterogeneous architecture was that it's awful and not ready for desktop computers. Seriously, my 12900K ran better with the e-cores off than with them on, it was ridiculous. The e-cores put out absurd amounts of heat for what little actual work they bothered doing. I've heard that's mostly fixed now, but calling them Cinebench cores aligns with my experience: looks good in benchmarks, and technologically impressive, but Linux and Windows are not ready for it.

I'd rather have chiplets and core parking. Only need one CCD (i.e. office work or gaming)? Park the other three. Running engineering software or compiling? Power up all four CCDs and go at it. Idling? Run one CCD at minimal levels.
Squeezing so many cores into so little die area is impressive, but given the choice I'd rather have a processor with four Zen 4c dies than one with lots of Intel's benchmark accelerators that can't even do AVX-512 (not that I ever use AVX-512, and now that Intel has undermined it as an instruction set we can expect processors to have, I probably never will).
Interesting. Very interesting. I'm glad I haven't chosen a CPU or motherboard for my PC build yet. I like Intel's specs, certainly, but again, everything I've built has had a Ryzen for a good reason: it runs cool and it's stable. It does seem, like you've said, that they've gotten better with newer versions, but things like Windows just don't know what to do with the power. I think I'll observe for now and work on buying the other shit.
 
So no, Microsoft probably aren't going to drop 32-bit any time soon. 16-bit has been irrelevant for a decade and a half longer than 32-bit and remains supported, even though basically the only use case is super old industrial and military computers that run DOS anyway.
I'm actually not sure Windows can run 16-bit code on 64-bit installs anymore. It still runs on 32-bit installs, though.
 
Interesting. Very interesting. I'm glad I haven't chosen a CPU or motherboard for my PC build yet. I like Intel's specs, certainly, but again, everything I've built has had a Ryzen for a good reason: it runs cool and it's stable. It does seem, like you've said, that they've gotten better with newer versions, but things like Windows just don't know what to do with the power. I think I'll observe for now and work on buying the other shit.
To be fair, a lot of the problems I was having were probably because the 12900K overdosed on e-cores in a vain attempt to compete with the 5950X. All those e-cores struggling at 120W to write the system log in the background got the die really hot and forced the p-cores to downclock. Something like the 12700, with fewer e-cores and the same number of p-cores, would probably have run a lot cooler and let the p-cores clock higher. If you're just going to browse the web and play video games, Intel is fine (pretty good value, actually, and they clock really high). But with fewer PCIe lanes and e-cores that are no good for compute, I wouldn't build a work computer around Intel right now.
 
You can theoretically still run 16-bit Win3 executables in 11; the super old APIs that draw those GUI elements are still present, and if the compatibility shims aren't enough there's a built-in virtual machine.
16-bit applications cannot run on modern 64-bit Windows. The legacy Win16 APIs are available, but this only ensures source compatibility, not binary compatibility. Actually running 16-bit executables requires something like winevdm to emulate an 8086 and translate Win16 calls to Win32.

And yes, virtual machine is right. There should be a special 32-bit virtual machine tardbox for 32-bit applications to run in.
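Tangentially, if you ever want to check what you're dealing with before reaching for winevdm, the executable formats are easy to tell apart from the headers. A minimal sketch, assuming a well-formed MZ stub with the standard e_lfanew field at offset 0x3C:

```c
/* netype.c -- classify an .exe by its header: 16-bit NE (what winevdm
 * handles), 32/64-bit PE, or plain DOS MZ. Uses the documented MZ layout:
 * the 32-bit e_lfanew field at offset 0x3C points at the "new" header,
 * whose first two bytes are "NE" for Win16 or "PE" for Win32/64.
 * Little-endian host assumed; error handling kept minimal. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 2; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 2; }

    uint8_t mz[2];
    uint32_t e_lfanew = 0;
    char sig[2] = {0, 0};

    if (fread(mz, 1, 2, f) != 2 || mz[0] != 'M' || mz[1] != 'Z') {
        puts("not an MZ executable");
        fclose(f);
        return 1;
    }
    fseek(f, 0x3C, SEEK_SET);
    fread(&e_lfanew, 4, 1, f);
    fseek(f, (long)e_lfanew, SEEK_SET);
    fread(sig, 1, 2, f);
    fclose(f);

    if (sig[0] == 'N' && sig[1] == 'E')      puts("16-bit NE executable");
    else if (sig[0] == 'P' && sig[1] == 'E') puts("PE executable (Win32/64)");
    else                                     puts("plain DOS MZ executable");
    return 0;
}
```

If it comes back NE, that's the case where a 64-bit install needs winevdm; PE binaries go through the normal loader (or WOW64 for 32-bit).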
 