The case's firmware defaults to keeping things quiet and the fans off as much as possible. With purely natural convection, it idles at around 55 C. If I force the fans on, it stays around 40 C. When the fans kick on under load, they keep the CPU down to around 85 C.
That's a huge feature of many recent processors (SoCs really, hardly anyone does discrete CPUs anymore): you can define separate long-term and burst TDPs, so the chip stays inside a specific thermal envelope and the system can intelligently decide for itself where to sacrifice performance to stay inside that budget, probably down to the instruction level. It's a cool and greatly overlooked feature. A lot of the real use these SoCs see is intense processing in short bursts; sustained high loads do happen, but they're not common for the average user. With the right thermal envelope set, you can put a fairly high-TDP SoC even in a system that doesn't have the strongest cooling, because RAPL etc. will limit the processor before it overheats (which significantly slows down the SoC anyway), and you still walk away with subjectively superior performance compared to a lower-end processor with a lower TDP. Of course this also makes benchmarks worse, so tons of x86 mobile OEMs don't bother and instead rely on thermal throttling to avoid damaging the machine, not caring that users drive around with the thermal handbrake on outside of those benchmarks. (Also, contrary to most of the semiconductors you find in these machines, the batteries really don't like the heat.) Reason #3943993 mobile x86 OEMs will never catch up to Apple. Again, the hardware can do it but the firmware implementation is shit.

Luckily you can set these limits from the OS, even on the fly, but I'm not sure many people are aware of this. You could even make different usage profiles, e.g. low-TDP fanless operation for normal desktop usage and higher TDPs for CPU-intensive gaming, yet it's rarely used. It makes sense to do this at the firmware level because pretty much all x86 OSes are awful at truly rationing how much CPU time a program is allowed to take. Even more baffling considering that power consumption in these ICs is never linear; it rises dramatically at the very upper end of the speed rating.
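For the curious, here's a minimal sketch of what "setting it from the OS" can look like on Linux via the powercap/RAPL sysfs interface. The domain path and constraint numbering are assumptions (constraint_0 is usually the long-term limit, constraint_1 the burst limit), so check your own /sys/class/powercap layout before trusting it, and it needs root:

```python
#!/usr/bin/env python3
# Sketch: switch between RAPL power-limit "profiles" via Linux powercap sysfs.
# Assumes an Intel RAPL package domain at intel-rapl:0 with
# constraint_0 = long-term limit (PL1) and constraint_1 = burst limit (PL2).
# Paths and numbering vary by platform -- verify on your machine. Run as root.
import sys
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package domain (assumption)

# Example profiles in watts: (long-term PL1, burst PL2) -- made-up numbers
PROFILES = {
    "fanless": (8, 15),    # quiet desktop use
    "gaming":  (28, 45),   # let it burst and sustain more
}

def set_profile(name: str) -> None:
    pl1_w, pl2_w = PROFILES[name]
    # sysfs expects microwatts
    (RAPL / "constraint_0_power_limit_uw").write_text(str(pl1_w * 1_000_000))
    (RAPL / "constraint_1_power_limit_uw").write_text(str(pl2_w * 1_000_000))
    print(f"{name}: PL1={pl1_w} W, PL2={pl2_w} W")

if __name__ == "__main__":
    set_profile(sys.argv[1] if len(sys.argv) > 1 else "fanless")
```

The change takes effect immediately, though on plenty of machines the firmware or embedded controller will clamp or silently reset these values, which is exactly the firmware-quality problem above.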
But well, cooling is a giant industry in the DIY market. These massive heatsinks also just look cool. Never mind that it'd probably literally be cheaper to burn out your SoC (which, realistically, will never happen anyway) and replace it after four years than to buy a $200+ cooling solution. The only thing I'm really willing to shell out money for is quality fans, so I don't get drowned in noise. (They're also a long-term investment; good fans basically last forever.) That said, with a huge heatsink and intelligent RAPL settings, you could probably reach good desktop performance at low-to-off fan speeds with the right SoC.
That makes sense when Mercedes stuffs their ECUs between the V of their V8s and plumbs them into an elaborate water-cooling loop connected to the car's cooling system that runs at ~95 C.
Cooling with air is kinda difficult in some environments because air is hard to direct and carries the nastiest shit, which ends up caking your components (and/or your filters) in an insulating layer and plugging up your cooling system. Even if the coolant is hot water, liquid is just a lot more consistent *if* your loop is well built.
If they wanted to make air cooling a lot more efficient in desktop systems, they'd include ducting to direct the air. Some OEM desktops in the 00s had this. Modern cases don't, because you gotta have a window to see the rainbow lights on your RAM, and a plastic duct doesn't look nearly as cool as a huge Noctua heatsink.