GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

They're all somewhat at fault on this one. Especially after AMD got hit by this issue with mobo vendors.

Intel plays the ignorance card because hey, it makes our performance look better in benchmarks.

*Edit* I've actually run into simps online claiming that if you buy an overclocking mobo/CPU, then you should be looking at and changing the default settings first thing anyway.

No. Factory settings should be fairly conservative and stable. Same as RAM: you have to enable a profile to get the OC settings.
I would assume that the factory defaults would always be a guaranteed stable configuration, something that the processor informs the motherboard of, so that motherboards don't have to be pre-programmed with settings for unreleased chips.
 
Makes sense, right?

Apparently we're just not hardcore enough to appreciate a CPU getting shot to the moon out of the box.
It might also be idiots too stupid to tweak their motherboard settings, who then complain that they have no overclocks.
 
NVMe does run hot generally.
Depends on the model; I was looking at some last week and one of them was at something like 70°C.

Just buy something and get it over with. Nothing new is going to massively obsolete what you can get now and that extra few months you get to enjoy having a PC are probably going to outweigh the extra couple hundred you might hypothetically overpay if there does happen to be some kind of unprecedented downward shift in component prices.
It also depends where you're coming from. Sitting on hardware that's become "too old" and then having to upgrade NOW hurts more, depending on prices, than watching price trends and deals over the course of 1-2 years. That also helps you avoid getting fleeced when companies think they deserve more money, like with SSDs right now.
 
I would assume that the factory defaults would always be a guaranteed stable configuration

I would too, especially with a company that's been around as long as MSI. Every company is run by retards.
 
Have ASUS, MSI, and Gigabyte ever been known for their fantastic service and aftermarket support? I mean, things like chips catching on fire and caps exploding have been issues for as long as I can remember. It's not an excuse, but I think part of being a PC builder is not trusting your vendors at all and taking some level of responsibility as being your own OEM. Absolutely get your RMA and bitch about it online, but don't have a shocked Pikachu face about it.

Are any of the OEMs having issues with this? I have a laptop from an OEM on 11th gen that thermal throttles to hell. No way to fix it other than castrating the i7 CPU by disabling turbo boost.
 
Are any of the OEMs having issues with this? I have a laptop from an OEM on 11th gen that thermal throttles to hell. No way to fix it other than castrating the i7 CPU by disabling turbo boost.
You could connect a water-block to the i7 with a Frankenstein monster dock
:suffering:
 
Sorry. Intels are known to thermal throttle in laptops. Especially the smaller ones.

I have an 18" that seems to hold up decently enough, but it still gets pretty hot.
 
Culturally, it seems like AMD is also a lot more of an "ignore things that don't bring in big bucks" kind of company, while NVIDIA is more of a "win everywhere we are, regardless of how big or small it is, salt the earth with the ashes of our enemies" kind of company.
One of AMD's first big fuck-ups when it came to compute (and by extension, getting their foot in the AI door) was the same reason Apple fucked it up: they ignored the gaming/normie space when hyping OpenCL up, while also acting like it was a 1:1 competitor for CUDA performance- and feature-wise, which it wasn't. They announced that everything R700-and-up would support accelerated OpenCL, but their target audience was entirely devs, datacenters, and video production, with a big focus on how it was free - unlike the proprietary CUDA - and that's about it. They were nowhere to be found when devs wanted to know exactly HOW the tools, libraries, and polyglots made this a better option than CUDA, and expected them to magic it into something themselves. The only time I remember ANYTHING vidya taking advantage of GPU-accelerated OpenCL was prehistoric Dolphin, while on the media side, lots of video encoding apps didn't play nice with it, and on the decoding side, madVR only briefly used it for NNEDI3. The most the average gamer was using accelerated OpenCL for was a trickle fund via mining Bitcoin, followed by Ether, followed by fried VRMs.

Meanwhile, when Nvidia announced they were turning every GeForce from the 8800-and-up into a CUDA accelerator, they made sure to throw gamers a bone with PhysX now that they had all of Ageia's shit, and further down the line, GameWorks. This was all definitely proprietary and relied on cozying up to devs, and people weren't wild about splitting the GPU ecosystem via exclusive grafix deals, but you can't deny they were a great advertisement for Nvidia GPUs, the Arkham games especially. Nvidia's way of cozying up also meant being involved in development and providing devs the exact shit they needed to make this look good in a marketing sense. Even crypto miners moved to Nvidia once CUDA caught up, ensuring that no GPU was safe from the may-make-money-in-the-future tax added to the price tag. Nowadays, PhysX is more CPU-based when games aren't using Havok, but during the peak, you actually got something out of CUDA in the vidya space.

Not to say that OpenCL didn't have promise, though. Regeneration - the IDF madlad who made the hybrid PhysX mod - managed to get PhysX working on AMD cards, and was preparing a public release of the wrapper with Nvidia's blessing, but not AMD's. Despite testers confirming it was indeed real and worked great, Regeneration dropped off the planet entirely, and there was no public release whatsoever. The two rumors are that A) AMD refused to provide help/support for newer drivers, a la ATI Tray Tools' demise, even though this left them without a RivaTuner/Inspector rival - a problem because the drivers Rege was working with were pre-CCC overhaul (aka the .NET 2.0 abominations), and quite possibly introduced locks that killed his methods the same way they did with MPT - or B) AMD eventually lawyered up to say "no" so they could get him on their team exclusively, which I believe when you consider they did the exact same thing to JohnM (for the failure that was fucking RAPTR) after he built a better CCC via RadeonPro, and then refused to update it at all out of sour grapes when Raptr/his cushy position was killed.

https://web.archive.org/web/20080723012756/http://www.tgdaily.com/content/view/38137/135/
https://web.archive.org/web/2016032...-physx-gpu-acceleration-on-radeon-update.html

In any case, this was definitely a cool thing that would have gotten people to pay attention to OpenCL, and AMD blew it - although at the same time, it was basically just getting you into Nvidia's club, which is another reason AMD probably killed it: it was more or less free advertising for cool tech that wasn't yours, and conceding defeat. In terms of pushing shit that stuck, I appreciate that they pushed DX12 back when nobody wanted to use Windows 10, even though it was because it was one of the few ways they could beat the DX11-focused Nvidia. Mantle too was basically Vulkan Jr., even though Star Swarm and DA:I were NOT the killer apps to advertise it - that was Doom after the Vulkan update, which thankfully lit a fire under everyone's asses to slowly get off of DX11.
 
they ignored the gaming/normie space when hyping OpenCL up

I had a front row seat to all of this - as in, I had engineering samples of hardware and was developing software in this era - and trust me, gamers & normies had nothing to do with the success of CUDA or the failure of OpenCL. OpenCL failed as a production API because it was fucking dog crap. You had to write your compute kernels as strings, which IDEs simply have no ability to parse and assist in debugging. The reason CUDA succeeded is that NVIDIA iterated a lot on both it and their hardware in collaboration with major software vendors and National Labs high-performance computing experts. AMD wasn't really interested in what anyone had to say in this era and just assumed "open = good." OpenCL is actually still around, but today you write your code in something that isn't retarded and pass it to an interpreter or something that converts it to OpenCL. The minimal impact that NVIDIA's CUDA tools for gaming GPUs had is that geeks got their feet wet learning to code in CUDA and brought that knowledge to their jobs. We actually hired a guy who'd started playing with CUDA on his own machine in grad school and had become an expert in it by the time we hired him. It didn't hurt NVIDIA any, but it's not what made them successful.
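To make the "kernels as strings" complaint concrete, here's a minimal sketch of what OpenCL host code looked like (a toy example, not anything real): the kernel is just a string literal that the driver compiles at runtime, so your IDE can't highlight it, autocomplete it, or catch typos before you actually run the thing.

```cpp
#include <cstdio>
#include <CL/cl.h>

// The kernel lives in an opaque string literal: no syntax highlighting,
// no autocomplete, no compile-time checking against the host code.
static const char *kernel_src =
    "__kernel void scale(__global float *data, const float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= factor;\n"
    "}\n";

int main() {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

    // The string only gets compiled here, at runtime; typos in the kernel
    // show up as a build log you have to fish out yourself.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, nullptr, nullptr);
    if (clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr) != CL_SUCCESS) {
        char log[4096];
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, sizeof(log), log, nullptr);
        std::fprintf(stderr, "kernel build failed:\n%s\n", log);
        return 1;
    }

    cl_kernel k = clCreateKernel(prog, "scale", nullptr);
    // ...set args, enqueue on a command queue, read results back...
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```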

I personally was a Luddite who refused to learn CUDA at the time, but everybody I knew said that once they gave up on trying to write cross-platform code in OpenCL and just accepted NVIDIA was the future, it was comparatively fantastic to code in and much easier to support.
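For contrast, a rough sketch of the same toy kernel in CUDA (the kernel and launch sizes are made up for illustration): it's single-source, the kernel is an ordinary function the compiler type-checks at build time, and the toolchain can actually step through it.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// The kernel is an ordinary function the compiler sees at build time.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));

    // Launch: grid/block sizes picked arbitrarily for the sketch.
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    std::printf("last error: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_data);
    return 0;
}
```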

Not to say that OpenCL didn't have promise though

It didn't. It's fucking dog crap, and I and many others refused to touch it because it promised to turn your code into an unmaintainable nightmare. Lots of people continued to resist CUDA because, as you rightfully pointed out, nobody wants to write in a vendor-specific API. What ultimately sealed the deal, though, was the NVIDIA Tesla series of cards, such as the K80.


This was eventually capped off with the V100, which went into Summit.


AMD's cards were just not in the same league as NVIDIA's. If they'd had the performance, maybe more of us would have bitten the bullet and started OpenCL projects. The MI60 and MI50 simply didn't have the tensor performance NVIDIA's offerings did.

I also hear from people who know people that they were pieces of trash that liked to burn up and die a lot. So what happened is that people who buy $50,000 server nodes by the truckload bought NVIDIA, because AMD's cards sucked. End of story. They bought what little software was already running on CUDA, and told everyone still unwilling to jump in the pool to get with the program, because AMD is going nowhere, so it effectively doesn't matter that CUDA is proprietary. AMD wasn't exactly winning before, but Tesla really made NVIDIA take off.

AMD now has an uphill battle. They finally, as of the MI300, have a compelling product in terms of performance. But ROCm is an absolute shitshow. In some ways, it's actually worse than OpenCL. It's easier to code in but nightmarishly bad in terms of documentation & support. Plus, a lot of the vendors they're courting now have tons of code written in CUDA. At least all that Python AI stuff is vendor-agnostic now.
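To be fair to them on the "tons of code written in CUDA" point, ROCm's answer is HIP, which is close enough to CUDA that porting is largely a rename job. Rough sketch only, reusing the same toy kernel as above; the kernel is made up, but hipMalloc/hipDeviceSynchronize/hipFree are the actual runtime calls:

```cpp
#include <cstdio>
#include <hip/hip_runtime.h>

// Same toy kernel as the CUDA sketch above; under hipcc it compiles as-is.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *d_data = nullptr;

    // The runtime calls are the CUDA ones with "cuda" swapped for "hip".
    hipMalloc(&d_data, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    hipDeviceSynchronize();

    std::printf("last error: %s\n", hipGetErrorString(hipGetLastError()));
    hipFree(d_data);
    return 0;
}
```

The syntax isn't the problem; the documentation and support are.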
 
The title says "enthusiast hardware" and I couldn't find a HOTAS thread, so either way this thread is the right crowd, and you'll enjoy this!

Looks like GVL is actually starting to make stuff again instead of just selling parts and board components. This looks pretty neat and I can't wait to see how he develops it further. I was fortunate enough to get a couple custom kits of his before the war started and the prospect of him getting back to production is awesome. If you send him drawings and design documents (and cash, he loves cash), he'll make just about whatever you want. But I don't think he's quite back to custom production yet, at least according to a comment he made in his previous video.
 
Mantle too was basically Vulkan Jr, even though Star Swarm and DA:I were NOT the killer apps to advertise it, that was Doom after the Vulkan update, which thankfully lit a fire under everyone's asses to slowly get off of DX11
Don't forget Battlefield 4; that's where most people will remember Mantle from.
 
12th -> 13th: a bit more cache, plus higher yields used to add E-cores & clock speed.
13th -> 14th: higher yields used to add a bit more clock speed.

It's not much, but it's also not nothing. However, if you are building a PC, unless you are going to put a bomb-ass cooling solution on your board, there isn't that much merit to one "generation" over another; just get whatever gets you the best deal for cores & GHz vs dollars.
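If you want to put numbers on "cores & GHz vs dollars", here's a trivial sketch; the prices are placeholders and the core counts/boost clocks are just ballpark retail specs, so plug in whatever your local listings actually say:

```cpp
#include <cstdio>

// Hypothetical listings: prices are placeholders, specs are ballpark.
struct Listing { const char *name; int cores; double boost_ghz; double price_usd; };

int main() {
    const Listing chips[] = {
        {"i5-12600K", 10, 4.9, 200.0},
        {"i5-13600K", 14, 5.1, 280.0},
        {"i5-14600K", 14, 5.3, 300.0},
    };
    for (const Listing &c : chips) {
        // Crude value metric: core-GHz per dollar. It ignores the P/E core
        // split, cache, and power draw, so treat it as a tiebreaker, not gospel.
        double value = (c.cores * c.boost_ghz) / c.price_usd;
        std::printf("%-12s %5.3f core-GHz per dollar\n", c.name, value);
    }
    return 0;
}
```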
That's basically what I took into account when I chose my 12th gen 12900K. It's already boosting at over 5GHz, which is the fastest chip I've ever had, it has a decent chunk of cache, and in terms of my needs it's still overkill at 8P + 8E cores, even if those are 10nm. It'll play Cyberpunk with gusto, and I'd be willing to test out some new CPU heavyweights like Dragon's Dogma 2 on it. It'll also let me have fun with AI if I so choose.

13th and 14th gen just seemed like power hogs with significantly less juice to squeeze, not to mention MONEY. Money was a significant factor in choosing 12th gen; I could still get a ton of performance for less cost.



I'm going to buy a 5090 on launch
Your money lol, but you're also insane.
 
I upgraded my 1060Ti to a 3060Ti and I think we'll be good for another six or seven years.

Nvidia is releasing these new versions way too quickly.
A --60 card probably won't have that much longevity when it comes to new releases, but in general, yes, the new graphics cards are well positioned to last many years (as long as game devs don't go full retard and stop shipping baked lightmaps entirely because "you can just shade rays, it cuts down our development costs so much!" or whatever). DLSS/FSR/XeSS are really good at upscaling, so even a game rendering at 1080p will look alright on a 4K monitor, and with frame generation 20fps will feel more like 40fps, which is entirely playable, particularly with Reflex/AMD Anti-Lag (assuming AMD's version isn't getting people banned from Steam anymore, or whatever that controversy was about) cutting the input lag to barely perceptible even at very low frame rates.
If I weren't into running AIs I'd probably be counting on my 4090 to last the decade out, or more.
 