They sell desktop versions of the 5050? I thought those were for laptops only
"Jesus has come to Hong Kong"
Intel's 272mm^2 die (B580) has the same MSRP as Nvidia's 149mm^2.
> They sell desktop versions of the 5050? I thought those were for laptops only

Yes, at the same price as the better 9060 XT 8 GB (which is 30-50% faster than the 5050). It's an objectively shit product. A 50-tier GPU's MSRP should not be $250, but here we are today.
> Yes, at the same price as the better 9060 XT 8 GB (which is 30-50% faster than the 5050). It's an objectively shit product. A 50-tier GPU's MSRP should not be $250, but here we are today.

A modern GPU needs an external modification to beat an 8-year-old card. What a fucking joke.
Somebody got a 17% FPS uplift out of their 5050 after jury-rigging a CPU cooler onto it while trying to beat 1080 Ti performance. They managed to hit a 3,300 MHz GPU clock with the 5050. The 50-series lineup is really good at overclocking.
It's crazy how the 1080 Ti is still relevant 8 years after release, and entry-level cards are STILL only ON PAR with it.
> They sell desktop versions of the 5050? I thought those were for laptops only

It was a stealth-launched $250 card. The desktop version uses GDDR6 to cut costs, while the laptop version uses GDDR7 for better power efficiency.
> It's an objectively shit product. Why is a 50-tier graphics card's MSRP $250? Crazy.

What gets me is that the 5050 (GB207) is 149mm^2 for 2,560 CUDA cores, while the 5060 Ti (GB206) is 181mm^2 for 4,608 cores. That's 80% more cores for only 21.5% more die area.
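That core-density gap is easy to verify from the quoted figures; a minimal sketch:

```python
# Die size and core counts quoted above (area in mm^2).
gb207 = {"cores": 2560, "area": 149}   # RTX 5050
gb206 = {"cores": 4608, "area": 181}   # RTX 5060 Ti

more_cores = gb206["cores"] / gb207["cores"] - 1
more_area = gb206["area"] / gb207["area"] - 1
print(f"{more_cores:.1%} more cores for {more_area:.1%} more die area")
# → 80.0% more cores for 21.5% more die area
```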
> The funniest part about the 5050 is that it can't even beat the 4060

Either it's not using the TSMC "4N" node like the others (unlikely), or it's a consequence of both dies carrying 32 MiB of L2 cache, which could be dominating the die area. If it's not on a cheaper node, the GB207 die is objectively shit. If it is, then it's even more overpriced than we thought.
> Refresh rate is generally not bound by vram speed but speed of ramdacs or PTY of modern equivalent and it's frame buffer.

This is wrong as an absolute statement. There are lots of operations, and whole applications, that are bandwidth-bound. All kinds of operations on large arrays of data, from copying to solving large sparse linear systems, are limited by memory bandwidth, and it's pretty easy to saturate bandwidth with graphics operations too. The 9070 XT's 644.6 GB/s sounds like a lot, but if you want to hit your monitor's 180 Hz refresh rate, you're limited to a total of 3.58 GB of data processed per frame, and that includes anything that has to be looked up multiple times (although larger caches mitigate this). 3.58 GB goes real fast in an era where textures can be up to 64 MB. Even with compression, it goes fast.
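The per-frame budget above is simple division; a quick sketch using the figures from this post:

```python
bandwidth_gb_s = 644.6  # VRAM bandwidth quoted above
refresh_hz = 180        # target refresh rate

budget_gb = bandwidth_gb_s / refresh_hz
print(f"{budget_gb:.2f} GB of data per frame")    # → 3.58 GB of data per frame
# At most this many one-time reads of a 64 MB texture fit in that budget:
print(int(budget_gb * 1000 // 64), "x 64 MB textures")  # → 55
```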
> All this time, my dumb ass wasn't applying enough force for these two little, evidently present bumps to latch onto the connector, which is why it didn't work and why it kept constantly falling out at the slightest movement.

These connectors are absolutely dumb, and they also require an insane amount of force. When you're bending the motherboard to insert one, something's not right.
> looking forward to turning the old one vertical

I highly recommend getting some North Bayou monitor arms; they improve ergonomics and free up desk space. Ideally avoid the ones with a straight pole, since there's no way to tighten those and they like to move around.
> AYANEO NEXT 2 announced: Ryzen AI MAX+ 395 "Strix Halo" APU, two fans, and built-in battery

Put SteamOS and an OLED screen on it and now we're talking.
A second Strix Halo handheld has appeared.
> Put SteamOS and an OLED screen on it and now we're talking.

That'll be $1,500, sir.
> Refresh rate is generally not bound by vram speed but speed of ramdacs or PTY of modern equivalent and it's frame buffer.

You are overthinking this. If you want to run a game at a given frame rate, the maximum total data that can be processed in that frame (reads and writes combined, all data in aggregate, from vertices to the framebuffer) is simply your memory bandwidth divided by the frame rate. That's a theoretical upper bound; it is physically impossible for the machine to go faster, and real-world performance will always be lower. This is just true of software in general. How fast do you want your software to run? How much memory bandwidth do you have? That gives you a simple formula for the maximum amount of data your software can process; it doesn't matter whether you're running a database, an FEA crash simulation, or a video game.
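The same rule can be read in the other direction: given how much data a frame touches, bandwidth alone caps the achievable frame rate. A minimal sketch (the 5 GB per-frame figure is a made-up example, not a number from the thread):

```python
def max_fps(bandwidth_gb_s: float, data_per_frame_gb: float) -> float:
    """Hard frame-rate ceiling imposed by memory bandwidth alone."""
    return bandwidth_gb_s / data_per_frame_gb

# A frame touching 5 GB of data (all reads + writes) on a 644.6 GB/s card:
print(f"{max_fps(644.6, 5.0):.0f} FPS ceiling")  # → 129 FPS ceiling
```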
If an image isn't prepared in time, the card just resends the previous frame.

A 4K frame is about 33 MB, so even at 180 Hz that's under 6 GB/s without any compression. Nvidia has patented integrated bus/RAM compression as part of its encode/decode scheme, so you increase effective bus width as well as reducing VRAM requirements. Modern renderers only render what is seen, so the rest of the scene behind you can sit in VRAM compressed.
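The 33 MB and 6 GB/s figures follow from assuming a 32-bit-per-pixel framebuffer (an assumption; HDR formats are wider):

```python
# 4K framebuffer at 32-bit color (4 bytes per pixel, assumed).
width, height, bytes_per_pixel = 3840, 2160, 4

frame_mb = width * height * bytes_per_pixel / 1e6
print(f"{frame_mb:.1f} MB per frame")                # → 33.2 MB per frame
print(f"{frame_mb * 180 / 1e3:.2f} GB/s at 180 Hz")  # → 5.97 GB/s at 180 Hz
```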
> However, if I can make my ASUS laptop last longer with better software, I'm all ears. I have a Ryzen 7 6800U that refuses to run below 2 GHz most of the time unless I limit it in the ASUS software. Even then, I've never gotten over 90 minutes.

I don't know if it would help or be what you're looking for, but on my PC I made two power plans, with shortcuts to switch between them, to limit the max CPU frequency.
> AMD's vega couldn't be saved even with HBM. Memory is a lot, but it isn't everything.

AMD did an Intel with HBM, though, by just going "meh" and dropping the fucking thing head first onto some sharp metal spikes.
> like why bother picking a R9 or a Radeon VII when it's unexplored territory that you basically have to beta test, when you can just fork over money for a 2070ti instead and have reliable speeds and stability?

Nvidia's reputation for reliability is overblown and outdated.
> Nvidia's reputation for reliability is overblown and outdated.

So far my 50-series cards have been more stable than my AMD iGPU laptop... Kernel 6.12 and/or a new AMD firmware made it all go to hell.
NVIDIA Investigates GeForce RTX 50 Series "Blackwell" Black Screen and BSOD Issues
NVIDIA GPUs continue to face driver woes as RTX 5000 launch is called 'absolutely abhorrent'
Latest Nvidia drivers boost synthetics by 6-8% on RTX 50 GPUs — Users still report stability woes
Probably because gaming is a side gig for them now.
But at least you can super size it:
GeForce RTX 50 SUPER Series on track for a holiday 2025 release
> Just pour some water on the CPU, are you stupid?

Best I can do is cow urine, saar.
> Nvidia's reputation for reliability is overblown and outdated.

Seeing as HBM2 and the 2070 are way over 5 years old, and the fake-frame cards were in no way, shape, or form mentioned by me... yeah, it's outdated.
> Is it just me or is it absolutely insane that you can get 8c/16t Ryzen mini PCs with a 780M on AliExpress for around $300?

That stuff is always a 50/50 chance: either it's heavily modified, inferior hardware made to look like (and priced like) the thing being advertised, or it actually is the thing advertised and you got lucky with the price.
> though amd ditching HBM can be considered a historic L moment because it was good shit, the 4096-bit bus made things extremely smooth, and that's something even Budget-Builds mentions in his R9 Nano video, but... fucking hell, a dumb decision driven by cost, something everyone that has tasted from the HBM cup seems to still crave today.

Consumer HBM was derailed by being expensive (maybe 5x per GB versus GDDR?) and in extremely high demand from enterprise, especially after AI took off.