GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

  • 🐕 I am attempting to get the site running as fast as possible. If you are experiencing slow page load times, please report it.
His natural instinct is to say "who did this to me?".
In some fairness to him, he's built the company from the ground up and I guess sees it as part of him. So any damage to the company is damage to him personally.
 
"It's got 48 gigs of RAM, so suck it Nvidia"
Steve really got pissed at them this time eh.
Now now, Mama Lisa would never hit Papa Pat like that. The x86 family is too fragile for domestic violence.
 
I remember the Detonator drivers back in the day being an absolute shitshow. Back to the good old days of hyperoptimization it is!
Absolute shit shows weren't uncommon at that time. It was how S3 became the out-and-proud performance king, for a hot minute, because they were the only ones with drivers that supported multitexturing in D3D6.
And by "hyperoptimization" you mean cheating in benchmarks, right?
 
Just circling back to this:

I think the funniest thing is that he hates everything about UE5's feature set and development priorities, yet he's stubbornly still using UE5, and if you suggest that he not use UE5, he just ignores you.

He's performatively refuting all his own arguments for the superiority of the Source Engine, isn't he? If all those bespoke methods of hand-tweaking every shadow map and vertex placement to get high-quality results out of the Source engine were really so easy, he'd be using them. Instead, he's using UE5 because it gets him results many times faster.
 
Jensen did the Crysis joke again.
Also, look at this RTX PRO server rack. Eight 96GB GPUs per unit, four units in the example rack. That gives you 3072GB, or ~3TB of VRAM. That's your VRAM demand in enterprise ML. Meanwhile, Nvidia has taken away NVLink in consumer cards, where it only ever went up to 4, gives 32GB in the 5090 and 96GB in the RTX PRO 6000. It's not that consumer GPUs would undermine their enterprise ML business, it's that they're this fucking greedy to not even give you the ability to have more than 96GB at once in your home system.
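The rack math above, as a quick sanity check (these are just the numbers quoted in the post, not an official spec sheet):

```python
# Back-of-the-envelope VRAM total for the example rack described above.
gpus_per_unit = 8        # GPUs per server unit (as quoted)
vram_per_gpu_gb = 96     # GB of VRAM per GPU (as quoted)
units_per_rack = 4       # units in the example rack (as quoted)

total_gb = gpus_per_unit * vram_per_gpu_gb * units_per_rack
total_tb = total_gb / 1024

print(f"{total_gb} GB (~{total_tb:.0f} TB)")  # 3072 GB (~3 TB)
```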

The whole keynote is surprisingly technical for a Jensen stand-up honestly.
 
Also, look at this RTX PRO server rack. Eight 96GB GPUs per unit, four units in the example rack. That gives you 3072GB, or ~3TB of VRAM. That's your VRAM demand in enterprise ML

That's a multi-million dollar rack, and only about 10% of the enterprise market is deployments that big or bigger. These kinds of massive setups are what you find at hyperscalers and research institutions. Your typical enterprise AI deployment is 2-8 GPUs total. Like the AI inferencing for fraud detection might be a grand total of two GPUs for a bank doing business all over the country.

Jensen Huang absolutely does not care about the 0.001% of the broke nerd market that wants to train an AI to generate anime porn on their local machine. He's a douchebag, but don't flatter yourself, the hobbyist market is not important, and nobody is having C-suite meetings to figure out how to stop you from having fun, just because they're that obsessed with you. What they're worried about is the countless companies around the country that are hurling money at AI deciding that they don't need to spend money with AWS or Azure, because a 2-way 5090 server is so comparatively cheap that their worthless AI help-bot can run on it instead, which would then mean AWS will reevaluate its plan to spend another billion dollars on GPUs.

It's not that consumer GPU's would undermine their enterprise ML business

It absolutely would and did. They killed off FP64 compute in their consumer GPUs because the GTX Titan cannibalized so much of their data center business, and I know of at least one company that was making good money shipping 4-way 3090 boxes for AI.
 
There's an ongoing antitrust investigation into NVIDIA:

https://videocardz.com/newz/nvidia-hit-with-u-s-doj-antitrust-investigation-over-alleged-market-dominance-abuse
I won't say any details for PL reasons, but I have personal knowledge of things Jensen Huang has done to scuttle AMD deals that are at best in a gray area. One minor, public thing is that they require business partners to refer to them as "the inventor of the GPU." NVIDIA didn't invent the GPU by even the most lenient definition. The first 3D accelerator with integrated T&L was developed in Japan, I think by Motorola. NVIDIA even lost a patent lawsuit in South Korea, I think against Samsung, over this.
"ongoing investigation"

Better than nothing I guess, but I wouldn't expect anything to come of it as long as Jensen/Nvidia continues to do the bidding of the US government, such as limiting the tech that can be sold to China.
 
Better than nothing I guess, but I wouldn't expect anything to come of it as long as Jensen/Nvidia continues to do the bidding of the US government, such as limiting the tech that can be sold to China.
NVIDIA isn't the only company on the Fortune 500. There are 499 other companies with a lot of pull, and none of them are really thrilled with how NVIDIA does things.
 
I wonder what would actually happen if this was the last generation of Nvidia GPUs and they moved strictly to data center and AI markets. Would Intel and AMD gobble up a similar market share, or would AMD come out on top?
 
I wonder what would actually happen if this was the last generation of Nvidia GPUs and they moved strictly to data center and AI markets. Would Intel and AMD gobble up a similar market share, or would AMD come out on top?
I don't think NVidia will ever pull out entirely, but at worst the GeForce brand would just be reduced to launching a token number of cards with minimal performance increases at a premium price. Kinda like how things are now, but somehow even worse.

Between just AMD and Intel, AMD would come out on top. Intel just doesn't have the supply, product range, or stability, and maybe not even the interest, to really do much to AMD.
 
I don't think NVidia will ever pull out entirely, but at worst the GeForce brand would just be reduced to launching a token number of cards with minimal performance increases at a premium price. Kinda like how things are now, but somehow even worse.
They made $11 billion off gaming last year. Granted, data center made $115 billion... but roughly 10% of your revenue is too much to just ignore.
 
Why they tried to do this shit to GamersNexus of all outlets is baffling
It's been amplified by a ton of other YouTube channels too. Not as much by the tech websites AFAICT. Here's TweakTown and Notebookcheck.

I don't think it will hurt Nvidia with consoomers much, but who knows. A cheaper and faster 9060 XT 8GB with FSR4 and stable drivers sounds like a better buy than a 5060 wait-for-12GB-edition. Maybe antitrust scrutiny is in Nvidia's future, looking at stuff like the above post and the existing 2024 DOJ investigation. But Jensen can wine and dine his way out of it.
 