GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Serverpartdeals drives have exploded in price. The 22TB drives are now almost $400; they used to be $220. Most of the drives for my Plex server are 22-24TB. I have about 600TB of drives for my Plex server right now. Guess I better buy more before they go up even higher.
 
What do you even have on your server? I have like 50TB of stuff and am running out of things to download. Or do you have a good source for reality TV?
 
Not really. R&D typically has huge operational losses. OpenAI is essentially a giant research project moneywise, but fundamentally isn’t any different.

Yes, really. R&D is investment in future revenue streams. You spend a million on R&D today to make $2 million tomorrow. OpenAI's losses come from the cost of its current revenue--electricity, datacenter space, and hardware merely to serve its current customers. It spends a million today to make half a million today. This results in an opex growth curve that fundamentally always outpaces its revenue growth. That's why it's screwed.

It spends more on electricity, data center space, and asset depreciation to simply maintain operations than it brings in via fees. This contrasts with Amazon, which technically lost money on a short time scale, but wasn't actually selling goods below cost of revenue. It was simply always investing in expanding operations, and it could show investors how every $1 in expanded operations could result in more than $1 in future revenue.
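To put the shape of that problem in toy numbers (mine, not OpenAI's actuals): if every dollar of revenue costs more than a dollar to serve, growth makes the hole deeper, not shallower. A minimal sketch:

```python
# Toy model with assumed numbers: serving cost scales with usage, at a loss.
COST_PER_REVENUE_DOLLAR = 2.0  # assumption: $2 of opex per $1 earned

revenue_m = 100.0  # arbitrary starting revenue, in millions
for year in range(1, 6):
    revenue_m *= 2  # even with aggressive 2x/year revenue growth...
    opex_m = revenue_m * COST_PER_REVENUE_DOLLAR
    # ...the absolute loss doubles right along with it.
    print(f"year {year}: revenue ${revenue_m:,.0f}M, "
          f"opex ${opex_m:,.0f}M, loss ${opex_m - revenue_m:,.0f}M")
```

R&D spending stops when you stop spending it; cost of revenue never does.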

Edit: There are a lot of very good articles showing that it's mathematically impossible for AI to generate enough revenue just to cover the 2024 capex alone. I can't find the one I'm thinking of, but this one is all right: https://medium.com/@truthbit.ai/the...can-ai-revenue-catch-up-to-capex-df8c5c3c52fb
 
Ed Zitron wrote a whole piece about OpenAI and how fucked they are.
Based on my calculations, in the next six months, OpenAI needs at least $50 billion to build a gigawatt of data centers for Broadcom — and to hit its goal of 10 gigawatts of data centers by end of 2029, at least another $200 billion in the next 12 months, not including at least $50 billion to build a gigawatt of data centers for NVIDIA, $40 billion to pay for its 2026 compute, at least $50 billion to buy chips and build a gigawatt of data centers for AMD, at least $500 million to build its consumer device (and they can’t seem to work out what to build), and at least a billion dollars to hand off to ARM for a CPU to go with the new chips from Broadcom.

That’s $391.5 billion dollars! That’s $23.5 billion more than the $368 billion of global venture capital raised in 2024! That’s nearly 11 times Uber’s total ($35.8 billion) lifetime funding, or 5.7 times the $67.6 billion in capital expenditures that Amazon spent building Amazon Web Services!

On top of all of this are OpenAI’s other costs. According to The Information, OpenAI spent $2 billion alone on Sales and Marketing in the first half of 2025, and likely spends billions of dollars on salaries, meaning that it’ll likely need at least another $10 billion on top. As this is a vague cost, I’m going with a rounded $400 billion number, though I believe it’s actually going to be more.

And to be clear, to complete these deals by the end of 2026, OpenAI needs large swaths of this money by February 2026.
I hope everyone involved in this scheme gets slaughtered like pigs.
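For what it's worth, Zitron's line items in the quote really do add up to the headline figure:

```python
# Zitron's line items from the quote above, in billions of dollars.
commitments = {
    "Broadcom: 1 GW of datacenters":     50.0,
    "10 GW buildout by end of 2029":    200.0,
    "NVIDIA: 1 GW of datacenters":       50.0,
    "2026 compute":                      40.0,
    "AMD: chips + 1 GW of datacenters":  50.0,
    "consumer device":                    0.5,
    "ARM CPU":                            1.0,
}
print(f"${sum(commitments.values()):.1f}B")  # -> $391.5B, matching the quote
```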
 
Losing money is completely normal for a business starting out. Ideally, a business in its growth stage should have a profit of $0.
Iirc OpenAI was openly begging for a bailout at the end of '25.
Also, I saw a number somewhere that for every dollar they're making they spend almost $8, and it looks like their big plan to recoup these costs is subscription services like “Vine but AI-generated content”.
(I don't think AI is going anywhere, I just think OpenAI specifically is not long for this world.)
 
Iirc OpenAI was openly begging for a bailout at the end of '25.
Also, I saw a number somewhere that for every dollar they're making they spend almost $8, and it looks like their big plan to recoup these costs is subscription services like “Vine but AI-generated content”.
(I don't think AI is going anywhere, I just think OpenAI specifically is not long for this world.)
Of all the smarmy fucks that have been ruining the tech industry, and the world with it, for the whole of their careers, it would bring me the most pleasure to see Mr “Psychopathic Messiah Complex Personified” Himself humbled. Hopefully, if and when this happens, Mr Altman crashes out so hard he kills himself. That would be at least one good thing to come out of this whole saga.
 

(This is a 2h17m3s podcast)
I just can't listen to the Hardware Unboxed dudes. They've always got this fuckin' attitude I can sense, and it just grates on me. Like they think they're a major influence and clout broker in this space that everyone should take seriously, in between bouts of getting into fights on Twitter with way smaller channels like Hardware Cajeets.

Like seriously, the way people gas them up you'd think they were a Gamers Nexus in terms of viewership, but they're way closer to a Paul's Hardware.

Iirc OpenAI was openly begging for a bailout at the end of '25.
Also, I saw a number somewhere that for every dollar they're making they spend almost $8, and it looks like their big plan to recoup these costs is subscription services like “Vine but AI-generated content”.
(I don't think AI is going anywhere, I just think OpenAI specifically is not long for this world.)
Sam was saying "we don't need a bailout" while also couching it in language meant to convince you that giving them a bailout is actually a good idea.

I hate this gay Jewish lizard.
 
Iirc OpenAI was openly begging for a bailout at the end of '25.
Also, I saw a number somewhere that for every dollar they're making they spend almost $8, and it looks like their big plan to recoup these costs is subscription services like “Vine but AI-generated content”.
(I don't think AI is going anywhere, I just think OpenAI specifically is not long for this world.)
people: needing a $400 GPU to do real time raytracing is just insane really
also people: needing a $400,000 GPU server to run a semi-retarded chatbot makes perfect economic sense
 

(This is a 2h17m3s podcast)
Lies, lies, lies...

Meanwhile, I dropped the hammer on an Arc B580 (Intel Limited Edition) last week and got it today. Currently running it in a Frankenstein eGPU setup, because the lame Ryzen 3400G has a bug that leaves GPUs stuck at x1 instead of letting them run at x8, but OCuLink for some reason manages to bump this to x4. I will upgrade to a 5700X by the weekend, which will hopefully solve this issue and give me some PCIe 4.0, too.

A few things I can already say about the Intel GPU, in case anyone is considering buying one (or a possible future B770/B780...):
- As expected, VRAM is not an issue. 12 GiB are more than enough for this type of GPU. I didn't see it go above ~9.5 GiB in Cyberpunk even with Path Tracing. Very happy with that.
- HDR looks better than on NVidia. The colors are a tad warmer and more on point.
- HDR is, however, a bit buggy overall, and there seems to be no way to force it for games that do not have native HDR support, unlike on NVidia, where I can just drop a few files into the game's folder to get HDR support.
- SDR colors and gamma also seem a bit better tuned.
- MSI Afterburner does not support Intel GPUs beyond pure monitoring, but Intel's own app allows tuning to quite some extent, so that's not an issue.
- At PCIe 3.0 x1, it has all kinds of issues with scrolling and other things. At x4, all is fine. For comparison, the GTX 1070 has no issues with either.
- Not sure about CPU overhead as I cannot measure it reliably ATM due to the limitations mentioned above, but 4K Ultra settings non-RT w/XeSS Auto and XeSS framegen had both CPU and GPU running at around 80% load, producing 80 fps (40 real, 40 fake).
- DSC didn't seem as crappy as on NVidia at first, but it does come with certain limitations as well, like disabling integer scaling.
- There are fewer QoL features than on NVidia. There is no equivalent to RTX video enhancement, for example, so 1080p video doesn't look as great on a 4K screen.
- The card is very quiet and cool running, and it didn't get above 60°C even at full load inside the PC case.
- VRR causes constant desktop flicker on my OLED. Unlike my NVidia cards, the Arc does properly support it for the whole desktop (but I don't recommend enabling it).
- Some games do not offer fullscreen mode, while they do on NVidia. Because of that, you cannot use VRR in them if you've disabled it for the desktop. Intel really need to fix this.
- Total board power consumption is mostly around 150-160 watts when gaming. The power limit can be raised from the default 190 to up to 228 watts. When run as an eGPU, this card needs at least a 250W PSU for it not to crash; it did crash with my 200W PSU, which is otherwise sufficient for the 1070 (rough headroom math at the end of this post). I now run it with the 8-pin connector connected to the PC's PSU.
- Board power draw on idle is quite high at almost 40 watts (Windows desktop at 4K 240 Hz via DP).

Obviously, these are just some preliminary findings from a shoddy setup. There's good, bad, and ugly. For the €220 I paid for it, it seems good enough, though.
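On the PSU point, a rough headroom check with the numbers above (the transient multiplier and dock overhead are my guesses, not measurements):

```python
# Why a 200W PSU crashes while a 250W one doesn't: sustained board power
# isn't the whole story, since GPUs spike above it for milliseconds at a time.
BOARD_POWER_W = 190   # B580 default power limit (from the list above)
TRANSIENT     = 1.2   # assumed short-spike multiplier, my guess
DOCK_W        = 10    # assumed OCuLink dock overhead, my guess

required_w = BOARD_POWER_W * TRANSIENT + DOCK_W  # ~238 W worst case
for psu_w in (200, 250):
    verdict = "ok" if psu_w >= required_w else "crash risk"
    print(f"{psu_w}W PSU vs ~{required_w:.0f}W worst case: {verdict}")
```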
 
Iirc OpenAI was openly begging for a bailout at end of 25.
kind of weird for him to be asking for bailouts that early, isn't it?
(attached screenshot)
nevermind, just jewish shit...
Meanwhile, I dropped the hammer on an Arc B580 (Intel Limited Edition) last week and got it today. Currently running it in a Frankenstein eGPU setup, because the lame Ryzen 3400G has a bug that leaves GPUs stuck at x1 instead of letting them run at x8, but OCuLink for some reason manages to bump this to x4. I will upgrade to a 5700X by the weekend, which will hopefully solve this issue and give me some PCIe 4.0, too.
Good going, just mind the TDP. Also, picking an AMD APU if you're going to use a dGPU isn't even redundancy, given the issues you're having; it's far better to use the x16 slot for some shit like an extra NVMe board with a SATA connector to get better storage. But then again, clankers have been upping the prices of literally everything, so there's that.

I memba seeing on preddit, of all places, that if you pick an APU, you're making a build that doesn't need a GPU, and the guy was right for once. However, the speeds and limits are abysmal for now, so I got a 3400G kit built for a fren instead of myself, because the prices were good and he paid anyway...
 
Good going, just mind the TDP. Also, picking an AMD APU if you're going to use a dGPU isn't even redundancy, given the issues you're having; it's far better to use the x16 slot for some shit like an extra NVMe board with a SATA connector to get better storage. But then again, clankers have been upping the prices of literally everything, so there's that.

I memba seeing on preddit, of all places, that if you pick an APU, you're making a build that doesn't need a GPU, and the guy was right for once. However, the speeds and limits are abysmal for now, so I got a 3400G kit built for a fren instead of myself, because the prices were good and he paid anyway...
Not sure if I'm following you correctly, but the APU should work with up to two dGPUs, and it may be a bug with the mainboard that it doesn't work correctly with even one. The new CPU may solve this, if I'm lucky, but if I'm not, I'll have to buy a new mobo. Oh, well.

I've read that ASUS were replacing boards where the x1 bug occurred, but this one was used from 2019 until last year without a dGPU, at an office, so no one would have noticed. I thought at first the 1070 might just be a bit picky, because the APU only gives it 8 lanes, but the B580 is an x8 card, so it should work without issues, yet here we are. Now, I have another desktop APU, one of AMD's first from 2011, and its mobo has one PCIe 2.0 x16 slot which lets the 1070 run on all 16 lanes (the 1070 is a PCIe 3.0 x16 card), so that's how I know that card is in order.

And BTW, the simple fact that OCuLink will let the card initialize at x4 if you give it a few seconds after turning on the eGPU tells me that all of this could be a simple timing error on the mobo side; turning on the PC immediately after turning on the eGPU makes the card initialize at just x1. So I'm not too hopeful regarding the new CPU, but we'll see. I got this PC for free, I didn't pick the APU or anything specifically.
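If anyone wants to check for the same degraded-link symptom on Linux, the negotiated width is exposed in sysfs (on Windows, GPU-Z's Bus Interface field shows the same thing). A minimal sketch:

```python
# List display controllers and compare negotiated vs. maximum PCIe link width.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if not (dev / "class").read_text().startswith("0x03"):  # display class
        continue
    cur_f, max_f = dev / "current_link_width", dev / "max_link_width"
    if not (cur_f.exists() and max_f.exists()):
        continue  # legacy PCI devices lack these attributes
    cur, mx = cur_f.read_text().strip(), max_f.read_text().strip()
    flag = "  <-- degraded link!" if cur != mx else ""
    print(f"{dev.name}: running x{cur} of x{mx}{flag}")
```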

I ought to receive a few JET-5608 SO-DIMM adapters next month so I can test how my laptop RAM will perform in the desktop PC. AM4 lives, and there's still a lot to discover.
 
VideoCardz: Kioxia exec says this year’s NAND is already sold out, ‘cheap’ 1TB SSD bargains are over

VideoCardz: Intel set to launch Arc Pro B70 with BMG-G31 GPU

To recap: the Intel Arc Pro B50 is the low-profile 70W card with 16 GB VRAM that launched at $350. While it's $377 on Newegg, B&H Photo still has it in stock at $350. It has 16 Xe cores, fewer than the B570's 18.

Intel Arc Pro B60 is the B580 equivalent (20 Xe cores), with 4x the L2 cache of the B50, and 24 GB VRAM. This has no MSRP but has been in the $600-800 range. AIBs have been allowed to make dual card versions with 48 GB VRAM.

(Image: Arc Pro B70 and B65 leak)

If it's real, the Arc Pro B70 would be using a larger BMG-G31 die and 256-bit bus, with 32 GB VRAM. VideoCardz thinks it will have 40 Xe cores.

A different leaker claims there will be a B65 underneath it. My guess is that it would have 32 Xe cores if the BMG-G31 die has 40.
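For reference, the stack as recapped, with core counts relative to the consumer B580's 20 Xe (the B65 and B70 figures are leak and guesswork, as noted):

```python
# Xe core counts per the recap; B70 is VideoCardz's figure, B65 is my guess.
lineup = {"Arc Pro B50": 16, "Arc Pro B60": 20, "Arc Pro B65": 32, "Arc Pro B70": 40}
B580_CORES = 20
for name, cores in lineup.items():
    print(f"{name}: {cores} Xe cores ({cores / B580_CORES:.1f}x a B580)")
```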
 
Yes, really. R&D is investment in future revenue streams. You spend a million on R&D today to make $2 million tomorrow. OpenAI's losses come from the cost of its current revenue--electricity, datacenter space, and hardware merely to serve its current customers. It spends a million today to make half a million today. This results in an opex growth curve that fundamentally always outpaces its revenue growth. That's why it's screwed.
The theory for why it's viable is that they "win" the AI race, and once they reach the saturation point and don't need to build more, the rest is sitting back and letting the money roll in as AI usage increases and network effects lead to everyone just staying with them.

But the most obvious flaw in this is that there's no real evidence this is happening. For a start, model development is still so much in flux that at any moment some new pretender to the throne might appear with something better (and probably already has). For another, the network effect isn't very strong here. If today I want to use Claude and tomorrow something different, that's not a big wrench. I lose a little bit of long-term memory of my interactions or have to change a few API endpoints? Frankly, I don't care.
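To make the "few API endpoints" point concrete: most rivals deliberately expose OpenAI-compatible APIs, so switching is often little more than a new base URL and key. A sketch with the openai Python client (the URL and model name below are made up):

```python
from openai import OpenAI

# Point the standard client at a hypothetical competing provider's
# OpenAI-compatible endpoint; nothing else in the calling code changes.
client = OpenAI(
    base_url="https://api.other-provider.example/v1",  # swap vendors here
    api_key="sk-...",                                  # ...and the key
)
resp = client.chat.completions.create(
    model="their-flagship-model",  # hypothetical model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```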

All of which is pretty much to agree with you, but to explain how they think it'll work. They're not viewing this as a long-term competitive market. They're viewing it like the European powers' "Scramble for Africa" in the 1800s. They're the Europeans. And we're all the Africans.

Unfortunately for them, the analogy is not a good one.

To recap: the Intel Arc Pro B50 is the low-profile 70W card with 16 GB VRAM that launched at $350. While it's $377 on Newegg, B&H Photo still has it in stock at $350. It has 16 Xe cores, fewer than the B570's 18.
I'd be in the market for that if it wasn't $350.
 
(attached screenshot)
Hmm… Scam, mistype, or what?

But the most obvious flaw in this is that there's no real evidence this is happening. For a start, model development is still so much in flux that at any moment some new pretender to the throne might appear with something better (and probably already has). For another, the network effect isn't very strong here. If today I want to use Claude and tomorrow something different, that's not a big wrench. I lose a little bit of long-term memory of my interactions or have to change a few API endpoints? Frankly, I don't care.
The more I’ve looked into seriously running AIs at home, the less I’m convinced that any of these AI datacenter companies are going to come out on top. Smaller models are able to be much more power efficient without sacrificing much in terms of quality of the results. You could imagine a company running a bunch of them locally on their employee’s workstations, or even spending $5-10k on a really high end AI server in their building, plus however much on a support contract, instead of $100/month forever for all of their engineers. At some point, local just makes way more sense than remote. Especially with how hungry these companies are for user data, I wouldn’t expect any of them to respect their enterprise customers’ privacy expectations. Facebook broke the law downloading LibGen to train its servers, who’s to say they won’t break their contracts to absorb client data into their servers?
 