(This is a 2h17m3s podcast)
What do you even have on your server? I have like 50TB of stuff and am running out of things to download. Or do you have a good source for reality TV?
Serverpartdeals drives have exploded in price. The 22TB drives used to be $220; now they're almost $400. Most of my drives for my Plex server are 22-24TB. I have about 600TB of drives for my Plex server right now. Guess I better buy more before they go up even higher.
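For scale, a quick cost-per-terabyte comparison using the prices mentioned above (a rough sketch; the prices are the post's figures, not current quotes):

```python
# Rough $/TB for a 22 TB drive at the old and new prices from the post.
old_price = 220    # $ per 22 TB drive, before the spike
new_price = 400    # $ per 22 TB drive, now ("almost $400")
capacity_tb = 22

print(round(old_price / capacity_tb, 2))  # 10.0  $/TB
print(round(new_price / capacity_tb, 2))  # 18.18 $/TB
```

At ~600TB, refilling an array at the new price costs nearly double per terabyte.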
> What do you even have on your server? I have like 50TB of stuff and am running out of things to download. Or do you have a good source for reality TV?

Probably lots and lots of 4K movies. That, or a lot of empty space.
Not really. R&D typically has huge operational losses. OpenAI is essentially a giant research project moneywise, but fundamentally isn’t any different.
> Based on my calculations, in the next six months, OpenAI needs at least $50 billion to build a gigawatt of data centers for Broadcom, and to hit its goal of 10 gigawatts of data centers by end of 2029, at least another $200 billion in the next 12 months, not including at least $50 billion to build a gigawatt of data centers for NVIDIA, $40 billion to pay for its 2026 compute, at least $50 billion to buy chips and build a gigawatt of data centers for AMD, at least $500 million to build its consumer device (and they can't seem to work out what to build), and at least a billion dollars to hand off to ARM for a CPU to go with the new chips from Broadcom.
> That's $391.5 billion! That's $23.5 billion more than the $368 billion of global venture capital raised in 2024! That's nearly 11 times Uber's total lifetime funding ($35.8 billion), or 5.7 times the $67.6 billion in capital expenditures that Amazon spent building Amazon Web Services!
> On top of all of this are OpenAI's other costs. According to The Information, OpenAI spent $2 billion on sales and marketing alone in the first half of 2025, and likely spends billions of dollars on salaries, meaning it'll likely need at least another $10 billion on top. As this is a vague cost, I'm going with a rounded $400 billion, though I believe it's actually going to be more.
> And to be clear, to complete these deals by the end of 2026, OpenAI needs large swaths of this money by February 2026.

I hope everyone involved in this scheme gets slaughtered like pigs.
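The article's running total can be sanity-checked with quick arithmetic (figures in billions of dollars, taken from the quoted paragraphs above; the labels are my own):

```python
# Funding needs quoted in the article, in $bn.
needs = {
    "broadcom_datacenters": 50,    # 1 GW for Broadcom in the next six months
    "expansion_to_10gw": 200,      # toward 10 GW by end of 2029
    "nvidia_datacenters": 50,      # 1 GW for NVIDIA
    "compute_2026": 40,            # 2026 compute bill
    "amd_chips_datacenters": 50,   # chips plus 1 GW for AMD
    "consumer_device": 0.5,        # the consumer device
    "arm_cpu": 1,                  # CPU from ARM
}
total = sum(needs.values())
print(total)                       # 391.5
print(round(total - 368, 1))       # 23.5 more than 2024 global VC
```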
> Losing money is completely normal for a business starting out. Ideally a business in growth stage should have a profit of $0.

Iirc OpenAI was openly begging for a bailout at the end of '25.
> Iirc OpenAI was openly begging for a bailout at the end of '25.
> Also i saw a number somewhere that for every dollar they're making they spend almost $8, and it looks like their big plan to recoup these costs is subscription services like "Vine but AI-generated content".
> (I dont think AI is going anywhere, i just think OpenAI specifically is not long for this world.)

Of all the smarmy fucks that have been ruining the tech industry, and the world with it, for the whole of their careers, it would bring me the most pleasure to see Mr "Psychopathic Messiah Complex Personified" himself humbled. Hopefully, if and when this happens, Mr Altman crashes out so hard he kills himself. That would be at least one good thing to come out of this whole saga.
I just can't listen to the Hardware Unboxed dudes. They've always got this fuckin' attitude I can sense that just grates on me. Like they think they're a major influence and clout broker in this space that everyone should take seriously, in between bouts of getting into fights on Twitter with way smaller channels like Hardware Cajeets.
> Iirc OpenAI was openly begging for a bailout at the end of '25.
> Also i saw a number somewhere that for every dollar they're making they spend almost $8, and it looks like their big plan to recoup these costs is subscription services like "Vine but AI-generated content".
> (I dont think AI is going anywhere, i just think OpenAI specifically is not long for this world.)

Sam was saying "we don't need a bailout" while also trying to couch it in language that convinces you giving them a bailout is actually a good idea.
> What do you even have on your server? I have like 50TB of stuff and am running out of things to download. Or do you have a good source for reality TV?

All 4K remuxes/highest-bitrate versions of all movies and TV shows.
> Iirc OpenAI was openly begging for a bailout at the end of '25.
> Also i saw a number somewhere that for every dollar they're making they spend almost $8, and it looks like their big plan to recoup these costs is subscription services like "Vine but AI-generated content".
> (I dont think AI is going anywhere, i just think OpenAI specifically is not long for this world.)

people: needing a $400 GPU to do real-time raytracing is just insane really
Lies, lies, lies...
> Iirc OpenAI was openly begging for a bailout at the end of '25.

kind of weird for him to be asking for bailouts that early, isn't it?

> Meanwhile, I dropped the hammer on an Arc B580 (Intel Limited Edition) last week and got it today. Currently running it in a Frankenstein eGPU setup because the lame Ryzen 3400G has a bug that leaves GPUs stuck at x1 instead of running at x8, but OCuLink for some reason manages to bump this to x4. I will upgrade to a 5700X by the weekend, which will hopefully solve this issue and give me some PCIe 4.0, too.

Good going, just mind the TDP. Also, picking an AMD APU if you are going to use a dGPU isn't even redundancy, given the issues you are having; it's far better to use the x16 slot for some shit like an extra NVMe board with a SATA connector to get better storage. But then again, clankers have been upping the prices of literally everything, so there's that.
> Good going, just mind the TDP. Also, picking an AMD APU if you are going to use a dGPU isn't even redundancy, given the issues you are having; it's far better to use the x16 slot for some shit like an extra NVMe board with a SATA connector to get better storage. But then again, clankers have been upping the prices of literally everything, so there's that.

Not sure if I'm following you correctly, but the APU should work with up to two dGPUs, and it may be a bug with the mainboard that it doesn't even work correctly with one. The new CPU may solve this, if I'm lucky; if I'm not, I'll have to buy a new mobo. Oh, well.
i memba seeing on preddit of all places that if you pick an APU, you are making a build that doesn't need a GPU, and the guy was right for once. However, the speeds and limits are abysmal for now, so i got a 3400G kit built for a fren instead of myself because the prices were good and he paid anyway...

> Yes, really. R&D is investment in future revenue streams. You spend a million on R&D today to make $2 million tomorrow. OpenAI's losses come from the cost of its current revenue: electricity, datacenter space, and hardware merely to serve its current customers. It spends a million today to make half a million today. This results in an opex growth curve that fundamentally always outpaces its revenue growth. That's why it's screwed.

The theory of why it's viable would be the idea that they "win" the AI race, and once they reach the saturation point and don't need to build more, the rest is sitting back and letting the money roll in as AI usage increases and network effects lead to everyone just staying with them.
> To recap: Intel Arc Pro B50 is the low-profile 70W card with 16 GB VRAM that launched at $350. While it's $377 on Newegg, B&H Photo still has it in stock at $350. It has 16 Xe cores, fewer than the B570's 18.

I'd be in the market for that if it wasn't $350.

> But the most obvious flaw in this is that there's no real evidence this is happening. For a start, model development is still so much in flux that at any moment some new pretender to the throne might appear with something better (and probably already has). For a second, the network effect isn't very strong with this. If today I want to use Claude and tomorrow something different, that's not a big wrench to move. I lose a little bit of long-term memory of my interactions or have to change a few API endpoints? Frankly, I don't care.

The more I've looked into seriously running AIs at home, the less convinced I am that any of these AI datacenter companies are going to come out on top. Smaller models can be much more power-efficient without sacrificing much quality in the results. You could imagine a company running a bunch of them locally on its employees' workstations, or even spending $5-10k on a really high-end AI server in its building, plus however much on a support contract, instead of $100/month forever for all of its engineers. At some point, local just makes way more sense than remote. Especially with how hungry these companies are for user data, I wouldn't expect any of them to respect their enterprise customers' privacy expectations. Facebook broke the law downloading LibGen to train its models; who's to say they won't break their contracts to absorb client data into their models?
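A back-of-envelope break-even for the local-vs-subscription comparison above (a sketch only; the team size is my own assumption, the other figures come from the post):

```python
# When does a one-off local AI server beat a per-seat subscription?
server_cost = 10_000    # $, high end of the post's $5-10k range
subscription = 100      # $/month per engineer, from the post
engineers = 20          # assumed team size (my assumption)

months_to_break_even = server_cost / (subscription * engineers)
print(months_to_break_even)  # 5.0 months
```

Under these assumptions the hardware pays for itself in five months, before counting the support contract on one side or the privacy exposure on the other.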
> Scam, mistype, or what?

chink mystery PCB, A770 exists.
> chink mystery PCB, A770 exists.

Ah shit, I'm blind.