GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Server nerd bros: the Farms is getting upgraded soon and Josh is getting a massive budget. If he were to upgrade the core, what should he get? The newest Xeon or an AMD Epyc?

POWER. Best build quality in the business, eight threads per core (SMT8), bulletproof security, the most memory bandwidth per socket. Just stupidly expensive.
 
I'm pretty sure his top priority is reliable storage, not cores, though he is building a nice wish list with the extra cash.

AMD’s Unreleased Radeon RX 7500 Prototype Leaked: A 6GB Card That Never Made It Into The Market (archive)

AMD Claims TSMC’s 2nm Process Is Superior To All Alternatives Out There; Reveals Possibility of Adopting Samsung As Well (archive)

Shit like this keeps me coming back to Wccfkek.

I have always wanted to see AMD make some cheap garbage APUs at Samsung so they can have a real Alder Lake-N/Wildcat Lake competitor with volume.

Computex 2025: Watch NVIDIA CEO Jensen Huang deliver the opening keynote today
Huang’s keynote is scheduled for 11PM ET/ 8PM PT on May 18 (11AM on May 19 in Taiwan Time)
 
which would use systems with more cores.
No we really don't.

It's all about optimization. Fast fileservers usually don't need many cores since I/O is the bottleneck. Some data processing parallelizes well and can use more cores, but if we need more RAM then sometimes we'll need to run in parallel across multiple systems just to get enough RAM, and there's no sense buying a ton of cores. Sometimes it parallelizes like shit and we need fewer but faster cores. Virtualization workloads can be all over the place. For things like VMware and Kubernetes it's often just RAM that matters; whatever is running, especially in dev environments, mostly just needs to sit there and uses very little CPU much of the time.

I had one client who bought very small servers that would each run a single task. It was a super critical app (in their mind) and they wanted the minimum blast radius if a server went down.

The first thing we do when someone comes and says "I want to move my datacenter into the cloud." is to tell them they're an idiot. The second thing is to actually figure out what they need as far as cores/ram/storage. Buying a new server is exactly the same exercise in figuring out what you need and optimizing.

Source: I've been making this shit up for over 20 years now.
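Before buying anything, you can actually measure which way a box leans. A rough Linux-only sketch that samples /proc/stat twice to see whether the machine is burning CPU or sitting in iowait (the one-second window is arbitrary):

```python
# Rough Linux-only sketch: sample /proc/stat twice to estimate whether
# a box is CPU-bound or stuck waiting on I/O. The 1-second sample
# window is arbitrary; use a longer one for a less noisy picture.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()
    # Standard /proc/stat field order: user nice system idle iowait irq softirq
    return list(map(int, fields[1:8]))

before = cpu_times()
time.sleep(1)
after = cpu_times()
delta = [b - a for a, b in zip(before, after)]
total = sum(delta) or 1  # avoid divide-by-zero on an idle tick

busy = sum(delta[0:3]) / total   # user + nice + system
iowait = delta[4] / total        # time spent waiting on disk/network I/O
print(f"busy: {busy:.0%}  iowait: {iowait:.0%}")
```

High iowait with low busy means more cores won't help; faster storage will.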
 
No we really don't.

It's all about optimization. [...] Buying a new server is exactly the same exercise in figuring out what you need and optimizing.
Why not all? Fuck optimization. Pure raw power. Best core, max out the ram, max out all the drives. Make the most kino server for the most kino website.
 
He says he has an unfilled M.2 NVMe slot that he wants to put fast storage in for the database, which would greatly speed up the site.
This would be a bad idea, databases are write-intensive and M.2s are consumer garbage. You want SLC NAND for a database (actually you want at least two, so you get redundancy), and that pretty much means you need U.2 or U.3.
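For scale on the "write-intensive" point: drive endurance is usually rated as DWPD (drive writes per day over the warranty period) or TBW, and the gap between consumer and write-optimized parts is an order of magnitude. The capacities and ratings below are made-up illustrative numbers, not any specific drive's spec sheet:

```python
# Back-of-envelope endurance math. The DWPD-to-TBW conversion is the
# standard one; the capacities and ratings are illustrative, not real specs.
def tbw(capacity_gb, dwpd, warranty_years):
    """Total terabytes written the drive is rated for."""
    return capacity_gb * dwpd * 365 * warranty_years / 1000

print(tbw(480, 0.3, 5))   # consumer-class 0.3 DWPD: ~263 TBW
print(tbw(480, 10, 5))    # write-intensive/SLC-class 10 DWPD: 8760 TBW
```

A busy database can chew through the consumer budget in a couple of years; the write-optimized part barely notices.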
 
This would be a bad idea, databases are write-intensive and M.2s are consumer garbage. You want SLC NAND for a database (actually you want at least two, so you get redundancy), and that pretty much means you need U.2 or U.3.
Take it up with him, that's what I heard him mumble on the podcast.

"I have an empty M2 drive that I want to put an NT...MN...VNE... in, and then move the database over to that so that it is blazing, instead of fighting with the file server for disk I/O. That is my first... that's the top of my fucking list."
 
Take it up with him, that's what I heard him mumble on the podcast.
He can do as he likes, I'm not his mum. I think highly enough of him that I expect he'll research the issue before actually committing to anything.
I suppose he could stick a pair of Optanes in there. Those are available in M.2 and have good endurance (and exceptional 4K random performance). You do still want two of them, just in case.
 
This would be a bad idea, databases are write-intensive and M.2s are consumer garbage. You want SLC NAND for a database (actually you want at least two, so you get redundancy), and that pretty much means you need U.2 or U.3.
This post is a little more up-to-date and detailed on the drive hardware

Though I think he's had a drive failure and some sort of partial rebuild since then? So who knows what it looks like now. Those U.2 drives are not really "consumer" but they're also not really full blown datacenter drives either. I've had strange things happen with lower end U.2 drives, they are a real mixed bag.
 
This would be a bad idea, databases are write-intensive and M.2s are consumer garbage. You want SLC NAND for a database (actually you want at least two, so you get redundancy), and that pretty much means you need U.2 or U.3.
You can get an M.2-to-U.2/U.3 adapter, so you can still use the slot. Also, M.2 is just a form factor (the protocol is NVMe either way), and you can get M.2 drives with SLC NAND, but the 2.5" form factor of a U.2 drive allows better heat management.
 