Build a Farm: Hardware considerations

  • 🐕 I am attempting to get the site running as fast as possible. If you are experiencing slow page load times, please report it.
For those who have never been inside a datacenter: datacenters, at least halfway decent ones, have raised floors with perforated tiles. A large air conditioning unit pushes very cold air underneath the floor, and it rises up through the perforated tiles. That cold air is then sucked in through the front of the servers and exhausted out the back (hot and cold "rows"). Most modern servers have sufficient cooling on their own, and coupled with the cold air intake, I wouldn't consider this a major concern. I keep three older Dell PowerEdge R710s in my garage that stay under decent load without any thermal issues, and I live in a part of the world where summers are incredibly hot.

I read something about anti-DDoS above. That's a myth. Companies such as Arbor push hardware appliances that cost hundreds of thousands of dollars and act as a temporary surge protector at best. Recall 2013 and the ntpdc monlist (NTP amplification) attacks impacting tons of FreeBSD hosts, and all of the ISPs claiming they could withstand gigabit attacks. If someone wants to bring you down and has the means to do so, they will.
 
10Gb/40Gb network speeds: instead of 1Gb connectivity between your devices, you have 10Gb or 40Gb ports passing traffic, which speeds up the shuffling of data between the front and back ends. You can get 10Gb/40Gb switches fairly inexpensively these days (Brocade/Arista) on eBay, and ESPECIALLY in Vegas.
We're not anywhere near 1 Gbps currently.

I may have a line to a person who could help you out, depending on the datacenter you are in, as I know they are in Vegas as well. I'll have to see what is out there for Dell/HP/SM equipment.
Sounds promising. Let me know if he wants to get in touch. A payment plan opens up a lot of options BTW.
 
My current home server for work is as follows:
AMD EPYC 7301
Supermicro MBD-H11SSL-NC-B
Crucial 64GB ECC DDR4 2133
Mellanox ConnectX-4 Lx EN
3x IronWolf 10TB

I have an old server rack I haven't had to upgrade from; it's one of those 20-year-old models that could act as a load-bearing wall if you needed it to, just due to all of the steel. The case is an iStarUSA D-407PL.

I mostly needed hard drive space for my work and a good network card to transfer data around my house as fast as possible without having to wait; waiting means less work done that day and less profit. RAM is only at 64GB because it was cheap at the time and I figured I may as well go for it in case I ever need it; otherwise I could have easily settled for 16GB. I would have gone for a dual-processor motherboard, but I don't need that kind of power so it would have been a waste, and currently they run like ass.

Still can't upgrade to 16TB NAS drives; no one will let me buy them in bulk, and any time I can get them they go into my other systems.

The system is still a work in progress, but for now I just use it for home backup and storage for work. Most of the time it's just archiving older programs I've worked on for my job in case I ever have to go back to them. Eventually one upgrade will be a Tesla GPU for the other work I do; I just can't justify getting one right now due to the pricing. If I do get one, it would probably be the K80, since P100s and V100s are way too much. The other thing that needs to be changed is the case, but I can customize the chassis to allow a liquid cooler for the processor; currently it just doesn't work. Right now I'm using a Noctua NH-U12S to keep it cool. Temps tend to stick around 55°C (it's in a terrible location, but it's the only place I could put my server; temps should be around 49°C).
 
Here's my take as someone who works in the server space with a lot of very high end HA sites:

I'll echo the sentiment someone else had: this is not the place for Threadrippers or whatever you run your home Minecraft server on.

Hardware RAID is almost mandatory, and thankfully most HP or Dell servers have some sort of integrated RAID controller on the system board. These servers also generally have pretty decent built-in tools for remote monitoring in case shit starts getting fucky.

A decent HP server like a 300-series (1U-2U in height) Gen8 running two Xeons (E5-26xx) with 128GB of memory shouldn't run you more than $2-3k with hard drives included. Same goes for the Dell x20/x30 series (R720/R730).

Feel free to DM me or whatever and I can talk shop in a little more detail.
 

I said "Fuck home PC shit".

N00l has a budget of $5k, so try to weave some equipment around that price point. You could probably do this whole shindig on four R6x0s and a pair of R7x0s; fill in the x with whatever model is affordable. Supermicro is also a vendor worth considering, as you can still get parts for them inexpensively and they are as reliable as Dell/HP.

This site:

Will scour eBay for enterprise servers. The kicker here is networking, as I'm trying to convince N00l to upgrade to 10Gb/40Gb switching using SFPs to saturate the fuck out of the SSDs (or NVMe drives if applicable).
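
To put rough numbers on why the 1Gb links are the choke point, here's a quick back-of-envelope sketch (Python; the per-drive throughput figures are assumed vendor-sheet ballpark numbers, not anything measured on N00l's boxes):

```python
# Rough comparison of link line rate vs. sustained drive read speed.
# All drive figures below are assumed ballpark numbers for illustration.

LINK_SPEEDS_GBPS = {"1GbE": 1, "10GbE": 10, "40GbE": 40}

DRIVE_MB_S = {
    "10k SAS HDD (sequential)": 200,
    "SATA SSD": 550,
    "NVMe SSD": 3000,
}

for link, gbps in LINK_SPEEDS_GBPS.items():
    line_rate_mb_s = gbps * 1000 / 8  # line rate in MB/s, ignoring protocol overhead
    for drive, mb_s in DRIVE_MB_S.items():
        print(f"{link} (~{line_rate_mb_s:.0f} MB/s) vs {drive} ({mb_s} MB/s): "
              f"link can carry ~{line_rate_mb_s / mb_s:.1f} drives' worth of reads")
```

Even a single SATA SSD can push roughly four times what a 1GbE port can carry, so the storage upgrade and the network upgrade really go together.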
 
I said "Fuck home PC shit".

N00l has a budget of $5k, try to weave some equipment around that price point. Could probably do this whole shindig on 4 R6x0's and a pair of R7x0's. Fill in X to model that is affordable. SuperMicro is also a vendor worth considering as you can still get parts of them inexpensively and they are as reliable as Dell/HP.

This site:

Will scour ebay for enterprise servers. The kicker here is networking as I'm trying to convice N00l to upgrade to 10gb/40gb switching using SFP's to saturate the fuck out of the SSD's (or NVME's if applicable) .

10K SAS drives are fairly inexpensive (900GB drives are available for $40-$60 a pop), and most of these servers have ~8 slots, which should give some room to work with when it comes to RAID (6 or 10). SSDs are nice but will eat up the majority of any server budget very quickly.
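
For a rough idea of what 8 bays of those 900GB drives actually yield, here's a small sketch (Python; it just applies the standard RAID capacity rules and ignores hot spares and formatting overhead):

```python
# Usable capacity for a few RAID layouts, using the 8 x 900GB example above.
# Simplified: ignores filesystem/controller overhead and hot spares.

def usable_tb(drives: int, size_gb: float, level: str) -> float:
    if level == "RAID 10":      # striped mirrors: half the raw space
        data_drives = drives // 2
    elif level == "RAID 6":     # two drives' worth of parity
        data_drives = drives - 2
    elif level == "RAID 1":     # everything mirrored onto one drive's worth
        data_drives = 1
    else:                       # RAID 0 / JBOD: all of it
        data_drives = drives
    return data_drives * size_gb / 1000

for level in ("RAID 0", "RAID 6", "RAID 10"):
    print(f"8 x 900GB in {level}: ~{usable_tb(8, 900, level):.1f} TB usable")
# RAID 6 -> ~5.4 TB usable, RAID 10 -> ~3.6 TB usable from the same 8 bays.
```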

As for the switches, I'd be curious to see the current network saturation.
 
I guarantee you, with the HDD being the bottleneck, you really need to switch to SSDs, preferably RAID 1 or RAID 10 (if you can somehow afford 4x). RAID 1 alone will (theoretically) double read speed. They are costly, but they are awesome at completely random access, which is what those attachments are. Just switching from a mechanical HDD to an SSD can easily quadruple drive read speed, if not more, when non-sequential access is regularly in use.

https://www.newegg.com/samsung-4tb/p/0D9-0009-008X3 - an example if you're going consumer grade, which might be fine forever in a RAID 1. It comes in 1TB, 2TB, and 4TB.
 
There is no way I am buying enough SSD to cover the existing consumption (1.3 TB) and RAID it properly. That is at least 2n, but probably 4n for the redundancy we need. A single 2TB SSD costs too much and isn't much more spacious than what we already have. I'd much rather have 4 x 10k 4TB drives and RAID 10 them. If we ever consume 4TB, it's probably time for a proper dedicated storage device.
 
As for the switches, I'd be curious to see the current network saturation.
We average about 100-200 Mbps. Without Cloudflare we'd probably do a lot more, but all of our shit is cached and we don't stream much video.
 
4 x 4TB SSDs would be $4k or so. If the bottleneck REALLY IS disk access time (seeks or throughput; there's a difference), it would be the way to go (upgrade the existing machine to SSDs across the board). 4n mirroring is of course no replacement for actual backups; it just gives you more time to replace a failed device. A 4x mirror WOULD increase READ speeds by up to 4x, however.

Roughly 200 MB/s is about the maximum for sustained sequential reads from a spinning disk, but effective throughput is much, much less if there's lots of seeking/thrashing.
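
To make the seeks-versus-throughput distinction concrete, here's a rough sketch (Python; the IOPS figure and the average attachment size are assumed ballpark values, not measurements from the existing server):

```python
# Why the seek/throughput difference matters: a spinning disk that streams
# ~200 MB/s sequentially collapses to a tiny fraction of that when every read
# is a seek, which is what serving lots of small attachments looks like.
# Figures below are assumed ballpark numbers.

SEQUENTIAL_MB_S = 200    # sustained sequential read for a modern HDD
RANDOM_IOPS = 150        # random reads/second a 10k RPM disk can roughly manage
AVG_ATTACHMENT_KB = 64   # assumed average object size

random_mb_s = RANDOM_IOPS * AVG_ATTACHMENT_KB / 1024
print(f"Sequential: ~{SEQUENTIAL_MB_S} MB/s")
print(f"Random {AVG_ATTACHMENT_KB}KB reads: ~{random_mb_s:.1f} MB/s "
      f"({random_mb_s / SEQUENTIAL_MB_S:.0%} of sequential)")
# Roughly 9 MB/s vs 200 MB/s: an SSD (or a big RAM cache) wins on access
# time, not raw streaming speed.
```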

Or 4TB of RAM to cache it all. Can the existing server take more RAM?

With Cloudflare on, does the existing server max out/redline RAM or CPU?
 
There is no way I am buying enough SSD to cover the existing consumption (1.3 TB) and RAID it properly. That is at least 2n, but probably 4n for the redundancy we need. A single 2TB SSD costs too much and isn't much more spacious than what we already have. I'd much rather have 4 x 10k 4TB drives and RAID 10 them. If we ever consume 4TB, it's probably time for a proper dedicated storage device.
There are bunches of older enterprise SSDs floating around at affordable prices, assuming whatever hardware you end up with supports full-height PCIe cards. Here's an example: 3.2TB for under $300.
 
Just gonna add this here from the Error 2020 thread so I don't have to type it out again.
I'm suggesting using it as a long-term, fast storage solution for holding static data.

Just as a suggestion, you could look into single-level cell (SLC) solid-state drives.

SLC is "generally used in commercial and industrial applications and embedded systems that require high performance and long-term reliability.
SLC uses a high grade of flash technology that provides good performance and endurance, but the tradeoff is its high price.
SLC flash is typically more than twice the price of MultiLayer Cell (MLC) flash [memory]." (Source)

Getting a 10TiB SLC drive would not only solve storage issues for many years to come, the drive itself would also last that long (the expected lifespan of these drives is 20-30 years under moderate load).
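
For anyone who wants to sanity-check the endurance side of that, here's a rough write-endurance sketch (Python; the P/E cycle counts, daily write volume, and write amplification factor are all assumed ballpark figures, not specs for any particular drive or this site's real workload):

```python
# Rough endurance math behind the SLC pitch. All inputs are assumed
# ballpark figures for illustration only.

def drive_life_years(capacity_tb: float, pe_cycles: int,
                     daily_writes_tb: float, write_amplification: float = 2.0) -> float:
    """Years until the rated program/erase cycles are exhausted."""
    total_writes_tb = capacity_tb * pe_cycles            # total TB the NAND can absorb
    burned_per_year = daily_writes_tb * write_amplification * 365
    return total_writes_tb / burned_per_year

for name, cycles in (("SLC", 100_000), ("MLC", 3_000), ("TLC", 1_000)):
    years = drive_life_years(capacity_tb=10, pe_cycles=cycles, daily_writes_tb=0.1)
    print(f"10TB {name} drive, ~0.1 TB written/day: ~{years:,.0f} years of rated endurance")
# SLC's much higher cycle count is what's behind longevity claims like the
# 20-30 years above; the cheaper NAND types still work out to decades at
# modest write rates.
```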

It's the most expensive solution, but will have the least headache (in theory). But you sail this boat, so you determine a Final Solution for The Error 2020 Question.
 
There are bunches of older enterprise SSDs floating around at affordable prices, assuming whatever hardware you end up with supports full-height PCIe cards. Here's an example: 3.2TB for under $300.
I like the idea in theory, but for that kind of money on something they say is tested and working, you'd think it wouldn't be that hard to attach a SMART printout.
4 x 4TB SSDs would be $4k or so. If the bottleneck REALLY IS disk access time (seeks or throughput; there's a difference), it would be the way to go (upgrade the existing machine to SSDs across the board). 4n mirroring is of course no replacement for actual backups; it just gives you more time to replace a failed device. A 4x mirror WOULD increase READ speeds by up to 4x, however.
Fair point, and after doing some research I now know how to check disk queue stats on Linux (using iostat). Thanks.
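
For anyone else following along, the queue and utilization numbers iostat reports can also be pulled straight from /proc/diskstats. A minimal sketch (Python; it assumes the classic 14-field layout, ignores the extra discard/flush fields newer kernels append, and the device name "sda" is just a placeholder):

```python
# Minimal version of what iostat's %util and queue-size columns compute,
# read straight from /proc/diskstats.
import time

DEVICE = "sda"       # assumed device name - adjust for your disks
INTERVAL_S = 1.0

def read_stats(device: str):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                in_flight = int(fields[11])      # requests currently outstanding
                io_ticks_ms = int(fields[12])    # total time the device was busy
                weighted_ms = int(fields[13])    # busy time weighted by queue depth
                return io_ticks_ms, weighted_ms, in_flight
    raise ValueError(f"{device} not found in /proc/diskstats")

busy1, weighted1, _ = read_stats(DEVICE)
time.sleep(INTERVAL_S)
busy2, weighted2, in_flight = read_stats(DEVICE)

interval_ms = INTERVAL_S * 1000
print(f"{DEVICE}: util ~{100 * (busy2 - busy1) / interval_ms:.1f}%  "
      f"avg queue ~{(weighted2 - weighted1) / interval_ms:.2f}  "
      f"in flight now: {in_flight}")
```
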
Datacenters, at least halfway decent ones
This reminds me of Jason Scott talking about running an ISP out of a chickenwire cage in the storage area of a Pizza Hut. I hope Null has considered this excellent money-saving solution.
 