Build a Farm: Hardware considerations

You are obviously more privy to the site's technology needs than anyone here, but for server equipment, especially the chassis and processors, I'd always shop around for used gear before throwing down on new. Server hardware depreciates incredibly quickly, and I'm sure the funds could be put to other uses. The difficulty with used gear is always actually receiving it.

You've outlined the specs of the hardware, so I'll try to piece together something for you. I'd also suggest reaching out to your colo provider to see if they have any servers or misc. hardware available for purchase. You wouldn't believe the trove of goods datacenters reclaim due to nonpayment.
 
We don't colo; I have a half-cab because I do my own networking.

I emailed my guy about it, but he hasn't gotten back to me, and the server feels like it's actively deteriorating every day. I'm starting to turn off important site features just to keep it from falling apart.
My bad re: colo, but it was worth a shot. Mirroring what @DNA_JACKED said, it's solid, and it's very new. Personally, I've saved thousands by recycling old chassis and still-relatively-powerful Xeon procs (and RAM), but that was a very time-consuming process, which you cannot afford at the moment. I'm assuming Thinkmate also includes some form of warranty, which would be good to have.
 
You could try this, but it requires rebooting your system, because you need to add a kernel flag, and you also need to tweak your udev rules.

Don't expect anything magical.
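
For anyone curious what that actually involves, here's a minimal sketch for a Debian/Ubuntu-style system with GRUB. The kernel flag and the udev rule below are placeholders, since the post doesn't name the specific ones; after running it you'd still need `update-grub` and a reboot.

```python
# Minimal sketch, assuming a Debian/Ubuntu-style layout with GRUB.
# The kernel flag and the udev rule are PLACEHOLDERS -- the post above
# doesn't name the actual ones. Run as root.
import re
from pathlib import Path

FLAG = "example_flag=1"  # hypothetical kernel parameter

grub = Path("/etc/default/grub")
text = grub.read_text()
if FLAG not in text:
    # Append the flag inside GRUB_CMDLINE_LINUX_DEFAULT="..."
    text = re.sub(
        r'(GRUB_CMDLINE_LINUX_DEFAULT=")([^"]*)"',
        lambda m: f'{m.group(1)}{(m.group(2) + " " + FLAG).strip()}"',
        text,
        count=1,
    )
    grub.write_text(text)

# Example udev rule: pin the I/O scheduler on block devices. Rules placed
# in /etc/udev/rules.d/ are applied at boot and on device add/change events.
rule = 'ACTION=="add|change", SUBSYSTEM=="block", ATTR{queue/scheduler}="mq-deadline"\n'
Path("/etc/udev/rules.d/60-example.rules").write_text(rule)
print("Done; run `update-grub` and reboot for the kernel flag to take effect.")
```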
 
Discuss

Thinkmate® RAX-1208-SH 1U Chassis - 8x Hot-Swap 2.5" SATA/SAS3 - 600W Single Power
Intel® C246 Chipset - 6x SATA3 - 2x M.2 - Dual Intel® 1-Gigabit Ethernet (RJ45)
Six-Core Intel® Xeon® Processor E-2286G 4.0GHz 12MB Cache (95W)
2 x 16GB PC4-21300 2666MHz DDR4 ECC UDIMM

LSI MegaRAID 9341-8i SAS 12Gb/s PCIe 3.0 8-Port Controller
4 x 240GB Intel® SSD D3-S4510 Series 2.5" SATA 6.0Gb/s Solid State Drive
4 x 1.8TB SAS3 12.0Gb/s 10000RPM - 2.5" - Seagate Exos 10E2400 Series (512e/4Kn)


CONFIGURED PRICE:
$4,160.00

($377/mo)
At a glance, the single power supply stands out. I would definitely have redundant power supplies on any hardware I want to keep up 24/7.
 
Six-Core Intel® Xeon® Processor E-2286G 4.0GHz 12MB Cache (95W)

AMD Ryzen 7 3800X - 3.9 GHz (4.5 GHz Turbo) - 8 cores (16 threads) - 32MB cache - 105W - $300 if you live near a Micro Center, otherwise $340
 

AMD Ryzen 7 3800X - 3.9 GHz (4.5 GHz Turbo) - 8 cores (16 threads) - 32MB cache - 105W - $300 if you live near a Micro Center, otherwise $340
Not available in a 1U server. Also, look how close that is to a 4 GHz Intel core, then compare that to the nearly 5 GHz Null has spec'd. Ryzen is not the right fit for this application.
 
A 2.1 GHz CPU with a 3.0 GHz turbo, AKA WORSE than what we have now. We want something faster, not slower.
We'd get some IPC gains in single-core performance coming from an old v2 from 2012. Besides which, look at the rest of those fucking specs. It's an order of magnitude beyond what the site is running on now, and that's just one example.

Look at this one with 8 SSDs. There are a zillion servers like this out there.
 
Discuss

Thinkmate® RAX-1208-SH 1U Chassis - 8x Hot-Swap 2.5" SATA/SAS3 - 600W Single Power
Intel® C246 Chipset - 6x SATA3 - 2x M.2 - Dual Intel® 1-Gigabit Ethernet (RJ45)
Six-Core Intel® Xeon® Processor E-2286G 4.0GHz 12MB Cache (95W)
2 x 16GB PC4-21300 2666MHz DDR4 ECC UDIMM

LSI MegaRAID 9341-8i SAS 12Gb/s PCIe 3.0 8-Port Controller
4 x 240GB Intel® SSD D3-S4510 Series 2.5" SATA 6.0Gb/s Solid State Drive
4 x 1.8TB SAS3 12.0Gb/s 10000RPM - 2.5" - Seagate Exos 10E2400 Series (512e/4Kn)


CONFIGURED PRICE:
$4,160.00

($377/mo)

The build looks solid to me, except that the SSDs should be connected to the hardware RAID controller too, instead of the Intel mainboard as shown there. (Unless there is a good reason not to.)

I have done some price checking on the separate components:
Intel® C246 Chipset - 6x SATA3 - 2x M.2 - Dual Intel® 1-Gigabit Ethernet (RJ45)
$219 (Amazon.com) - (but is the form factor correct for a rack chassis?)

Six-Core Intel® Xeon® Processor E-2286G 4.0GHz 12MB Cache (95W)
$489 (Connection.com - but out of stock)

2 x 16GB PC4-21300 2666MHz DDR4 ECC UDIMM
$184 (Amazon.com)

LSI MegaRAID 9341-8i SAS 12Gb/s PCIe 3.0 8-Port Controller
$265 (Newegg.com)

4 x 240GB Intel® SSD D3-S4510 Series 2.5" SATA 6.0Gb/s Solid State Drive
$1136 ($284 each, Amazon.com)

4 x 1.8TB SAS3 12.0Gb/s 10000RPM - 2.5" - Seagate Exos 10E2400 Series (512e/4Kn)
$980 ($245 each, Serversupply.com)

In total: $3273

I couldn't find the price of just the chassis + PSU, so I left that out.
So, the markup would be $887 minus whatever a good rack chassis with PSU costs.
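
For what it's worth, the arithmetic checks out; a quick script with the prices listed above:

```python
# Sanity check of the component prices quoted above.
parts = {
    "C246 motherboard":                219,
    "Xeon E-2286G":                    489,
    "2x 16GB DDR4-2666 ECC UDIMM":     184,
    "LSI MegaRAID 9341-8i":            265,
    "4x 240GB Intel D3-S4510 SSD":    1136,
    "4x 1.8TB Seagate Exos 10E2400":   980,
}
total = sum(parts.values())
print(f"Component total: ${total}")            # $3273
print(f"Markup vs $4,160: ${4160 - total}")    # $887, before chassis + PSU
```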
 
The build looks solid to me, except that the SSDs should be connected to the hardware RAID controller too, instead of the Intel mainboard as shown there. (Unless there is a good reason not to.)

I have done some price checking on the separate components:
Intel® C246 Chipset - 6x SATA3 - 2x M.2 - Dual Intel® 1-Gigabit Ethernet (RJ45)
$219 (Amazon.com) - (but is the form factor correct for a rack chassis?)

Six-Core Intel® Xeon® Processor E-2286G 4.0GHz 12MB Cache (95W)
$489 (Connection.com - but out of stock)

2 x 16GB PC4-21300 2666MHz DDR4 ECC UDIMM
$184 (Amazon.com)

LSI MegaRAID 9341-8i SAS 12Gb/s PCIe 3.0 8-Port Controller
$265 (Newegg.com)

4 x 240GB Intel® SSD D3-S4510 Series 2.5" SATA 6.0Gb/s Solid State Drive
$1136 ($284 each, Amazon.com)

4 x 1.8TB SAS3 12.0Gb/s 10000RPM - 2.5" - Seagate Exos 10E2400 Series (512e/4Kn)
$980 ($245 each, Serversupply.com)

In total: $3273

I couldn't find the price of just the chassis + PSU, so I left that out.
So, the markup would be $887 minus whatever a good rack chassis with PSU costs.
Nope, that's an ATX board, not a server motherboard design. You also didn't spec a heatsink. So you would need the case and PSU, cooling fans, a heatsink, and a proper server motherboard; that's a lot to squeeze out of $887, not to mention you don't get the kind of warranty a system builder provides. And he would have to build it himself.

We'd get some IPC gains in single-core performance coming from an old v2 from 2012. Besides which, look at the rest of those fucking specs. It's an order of magnitude beyond what the site is running on now, and that's just one example.

Look at this one with 8 SSDs. There are a zillion servers like this out there.
Null has already said, more than once, in this very thread, that he is not interested in used servers with lower clock rates than our current hardware, even if they are newer than what he has. The CPU has been the bottleneck on PHP operations; throwing a terabyte of RAM at it won't fix that.

We don't need 256 GB of RAM or 8 PCIe SSDs. We need powerful single-core performance. He has said as much. Why so many posters in I&T can't figure that out is beyond me. The server you linked starts at 2.4 GHz with a 3.3 GHz turbo; the one Null spec'd has a 4.0 GHz base clock with a 4.9 GHz turbo. If he needed massively parallel processing, he would have been more interested in the EPYC server recommendations on page 1. Why would he spend as much on this used server instead of a new server that meets his needs?

Read it again, his choice is 4 GHz.

Oh, that sucks.
Read it again: 4.9 GHz TURBO for single-core use. The site you linked wasn't testing turbo speeds, but rather locked speeds.
 
Nope, that's an ATX board, not a server motherboard design. You also didn't spec a heatsink. So you would need the case and PSU, cooling fans, a heatsink, and a proper server motherboard; that's a lot to squeeze out of $887, not to mention you don't get the kind of warranty a system builder provides. And he would have to build it himself.


Null has already said, more than once, in this very thread, that he is not interested in used servers with lower clock rates than our current hardware, even if they are newer than what he has. The CPU has been the bottleneck on PHP operations; throwing a terabyte of RAM at it won't fix that.

We don't need 256 GB of RAM or 8 PCIe SSDs. We need powerful single-core performance. He has said as much. Why so many posters in I&T can't figure that out is beyond me. The server you linked starts at 2.4 GHz with a 3.3 GHz turbo; the one Null spec'd has a 4.0 GHz base clock with a 4.9 GHz turbo. If he needed massively parallel processing, he would have been more interested in the EPYC server recommendations on page 1. Why would he spend as much on this used server instead of a new server that meets his needs?


Read it again: 4.9 GHz TURBO for single-core use. The site you linked wasn't testing turbo speeds, but rather locked speeds.
You can say it till the cows come home, but running this site on a workstation is just not a prudent move. There is used enterprise gear out there for any price point and use case.
 
You can say it till the cows come home, but running this site on a workstation is just not a prudent move. There is used enterprise gear out there for any price point and use case.
Where did I advocate running this site on a workstation? Did you just blank out on the fact that servers using E-2xxxG Xeon processors are widely available for this site's use case? There is tons of used enterprise gear out there; however, everything you have linked would be worse than what can be bought new for our use case.
 
The setup is good, though I question why you'd have a hardware RAID controller rather than letting the Linux kernel handle it. Other than that, it's good, and improvements are always welcome.
 
The setup is good, though I question why you'd have a hardware RAID controller rather than letting the Linux kernel handle it. Other than that, it's good, and improvements are always welcome.

It can be more efficient, but the main advantage in this case is that it "just works" and is harder to break.

I think Null is not the most experienced sysadmin ever, and even if he were, it might not be worth his time.
 
It can be more efficient, but the main advantage in this case is that it "just works" and is harder to break.

I think Null is not the most experienced sysadmin ever, and even if he were, it might not be worth his time.
I'm very experienced in everything above the kernel because I've always worked in virtualization. I've been told hardware RAID is still important.
 
I'm very experienced in everything above the kernel because I've always worked in virtualization. I've been told hardware RAID is still important.
Eh, not really; software RAID can usually do the job well enough without adding a dedicated RAID controller to the setup. (Edit: this depends on what RAID level you're running and whether you're running drive virtualization.) But bearing that in mind, I would caution you to have a dynamic backup in place. As we saw with ED, hard disk failure can be a real son of a bitch. You're going to want a robust backup system that can switch to alternate drives automatically while the main drives are replaced, sort of a backup to the Cloudflare always-live system, and a bit of redundancy just in case. But other than that this seems good, albeit light on RAM and processing speed for my tastes.
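
On the "switch automatically" point: with Linux software RAID the kernel already keeps an array running on the surviving disks, and `mdadm --monitor` can alert on failures natively. Purely as an illustration (the alert hook is a placeholder), here's a sketch that polls /proc/mdstat for a degraded array:

```python
# Illustration only: poll /proc/mdstat and flag degraded md arrays.
# In practice `mdadm --monitor` does this job; alert() below is a
# placeholder for whatever paging/email hook you'd actually use.
import re
import time
from pathlib import Path

def degraded_arrays() -> list[str]:
    """Return md devices whose member map shows a failed disk, e.g. [U_]."""
    bad, current = [], None
    for line in Path("/proc/mdstat").read_text().splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        # A healthy two-disk mirror shows [UU]; an underscore means a
        # missing or failed member.
        elif current and re.search(r"\[U*_+U*\]", line):
            bad.append(current)
            current = None
    return bad

def alert(devices: list[str]) -> None:
    print(f"RAID DEGRADED: {', '.join(devices)}")  # placeholder hook

if __name__ == "__main__":
    while True:
        failed = degraded_arrays()
        if failed:
            alert(failed)
        time.sleep(60)
```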
 