Home Server and Self Hosting General - Technological Self-Sufficiency

I built myself a NAS a few months ago. Nothing special: Unraid, four 8TB HDDs in the array and one 4TB NVMe for cache, with the HDDs set to spin down after idle for power savings since I rarely access them (the cache drive helps a lot). But I'm wondering how much wear and tear I'm putting on them by spinning them up and down every so often. Is it fine if I'm spinning them up once or twice a week at most?
 
Is it fine if I'm spinning them up once or twice a week at most?
It's better to keep them spinning. The more your disks spin up and spin down, the shorter their lifespan will be.
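For numbers instead of vibes: most desktop drives publish a start/stop cycle rating (often around 50,000), and SMART counts how many you've used. A minimal sketch, assuming smartmontools is installed; the sample attribute line below stands in for real drive output, and the 50,000-cycle rating is a hypothetical figure to check against your drive's datasheet:

```shell
# Real check would be: smartctl -A /dev/sda | grep Start_Stop_Count
# Sample SMART attribute line standing in for a real drive:
sample='  4 Start_Stop_Count  0x0032  100  100  020  Old_age  Always  -  1523'
count=$(echo "$sample" | awk '/Start_Stop_Count/ {print $NF}')
echo "start/stop cycles so far: $count"
# At a hypothetical 50,000-cycle rating, twice-weekly spinups (~104/year)
# leave a very long runway:
echo "years of budget left at 2/week: $(( (50000 - count) / 104 ))"
```

By that math, once or twice a week is nowhere near the wear budget; it's constant aggressive spindown (many cycles per day) that eats drives.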
 
Hi bros. I'm in a bit of a self-made pickle atm. Long story short, I have a glut of hardware, but I'm unsure what the final build should be.

So my main dev workstation currently is this:

Fedora
MAG X870E Tomahawk WiFi
Ryzen 9 9950x3d
PNY RTX 5090
64GB RAM
1TB NVMe
2TB NVMe
2TB SSD
8TB HDD


My spare hardware is this:

RTX 3090
GTX 1080ti

PC Mate 350
650W PSU
Ryzen 7 1700
32GB RAM
250GB SSD
2TB HDD

Initially this was my thought for my main build:
- Proxmox
- 5090 passthrough for my AI workload (ollama, tuning, etc), and Steam streaming to steamdeck
- 1080ti passthrough for my dev workstation so I can get the hdmi/dp to my KVM
- other stuff like plex, immich, maybe a local email server, self hosted websites etc

The thing is, the 5090 is fucking tremendous, and the PCIe slot I'd put the 1080 Ti in would, I believe, be too close to the 5090 and really fuck with airflow. I looked at riser cables as well. So now I'm considering using the 3090, which I bought recently before finding a 5090 for practically MSRP, and which I was otherwise just going to sell at a loss.

So: a monster dev workstation, and then the 3090 / Ryzen 7 / 32GB as my Proxmox machine. Add a couple of HDDs, put it in my closet, and connect to it from my workstation to do stuff.
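If the passthrough route wins, the usual Proxmox recipe is IOMMU enabled on the kernel command line plus binding the GPU to vfio-pci before the host driver claims it. A hedged sketch; the PCI IDs are placeholders, not real values — pull the actual IDs for the card (and its audio function) from `lspci -nn`:

```
# /etc/default/grub — enable IOMMU on an AMD platform:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — claim the GPU early; IDs below are
# placeholders, substitute the real ones from `lspci -nn`:
options vfio-pci ids=10de:xxxx,10de:yyyy

# then: update-grub && update-initramfs -u && reboot
```

Worth checking `lspci`'s IOMMU groups too; if the two GPUs end up in the same group, passing them to different guests gets ugly.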

I just need advice. I'm surprised by the indecision tbh. I feel like a lot of this is wasteful, and that having a workstation this powerful is overkill for what I intend to do with it. I haven't even started considering the network stuff yet haha.

And as for reasoning, it's just fun, an awesome project, de-googling obviously. Thanks for reading my gay blog.
 
Really interested in self-hosting shit but currently on a tight budget. I wanna get some savings to hopefully treat myself to this by the end of the year if not this coming year.
This self-hosted life guide by the Rossmann himself.
Anyone have anything like this, or any recommendations on what else could be added?
 
I wanna get some savings to hopefully treat myself to this by the end of the year if not this coming year.
You don't have to start expensive.
My first dedicated proxbox (not a NAS shitbox) was an Optiplex SFF that I fished out of a dumpster. It could run a few containers with some effort.
My current system is built around a $270 Erying board with a soldered-on 12th-gen laptop CPU, and it can run jellyfin/immich/navidrome/monitoring stack/cryptpad and a few other things without breaking a sweat.
The only investment I made in it this year was a 10GbE NIC and a Sipeed IPMI; I might get an Intel B50 for it to run n8n flows better and stream AV1 if my budget looks good next month.

A lot of this crap is 100% optional, but AliExpress can be your best friend for cutting corners on expenses.
 
It's new file server day...
Well, ok, it's the day to assemble it. Originally the upgrades were going to go into the existing case, but I found this one, which is only a bit longer than I'd really like. Old case: 4x 3.5", 6x 2.5" (9mm max); the 6-bay was added by cutting into the case. The new case is just 12x 3.5" with a SAS 12Gb/s backplane, not that I have any drives where that matters.

The power supply I should have done differently. This one vents through the lid, which is fine for the current setup as there's nothing above it. But I realized I could have used a modular SFX power supply with an adapter and let it pull air from inside the case. The modular cables also mean fewer cables, but most these days are really long, hence the giant pile. Immediately swapped all the fanwall fans for Arctic P8 PWM PST, and a P8 Max on the cooler. It has enough clearance from the lid; during stress tests it seems to cool fine. And if it gets a little loud, oh well. We'll see how the temps look; I may upgrade the fanwall to even faster fans. The new motherboard has plenty of fan connectors, so I can monitor when one gets choked with dust.

MB is a B650D4U-2L2T/BCM. Yes, Broadcom, but at least I didn't have to find a different MB to take my dual 10G card. Epyc 4004-series CPU and 64GB ECC RAM. Boot drives are 2 NVMe drives on the card in the picture, bringing the $HOME I share via NFS to 700GB or so. Added another NVMe to the MB for cache, and I'll move the 3.5" drives over from the other server and hook up the SAS enclosure via an internal-to-external adapter, the only non-hard-drive part I'm moving from the other server.
Now, further testing. The new system is Debian Trixie (13), the previous is Buster (10). I'm moving the couple of containers I use over to podman, so I have to test all that and make sure exports, etc. are all in place; then the drives come over and it moves to its new rack home.

2025-09-19_18-02.webp
 
Now, it really is new file server day.
Missed a couple of minor things: forgot to install curl, and forgot to write my own /etc/resolv.conf once I switched it to static networking. Had to do some tweaking to get my nested bonding interfaces working: one LACP pair as primary, then fail over to a non-LACP interface if both LACP links go away.
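For reference, that fail-over-of-a-bond layout can be expressed in ifupdown as an active-backup bond whose primary member is itself an 802.3ad bond. A sketch assuming Debian's ifenslave package, with hypothetical interface names and address; as noted above, the ordering for nested bonds can need tweaking:

```
# /etc/network/interfaces (interface names and address hypothetical)
auto bond0                    # the LACP pair
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-miimon 100

auto bond1                    # active-backup: prefer bond0, fall back
iface bond1 inet static
    address 192.0.2.10/24
    bond-slaves bond0 enp2s0
    bond-mode active-backup
    bond-primary bond0
    bond-miimon 100
```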

Also had a hard time getting my external drive array working. Plugged it in and nothing worked. Re-flashed the controller, power cycled, re-seated the cable. Finally fully unplugged the cable, went to plug it back in, and realized the adapter board has 2 holes; since I was plugging it in blindly, I ended up in the wrong hole. I could do without the color, but it's in a closet no one will ever see.

2025-09-24_17-01.webp

Still need to get my rootless podman containers to auto-start. FYI, rootless podman and file permissions just suck. But Nextcloud and MariaDB are both happy now.
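On the auto-start front: since Trixie's podman supports Quadlet, one route is a user-level .container unit plus lingering, so the service starts at boot without a login. A hedged sketch with hypothetical image and volume names:

```
# ~/.config/containers/systemd/nextcloud.container (names hypothetical)
[Container]
Image=docker.io/library/nextcloud:latest
Volume=%h/nextcloud-data:/var/www/html
PublishPort=8080:80

[Install]
WantedBy=default.target

# then, as the rootless user:
#   loginctl enable-linger $USER
#   systemctl --user daemon-reload
#   systemctl --user start nextcloud.service
```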
 
I have a home server; I don't use it for much. It's an old Pentium and an Intel board I got for cheap, with an RX 550 thrown in for hardware acceleration for my movies/shows and for when I need to encode video to/from HEVC. I use Syncthing to keep all my stuff synced between my devices and it works perfectly. Jellyfin is really good for organizing media but is prone to breaking after major updates, and getting hardware acceleration working wasn't fun. Transmission with a basic web interface for seeding. A basic cronjob runs once a week and does a differential backup of my Syncthing folders.
My media HDD died last year. I bought it used and it lasted about seven years, but it was also seeding 24/7, so that's probably why.
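A weekly differential like that can be done with GNU tar's snapshot files: one full (level-0) run, then copy the full run's snapshot before each differential run so every diff is taken against the full backup. A minimal sketch with throwaway temp paths standing in for the real syncthing folders:

```shell
set -e
src=$(mktemp -d); dest=$(mktemp -d)     # stand-ins for real paths
echo "notes" > "$src/notes.txt"

# Level-0 (full) backup; tar records file state in the snapshot file:
tar --listed-incremental="$dest/snap.full" -cf "$dest/full.tar" -C "$src" .

# Later, weekly: copy the snapshot so the diff is relative to the full run,
# not to the previous diff (that would make it incremental instead):
echo "more" > "$src/new.txt"
cp "$dest/snap.full" "$dest/snap.diff"
tar --listed-incremental="$dest/snap.diff" -cf "$dest/diff.tar" -C "$src" .

tar -tf "$dest/diff.tar"   # only the changed files appear
```

Restore is the same in reverse: extract the full archive, then the most recent diff over it.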
 
I broke down and added 2 more drives to the 10x 10T RAID 6 to make an even dozen and fill the 12-slot external chassis.
Took 3 days to restripe, and about 2 hours to fsck and resize2fs.
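For anyone following along, growing an md RAID 6 in place is roughly an --add of the new disks followed by --grow, then the filesystem resize once the reshape finishes; device names below are hypothetical. The capacity math is what makes the dozen attractive, since RAID 6 spends two drives' worth of space on parity:

```shell
# Hypothetical device names; the reshape is the multi-day part:
#   mdadm /dev/md0 --add /dev/sdk /dev/sdl
#   mdadm --grow /dev/md0 --raid-devices=12 --backup-file=/root/md0-grow.bak
#   # after the reshape completes:
#   fsck.ext4 -f /dev/md0 && resize2fs /dev/md0

# RAID 6 usable capacity = (n - 2) * drive size:
for n in 10 12; do
  echo "$n x 10T -> $(( (n - 2) * 10 ))T usable"
done
```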

After the restripe finished I powered down to upgrade the fan in the tray closest to the PCIe slots to a higher-speed one, to blow more air over the NVMe and array controller. Reassembled, and once again the array wasn't working. After staring at it for a while I noticed I was seeing 2 array drives, the new ones, but not the original 10. Power cycled the whole array, rebooted, and it all came up immediately.

Turns out I had set all drives to spin down after 4 minutes. While the server was getting its new fan, all the drives spun down except the 2 new ones, which hadn't had a reboot to pick up the setting yet. Apparently the array (or maybe the array controller) can't deal with SATA drives that are spun down. It's getting closer to winter, so I just disabled the spindown and all is well again.
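For reference, the spindown knob on SATA drives is usually hdparm's -S, whose argument counts 5-second units for values up to 240. A quick sketch (device name hypothetical):

```shell
# hdparm -S 48 /dev/sdX   # spin down after 48 * 5s = 4 minutes idle
# hdparm -S 0  /dev/sdX   # disable spindown entirely
out="-S 48 timeout: $(( 48 * 5 )) seconds"
echo "$out"
```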
 
I barely have time for hobby projects, but lately I was super annoyed at having to manually back up my self-hosted mail server (which is, by the way, much less of a pain in the ass than most proclaim; I can highly recommend mailcow). I run it on a VPS, where automatic backups are created. However, I don't trust the VPS provider not to lose my data, so I regularly pulled the backups to a local machine, encrypted them, and pushed them to some commercial cloud storage.
Being annoyed with myself for forgetting to manually back up data, and because I don't have much time for shit like that anyway, I finally decided to give n8n a go.

I now host an instance on my Pi and automated the whole backup process over the weekend... It turned out to be a lot of fun and I think I am going to play around more with n8n. Also, I finally don't need to think about backing up data anymore. 👍🏻

Now I just hope that maintaining my n8n flows won't be too much of a hassle...
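For anyone curious, the manual routine an n8n flow like this replaces is essentially pull, encrypt, push. A hedged sketch: the scp/rclone steps are shown as comments since host and remote names here are hypothetical, and the encryption step runs against a stand-in file:

```shell
set -e
#   scp vps:/var/backups/mailcow-latest.tar.gz /tmp/
backup=$(mktemp); echo "mail data" > "$backup"   # stand-in for the pulled archive

# Symmetric encryption, so only a passphrase needs to live locally
# (passphrase here is a placeholder; use a real secret store):
openssl enc -aes-256-cbc -pbkdf2 -pass pass:correcthorse \
    -in "$backup" -out "$backup.enc"

#   rclone copy "$backup.enc" remote:mail-backups/

# Sanity check that the archive round-trips:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:correcthorse -in "$backup.enc"
```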
 
I barely have time for hobby projects, but lately I was super annoyed at having to manually back up my self-hosted mail server (which is, by the way, much less of a pain in the ass than most proclaim; I can highly recommend mailcow)
The biggest part isn't running the server. It's finding a place that's not already on 300 blacklists and IP blocked, etc.

Personally I gave up years ago after losing the spam battle.
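Half the deliverability battle is DNS hygiene: SPF, DKIM, and DMARC records, plus reverse DNS matching your HELO name. A toy look at the SPF side (domain hypothetical; the dig line in the comment is how you'd check a real one):

```shell
#   dig +short TXT mail.example.com
spf='v=spf1 mx a:mail.example.com -all'
# "-all" asks receivers to hard-fail mail from hosts the record doesn't list:
echo "$spf" | grep -o -- '-all$'
```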
 
The biggest part isn't running the server. It's finding a place that's not already on 300 blacklists and IP blocked, etc.

Personally I gave up years ago after losing the spam battle.
I don't know, man... I had to remove my IP from the Spamhaus list once. Since then I didn't have any issues... and it has been 3 years now.

To be honest, most of the issues I encountered were due to the VPS provider.
 