Home Server and Self Hosting General - Technological Self-Sufficiency

I am crossposting some of this from the NAS thread because this seems to be more active. Fite me.

I recently built a real, true home server and I'm finally getting it broken in.

Ryzen 5 9600X
2x 1TB NVMe drives (for Proxmox)
4x 16TB NAS drives (for bulk storage)
64GB desktop RAM (originally planned 128GB ECC, but due to being waylaid by jackassery I wasn't able to buy RAM until prices had casually quadrupled)
Generic 3U rack case
Fancyass fans
EVGA PSU (750W)

First, may I say, FUUUUUUUUUUUCK Supermicro support. The jackassery I got waylaid by. I switched to them to get away from ASRock's shitty support and somehow they turned out to be EVEN WORSE, taking 4 months and charging me to warranty a DOA board. The product is good when it works though. Nice IPMI.

Proxmox is pretty slick. My only complaint is how absurdly hard FDE was to set up, and even then I could never get TPM unlocking working. But automatic RAID is a nice option, and it works pretty well for small environments. I like how easy it is to install and upgrade LXC containers, and there are a lot of them available. (Yes, I use FDE on my hypervisor. Even though I'm pretty low on glowie watchlists, I'm definitely on a few.)

TrueNAS is a'ight. Compared to XigmaNAS, better when it works, worse when it doesn't. Virtualizing it and getting an HBA card passed through was mostly pretty easy, and setting up ZFS was a breeze. Good interface, if a bit less flexible in some areas.

With ZFS, the times they are a-changin'. ZFS no longer needs hideous amounts of RAM just to exist. It will use as much as you give it, BUT, due to RAM prices I ended up having to cut corners and go with 64GB of desktop RAM rather than 128GB of ECC. Only 32GB of that is available to the NAS VM, for a 4x16TB RAIDZ1 (the ZFS equivalent of RAID5). Honestly? Works just fine. Hell, I think 16GB would be fine.
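If anyone wants to cap ZFS's appetite directly instead of (or in addition to) limiting the VM's memory, the knob on a plain Linux/OpenZFS box is the zfs_arc_max module parameter. Quick sketch (the 8 GiB figure is just an example; TrueNAS has its own tunables UI for the same thing):

```shell
# Cap ARC at 8 GiB right now (runtime, as root; value is in bytes)
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Make it persist across reboots on Proxmox/Debian
echo "options zfs zfs_arc_max=$((8 * 1024 * 1024 * 1024))" >> /etc/modprobe.d/zfs.conf
update-initramfs -u   # the zfs module loads from the initramfs, so rebuild it
```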

However, I have also learned that ZFS native encryption is VERY CPU intensive. I showed it all 6 cores and 12 threads of my new Ryzen, and it can easily spike that thing to max during longer transfers. It does seem to thread well. When I was only showing it 2 threads, performance was bottlenecked at around 40MB/s, which is just sad.

Also, ZFS encryption is lacking in some functionality compared to LUKS/GELI. You can't easily back up or wipe a header. There's only one keyslot per volume. Volume names are not encrypted (unless you encrypt the block device directly, but don't, you freak). This is probably not of much interest to 99% of people, but KF may contain some of that 1%. Inherited encryption is pretty cool though.
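For the curious, the difference in concrete terms (commands from memory, double-check the man pages before trusting them with real data):

```shell
# LUKS: the header is a separable object you can back up (or deliberately wipe)
cryptsetup luksHeaderBackup /dev/sdX --header-backup-file /safe/place/header.img

# ZFS native encryption: one wrapping key per dataset. You can rotate it,
# but there's no second keyslot and no separable header to stash anywhere.
zfs change-key -o keyformat=passphrase tank/secure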

Currently working on Jellyfin. It seems... fine? I see the benefits, but it's way more autistic about scraping than just using Kodi over SMB/SFTP. Kodi is smart enough to realize that if you have a single subfolder for a show, it's probably the one and only season. Jellyfin flips out and creates duplicate entries. It's also much worse about ignoring extra material like animu openings unless everything is categorized according to its own autistic standards. So now I have to reorganize a shitload of media, hopefully for the last time. Plus the scans take a really, REALLY long time compared to Kodi.
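Since I'm reorganizing anyway, I scripted the move into the Show/Season NN/ layout Jellyfin's scanner wants. Rough sketch of the path logic (show names and files here are just examples, and it assumes a standard SxxExx tag in the filename):

```python
import re
from pathlib import PurePosixPath

def jellyfin_path(filename: str, show: str) -> str:
    """Map a flat episode file into the Show/Season NN/ folder layout
    that Jellyfin's scanner expects (one season per subfolder)."""
    m = re.search(r"[Ss](\d{1,2})[Ee]\d{1,2}", filename)
    season = int(m.group(1)) if m else 1  # no season tag: assume season 1
    return str(PurePosixPath(show) / f"Season {season:02d}" / filename)

print(jellyfin_path("Regular Show S03E07.mkv", "Regular Show"))
# Regular Show/Season 03/Regular Show S03E07.mkv
```

From there it's just a loop over the library doing renames, which I'd dry-run first.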

Side note, why is TVDB the only metadata provider that correctly categorizes movies for a show as specials? Like say you've got Regular Show: The Movie. I want that categorized as part of the series, not just a movie. ONLY TVDB gives special numbers for stuff like that. It's worse with animu.

Looking at getting the *arr suite set up, but man it's a lot of containers. I need to figure out a way to pipe them through a single image protected by Mullvad. I'm short on licenses.

Also finally will be able to try out Home Assistant. So that will be fun.

Also also looking at putting some of this crap in Ansible so I never have to do it again. I'm trying to decide if that would actually save time or not.

So yeah. That's what I've been doing.
 
Looking at getting the *arr suite set up, but man it's a lot of containers. I need to figure out a way to pipe them through a single image protected by Mullvad. I'm short on licenses.
I have a Dockge container that runs Gluetun with Qbittorrent, FlareSolverr, and Prowlarr piped through it. VPN config options are definitely lacking in TrueNAS.
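If it helps anyone, the shape of that stack in compose terms is roughly this (image names are the usual ones, the key is a placeholder; check Gluetun's docs for the exact Mullvad variables):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=changeme
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic exits via the VPN container
```

Prowlarr and FlareSolverr get the same `network_mode: "service:gluetun"` line, so one VPN tunnel covers the lot.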
 
I have a Dockge container that runs Gluetun with Qbittorrent, FlareSolverr, and Prowlarr piped through it. VPN config options are definitely lacking in TrueNAS.
Well, for me TrueNAS doesn't really exist for any purpose aside from handling my ZFS shit, so I'm not so concerned about a VPN there. I could set up a WireGuard Mullvad config if I really needed to but ehhhhh.

For containers, I'm just going to run them as LXCs under Proxmox from the community helper scripts. Which are sweet. They're mostly Debian/Ubuntu based and it's easy enough to get the Mullvad CLI on them. For torrents, well, I'm behind a carrier NAT. I need to go bitch out my seedbox provider about their advertised rootless Docker not working, but their customer support is rude and horrible so I don't wanna.
 
@Lord of the Large Pants don't sweat sorting out your media for Jellyfin. When you get Sonarr and Radarr set up they will reorganize your media anyway and Jellyfin will play nice after that.

For my own nas, the new case is on its way. Hopefully it squeaks through customs and arrives in one piece. When/if that turns up I’ll get the other bits.
 
I hate Jellyfin and I hope everyone involved in its creation fucking dies.

Let me explain my current autistic tard rage.

Jellyfin expects, no, DEMANDS that each file contain one and only one episode of a TV show. I am a cartoon collector, and that is often not the case. Now Jellyfin can kind of compromise on this. My previous setup was Kodi with a simple SMB share. Let's say you have a file where the name contains "S02E05E06". So, that means the file contains 2 episodes. How Kodi handles it is to create separate entries in the episode list and point both of them at the beginning of the same file, because it doesn't necessarily know where the second episode starts. Jellyfin squishes the episodes into a single entry, which is annoying because it can be harder to find what you're looking for, but fine. It's tolerable.

The problem comes up when episodes aren't paired the same way the metadata provider expects. This isn't just broadcast order being different from DVD order, this is actually different pairings. So the metadata expects S02E05E06, but the reality is S02E05E17 and E06 is off somewhere completely different. Most older shows sort of have a settled order, and newer ones are more consistent, but there's a wedge of cartoons around 2010-ish that often got different OFFICIAL releases with wildly different episode pairings. So this isn't theoretical.

Now Kodi handles this just fine. If I have a file that says "S02E05E17", Kodi interprets this as "this file contains episodes 5 and 17". Jellyfin interprets this as "this file contains episodes 5 THROUGH 17", and will not be told otherwise. Worse if the pair is out of order. If I have "S02E19E11", once again, Kodi understands this perfectly well. Jellyfin just shits itself and does nothing, refusing to process the file.
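To make the difference concrete, here's a simplified model of the two behaviors (this is my reading of what each does with the tags, not either program's actual parser):

```python
import re

def kodi_episodes(name: str) -> list[int]:
    """Kodi-style: every Exx tag after the season tag is its own
    episode, in whatever order it appears (S02E19E11 -> [19, 11])."""
    m = re.search(r"[Ss]\d{1,2}((?:[Ee]\d{1,3})+)", name)
    return [int(e) for e in re.findall(r"[Ee](\d{1,3})", m.group(1))] if m else []

def jellyfin_episodes(name: str) -> list[int]:
    """Jellyfin-style: the first and last tags are treated as a RANGE,
    so S02E05E17 becomes episodes 5 THROUGH 17, and an out-of-order
    pair like S02E19E11 just fails to parse."""
    eps = kodi_episodes(name)
    if len(eps) < 2:
        return eps
    lo, hi = eps[0], eps[-1]
    return list(range(lo, hi + 1)) if lo <= hi else []

print(kodi_episodes("S02E05E17"))      # [5, 17]
print(jellyfin_episodes("S02E05E17"))  # [5, 6, 7, ..., 17]
print(jellyfin_episodes("S02E19E11"))  # [] -- refuses the file
```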

If this were uncharted territory I would be a little more forgiving, but Kodi solved this ten years ago. And the worst part? Jellyfin considers this a feature, not a bug. I'm not the first one to have this problem. Their official, supported, sanctioned solution? "Lol just manually split all your video files, bro."

Fucking cretins.

I will look into the *arr programs to see if they can do something for me here, but it seems like it's just a fundamental consequence of Jellyfin's design.
 
Jellyfin expects, no, DEMANDS that each file contain one and only one episode of a TV show. I am a cartoon collector, and that is often not the case.
I've never heard of anyone doing this, so I am not surprised Jellyfin won't split those single files into multiple episode entries. Why not just split the file into individual episode files?
 
I've never heard of anyone doing this, so I am not surprised Jellyfin won't split those single files into multiple episode entries. Why not just split the file into individual episode files?
Mostly because I don't want to do it several hundred times. It can't be that hard to handle this kind of scheme, because Kodi was doing it just fine 10 years ago.

Yes, in theory, there should be one episode and one end credit sequence per file, but there just isn't. It's extremely common for official releases to have two 11-minute cartoons in the same file. It's even worse with shows like classic Animaniacs, which were based on shorts, so they could often have 3 or more. Two episodes per file is common even today; it's just that metadata consistency is generally better, so it's not as big a deal.

Why is this? Because the media companies packaging shows for Web-DL are lazy, I guess. Don't ask me. I don't know why every Phineas and Ferb torrent I find has radically different episode pairings and numbering, none of which are consistent with each other or with any metadata provider. All I know is Kodi handled it in an entirely sane manner and Jellyfin is going full retard about it.
 
https://www.youtube.com/watch?v=M4xv9ImpBWw

The man set up his own private streaming service with his own server.
AND THAT MAN'S NAME WAS ALBERT EINSTEIN!!!
I didn't entirely realize until now that 99% of us only got into media servers much later, for convenience, after years of free downloads and hoarding. It's a little amusing he's still trying to pay for all of the media he uses. I'm not trying to be elitist, and if you're in the YouTube cuckshed you're probably not allowed to be super overt about this kind of thing, but come on, man.


Regular families that taped as much off the air as possible could've been looked at as poor compared to first-class families who bought all of it. Nowadays those "legitimate" purchases don't even guarantee you anything, and tools like yt-dlp that let you scrape music videos (now en masse and in 4K) are relatively unknown and considered difficult to use.

Another microcosm of the peasants being duped into a worse way of life with their self-sufficiency taken away.
 
I love self-hosting. I am currently doing Plex and Immich on my server. I want to get more stuff self-hosted, but I need to get a biz account from my ISP so I have unlocked ports. They currently block everything under like 8000 on a residential account. At least I have 500/50 though, grandfathered in. All the new plans are 250/15 or 500/20.
 
I've never heard of anyone doing this, so I am not surprised Jellyfin won't split those single files into multiple episode entries. Why not just split the file into individual episode files?
Yeah, that can be done with tdarr I'm fairly certain.

That or take the time to use ffmpeg to split it up. Might as well put in the effort anyways if ya plan on storing that shit indefinitely.
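If you go the ffmpeg route, a stream-copy split avoids re-encoding entirely. A sketch of how I'd batch it up (filenames and timestamps here are made up; note that `-ss`/`-to` with `-c copy` snaps to keyframes, so cuts won't be frame-exact):

```python
def ffmpeg_split_cmds(src: str, cuts: list[tuple[str, str, str]]) -> list[list[str]]:
    """Build one 'ffmpeg -c copy' command per (start, end, outfile) cut,
    trimming on the output side so no re-encode is needed."""
    return [["ffmpeg", "-i", src, "-ss", start, "-to", end, "-c", "copy", out]
            for start, end, out in cuts]

# Hypothetical cut points -- you'd read the real ones off the credits.
cmds = ffmpeg_split_cmds("Animaniacs S01E01E02E03.mkv",
                         [("00:00:00", "00:07:30", "Animaniacs S01E01.mkv"),
                          ("00:07:30", "00:15:10", "Animaniacs S01E02.mkv")])
```

Feed each list to subprocess.run() and you can grind through a whole folder overnight.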
 
I love self-hosting. I am currently doing Plex and Immich on my server. I want to get more stuff self-hosted, but I need to get a biz account from my ISP so I have unlocked ports. They currently block everything under like 8000 on a residential account. At least I have 500/50 though, grandfathered in. All the new plans are 250/15 or 500/20.
I've never heard of that kind of port-blocking going on.
You can probably save money by installing a mesh VPN (like Tailscale/Headscale) on a VPS, or using a solution like ngrok as your reverse proxy.
 
I have no issues with it using SMB extents to read movies from. In fact, you can make it read-only so a taken-over server won't be able to fuck with the collection.

This is even possible in an unprivileged LXC with a little plumbing.
To do so, make an fstab entry on the LXC host for the SMB extent that looks like this:

Code:
//192.168.178.20/video/ /mnt/jelly-video cifs _netdev,x-systemd.automount,noatime,uid=100118,gid=100118,dir_mode=0770,file_mode=0440,credentials=/root/cifs-creds 0 0
Where uid/gid are the offset IDs for the jellyfin user (unprivileged containers map container UID N to host UID 100000+N by default, so container UID 118 shows up as 100118 on the host). The mount mode has to be RW in my tests; just restrict it at the SMB server (for Synology, just make a service user specifically for this).
Credentials should be self-explanatory; keep the file under /root, as nothing except systemd has any business seeing it.

mkdir the folder under /mnt, then reload systemd (systemctl daemon-reload) and run mount -a to establish the connection.

For the LXC container config, add a mountpoint that maps it to a path inside the container:
Code:
mp0: /mnt/jelly-video/,mp=/mnt/videos
you shouldn't have to chown anything, as the mount config should deal with it.
 
Just sat down for the last hour to figure out an issue that's been bugging me for a few weeks now. I've got a pair* of Lenovo SFF computers (M715q to be exact) acting as hosts for various minor things that I don't want clogging up my main machine (small NFS file share for non-critical stuff, Home Assistant host, k3s agents, a Tailscale node so I can VPN in, things like that). They have been singing, for want of a better word, since I set them up. I thought it might be the cheap power supplies I was using, but genuine Lenovo supplies didn't make any difference.

As it turns out, there's a capacitor somewhere in them that buzzes periodically when the CPU runs certain tasks. Reads or writes to the drive and data coming over the network seem to be the consistent triggers. It's been a constant zzzp.......zzzpzzzp............zzzp all day, every day, for the past three weeks. It turns out it's the CPU boosting to a higher frequency when these data transfers happen. Not a cracked or loose capacitor as I first thought, but an induced electromechanical vibration.

I've reduced the upper limit on the CPU to 2600MHz, which is about 93% of the max frequency, but it's enough to stop the noise without compromising too much performance.
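For anyone wanting the same cap without a BIOS trip, the generic cpufreq sysfs knob works on most Linux boxes. Rough sketch (value is in kHz, needs root, and resets on reboot, so persist it with a systemd unit or similar if it helps):

```shell
# Cap every core at 2.6 GHz via the standard cpufreq interface
for c in /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq; do
    echo 2600000 > "$c"
done
```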

*I actually have four that I managed to snag in an auction. One is used by my wife, two are in the rack, and one is sitting waiting to be a thin client if I can ever figure out how to get all that gubbins working. This persistent noise also killed off a lot of my enthusiasm for that project, so with it fixed I might get into it again.
 
That's likely coilwhine from a poorly sourced inductor that's not glued with rubber putty to another part or the casing (yes they do that).
You could try another PSU.
 
That's likely coilwhine from a poorly sourced inductor that's not glued with rubber putty to another part or the casing (yes they do that).
You could try another PSU.
I thought so too at first, but I tried several power supplies and saw no difference. I narrowed it down to a collection of ceramic capacitors on the underside of the motherboard, where the noise was loudest. I managed to muffle the noise a little with a lump of foam, but that wouldn't be a long-term solution. At some future point, I might try covering them with silicone sealer and see if that makes a difference.
 
That's likely coilwhine from a poorly sourced inductor that's not glued with rubber putty to another part or the casing (yes they do that).
You could try another PSU.
No experience with these specific AMD chips, but there is this type of coil whine coming from the CPU that you can't do anything about except limit its clocks.
Even my Celeron-based Fujitsu thin client has a slight coil whine, and it's only 2.4-2.5 GHz max. Being passively cooled makes the coil whine more noticeable than having a fan running.
 
I have no issues with it using SMB extents to read movies from. Infact you can make it read-only so a taken-over server won't be able to fuck with the collection.
Having to mount the SMB shares in the underlying OS is a kludge around Jellyfin's lack of SMB share support though, not Jellyfin supporting them.
 
Jellyfin doesn't seem to have any issues reading a folder mounted from an SMB server (and I doubt NFS is any different). You can also make the links in the container by running the LXC privileged or with SMB cgroup rights. But I like to keep my cutouts as small as possible when hosting a public-facing server.
 