GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Where am I going to find the 16MB I need to install Transport Tycoon Deluxe?
Proceeds to break Windows 3.1 install on family PC.
"I don't use COMMAND.COM for anything. Let me try deleting it."

Really lucky I had a boot disk from a game.
 
Consider other options. You could have two or three striped pairs for example in a single pool.
I gotta be honest, I don't understand the supposed merits of RAID10. Instead of giving up a drive or two per pool, you're cutting your storage in half. Plus, in a RAID6 you can lose ANY 2 drives. For RAID10, if you happen to lose a mirrored pair (unlikely, but possible), you're boned.
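For reference, "two or three striped pairs in a single pool" in ZFS terms just means a pool built from mirror vdevs. A minimal sketch, with placeholder pool and device names:

```python
import subprocess

# "Two or three striped pairs in a single pool" = a ZFS pool made of mirror vdevs.
# Pool name and device paths are placeholders.
pairs = [
    ("/dev/disk/by-id/ata-DISK-A", "/dev/disk/by-id/ata-DISK-B"),
    ("/dev/disk/by-id/ata-DISK-C", "/dev/disk/by-id/ata-DISK-D"),
]
cmd = ["zpool", "create", "tank"]
for a, b in pairs:
    cmd += ["mirror", a, b]   # each mirror vdev is one "pair"; ZFS stripes writes across them
subprocess.run(cmd, check=True)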

SSD could be used as transaction cache, or read cache.
This would be the smart thing to do, but imagining for a second that I'm an idiot...
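For anyone who does want to do the smart thing, a minimal sketch of bolting an SSD onto an existing ZFS pool as a read cache (L2ARC) plus a sync-write log (SLOG); the pool name and partition paths are placeholders:

```python
import subprocess

def add_ssd_cache(pool: str, cache_dev: str, log_dev: str) -> None:
    """Attach one SSD partition as L2ARC (read cache) and another as a SLOG
    (sync-write intent log) to an existing ZFS pool."""
    # Read cache: catches random reads that no longer fit in RAM (ARC).
    subprocess.run(["zpool", "add", pool, "cache", cache_dev], check=True)
    # Intent log: only helps synchronous writes (NFS, databases), not bulk copies.
    subprocess.run(["zpool", "add", pool, "log", log_dev], check=True)

# Placeholder pool and partition names.
add_ssd_cache("tank",
              "/dev/disk/by-id/nvme-EXAMPLE-part1",
              "/dev/disk/by-id/nvme-EXAMPLE-part2")
```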

Do you care about performance?
Eh. I can't reasonably get 10Gb networking in my setup, so that will be the bottleneck. I'm looking at link aggregation, but obviously endpoints will still be 1Gb, and it may not even work right. It may matter for security camera footage, but I'm thinking about just building a separate pool for that.

I run a kinda-sorta equivalent of RAID6 with six drives.
Alright, that's good to know. I was thinking closer to 8 drives before going to RAID6 but maybe that was too ballsy.

What Raid solution are you going to use?
ZFS. So technically RAIDZ1/Z2, but whatever.

To be clear I don't think I'm actually going to do all flash. It's just a wacky idea.
 
Start archiving useful channels from YouTube and the like. We have been stuck with 1TB as "standard" for far too long.

Given the number of channels that'll seemingly delete how-to videos I find useful, whether for projects or for cars that I own or plan on owning, I've just started downloading everything. It very quickly fills a drive.
 
You know, I never realized why my checkpoint switches were so slow until this post made it click: I have them on my storage HDD for some reason.
I dropped a spare GPU in my file server. It's not designed for speed, and the boot drives are SATA SSDs that are too small for many checkpoints. I actually have it load off NFS over 10Gbit from my desktop, and it's much faster.
A dual-8TB-drive setup tbh would probably be enough for me; 16TB total in HDD space and I'd be happy.
1x8T is 0
2x8T is 8
3x8T is still 8
4x8T is 16.
Now, if this is all easily replaced (downloaded) stuff, or stuff backed up so it's not a risk if it's lost, then you can ignore that.
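Putting rough numbers on that: a quick sketch of usable capacity under mirrored pairs (the math above) versus dual parity, assuming 8TB drives and ignoring filesystem overhead:

```python
def usable_tb(drives: int, size_tb: float, layout: str) -> float:
    """Rough usable capacity with redundancy, ignoring filesystem overhead."""
    if layout == "mirror":       # RAID10-style: drives grouped into mirrored pairs
        return (drives // 2) * size_tb
    if layout == "raid6":        # RAID6/RAIDZ2: two drives' worth of parity
        return max(drives - 2, 0) * size_tb
    raise ValueError(layout)

for n in (1, 2, 3, 4, 6, 8):
    print(f"{n}x8T  mirror: {usable_tb(n, 8, 'mirror'):>5} TB   "
          f"raid6: {usable_tb(n, 8, 'raid6'):>5} TB")
```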

I gotta be honest, I don't understand the supposed merits of RAID10. Instead of giving up a drive or two per pool, you're cutting your storage in half. Plus, in a RAID6 you can lose ANY 2 drives. For RAID10, if you happen to lose a mirrored pair (unlikely, but possible), you're boned.
Write speed: you don't need to re-read entire stripes to recompute parity, although modern implementations are better about that. Rebuild speed: you also don't need to read the entire stripe, just copy the surviving mirror. Read speed can be better since you have two copies of all the data instead of one, but that's highly workload dependent. And in the before-times, actually computing the parity on the CPU was slower; that's not a problem today.
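To make the parity point concrete, a toy sketch of why a small write on RAID5 is a read-modify-write (the old data and old parity blocks have to be read back first) while a mirrored write is just two identical writes. Pure illustration, single XOR parity only:

```python
def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Read-modify-write: new parity = old parity XOR old data XOR new data.
    Both the old data block and the old parity block have to be read first."""
    return bytes(p ^ a ^ b for p, a, b in zip(old_parity, old_data, new_data))

def raid10_small_write(new_data: bytes) -> tuple[bytes, bytes]:
    """Mirrored write: the same block goes to both halves, nothing gets read."""
    return new_data, new_data

old, parity, new = b"\x01\x02\x03\x04", b"\x0f\x0e\x0d\x0c", b"\xff\x00\xff\x00"
print("new RAID5 parity:", raid5_small_write(old, parity, new).hex())
print("RAID10 writes:   ", raid10_small_write(new))
```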
 
1x8T is 0
2x8T is 8
3x8T is still 8
4x8T is 16.
Now, if this is all easily replaced (downloaded) stuff, or stuff backed up so it's not a risk if it's lost, then you can ignore that.
If this is data where it's easy (but possibly time-consuming) to replace anything lost, then just use mergerfs. That way, if you lose a disk, only the data on that disk is lost, rather than every file being partially missing. If you're using Sonarr and Readarr, they'll show you which files are missing, and you can just redownload whatever's missing or delete a series that has too many missing files.
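A small sketch of what "only the data on that disk is lost" buys you: walk each mergerfs branch (each physical disk) separately and write a per-disk manifest, so if one dies you know exactly what to redownload. The branch paths are placeholders:

```python
import os

# Placeholder mergerfs branches -- each one is a single physical disk's mountpoint.
BRANCHES = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]

def inventory(branch: str) -> list[str]:
    """List every file stored on one branch, as paths relative to the branch root."""
    files = []
    for root, _dirs, names in os.walk(branch):
        files += [os.path.relpath(os.path.join(root, n), branch) for n in names]
    return files

for branch in BRANCHES:
    listing = sorted(inventory(branch))
    manifest = f"manifest-{os.path.basename(branch)}.txt"
    with open(manifest, "w") as fh:
        fh.write("\n".join(listing))
    print(f"{branch}: {len(listing)} files -> {manifest}")
```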
 
"100 MB? I can't imagine ever filling up a 100 MB drive!" -Me, 1995
Imagine telling your past self that a DOOM clone would take 5 GB of memory.

 
Also, fuck, I just logged into that system to check and realized the drives aren't spun down.
Ahh, so much better.
Still, I guess I should check why that many are still spinning. The two boot SSDs are obvious, but the others should all be asleep.
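A quick sketch for checking that from a script, shelling out to hdparm -C (which reports active/idle versus standby); assumes Linux, SATA drives, and root:

```python
import glob
import subprocess

# Report the power state of every SATA disk. hdparm -C prints
# "drive state is: active/idle" or "standby". Needs root.
for dev in sorted(glob.glob("/dev/sd?")):
    result = subprocess.run(["hdparm", "-C", dev], capture_output=True, text=True)
    lines = (result.stdout or result.stderr).strip().splitlines()
    print(dev, "->", lines[-1].strip() if lines else "no output")
```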
 
I gotta be honest, I don't understand the supposed merits of RAID10. Instead of giving up a drive or two per pool, you're cutting your storage in half. Plus, in a RAID6 you can lose ANY 2 drives. For RAID10, if you happen to lose a mirrored pair (unlikely, but possible), you're boned.

I believe that with RAID10, if a drive fails, it doesn't take the system down while it rebuilds. This is useful in the datacenter, not so much at home.
 
I believe that with RAID10, if a drive fails, it doesn't take the system down while it rebuilds. This is useful in the datacenter, not so much at home.
No RAID will take a system off-line while it rebuilds. But in all cases performance will SUCK.

Do you guys use RAID on endpoints/system drives or just NAS?
All my "important" files are on the NAS, so I just do a system backup now and then on the desktops to preserve any customizations, no RAID there.
 
RAID10 is significantly more performant than RAID5/6, both in normal use and during rebuilds.
You've also got a lot more redundancy. In a home array of eight disks or something, that doesn't really matter, and saving money can be important enough to risk lower redundancy. But in a professional setting, you absolutely don't want to risk losing the whole pool, so spending a bit extra on disks is more than worth it; you budget for that when you're told to put together a proposal for the new pool.
 
I believe that with RAID10, if a drive fails, it doesn't take the system down while it rebuilds. This is useful in the datacenter, not so much at home.
I mean, I've done live rebuilds of RAID5 at home and RAID6 in the datacenter. Hell, on enterprise stuff I've done it in the middle of the day without anybody noticing. If you have hot-swap bays you can do it without missing a beat. Not sure 10 would help performance much here. You still have one drive getting hammered on reads while the mirror rebuilds.
 
I mean, I've done live rebuilds of RAID5 at home and RAID6 in the datacenter. Hell, on enterprise stuff I've done it in the middle of the day without anybody noticing. If you have hot-swap bays you can do it without missing a beat. Not sure 10 would help performance much here. You still have one drive getting hammered on reads while the mirror rebuilds.

According to this, all the drives get hit when rebuilding RAID5, but only the one that died with RAID10:

So, you may ask: why wouldn’t I use RAID 5 instead? It gives me 6TB of total capacity, a performance advantage, and redundancy that protects me from a single drive failure.

The biggest difference between RAID 5 and RAID 10 is how it rebuilds the disks. RAID 10 only reads the surviving mirror and stores the copy to the new drive you replaced. Your usual read and write operations are virtually unchanged from normal operations.

However, if a drive fails with RAID 5, it needs to read everything on all the remaining drives to rebuild the new, replaced disk. Compared to RAID 10 operations, which reads only the surviving mirror, this extreme load means you have a much higher chance of a second disk failure and data loss.
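What the article is describing, as a toy sketch: a RAID5 rebuild has to read every surviving member and XOR them back together, while a RAID10 rebuild is a straight copy from the surviving half of the mirror. Illustration only, single-parity XOR:

```python
from functools import reduce

def raid5_rebuild(surviving: list[bytes]) -> bytes:
    """Regenerate the dead disk: XOR of every surviving data/parity block.
    Every remaining drive gets read end to end."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*surviving))

def raid10_rebuild(mirror_partner: bytes) -> bytes:
    """Regenerate the dead disk: straight copy from its mirror partner."""
    return mirror_partner

d0, d1, d2 = b"\x11\x22", b"\x33\x44", b"\x55\x66"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
assert raid5_rebuild([d1, d2, parity]) == d0    # d0 "failed": rebuilt from everything else
assert raid10_rebuild(d0) == d0                 # only d0's partner is touched
```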

 
According to this, all the drives get hit when rebuilding RAID5, but only the one that died with RAID10:



Correct. But that drive being read will still be a bottleneck as some percentage of your reads will need it.

In any case the solution to both is to reduce the rebuild rate if it's impacting performance. I sometimes have to do this if I want to watch videos while my system is doing a rebuild. Then when I'm done watching I bump it back up to run overnight.
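For Linux md arrays specifically, the rebuild-rate knob is the pair of sysctls under /proc/sys/dev/raid/ (ZFS has its own resilver tunables instead). A sketch, with example values:

```python
# Throttle (or restore) Linux md rebuild speed. Needs root; values are examples.
LIMITS = {
    "/proc/sys/dev/raid/speed_limit_min": 1000,    # KB/s guaranteed even under other I/O
    "/proc/sys/dev/raid/speed_limit_max": 50000,   # KB/s ceiling -- lower it to watch videos
}

for path, kbps in LIMITS.items():
    with open(path, "w") as fh:
        fh.write(str(kbps))
    print(f"{path} -> {kbps} KB/s")
```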

These days, about the only time I really see RAID1 is on a single drive pair, or when an entire array is being mirrored and the underlying arrays already have single or dual parity. Then the two arrays are mirrored using different paths, power, and sometimes even different NAS devices, so the whole thing can survive almost any failure*.

* Until the idiots who set it up failed to understand "redundant paths" and connected it so they all had a single point of failure, thanks $IDIOT_VENDOR_NAME.
 
According to this, all the drives get hit when rebuilding RAID5, but only the one that died with RAID10:



Yeah, as mentioned above, you're still going to see a performance hit because the data is striped across the mirrors.

I can see where RAID10 would have a read performance advantage. But the thing is... yes, it can lose up to half its drives and still function, so long as none of those drives belong to the same mirror. Theoretically, you could eat shit from losing only 2 drives. I'ma stick with RAID6 where you can lose ANY 2 drives.
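To quantify the "ANY 2 drives" point: if two drives in a RAID10 fail at random, the array only dies when the second failure lands on the first one's mirror partner, which works out to 1 in (drive count minus 1). A toy enumeration:

```python
from itertools import combinations

def raid10_death_odds(pairs: int) -> float:
    """Fraction of all 2-drive failure combinations that kill a RAID10 of `pairs` mirrors
    (both failures land in the same mirror). RAID6 survives any such combination."""
    drives = range(pairs * 2)
    combos = list(combinations(drives, 2))
    fatal = sum(1 for a, b in combos if a // 2 == b // 2)
    return fatal / len(combos)

for pairs in (2, 3, 4):
    print(f"{pairs * 2} drives: RAID10 dies in {raid10_death_odds(pairs):.1%} "
          "of 2-drive failures; RAID6 in 0%")
```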

On a semi-related note, I bought a Ubiquiti EdgeRouter. It didn't come with the rack mounting equipment. At only $15, that's fine, but... it's out of stock literally everywhere. And since it's a non-standard width, none of the "universal" ears will fit it. HAHAHA VERY FUCKING FUNNY UBIQUITI!

Already regretting not just paying far out the ass for a 1U Qotom.
 
none of the "universal" ears will fit it
There's an easy solution for this. Buy a milling machine that will handle aluminium. Buy some aluminium stock. Fabricate some ears. Mount router. Yes, I did this once with a switch that had a similar issue, but I already had the mill.

Or 3d print some. Or a shelf. Or duct tape.
 
There's an easy solution for this. Buy a milling machine that will handle aluminium. Buy some aluminium stock. Fabricate some ears. Mount router. Yes, I did this once with a switch that had a similar issue, but I already had the mill.

Or 3d print some. Or a shelf. Or duct tape.
First attempt is taking a couple of odd sizes from existing kits and seeing if I can get an asymmetric pair to add up to 19". If that fails I'll probably try to borrow a 3D printer.
 
Ok, hardware questions lads

Ok, me and my buddy have been working on his new build for the last few days. It runs, everything is downloaded, it's great. Problem: a drive is "missing". It has 3: a 500 gig boot drive, which shows up; a 1TB HDD; and a 1TB M.2. The M.2 is gone. If I remember right, before he brought it home (he called later), the BIOS did say the M.2 was in there.



Is it in RAID? How do I stop that?
 
Ok, hardware questions lads

Ok, me and my buddy have been working on his new build for the last few days. It runs, everything is downloaded, it's great. Problem: a drive is "missing". It has 3: a 500 gig boot drive, which shows up; a 1TB HDD; and a 1TB M.2. The M.2 is gone. If I remember right, before he brought it home (he called later), the BIOS did say the M.2 was in there.



Is it in RAID? How do I stop that?
Some motherboards share the same SATA port/lanes between the M.2 slot and one of the SATA ports, so populating one can disable the other.
Is the motherboard incompatible with NVMe? Does there need to be a setting changed in the firmware?
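Before blaming RAID, it's worth confirming whether the OS can see the drive at all. A sketch that asks Windows to list every physical disk it knows about (assumes PowerShell and its Get-PhysicalDisk cmdlet are available):

```python
import subprocess

# Ask Windows to list every physical disk it can see.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Size"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
# If the M.2 is listed but has no drive letter, it probably just needs initialising in
# Disk Management; if it's missing entirely, check the BIOS and the shared-port issue above.
```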
 