The Linux Thread - The Autist's OS of Choice

So the 2% has to reach 10% for the available spare to start diminishing?
No, the 2% has to reach 100%*

When the available spare pool goes from 100% to 10% then you really really need to replace the drive. I've only ever had 2 SSDs (old old 120GB SATA) that I've used till expiration, everything else got replaced before they hit 90% Available Spare or so.

* Cells can die before their estimated life expires or keep working long after; it's not uncommon to see a few cells remapped before the counter hits 100%, just due to the nature of physics and stuff made out of silicon. Things like the drive running hotter than expected will reduce usable lifetimes.
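If you want to read those wear fields yourself, `sudo smartctl -a /dev/nvme0` prints them (assuming smartmontools is installed and your NVMe device is nvme0). To keep this runnable anywhere, the snippet below filters a pasted sample of that output rather than a live drive:

```shell
# Sample SMART health lines as smartctl prints them (values pasted from this
# thread); on a real system you'd pipe `sudo smartctl -a /dev/nvme0` into the
# same awk to pull out just the wear-related fields.
smart_sample='Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    2%'
printf '%s\n' "$smart_sample" | awk -F: '/Available Spare|Percentage Used/ {
    gsub(/^[ \t]+/, "", $2)   # trim the column padding
    print $1 " -> " $2
}'
```

Available Spare falling toward the threshold is the replace-the-drive signal; Percentage Used passing 100% just means the rating is used up.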
 
things like the drive running hotter than expected
How much hotter?

Also I have a USB drive that runs at normal temp on an old toaster with USB 2, but on this newer system (likely with USB 3), it feels like it's around 100 F or 40 C when I eject it.

Is that normal? It's a newer flash drive that's likely USB 3.0.
 
How much hotter?

Also I have a USB drive that runs at normal temp on an old toaster with USB 2, but on this newer system (likely with USB 3), it feels like it's around 100 F or 40 C when I eject it.

Is that normal? It's a newer flash drive that's likely USB 3.0.
It's hard to find good information on absolute numbers. If the drive is in a system with normal cooling it's likely fine; if it's next to a 5090 then additional airflow may be needed.

For external drives, yes they do get warm just like their internal counterparts. As densities grow temps get warmer, 100F doesn't sound like a problem for a normal external drive.
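For what it's worth, 100 F is closer to 38 C than 40; quick conversion:

```shell
# Fahrenheit to Celsius: C = (F - 32) * 5/9
awk 'BEGIN { f = 100; printf "%.1f F is %.1f C\n", f, (f - 32) * 5 / 9 }'
```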
 
  • Informative
Reactions: ToroidalBoat
hey @analrapist (lol @ username)

So if I get this right, it is...

disable: sudo tune2fs -O ^has_journal /dev/[drivename]
enable: sudo tune2fs -O has_journal /dev/[drivename]

... and not that earlier sudo mke2fs -t ext4 -O ^has_journal /dev/drivename thing?
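For what it's worth, you can sanity-check those two tune2fs commands without touching a real disk by making a throwaway file-backed filesystem (a sketch, assuming e2fsprogs is installed; the file name and size are arbitrary):

```shell
# Build a scratch ext4 filesystem in a plain file; -F lets mke2fs use a file.
truncate -s 64M demo.img
mke2fs -q -F -t ext4 demo.img
# Disable the journal, then count mentions of the feature in the feature list.
tune2fs -O ^has_journal demo.img > /dev/null
without_journal=$(tune2fs -l demo.img | grep -c has_journal || true)
# Re-enable it and count again.
tune2fs -O has_journal demo.img > /dev/null
with_journal=$(tune2fs -l demo.img | grep -c has_journal || true)
echo "journal mentions after disable: $without_journal, after re-enable: $with_journal"
rm demo.img
```

On a real system the argument is the unmounted partition (e.g. /dev/sda2), not the whole drive.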
Keep in mind that, when you disable stuff like this and system logs, you're making it harder for other people to troubleshoot your problems. You're a new-ish user, aren't you? One of the first things people ask for in Linux help threads is program and/or system logs.
 
It's hard to find good information on absolute numbers. If the drive is in a system with normal cooling it's likely fine; if it's next to a 5090 then additional airflow may be needed.
So does that 2% mean that the cells are 2% used overall, or that 2% of the cells are (nearly) worn-out?

One of the first things people ask for in Linux help threads is program and/or system logs.
Hopefully I can get by on simply re-enabling the logging if there's suddenly some BS issue that starts.
 
So does that 2% mean that the cells are 2% used overall, or that 2% of the cells are (nearly) worn-out?
Why not both?

It's supposed to be the first if the wear leveling code in the controller is doing its job. But if the drive is never TRIMmed (via the discard mount option or a scheduled fstrim job) and/or is near full, so all the writes are stuck in one small area, then it could be the second.
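A back-of-the-envelope illustration of that "stuck in one small area" failure mode, with made-up numbers: the same total writes produce very different per-cell wear.

```shell
# Toy numbers, not from any real drive.
total_writes=100000
cells_spread=1000   # wear leveling working: writes spread over many cells
cells_stuck=50      # near-full, never TRIMmed: writes land on a few free cells
echo "leveled: $(( total_writes / cells_spread )) writes per cell"
echo "stuck:   $(( total_writes / cells_stuck )) writes per cell"
```

At a 1000-cycle rating, the stuck case has already blown through the budget twice over while the leveled case has barely started.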
 
It's supposed to be the first if the wear leveling code in the controller is doing its job. But if the drive is never TRIMmed (via the discard mount option or a scheduled fstrim job) and/or is near full, so all the writes are stuck in one small area, then it could be the second.
I've often read that a cell can take 1000 writes before it can't write anymore, but how much of that before the cell has serious issues? Like 50% of the way there? Just 10%?
 
I've often read that a cell can take 1000 writes before it can't write anymore, but how much of that before the cell has serious issues? Like 50% of the way there? Just 10%?
That number should be 100%. Error correction should handle any minor glitches, but the idea is that the number of usable cycles should be at least that amount. Obviously sometimes a cell may give up early, but that should be rare. It's also important to note that manufacturers aren't going to share details like how much error correction is needed for the cycles they expect, just that there should be 0 uncorrectable errors for those 1000 cycles. I found a document in 2017 that said the raw error rate could be as high as 1 in 100 bits at the expected lifetime, so error correction would need to be that good. It's no wonder SMART data doesn't tell you raw error rates on flash devices; no one would ever use them.
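For scale, that cycle rating translates into raw write endurance roughly like this (hypothetical 500 GB drive, the 1000-cycle figure from above, ignoring write amplification):

```shell
capacity_gb=500   # hypothetical drive size
pe_cycles=1000    # program/erase cycles per cell, per the rating above
tbw=$(( capacity_gb * pe_cycles / 1000 ))   # GB * cycles -> TB written
echo "rough rated endurance: $tbw TB written"
```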
 
The Linux-running computer I'm on is about 5 years old now. The TBW to the SSD is now almost 20 TB.

The main reasons* for such a ridiculous amount of writing...

  • not disabling "Safe Browsing" in Firefox (every 20 to 30 minutes a blacklist of bad URLs, over 20 MB, is downloaded from Google)
  • watching videos in Firefox without using Private Browsing with browser.privatebrowsing.forceMediaMemoryCache set to true
  • watching YouTube videos on that main crapsite or on youtube.com/embed/[vid code]
  • using GMail or some other sites in normal browsing (not Private Browsing)
  • not disabling logging with systemd and rsyslog
  • not disabling logging in uBlock

... and all that is despite disabling caching in Firefox.

* (along with whatever the previous user used it for)

With all that, this is what the SMART thing says:

Code:
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    2%
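Plugging those numbers in: 20 TB written against 2% used means the controller is projecting on the order of 1000 TBW of life, though it's a rough extrapolation since Percentage Used tracks erase cycles rather than bytes written directly.

```shell
tb_written=20   # almost 20 TB written over ~5 years, per the post above
pct_used=2      # SMART Percentage Used
echo "implied endurance: $(( tb_written * 100 / pct_used )) TBW"
echo "yearly writes:     $(( tb_written / 5 )) TB/year"
```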
I'm so curious, what is your plan once you actually get down to 0 writes?
 
I don't even see the issue, you can just clone the drive onto another SSD if it comes to it. Not that it will, because I have never once seen even my cheapest chinesium SSDs max out despite having 1TB torrents running all the time.
 
Is it normal for that jbd2 process to be running nonstop? Shouldn't it finish at some point?

No need to be a dick about it. I haven't disabled journaling (jbd2 is still running), I keep backups, and all that constant writing did make me "autistically care" because I didn't know that much about SSDs. Having to replace a drive or reinstall an OS is a pain. Also I still think it's not a dumb idea to minimize writing to flash memory anyway.

I have never once seen even my cheapest chinesium SSDs max out despite having 1TB torrents running all the time.
That's a lot of torrenting.
 
I am not using a Rawhide install, but Fedora nonetheless had a system update to Fedora 42 that fucked the kernel for me. I have it mostly working now, but it changed basically everything about systemd and how services start.

While I am a retard and I do enjoy this kind of tinkering, I specifically did not want any of these experimental updates, but the OS prompts you into them, so beware I guess.

Such nostalgia going through all the shit I never use and trying to determine where to update things. AI does speed this process up a lot more than in the past though.

These days I have a mac for something I know will just work and linux boxes for all of my old man basement tinkering.
 
Is it normal for that jbd2 process to be running nonstop? Shouldn't it finish at some point?
It is a kernel service. It runs so long as you're using ext4 journaling.

Having to replace a drive or reinstall an OS is a pain.
In Linux, it's not. Boot a kernel with init=/bin/bash, then dd if=/dev/olddrive of=/dev/newdrive. Next, tune2fs /dev/newdrive -U $(uuidgen). On the next Grub run, change root=UUID=[whatever UUID uuidgen created], boot, and run update-grub. Windows is way more irritating.
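The UUID step can be tried risk-free on a file-backed filesystem first (a sketch, assuming e2fsprogs is installed; the /proc fallback covers systems without uuidgen):

```shell
# Scratch ext4 filesystem in a plain file, standing in for the cloned partition.
truncate -s 64M clone.img
mke2fs -q -F -t ext4 clone.img
old_uuid=$(tune2fs -l clone.img | awk '/Filesystem UUID/ { print $3 }')
# Generate a fresh UUID and stamp it on the filesystem.
fresh=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
tune2fs -U "$fresh" clone.img > /dev/null
new_uuid=$(tune2fs -l clone.img | awk '/Filesystem UUID/ { print $3 }')
echo "old UUID: $old_uuid"
echo "new UUID: $new_uuid"
rm clone.img
```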
 
  • Like
  • Informative
Reactions: ZMOT and TJT
Gonna post here cause I don't wanna bump another thread. I recently did a fresh install of EndeavourOS on my PC and I noticed that the thumbnails for my video files are fucked.

[screenshot attachment]

Sometimes a file will have a proper thumbnail, but it seems like they only look proper if they were manually set and not the auto-generated ones.

[screenshot attachment]

Is this because I switched to using MPV as my player? I thought this would be something with the Dolphin file manager, since I had a similar issue with no image thumbnails before, but that was a setting about file sizes and it doesn't seem to apply to video files. And yes, I do have the ffmpeg thumbnailer or whatever that is meant to generate thumbnails.
 
Boot a kernel with init=/bin/bash, then dd if=/dev/olddrive of=/dev/newdrive. Next, tune2fs /dev/newdrive -U $(uuidgen). On the next Grub run, change root=UUID=[whatever UUID uuidgen created], boot, and run update-grub.
So what does all that do? Make a usable copy of the OS on another volume?
 
dd copies all the info from one drive to another. That completes the transfer of data: your drive is backed up. The rest is just paranoia/polish. It might be enough, after the dd, to shut down and pull the old drive.

The stuff with UUIDs just changes the UUID of the new drive so you can refer to it unambiguously. update-grub updates your bootloader (presuming you're using Grub, which you ought to be if you're newish) so that it knows about your drives. You'll have both drives online for that boot, so you can be sure nothing funky happened. Shut down again, pull the old drive, boot again, and run update-grub. Now the new drive will be the only one listed in the next boot.

Again, because I'm just sketching the outline here and the coffee's still kicking in: you have to run tune2fs on your partition, so /dev/newdrive[n] is actually what the tune2fs command will look like.
 