The Linux Thread - The Autist's OS of Choice

Still, my main concern remains running an operating system from a microSD card. It's fishy at a bare minimum.
Millions of users disagree with you. Including a ton of embedded shit. For super special applications you can run immutable and just a small ramdisk for state files.
It's silly to take a $35 Pi and slap on storage that may cost twice as much as the Pi. My worst case Pi is the one in the attic monitoring a bunch of 433MHz sensors. If it dies, I flash a new SD card. Open the attic hole, open the box it lives in, put the MicroSD in and rsync back the backup from /home/pi and add the startup script to /etc. If I was really worried I'd image the disk so I can flash a new card directly as it keeps no local state. 'proper' storage would be larger, take more POE power and generate more of its own heat and may not even like the attic temps in summer.

The Linux-running computer I'm on is about 5 years old now. The TBW to the SSD is now almost 20 TB.

Code:
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    2%
I hate to be the bearer of bad news, but 2% in 5 years means you may have to replace that drive around the year 2270.
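For the record, the back-of-envelope projection behind that joke (assuming the drive was new around 2020 and wear stays linear, which it won't, but that's the gag):

```python
# Linear projection of SSD wear-out from the SMART "Percentage Used" field.
# Assumes wear continues at exactly the historical rate (a big assumption).

def wearout_year(start_year: int, years_elapsed: float, pct_used: float) -> int:
    """Year when Percentage Used is projected to hit 100%."""
    rate = pct_used / years_elapsed          # percent per year
    return round(start_year + 100 / rate)

# 2% used after 5 years => 0.4%/year => 250 years to reach 100%.
print(wearout_year(2020, 5, 2))  # -> 2270
```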
 
Millions of users disagree with you. Including a ton of embedded shit. For super special applications you can run immutable and just a small ramdisk for state files.
It's silly to take a $35 Pi and slap on storage that may cost twice as much as the Pi. My worst case Pi is the one in the attic monitoring a bunch of 433MHz sensors. If it dies, I flash a new SD card. Open the attic hole, open the box it lives in, put the MicroSD in and rsync back the backup from /home/pi and add the startup script to /etc. If I was really worried I'd image the disk so I can flash a new card directly as it keeps no local state. 'proper' storage would be larger, take more POE power and generate more of its own heat and may not even like the attic temps in summer.
Yeah the cost in dollars is minimal. But why even run the risk?
 
lol

Also wouldn't it be bad if that 2% reaches that 10% threshold (so it's really 20% of the way to problems)?
No, the 10% threshold applies to Available Spare (the figure that's still at 100%), not to Percentage Used.
Percentage Used is the estimate of how much of the drive's total rated wear has been consumed. I'd guess 50% is a good number to start thinking about replacement. Once it starts getting close to failure, the Available Spare can drop quickly as worn cells have to be moved to spare cells with lifetime left.

You can also look up the spec for that drive and compare the '20 TB' written to the manufacturer's rated TBW.
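If you're pulling that figure from the NVMe SMART log, note that "Data Units Written" is counted in 512,000-byte units per the NVMe spec, so it needs converting. A quick sketch (the unit count and the 600 TBW rating here are made-up example numbers, not from any real drive):

```python
# Convert the NVMe "Data Units Written" SMART field to terabytes written
# and compare against the drive's rated endurance (TBW).
# NVMe spec: one data unit = 1000 * 512 bytes = 512,000 bytes.

def data_units_to_tb(units: int) -> float:
    return units * 512_000 / 1e12

# Hypothetical drive: ~39 million data units written, 600 TBW rating.
tb_written = data_units_to_tb(39_000_000)
rated_tbw = 600
print(f"~{tb_written:.1f} TB written, ~{tb_written / rated_tbw:.1%} of rated endurance")
```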
 
So what's the "2% used" then?

:thinking:
Percentage used is the estimate of the total wear available.
Among all the non-spare cells, it estimates how many program/erase cycles they can take. As they get reused, that Percentage Used number goes up. When a cell finally hits the limit and causes an error on write, a spare cell is swapped in and that 100% Available Spare number starts to decrease.
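A toy model of that mechanism (all numbers are made up; real controllers are vastly more complex, but the shape is the same: cells wear together, stragglers die early, spares absorb them until they run out):

```python
import random

# Toy model: each cell has a real cycle limit scattered around the rated
# figure. Wear leveling ages every cell together; a cell that hits its
# limit is remapped to a spare, and the spare pool shrinks.

random.seed(1)
RATED_CYCLES = 1000
cells = [round(random.gauss(RATED_CYCLES, 50)) for _ in range(1000)]
spares = 40

writes = 0
while spares > 0:
    writes += 1
    cells = [c - 1 for c in cells]           # everyone ages one cycle
    dead = sum(1 for c in cells if c <= 0)   # cells that just hit their limit
    cells = [c for c in cells if c > 0]
    spares -= dead                           # remap dead cells to spares

pct_used_at_death = min(100, round(100 * writes / RATED_CYCLES))
print(f"spares exhausted after {writes} cycles (Percentage Used ~{pct_used_at_death}%)")
```

Note how the drive runs out of spares *before* the rated cycle count, because the weakest few percent of cells die early and eat the spare pool.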
 
So the 2% has to reach 10% for the available spare to start diminishing?
No, the 2% has to reach 100%*

When the available spare pool goes from 100% down to 10%, you really, really need to replace the drive. I've only ever had 2 SSDs (old old 120GB SATA) that I've used till expiration; everything else got replaced before Available Spare dropped to 90% or so.

* Cells can die before their estimated life expires or keep working long after; it's not uncommon to see a few cells remapped before 100% just due to the nature of physics and stuff made out of silicon. Things like the drive running hotter than expected will reduce usable lifetimes.
 
things like the drive running hotter than expected
How much hotter?

Also I have a USB that runs at normal temp on an old toaster with USB 2, but on this newer system (likely with USB 3), it feels like it's around 100 F (38 C) when I eject.

Is that normal? It's a newer flash drive that's likely USB 3.0.
 
How much hotter?

Also I have a USB that runs at normal temp on an old toaster with USB 2, but on this newer system (likely with USB 3), it feels like it's around 100 F (38 C) when I eject.

Is that normal? It's a newer flash drive that's likely USB 3.0.
It's hard to find good information on absolute numbers. If the drive is in a system with normal cooling it's likely fine; if it's next to a 5090, additional airflow may be needed.

For external drives, yes, they do get warm just like their internal counterparts. As densities grow, temps climb. 100 F doesn't sound like a problem for a normal external drive.
 
hey @analrapist (lol @ username)

So if I get this right, it is...

disable: sudo tune2fs -O ^has_journal /dev/[drivename]
enable: sudo tune2fs -O has_journal /dev/[drivename]

... and not that earlier sudo mke2fs -t ext4 -O ^has_journal /dev/drivename thing?
Yes, tune2fs is the one that toggles the journal on an existing filesystem (unmount it first, and run e2fsck -f afterwards); mke2fs would create a brand-new filesystem and wipe your data. But keep in mind that, when you disable stuff like this and system logs, you're making it harder for other people to troubleshoot your problems. You're a new-ish user, aren't you? One of the first things people ask for in Linux help threads is program and/or system logs.
 
It's hard to find good information on absolute numbers. If the drive is in a system with normal cooling it's likely fine; if it's next to a 5090, additional airflow may be needed.
So does that 2% mean that the cells are 2% used overall, or that 2% of the cells are (nearly) worn-out?

One of the first things people ask for in Linux help threads is program and/or system logs.
Hopefully I can get by on simply re-enabling the logging if there's suddenly some BS issue that starts.
 
So does that 2% mean that the cells are 2% used overall, or that 2% of the cells are (nearly) worn-out?
Why not both?

It's supposed to be the first if the wear-leveling code in the controller is doing its job. But if the drive is never TRIMmed (the discard mount option or a scheduled fstrim job) and/or is so near full that all the writes are stuck in one small area, then it could be the second.
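Rough arithmetic on why the near-full case hurts so much (illustrative numbers only, not a controller model): the controller can only cycle incoming writes through blocks it believes are free, so the smaller that pool, the more erase cycles each of those blocks eats.

```python
# Why a near-full, never-trimmed drive wears its free area fast:
# all incoming writes rotate through only the blocks the controller
# knows are free. Purely illustrative arithmetic.

def cycles_consumed(tb_written: float, drive_tb: float, free_frac: float) -> float:
    """Average erase cycles consumed by the blocks absorbing the writes."""
    return tb_written / (drive_tb * free_frac)

drive_tb = 1.0
tb_written = 100.0
print(cycles_consumed(tb_written, drive_tb, 1.00))  # whole drive in rotation: 100 cycles
print(cycles_consumed(tb_written, drive_tb, 0.05))  # only 5% free: 2000 cycles
```

Same 100 TB of writes, 20x the wear on the cells actually taking them.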
 
It's supposed to be the first if the wear-leveling code in the controller is doing its job. But if the drive is never TRIMmed (the discard mount option or a scheduled fstrim job) and/or is so near full that all the writes are stuck in one small area, then it could be the second.
I've often read that a cell can take 1000 writes before it can't write anymore, but how much of that before the cell has serious issues? Like 50% of the way there? Just 10%?
 
I've often read that a cell can take 1000 writes before it can't write anymore, but how much of that before the cell has serious issues? Like 50% of the way there? Just 10%?
That number should be 100%. Error correction should handle any minor glitches, but the idea is that the number of usable cycles should be at least that amount. Obviously a cell can sometimes give up early, but that should be rare. It's also important to note that manufacturers won't share details like how much error correction is needed for the cycles they expect, just that there should be 0 uncorrectable errors for those 1000 cycles. I found a document in 2017 that said the raw error rate could be as high as 1 in 100 bits at the expected lifetime, so the error correction has to be that good. It's no wonder SMART data doesn't report raw error rates on flash devices; no one would ever use them.
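To put that 1-in-100 figure in perspective (the 4 KiB page size is my assumption, just to make the arithmetic concrete):

```python
# Expected raw bit errors per flash page at an end-of-life raw bit
# error rate of 1e-2, and what that implies for the ECC.

PAGE_BITS = 4096 * 8      # one 4 KiB page (assumed page size)
RBER = 1 / 100            # raw bit error rate near end of life

expected_errors = PAGE_BITS * RBER
print(f"~{expected_errors:.0f} raw bit errors expected per 4 KiB page")
# The ECC has to correct on the order of hundreds of bits per page
# for the host to see zero uncorrectable errors.
```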
 
The Linux-running computer I'm on is about 5 years old now. The TBW to the SSD is now almost 20 TB.

The main reasons* for such a ridiculous amount of writing...

  • not disabling "Safe Browsing" in Firefox (every 20 to 30 minutes a 20+ MB blacklist of bad URLs is downloaded from Google)
  • watching videos in Firefox without using Private Browsing with browser.privatebrowsing.forceMediaMemoryCache set to true
  • watching YouTube videos on that main crapsite or on youtube.com/embed/[vid code]
  • using GMail or some other sites in normal browsing (not Private Browsing)
  • not disabling logging with systemd and rsyslog
  • not disabling logging in uBlock

... and all that is despite disabling caching in Firefox.

* (along with whatever the previous user used it for)

With all that, this is what the SMART thing says:

Code:
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    2%
I'm so curious, what is your plan once you actually get down to 0 writes?
 
Nigger you better enable journalling again and stop autistically caring about your shitass SSD that's been worn 2% in five years. Also, if you're doing all this shit but not backups, double nigger.
 