the logical continuation of static linking.
Right, well, in the sense that it's a workaround for a program written in a way that it only works with a specific set of library versions. It could be because the OS you're running the program on has older or newer library versions than the ones the program was written against, and you need a newer or older version of the program than the one released with compatibility for your OS's library versions.
Or it could be because the program stopped being maintained a long time ago so it depends on old libraries.
Or it could be that the program is written with such strict library dependencies that it can only work on a specific version of a specific distro.
But static linking has more uses and reasons beyond that, even on systems where everything uses the same library versions globally. Can't remember the exact reasons off the top of my head, but in some distros it's possible to statically link everything.
It's also hard to tell if cat has truly completed or if the data is still sitting entirely in the page cache.
That's easy, just run sync; it will wait for the buffers to be flushed to disk before exiting. You could also monitor /proc/vmstat: nr_dirty should be small and nr_writeback should be 0. If nr_dirty is large or nr_writeback is non-zero, it hasn't finished flushing yet.
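Concretely, the check might look like this (Linux-only, since it reads /proc/vmstat):

```shell
#!/bin/sh
# Ask the kernel to flush; sync(1) blocks until writeback completes.
sync

# Then inspect the dirty-page counters. After sync returns, nr_writeback
# should be 0 and nr_dirty should be small (new writes can dirty pages
# again at any moment, so it's rarely exactly 0).
awk '$1 == "nr_dirty" || $1 == "nr_writeback" { print $1, $2 }' /proc/vmstat
```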
As for using cat to copy an image directly onto a USB drive, the reason it fails is that shitty USB drives just stop responding after chugging along at 2 MB/s for 15 minutes if you don't control the write/sync sequencing yourself.
I'd consider that a hardware defect and not really a fault of how the OS sends commands to it.
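One way to control the write/sync sequencing yourself is dd: conv=fsync flushes once at the end so dd doesn't return until the data has actually reached the target, and oflag=direct bypasses the page cache per block. A harmless sketch against a scratch file; for a real stick the of= target would be the verified device node (e.g. /dev/sdX, a placeholder you must double-check before writing):

```shell
# conv=fsync makes dd call fsync() before exiting. Writing 4 MiB of
# zeros to a scratch file here; substitute the verified device node
# (of=/dev/sdX) for a real USB write, optionally adding oflag=direct
# to bypass the page cache and get honest progress numbers.
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=4 conv=fsync status=none
ls -l /tmp/scratch.img   # 4194304 bytes
```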
When I am writing software I generally write the program while keeping a notebook open (or an Obsidian.md note) in front of me. While I write the program I describe my ideas and thoughts in the notebook, and then afterwards I write the documentation in Markdown and publish it on my own documentation site. I keep the notebook around and link it to the documentation so that in the future I know what I was thinking and trying to accomplish.
The notebook is a purely "append only" document, so that I have all the things that didn't work also noted, so that in future endeavors I do not repeat the same mistakes. It generally works very well and it has saved me in quite a few situations. I also like the fact that it slightly slows me down, so that I force myself to think about the problem for a bit longer.
Note taking is fine... but there is one thing I don't like that's become a trend.
I hate the trend of making documentation online-only. Like not even shipping the documentation in the package, but just pointing you to a docs website. Thanks, now I need a fully working web browser and internet to even use the software. Then I have to find the right revision of the docs for the particular version that I have, because it may not be the latest version.
The best place to put documentation is in the manpage. Texinfo is acceptable; many GNU programs use it, and there is a TUI reader for it that's usually part of the base package set of most distros. But I don't like HTML. I may be able to open it in w3m or lynx, but those usually aren't installed, and they're large. An entire browser just to read docs? RST and Markdown I also don't like; you need a dedicated reader or plugin for them too. And the docs have to be installed either into the system's manpage or texinfo locations or into /usr/share/doc.
It's easy to bundle docs with your package. I know how Gentoo's ebuilds work, they scan the tarball for a doc or docs directory, and just copy it into /usr/share/doc, with maybe some filtering based on file type, maybe compressing some file types. You can also easily tell it exactly how to install the docs in the ebuild if you don't like the defaults, tell it where in the tarball the manpages and texinfo docs are, and it'll install them in the right places.
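For the case where the defaults aren't right, the relevant part of an ebuild is just a few helper calls in the src_install phase. A hypothetical fragment (mytool and the docs/ paths are made up), not a complete ebuild:

```shell
# Hypothetical src_install phase of an ebuild.
src_install() {
    default                   # runs einstalldocs: README, ChangeLog, etc.
                              # land in /usr/share/doc/${PF}
    doman docs/mytool.1       # manpage   -> /usr/share/man/man1
    doinfo docs/mytool.info   # texinfo   -> /usr/share/info
    dodoc -r docs/html        # the rest  -> /usr/share/doc/${PF}
}
```

The doman/doinfo/dodoc helpers handle compression and the destination paths for you, which is why it's so little work to put docs where the system's readers actually look.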