The Linux Thread - The Autist's OS of Choice

you mean undo? reading that I thought it would be the opposite of what it was.

Either way, I've seen the people that stuck with xorg complain so much about the xlibre guys' commits. But having used both xlibre and xorg, xlibre has tended to work better for me. The only thing I've noticed with xlibre is that you'll get ABI warnings for certain things, but I haven't noticed it having any actual effect.
Enrico was the only person even trying to improve X11. They are talking about rolling back to February 2024 because when they deleted all of Enrico's commits in their hissy fit they completely fucked up the repository, and they can't get the server to compile unless they roll back to before all the deleted commits were made.
 
Enrico was the only person even trying to improve X11. They are talking about rolling back to February 2024 because when they deleted all of Enrico's commits in their hissy fit they completely fucked up the repository, and they can't get the server to compile unless they roll back to before all the deleted commits were made.
Fucking idiots.
 
Epic CEO Tim Sweeney replies to a post about Linux being compatible with Adobe software:


 

Who the fuck cares? His opinion on the matter is irrelevant in every meaningful way. Having Adobe software on Linux will definitely help, and if they figure this out maybe they can figure out other professional software.
I mean, he's not entirely wrong, Adobe is Oracle tier
 
Big if true, means I can move the last of my family off of proprietard malware come proper Linux compatibility for photoshop/indesign/illustrator.
 
Did anyone make some sort of Linux troubleshooter where it scans your hardware and lists potential problems? Like if it sees you have a 6th gen Intel CPU it tells you that disabling VT-x might fix shutdown problems, and if you have a 10th gen, turning off intel_idle might help.
 
Should I just use Arch? I've been using Mint for a minute but I want better compatibility with apps.
I say if you're willing to do a manual install then go for it.

Otherwise, Arch might not be for you in the long run. I'm not saying people should have to install it manually, just that it's a good test to see if it's the kind of system you'll be happy with. I feel like it gives you a slightly better idea of what you might be in for than using the installer. It also gives you more control overall.
 
These are just some off-the-cuff thoughts I have experimenting with Bash scripting. Specifically, diving into functions, variables, stuff along those lines. It's been... interesting, to put it mildly. I already know how to write scripts that basically execute like 1-2 commands in sequence. For example, here's my script that I use to initiate PKHeX with.

Code:
#!/bin/bash
WINEPREFIX=$HOME/Desktop/PKHeX wine $HOME/Desktop/PKHeX/PKHeX.exe

This is a really simple script that I just double click on to fire up PKHeX, modify my starter's stats, max out money, max out all the items in my bag, and then get bored of playing Pokemon because I took what little challenge was left in the game to begin with. It's only two lines, but one line is absurdly long. Such is the nature of using WINEPREFIXes.

I wanted to trim it down to make it... "prettier," I guess? I dunno, this is the perpetual cycle of a Linux user: you have your shit working just fine, you're curious about what to do next, so you decide to screw with shit that already works well enough at the risk of minor inconveniences at best or full-on borkage at worst. Spoiler alert: I still think this stuff is unnecessary but hey, learning exercise. That's why I'm doing it.

I went to The Linux Documentation Project and immediately circled over to the Bash Scripting guide that hasn't updated since 2014. Yeah, ChatGPT or Gemini would probably be "better" insofar as giving me the answer flat-out, but "vibe coding" is a cancer that LLMs and their hypemen proliferated and frankly, I spend too much time just rattling off musings into a ChatGPT temporary window to laugh at how it'll overcorrect me and reinforce how it needs to follow safety guidelines (sidenote: why the hell is ChatGPT worse than Google nowadays? It took decades for Google to become shit, ChatGPT speedran that in like 18-24 months. Not good, OpenAI. Maybe you should fix your shitty product instead of chasing investor capital harder than a dope fiend chasing the dragon)

Perhaps it's poor form for me to skim through and pick/mix bits and bobs from various sections of a reference material I never read cover-to-cover once in my entire life, but who cares? I'm just trying to gussy up my PKHeX initialisation script for shits and giggles; this ain't mission critical bash scripting that should've probably been done in a more robust language like Python. The below is what my script came to.

Code:
#!/usr/bin/env bash

run_pkhex() {
   local PREFIX="$1"
   WINEPREFIX="$PREFIX" wine "$PREFIX/PKHeX.exe"
}

run_pkhex "$HOME/Desktop/PKHeX"

It looks prettier, curly braces are the true hallmark of any programming language apparently, it fires up the save editor all the same, but my *God* was getting to this point fucking painful... and even then, it feels like an overengineered solution to such a minor quibble.

As I understand it, this is how the script now functions.

Line 1: You invoke the shebang with #!/usr/bin/env bash to make the script more "portable" rather than just using plain-old #!/bin/bash. Logically, I know that's true because you're invoking env to locate Bash, and who knows? Maybe I'll eventually daily drive FreeBSD where /bin/sh is the default shell and Bash will always install into /usr/local/bin through the Ports collection since it's an external package and not part of the base system. As it stands? I'm daily driving Fedora, Bash is the default shell, no one who runs PKHeX on Linux ever uses the same directory structure as any other jack-off because we all do things ever so slightly different, so like... we're really splitting hairs on "portability" here.

Line 2: White space for readability's sake. That's understandable.

Line 3: run_pkhex() { is the start of the function as noted by the "()" followed by the open curly brace. You could literally call this function anything, even cheater_cheater_pumpkin_eater() { as long as you're consistent.

Line 4: Establishing the prefix variable. There's like a million friggin ways to declare variables in Bash, this is the one that I stumbled across, and honestly? I spent way more time than I should have trying to figure out what the "$1" even means. I'll spare you 2-3 hours worth of staring at TLDP's ancient HTML: the "$1" means "the first argument you pass to the function" where the "$1" calls "$HOME/Desktop/PKHeX" from the final line and passes it into the function as a local variable. Probably could've done that cleaner, probably could've wasted a lot less time, and I'm sure some maladjusted terminally online dork like me that reads this at some point down the line will tell me "oi m8 u could've just gone with local PREFIX="$HOME/Desktop/PKHeX" directly inside the function and spared yourself the headache." Hindsight's 20/20. What can you do?
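To make that concrete, here's a throwaway sketch of how "$1" gets filled in. The greet function and the name variable are made up purely for illustration, they're not part of my PKHeX script:

```bash
#!/usr/bin/env bash

# greet() is a made-up demo function, not part of the PKHeX script.
greet() {
    local name="$1"     # "$1" = the first argument passed to the function
    echo "hello, $name"
}

greet "world"   # inside greet, "$1" is now "world", so this prints "hello, world"
```

Whatever you put after the function name when you call it lands in "$1" (and "$2", "$3", and so on for further arguments).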

Line 5: This is where the logic, aka the "command," actually sits... and this is a single-command script. Basically I just shortened WINEPREFIX=$HOME/Desktop/PKHeX wine $HOME/Desktop/PKHeX/PKHeX.exe into WINEPREFIX="$PREFIX" wine "$PREFIX/PKHeX.exe". Huzzah... I shortened the command and all it took was bashing my head against the wall over how to declare a stupidly inconsequential variable.

Line 6: Closing curly brace, otherwise your function won't ever end and your script will go "oi m8 wot r u doin? shit's invalid m8 not gonna work"

Line 7: White space for readability like Line 2.

Line 8: It ain't enough to write out the fucking function. No, you actually have to *call* the function at the end of the script, and let's not forget, that stupid friggin "$1" that I decided to include needs to pull the variable and pass it into the function somehow, so where else does $HOME/Desktop/PKHeX go but at the end of the script after I call the function?

Long story short: it actually works, I'm able to cheat at Pokemon and then wonder why all the fun disappears after like 10 minutes. All is well with the world. But what if I'm not editing extant scripts for arbitrary code aesthetic concerns? What if I wanna do something that'll make my life marginally "easier" ? How about something that updates my system, both with the package manager and whatever Flatpaks I have installed? Thankfully, the below script was much easier to write after bashing my head against the wall trying to make a save editor for a children's RPG series work.

Code:
#!/usr/bin/env bash
set -e

dnf_updater() {
    sudo dnf --refresh update -y
}

flatpak_updater() {
    flatpak update -y
}

main() {
    dnf_updater
    flatpak_updater
}

main

All the same shit basically applies here too, but with a few clarifications:

Line 2: set -e is just to make sure your shit stops if you get a failure somehow. On Fedora? That can happen as easily as RPM Fusion being a few hours behind with the latest mesa-freeworld update while the Fedora repos themselves are already pushing out the latest kernel.
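If you want to watch set -e do its thing without nuking an actual script, here's a little sandbox sketch. The echo/false chain is made up purely for demonstration; the failing sequence runs in a subshell so the outer script survives to report what happened:

```bash
#!/usr/bin/env bash

# Run a deliberately failing sequence under set -e in a child bash,
# so the demonstration doesn't kill this script too.
out=$(bash -c 'set -e; echo before; false; echo after' 2>/dev/null || true)

echo "$out"   # only "before" prints; set -e aborted at the failing command
```

Without set -e, both "before" and "after" would print and the failure in the middle would just get ignored.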

So here, I have three distinct functions: dnf_updater, flatpak_updater, and main. Within the first two functions are the cookie-cutter commands I normally execute manually, and then main just chains the two together. It looks prettier, the curly braces make me feel like I'm programming like some 1337 H4X0R, but like... my first instinct was to write something as below

Code:
#!/bin/bash
sudo dnf --refresh update -y
flatpak update -y

Only 3 lines instead of 17, it's more utilitarian, it's less immediately "portable" but to that end... who cares? This is just for the sake of learning, and a big part of learning is doing things in a more arbitrarily complex way to accomplish simple tasks just to make you feel like you're expanding your horizons. Anyway, that's all I have to say.
 
Bash Scripting guide that hasn't updated since 2014.
I think a guide for bash from 2014 is probably still going to pretty much hold up today.

The only thing that might be missing is newer features. I was going to say arrays, but bash has had those since well before 2014. Most people don't even know about, or use, bash arrays anyway, so I don't think it would matter much.

You invoke the shebang with #!/usr/bin/env bash to make the script more "portable" rather than just using plain-old #!/bin/bash. Logically, I know that's true because you're invoking env to locate Bash, and who knows? Maybe I'll eventually daily drive FreeBSD where /bin/sh is the default shell and Bash will always install into /usr/local/bin
If you really want portability, use #!/bin/sh. The scripts you are writing could definitely run with sh instead of bash. Also, if you have dash linked to /bin/sh instead of bash, you get a performance improvement basically for free.

Obviously scripts running with sh need to be POSIX compliant, but most of the time that's pretty easy to do. I recommend just using shellcheck, and if you have an editor that supports LSP you can set it up right in your editor to have it yell at you when you're doing something stupid (or disable the warning if you think it's a useless one).

Anyway, one thing I've taken to doing is setting up my shell scripts with getopts when I want to do multiple things that are similar, so I can pass flags to them to make them do different things. It's nice having just one script and being able to pass it a flag, instead of having multiple different shell scripts to do slightly different things.
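As a rough sketch of what I mean: the -s and -f flags and the echoed commands below are placeholders I made up (they just print what they would run instead of actually updating anything), but the getopts/case skeleton is the real pattern:

```bash
#!/usr/bin/env bash

# One script, multiple behaviours, selected by flags via getopts.
main() {
    local OPTIND=1 opt   # reset OPTIND so main can be called more than once
    while getopts "sf" opt; do
        case "$opt" in
            s) echo "would run: sudo dnf --refresh update -y" ;;  # -s = system packages
            f) echo "would run: flatpak update -y" ;;             # -f = flatpaks
            *) echo "usage: main [-s] [-f]" >&2; return 1 ;;
        esac
    done
}

main -s -f   # prints both "would run" lines, in flag order
```

Each letter in the "sf" option string is a recognised flag; add a colon after a letter (like "p:") if that flag should take an argument, which getopts then hands you in $OPTARG.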

I'm just trying to think of anything else that sticks out as something I would want to point people to as a tip on shell scripting. Most of it seems like general advice that would apply to most programming. Like the power of functions really comes in when it's something you are going to use multiple times in the script, so you only have to type the function name instead of writing it all out. A similar thing goes for variables.

Variables are great for telling yourself what state things are in later in the script: if x thing happens, set x variable; then if x variable is set, run y function. Kind of thing. But again, that's just general programming more than bash-specific.

And the last just-general-programming thing that applies to bash/sh is to take advantage of the different loops they have when you can. You can do some interesting things in shell scripting with for loops, while loops, case statements, and if statements, because of the way these high-level languages let you use external commands. Like creating a for loop with all the files in a directory as the thing it's iterating over, and having some command run as an action on each file. There are so many possibilities. Every programming language has loops, but shell really gives you a lot of power by letting you easily run external commands in loops and with if statements.
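For example, a quick made-up sketch of looping over files. The temp directory and the a.txt/b.txt files are throwaway, created just for the demonstration; any external command could run where the echo is:

```bash
#!/usr/bin/env bash

# Make a throwaway directory with a couple of files to iterate over.
dir=$(mktemp -d)
touch "$dir/a.txt" "$dir/b.txt"

# The glob expands to every matching file; the loop body runs once per file.
found=""
for f in "$dir"/*.txt; do
    found="$found$(basename "$f") "   # collect the names we saw
    echo "processing: $f"             # swap in any real command here
done

rm -r "$dir"   # clean up the throwaway directory
```

The glob expands in sorted order, so this processes a.txt before b.txt.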

One thing I can think of that's bash/sh specific is command substitution, which is a really nice feature; definitely look into that if you haven't already. It's sort of similar to a function but not exactly. It's this syntax: $(command chain inside the parentheses). This lets you set a variable to the output of a command or chain of commands, and you can use this a million different ways. One is using it similar to how you would use a function, although the behavior is slightly different in some cases. The other is setting a variable dynamically. A common one is if you want to set the current time to a variable to use later in the script, like for a file name: you would use the date command in a command substitution assigned to a variable, like so

Bash:
time=$(date "+%m.%d.%Y")

Also, since I mentioned if statements up above: something else I thought about that's really important in bash/sh is the test command, which you will normally see as

[ these square brackets ] , and in bash [[ the double brackets ]]

They are basically an alias for the test command, and they let you check if a variable is set, if a value is equal to, greater than, or less than another, if a file exists, etc. etc. etc. Taking advantage of this is one of the more important things to get comfortable with in bash/sh. As far as the difference between the single and the bash double brackets: they are basically the same thing, with slightly different behaviour, like you don't need to quote variables inside the double brackets the way you should in the single brackets. The differences are pretty subtle and not that important really (at least most of the time).
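A few quick made-up examples of the bracket tests in action (the x variable and the temp file exist only for this sketch):

```bash
#!/usr/bin/env bash

x=5
file=$(mktemp)   # a real file, just so the -f check has something to find

if [ -n "$x" ]; then echo "x is set"; fi                 # string is non-empty
if [ "$x" -gt 3 ]; then echo "x is greater than 3"; fi   # numeric comparison
if [ -f "$file" ]; then echo "file exists"; fi           # regular file check

# bash-only double brackets: quoting $x is optional here
if [[ $x -gt 3 ]]; then echo "double brackets agree"; fi

rm "$file"
```

All four conditions are true, so all four lines print.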

Another thing is to keep the use of external commands to an as-needed thing for the sake of performance. sh, and even bash, can have better performance than python (in some cases at least); it certainly has a better startup time. But there are things that can quickly cause a shell script to be slower than it needs to be, and using external commands when you don't need to is one of them. Obviously that doesn't matter when you are doing something that runs one or two commands and then exits, but if it's something that takes longer, or needs to do a lot of work, that's when it's probably a good idea to trim the fat. Like after you get something to work, you can go back through and see where you can cut things out and swap in builtins.
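Here's a made-up example of trimming one external command: getting a filename out of a path with basename (which forks a process every call) versus the ${var##*/} parameter expansion, which the shell does itself. The path is illustrative only:

```bash
#!/usr/bin/env bash

path="/home/user/save.pkx"   # any path would do; this one is made up

ext_way=$(basename "$path")  # external command: forks a process each call
builtin_way="${path##*/}"    # parameter expansion: strips everything up to
                             # the last "/" with no fork at all

echo "$ext_way $builtin_way" # both give save.pkx
```

One basename call is nothing, but in a loop over thousands of files the fork-per-call difference adds up fast.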

Anyway, I like bash/sh scripting; I find it fun. I'm just going to stop here, because I couldn't think of anything to put down as shell scripting tips at first, and now I keep thinking of stuff, and it's going to go on forever if I keep doing it.
 
Should I just use Arch? I've been using Mint for a minute but I want better compatibility with apps.
Contrary to popular belief, since maybe 2017 I've had basically no issues with Arch whatsoever on any system. Up to that point you might completely fuck over your system when upgrading basically anything, but that is no longer the case, I'd say. As long as you can follow the very simple instructions in the ArchWiki to do a manual install, and as long as you don't mind reading an article here and there (on the ArchWiki, again) when you first start out to understand how to do specific things, then you should be fine.

Potential risk is if you have an Nvidia GPU, but that's mostly because Nvidia sucks dick.
 
One of the most obscure hardware platforms ever to be sold. Undocumented and lost to time. FOUND and reverse engineered. By someone who did not even know what a VMLINUX was 3 months ago or what a BUS was. AND got it running in QEMU.
No, the NAND still does not work. Same error as before.

I've gotten to the point where I'm so desperate that I'm making scripts to LOG EVERY SINGLE RAM READ AND NAND READ AND THEN HEATMAP THEM using automation scripts.
I know the ENTIRE RAM STRUCTURE at this point from how many times I've viewed it. I've lost it.
[screenshots: logs of the most-read RAM addresses]
You see that? That's the log of the most READ RAM ADDRESSES in a run (running qemu is what I'm calling a run), automatically sorted from most to least read.

I've done the same with NAND.

[screenshot: NAND read log]

I've lost it. I've tried every single thing under the sun. I've even...

This is my fucking command line currently:
./qemu-system-mipsel -pflash ./32bx300.BIN -plugin ./contrib/plugins/libexeclog.so -object memory-backend-file,id=mem1,size=256m,mem-path=./test.bin,share=on -machine malta,memory-backend=mem1 -d plugin |& tee -a test.txt
You see that plugin? It logs EVERY SINGLE INSTRUCTION THAT HAPPENS, because -d in_asm does not log ENOUGH for me.
I've scoured the ENTIRE internet for days, searching the most obscure sites not a single CELL on earth knows about. I've given my email to SO many of those sites I'm probably being used for some extortion scam in some third world country. I have SO many different files, SO many different configs. It could be SO many problems.

But if you think this means I'm calling it quits or saying "I need to take a break" or "I need to find a better hobby," you are surely mistaken.
I've been staring at this "Searching for Bootloader.TDF" error, or "Searching for boot.TDF" on newer dumps I've found, for so long. Every day all I see is an EARLIER error than that because my changes made it worse, OR I see the SAME error I've seen every day since December. With NO debugging output or anything.

I've reverse engineered so much of it. SO fucking much.
 
This is just for the sake of learning, and a big part of learning is doing things in a more arbitrarily complex way to accomplish simple tasks just to make you feel like you're expanding your horizons. Anyway, that's all I have to say.
Another thing to think about: all these commands can sit in a file at ~/.bashrc, and when they get sourced, you can run the functions directly from the shell, like:

$ dnf_updater

They'll even tab-complete this way.

My .bashrc has become a separate system-wide repo full of all the little hacks I develop for specific use cases.
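A quick sketch of what that looks like. The temp file here is a throwaway stand-in for ~/.bashrc, and dnf_updater just echoes what it would run instead of actually updating anything:

```bash
#!/usr/bin/env bash

# Write a function into a file, standing in for ~/.bashrc.
funcs=$(mktemp)
cat > "$funcs" <<'EOF'
dnf_updater() {
    echo "would run: sudo dnf --refresh update -y"
}
EOF

. "$funcs"    # source it, like the shell does with ~/.bashrc at startup
dnf_updater   # now callable straight from the shell like any command

rm "$funcs"
```

Once sourced, the function sticks around in that shell session, which is why it tab-completes like a regular command.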
 