The Linux Thread - The Autist's OS of Choice

Putin weaponized Manjaro
putin-computer.jpg
 
In general, IMO rolling release makes no sense, unless you want your system to break so you can learn to fix it.
I don't think I've had anything break the way people talk about while using rolling release distros, and rolling release distros are the only ones I've used for quite a while, even outside of Arch. I just make sure to reboot when I update; the only problems I can remember were from not rebooting after some major component was updated, and rebooting fixed them.

My experience is that Sid was generally more stable than Arch proper, but not substantially so. My experience may be an outlier. The big problem with Arch always comes back to the AUR. When it works right, it's all right. But given that most of the software I use has to be built from scratch anyhow on Arch, may as well just bite the bullet and use Gentoo. If you're fucking around in the AUR to any substantial extent, just switch to a distro that's properly source-based and stop pulling your hair out. The AUR/official dichotomy in Arch is its worst feature. I never really got into DIYing my own Debian packages with checkinstall, I always just did the default autotools/Makefile PREFIX install to /usr/local.
Yeah. Gentoo is the only other system I've ended up enjoying. I don't actually recommend it for most people though, because I don't think most people will get the appeal of Gentoo, and the people who will want Gentoo probably don't need anyone telling them why the Gentoo way of doing things can be appealing. I usually just recommend Arch, or Arch-based distros instead, because for the most part they do just work in my experience. There are a lot of similarities between arch and gentoo, outside of the source based nature (by default at least), but with gentoo it is definitely more advanced in some ways by its nature. The Gentoo documentation does make up for it, but I can see why people wouldn't want to bother using it. It's definitely not for everyone.

The funny thing is, Arch has been as stable for me as Gentoo. I do keep my AUR usage to a minimum, so that might help. Right now I have 959 packages installed from the Arch repos and 22 from the AUR, and that's on the high side of what I usually use from the AUR.
 
The wait is over.


More like - I am sure previous versions were kinda-sorta working, but it's getting better and better by the day.
My understanding is that they fixed it so the DRM server that's always running works properly now. If that's the case, I wonder if software like MasterCAM or SolidWorks will now work?
 
There are a lot of similarities between arch and gentoo, outside of the source based nature (by default at least), but with gentoo it is definitely more advanced in some ways by its nature.
Did you mean to say rolling-release nature, did you mean to say differences between arch and gentoo, or are you mentally retarded?
 
Did you mean to say rolling-release nature, did you mean to say differences between arch and gentoo, or are you mentally retarded?
I meant outside of gentoo being source based, there are a lot of similarities between arch and gentoo. As in if you take away gentoo being source based, it and arch have a lot of similarities.
 
I meant outside of gentoo being source based, there are a lot of similarities between arch and gentoo. As in if you take away gentoo being source based, it and arch have a lot of similarities.
Then you should have said "despite the source vs binary difference." You didn't mark the 'source based nature' as belonging to gentoo in any way, so it reads as a commonality (especially since 'outside' next to a counting ('a lot') is likely to be interpreted as exclusion from that count, rather than exclusion as a factor in the overall sentiment (arch and gentoo are similar)).
 
(especially since 'outside' next to a counting ('a lot') is likely to be interpreted as exclusion from that count, rather than exclusion as a factor in the overall sentiment (arch and gentoo are similar))
Lisp user detected.
 
View attachment 8516740
I can successfully make automation scripts that can read and pass the checksum (i.e. work out what bytes need to be flipped/changed).
I've FINALLY decoded the pattern.

I can FINALLY say I understand the NAND controller and how it works.
Oh my gosh, the amount of things I did was insane. But I finally got it down.
No, there's one more function I have to implement/figure out.
But honestly, from EVERYTHING I learned on this 3 month journey, I'm SO close.
1770310076365.png

I have a LUAJIT script that I'm using to RECREATE what the checksum and ECC functions do, so I can better understand them.
1770310152047.png

See that third byte. That 61? That 61 is a command that changes up how things are done.
Normally, bytes 37-40 are the STORED checksum. If this checksum does not match the CALCULATED checksum, then it knows that the page is bad. Byte 40 is also involved, and while I don't know WHAT it does, I do know that it may be controlled by the REAL NAND circuitry that a simple NAND dump can't emulate. So I have to make the QEMU NAND controller I built HANDLE it if we want the bootloader to pass. So the STORED, precalculated information is bytes 37-40; the OTHER bytes in the 64 byte OOB spare area are used to calculate the checksum, and then the bootloader just compares the result against bytes 37-40.
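To make that layout concrete, here's a minimal LuaJIT sketch (the names and the 1-indexed byte positions are my own, matching how I've been counting in this post; none of this is lifted from the decompiled bootloader):

```lua
-- Page layout as described above: 2112 bytes = 2048 data + 64 byte OOB/spare.
-- Bytes are 1-indexed here, matching the positions used in this post.
local DATA_SIZE = 2048
local OOB_SIZE  = 64

-- Pull the 64 byte spare area off the end of a page (page = table of bytes).
local function get_oob(page)
  local oob = {}
  for i = 1, OOB_SIZE do
    oob[i] = page[DATA_SIZE + i]
  end
  return oob
end

-- The stored, precalculated checksum normally sits at OOB bytes 37-40.
-- (Byte 40 is the one whose exact role is still unknown.)
local function get_stored_ecc(oob)
  return { oob[37], oob[38], oob[39], oob[40] }
end
```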

Now look at this if you will
1770310392393.png

I paused QEMU right at the TV bootloader FAILING a checksum; it failed at page 4224. NOW let's see this. Look at what my LUA script (recreating the functions I decompiled) says the stored ECC would be on real hardware:
1770310484521.png

As you can see, the STORED ECC is different. Bytes 37 to 40 are NOT what the bootloader used as the stored ECC... Wait a minute, those bytes GDB gave are at bytes 13-15...

Ok, so if we were to reassign bytes 37-40, they would be 13-16.

Ok now I want you to tell me...
99% of the ECC areas start with
FF FF FF
But this one which is in a VERY important area starts with
FF FF 61
Tell me, what is 61 FLIPPED? Why, it's 16. Now tell me, where are the bytes that GDB says the STORED ECC is at for that page?
13-16
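Here's the quick sanity check on that flip in LuaJIT; swapping the two hex nibbles of 0x61 really does give 0x16:

```lua
-- Swap the high and low nibbles of a byte: 0x61 -> 0x16.
local bit = require("bit")

local function swap_nibbles(b)
  return bit.bor(bit.lshift(bit.band(b, 0x0F), 4), bit.rshift(b, 4))
end

print(string.format("%02X", swap_nibbles(0x61)))  -- prints 16
```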

NONE of the functions mention or handle this, so it's likely that 61 is an UNDOCUMENTED NAND command that, when given, ALTERS the bytes? In a way that makes the checksum valid... Something that an external programmer would not account for and would just read literally? But that has a lot of problems.

AHA!!! I'm going to go into detail on EACH of the functions in play here, because it all snapped into place.

This is the FIRST function of the 3 that are relevant. It gets the METADATA, or IMPORTANT information, in the 64 byte spare area/ECC/checksum/whatever you want to call it. Inside this metadata is a STORED checksum. Each 64 byte OOB area has a STORED checksum. This is used later. This precalculated or prestored checksum is what we're going to call the "Stored ECC".
1770311504985.png

This then TAKES parts of that information and calculates a CHECKSUM... This RETURNED checksum is 3 bytes. This is what we're going to call the "Calculated ECC".
1770311523072.png

FINALLY we then go to "Nand_ECC_CORRECT"
1770311590873.png

This takes the values of the STORED ECC and the calculated ECC. If they do not match, then it knows that the page is corrupted and NOT to trust the full 2112 byte page. This is a common standard for NAND. Most NAND that comes out of the FACTORY, in your phone and in embedded systems, has bad blocks or BAD parts that don't work. This is common; instead of checking the 2048 bytes that actually CONTAIN the data, it just checks a SPARE 64 byte part at the end. And that's just what it does. If the 2048 bytes of data are BAD it will still accept them; it's only if the 64 bytes at the END of the 2112 byte page are bad THAT it will bitch, and even then there are combinations that would still work.
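Here's a rough LuaJIT sketch of that whole three-step flow. Important hedge: calc_ecc below is a stand-in (a simple XOR fold), NOT the real decompiled checksum math; it only exists so the flow runs end to end:

```lua
local bit = require("bit")

-- PLACEHOLDER for the real decompiled checksum: XOR-folds the spare bytes
-- outside the stored-ECC slot into 3 bytes (the "Calculated ECC").
local function calc_ecc(oob)
  local ecc = { 0, 0, 0 }
  for i = 1, 64 do
    if i < 37 or i > 40 then          -- skip the stored-ECC slot itself
      local slot = (i % 3) + 1
      ecc[slot] = bit.bxor(ecc[slot], oob[i])
    end
  end
  return ecc
end

-- Nand_ECC_CORRECT-style compare: any mismatch means the full 2112 byte
-- page is not to be trusted.
local function ecc_correct(oob)
  local stored = { oob[37], oob[38], oob[39] }  -- the "Stored ECC"
  local calc = calc_ecc(oob)
  for i = 1, 3 do
    if stored[i] ~= calc[i] then
      return false                    -- checksum failed, reject the page
    end
  end
  return true
end
```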

Have you ever had weird quirks or glitches as a kid on your TV or game console (Wii, GameCube, etc.)? This is why. And Linux is even worse. Linux just checks the FIRST 2 BYTES in the NAND's 64 byte spare area, and if those bytes are FF then it counts the ENTIRE 2112 byte area as "Good". This is to make startup times faster. And yes, they just check at the factory if it BOOTS.
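That fast-path check is basically this (a sketch of the idea only; where the real Linux MTD code puts bad block markers varies per chip and layout):

```lua
-- Quick "is this block good" check as described: if the first bytes of the
-- spare area are 0xFF, the whole 2112 byte page is counted as good.
local function spare_looks_good(oob)
  return oob[1] == 0xFF and oob[2] == 0xFF
end
```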

Anyways guess what.
1770312065227.png

This stored ECC is impossible. nand_oob_data checks EXACTLY the 37th, 38th, 39th and 40th bytes. The fact that the stored ECC instead starts at bytes 13-14-15-16, AND the bootloader KNOWS it, is impossible, because those positions are HARDCODED into the function. EXCEPT: the failing path that scans the "failing" page does not use nand_oob_data... It calculates it on its own.

99% of the checksums go through THIS path/function
1770312202626.png

But that one failing page? It goes through a separate one that does NOT USE NAND_OOB_DATA!!!!! So that means THE METADATA IT GETS FROM THE 64 BYTE SPARE AREA IS DIFFERENT, WHICH WOULD ALLOW ITS STORED_ECC TO START SOMEWHERE ELSE
1770312301263.png

1770312478450.png

0xc is 12 in decimal, and 12+1 is 13, which is EXACTLY the starting position of this different stored ECC...
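The index math, spelled out (get_alt_stored_ecc is a hypothetical helper of mine, not a name from the bootloader):

```lua
-- A hardcoded 0-indexed offset of 0xc into the spare area lands on
-- 1-indexed byte 13, so a 4 byte stored ECC there covers bytes 13-16.
local ALT_ECC_OFFSET = 0xc             -- 12 in decimal

local function get_alt_stored_ecc(oob) -- oob is 1-indexed, as elsewhere
  local base = ALT_ECC_OFFSET + 1      -- 0-indexed 12 -> 1-indexed 13
  return { oob[base], oob[base + 1], oob[base + 2], oob[base + 3] }
end
```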
It's now time to:
  1. Update my LUAJIT testing code to handle the different metadata and figure out MORE about how it works.
  2. Maybe change my tweak from before (where I set byte 40 to 0) to target byte 16 instead.
  3. Figure out WHAT the qualities are FOR a spare area to count as one of these "special spare areas"
Kiwifarms attachments seem to be broken right now, so you may have to read this later...
 