> Didn't the first Intel Macs use UEFI32?

They did, but back then very few PCs supported UEFI. The typical approach to boot a hackintosh was to set up a bootloader that emulated UEFI and then chainloaded the actual Mac bootloader.
> Didn't the first Intel Macs use UEFI32?

Without looking it up, from memory they used a 32-bit EFI which they carried over from "new" PPC Macs. UEFI was a later invention for PC manufacturers to move on from BIOS, but Macs never needed to be compatible with the IBM PC, so they just went with their own thing. They later adopted UEFI to make it easier to run Windows and other OSes on Intel Macs.
> Didn't the first Intel Macs use UEFI32?

That I'm unable to confirm. I just remember patching out, or was it emulating, the UEFI stuff using Clover.
> emulating the UEFI stuff using Clover

That's right.
> That's right.

I think you also could install a kext for Snow Leopard that would completely bypass the need for UEFI.
> Though I suppose I still have a year before I need to think about data rotation so I avoid bit rot.

Why do you still have a year? It's a somewhat random process.
> Why do you still have a year? It's a somewhat random process.

Should be done every three years, and I only built this server a year and a half ago. Plus migrating data from a 14TB drive to another takes a loooooong time.
I'm gonna go test drive Bharat Linux, wish me luck.
> Do you have NFS set up with a tertiary machine? Did me wonders migrating like... 1TB's worth of crap from my old PC to my current one before repurposing the former into my home server. I'm hamstrung by my ISP's plan, but 1Gbps transfer speeds via NFS made the entire ordeal take like... 3-5 hours? Somewhere within that ballpark. Way easier to handle than doing the old-fashioned thing of transferring shit via USB 3.x portable hard disks. The only limiting factor would be how long your CAT6 cables are. A trip to Micro Center for a 25-50ft cable should rectify that in your case if you don't already have one.

14 terabytes from two drives plugged into the same computer with SATA. I don't get what you're going on about with NFS and CAT6.
> Oh, it's on the same machine. I thought you were transferring from one machine to the other. That's a brain fart on my part. I brought up NFS because I was looking up ways to avoid relying on USB interfaces to transfer all my old shit over from the old PC to the new one, and NFS just so happened to pop up. Basically file transfer via LAN that's basically at the same speed your ISP gives you. In my case, 1Gbps internet plan -> 1Gbps LAN file transfer from old PC to the next. The CAT6 cable was something I lacked in my situation, so I had to get it separately. If it's on the same machine, then that won't help you. My bad.

Yeah, with SATA drives I'm still limited by the write speed, so I'm looking at two days to copy a drive over to a new one. I could do parallel ones, but drives are still really expensive and I only have room for two more, so I'd still be looking at two weeks or so to refresh all the drives.
> Should be done every three years

But why? Do you even have any form of parity checking? This just sounds like a good way to introduce bitrot if you're copying files from one location to another without validation. Do you have more than one copy of these files?
> Do you have NFS set up with a tertiary machine? Did me wonders migrating like... 1TB's worth of crap from my old PC to my current one before repurposing the former into my home server. I'm hamstrung by my ISP's plan, but 1Gbps transfer speeds via NFS made the entire ordeal take like... 3-5 hours? Somewhere within that ballpark.

I've transferred about 8TB over gigabit Ethernet in about 8 hours before via scp.
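Worth noting: for bulk transfers like these, rsync over SSH is often more forgiving than plain scp, since it can resume an interrupted copy and re-verify what it moved. A rough sketch, with the hostname and paths as placeholders:

    # Pull the data from the old machine; -a preserves permissions and
    # timestamps, -P shows progress and keeps partial files for resuming.
    rsync -aP oldpc:/srv/data/ /mnt/newdisk/data/

    # Optional second pass: -c re-reads both sides and compares checksums
    # to catch anything corrupted in transit.
    rsync -aPc oldpc:/srv/data/ /mnt/newdisk/data/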
Jesse Smith from DistroWatch said:
To my mind, one of the best scenarios for sharing a review comes about when I find a useful or interesting piece of technology. When I am so impressed by it that I want to share my discovery with the world, well, those are nice surprises and a pleasure to write. My feelings are more neutral about reviewing projects because other people expect to see reviews of them. Popular distributions such as Ubuntu or openSUSE may or may not introduce any groundbreaking changes, but we always get asked to talk about them because those are the projects which people tend to use the most. It's beneficial to the audience, but usually less gratifying for the reviewer. To my mind, the least satisfying reviews to write happen when a developer or company has asked me to write a review of their product and it turns out to not perform as advertised. Those aren't fun projects to explore, they aren't a pleasure to write, and they tend to only serve the community as a word of warning.
For good or ill, those are the three reasons I usually end up writing a review - I find something appealing on my own, people request reviews of popular projects, or projects approach me to write about them.
Jesse Smith from DistroWatch said:
This week I ended up test driving a distribution for a new reason, one that I don't think I've ever had before. I'll explain in more detail in a moment. First, I want to introduce Origami Linux.
Origami Linux is a Fedora-based desktop Linux distribution with an immutable root filesystem and atomic updates. It reportedly uses System76's COSMIC desktop. The distribution does not offer a live mode; it brings up the Anaconda system installer right after the initial boot for a guided installation instead. The project's website reports it is optimized with the CachyOS kernel and modern schedulers.
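The review doesn't name the tooling behind the immutable root, but Fedora's atomic variants typically manage the base image with rpm-ostree; assuming Origami follows that pattern (an assumption, not something the project confirms), inspecting and rolling back the base would look something like:

    # Show the booted deployment and any pending one
    rpm-ostree status

    # Stage a new base image atomically; it takes effect on the next boot
    rpm-ostree upgrade

    # Return to the previous deployment if an update misbehaves
    rpm-ostree rollback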
The distribution's website further states it runs packages with CPU optimizations:

"Origami Linux is a modern Linux distro prioritizing modern systems. It requires an x86_64-v3 (or newer) CPU to run. Legacy systems with x86_64-v2 and below, as well as legacy NVIDIA cards, are not supported because we ship with the latest NVIDIA drivers."

The project's website also had this unusual warning: "Note: When booting for the first time, you may be prompted to enroll the MOK (Machine Owner Key). The enrollment password is: origami" I didn't find any explanation on the distribution's website concerning why we need to enroll in the MOK, what purpose it would serve, or who supplies the MOK.
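The usual reason a distribution ships its own MOK is so that Secure Boot will trust kernel modules the project signs itself, such as the out-of-tree NVIDIA drivers Origami bundles; that is only a guess here, since the website doesn't say. Assuming the standard shim/mokutil tooling, the key state can at least be inspected after installation:

    # Is Secure Boot active at all?
    mokutil --sb-state

    # Keys already enrolled in the MOK database
    mokutil --list-enrolled

    # Keys staged for enrollment on the next reboot
    mokutil --list-new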
The Origami website also repeatedly suggests it is primarily designed for developers. The only feature I found that seemed developer-focused was the inclusion of containers. The website suggests: "Develop in isolated environments while keeping your host clean using Distrobox and Podman."
Another feature of the distribution is it replaces classic command line tools with Rust-written replacements: "Replace legacy tools with Rust-powered alternatives: eza for ls, bat for cat, ripgrep for grep, Helix, LazyGit, Micro, hyperline, yazi, Zellij, Procs, and du-dust out of the box."
Jesse Smith from DistroWatch said:
As I was reading all of this, the image that was forming in my head was of a bleeding-edge distribution with a locked-down filesystem and optimized packages which would offer little practical benefit, but would prevent older machines from running the operating system. I imagined running COSMIC, a Wayland-only, feature-incomplete, resource-heavy desktop. I was picturing a distribution where I couldn't easily install classic packages, but would need to rely on OSTree and Flatpak and containers to do any testing or development work. On top of all of this I might need to enroll in something which was never properly explained just to get started. I was envisioning running a distribution which would be hampered by Fedora's unusually short support lifespan, requiring a major upgrade about once a year. To top it off, Origami does not even provide a live environment for testing purposes. We need to dive right into the classic version of Anaconda, one of the more notoriously awkward system installers in the Linux ecosystem, and hope it works.
Jesse Smith from DistroWatch said:
In short, I was suddenly faced with a distribution which sounded like all of my least favourite technologies and design philosophies available to Linux users wrapped into one experience. In fact, I struggled to imagine a combination of design choices I would enjoy less. Here I had my personal nightmare of an awkward and slow system installer, a kernel which (despite its claims of optimizations) had always performed poorly in my tests, packages optimized in a way which wouldn't benefit me but might prevent me from using them on some test equipment, a locked-down filesystem, and potentially mandatory enrollment in an unknown program. Further, I'd be running one of the heaviest, most feature-poor, slowest desktops available. To me, Origami Linux sounded like a horror movie: something that would be so unpleasant to experience that I couldn't turn away from it.
Jesse Smith from DistroWatch said:
This is probably the worst reason I have ever had for writing a review: going in with the expectation that the experience will be bad, and wondering (with a morbid fascination) just how unpleasant the experience could get. With this in mind, I want to make two things clear. First, I didn't go into this review with malicious intent. I was fascinated by how terrible I might find Origami Linux because I was curious, not because I wanted to put down the developers or their efforts. Some people like containers and immutable filesystems and COSMIC - and that's great. I'm not trying to tell developers not to do something; this was about exploring my personal preferences and what I believed the exact opposite of the sum of those preferences would be.
Second, I want to dispel any illusions that this review will have a twist ending. All of this build-up does not resolve itself with me learning the error of my ways and seeing the light in terms of immutable filesystems, unnecessary optimizations, and Wayland. At the end of this article, I don't go frolicking, hand-in-hand, into the sunset with Origami. This is not a movie (or, if it is, it's a horror rather than a comedy) and my computing preferences are built on decades of experience; they're not all going to reverse in the span of a week. This is a review about questionable design choices and technical mistakes, not about personal growth.
Origami Linux 2026.03 is available in just one edition (featuring the COSMIC desktop) and runs on a single architecture: x86_64 with x86_64-v3 optimizations. The distribution's latest release appears to be based on Fedora 43 and some screens refer to the project as "Origami 43". The ISO file is a 4.5GB download.
Jesse Smith from DistroWatch said:
One of my concerns with Origami, which I had not suspected from reading its website, was that the distribution set up zRAM instead of a swap file or swap partition. When zRAM is enabled, memory which is not actively used can be compressed inside RAM rather than copied out to the disk. In theory, this makes re-loading data into RAM and accessing it faster, at the cost of reserving some RAM for compressed data. This can work well in some situations, but when RAM nears capacity (and no out-of-memory service kicks out large processes) the result is the system becomes even more limited than usual in terms of memory and starts swapping (compressing) and retrieving data almost constantly in a loop. This quickly brings the desktop interface to its knees. Disabling the zRAM virtual device fixed this issue.
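The review doesn't spell out how the device was disabled. On a Fedora base using the stock zram-generator setup (an assumption; Origami may configure this differently), checking for and removing the zRAM swap device would look roughly like:

    # See whether a zram device is providing swap
    swapon --show
    zramctl

    # Turn it off for the current boot
    sudo swapoff /dev/zram0

    # Keep it off permanently: an empty config in /etc overrides the
    # packaged zram-generator defaults
    sudo touch /etc/systemd/zram-generator.conf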
Jesse Smith from DistroWatch said:
There were a few unfamiliar utilities in the application menu. One was labelled Noctua and, when I opened it, the sparse interface didn't give many hints as to what it was meant to do. With a little experimenting I found it was a document viewer and could be used to open PDFs. Oddly enough, by default, I found PDF documents opened in a web browser rather than in Noctua, which made the document viewer somewhat unnecessary. The other surprise application was called Cloudflare Zero Trust. I hadn't encountered this application before and so I looked it up on the Cloudflare website. After reading the website and the company's briefing documents, I was convinced Zero Trust was a repository for useless buzzwords and sales jargon. I couldn't find any information on where someone would actually use it, except perhaps as an "augmented" VPN, though what set it apart from other VPNs was not specified.
Jesse Smith from DistroWatch said:
As mentioned above, there are Rust-based alternatives to the common GNU command line tools installed on the system. These alternatives are presented and handled in a variety of ways. For example, if we run the classic ps program from the command line our command is redirected to a shell function which suggests we use an alternative called procs instead, then runs ps anyway as we requested. The grep command is also hijacked by a shell function which suggests we use rg and then runs grep as requested. This quickly becomes annoying if we're running classic Unix commands a lot as it results in regular nagging messages in the output. Other commands are redirected using aliases. For instance, cat is silently aliased to bat which displays different output. Likewise cp, ln, mv, sort, and a dozen other commands are aliased to their Rust-based equivalents. This means that we not only get output different from what might be expected, but disabling the new alternatives requires checking in multiple locations to see whether our shell is being redirected by a function or an alias.
I'd like to clarify that I have nothing against the new Rust-based tools; they mostly perform the same functions as the classics. My issue is with having the command I run redirected to a different command which can display different output, or output structured differently from what I requested.
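To illustrate the "multiple locations" point, here is how one could untangle this in bash, assuming the aliases and functions described above:

    # Show every definition shadowing a name, in lookup order
    type -a cat
    type -a grep

    # Bypass the redirection for a single invocation
    \cat notes.txt         # a leading backslash skips alias expansion
    command grep foo *.c   # 'command' skips both aliases and functions

    # Remove the redirections for the current shell session
    unalias cat
    unset -f grep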
Jesse Smith from DistroWatch said:
I went into my trial with Origami not only aware that I wouldn't appreciate aspects of the distribution, but with a curiosity about what sort of experience I would have using a distribution which makes every design choice in direct contradiction to what I want. Origami uses an immutable base, I prefer writable filesystems; Origami is Wayland-only, I prefer X11 for performance; this distro uses a new and inefficient desktop environment, I like something battle-tested and snappy; Origami relies heavily on containers and Flatpaks while I prefer to use traditional packages; and so on...
While I expected to have an unpleasant experience, I was unprepared for how frustrating using the distribution would be at points. Origami doesn't offer a live desktop, making it harder to test hardware compatibility; the distribution uses an old and painfully slow system installer; common command line tools redirect to other tools and don't use a consistent method of redirection, making it more work to disable this behaviour. The desktop is sluggish, there is no notification of software updates (despite there being three separate sources of software), and the welcome window is all over the place - duplicating questions from the installer, vaguely mentioning some features without explanation, and at other times being quite useful in customizing the desktop.
Using this distribution was like sandpapering my skin, and it underlines how important it is to have multiple distributions in the world. Not only because I very much want to use an operating system which is the polar opposite of Origami Linux, but also because I'm aware there are probably lots of people in the world who will be delighted to have a bleeding-edge, Flatpak-focused, Wayland-powered, immutable, build-optimized distribution. People should have the freedom to choose what they want, whether I like it or not. Just as I would like to have options which match my workflow and preferences.
I will say one thing in Origami's favour, and it may be the sole thing which I appreciated about the distribution. I liked that the distribution included an all-in-one "update" command which works for everything - Flatpak bundles, containers, and the core system. Very few distributions include an all-in-one meta package manager and on the rare occasions I encounter one, I really appreciate it. I don't like to run separate update commands for every source of software and this was a welcome feature in a sea of irritations.
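Origami's actual update command isn't shown in the review, but given the three sources described (an OSTree-style base, Flatpaks, and Distrobox containers), a meta-updater would only need to chain the three underlying tools. A hypothetical sketch, not Origami's real script:

    #!/bin/sh
    # Hypothetical all-in-one updater for an immutable Fedora-based system
    set -e

    rpm-ostree upgrade        # atomic base system update
    flatpak update -y         # desktop applications
    distrobox upgrade --all   # every Distrobox container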
Jesse Smith from DistroWatch said:
My physical test equipment for this review was an HP DY2048CA laptop with the following specifications:
Processor: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
Display: Intel integrated video
Storage: Western Digital 512GB solid state drive
Memory: 8GB of RAM
Wireless network device: Intel Wi-Fi 6 AX201 + BT Wireless network card
> Should be done every three years, and I only built this server a year and a half ago. Plus migrating data from a 14TB drive to another takes a loooooong time.

I don't recall ever really hearing about this (or maybe I did and it fell out of my memory), so I did some digging. It sounds like ZFS and other solutions with redundancy just do data scrubbing every now and then specifically to check for corruption caused by this: if the checksum doesn't match when reading data during a scrub, it uses the redundant copy to fix the bit rot and moves on with life.
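For reference, kicking off and checking such a scrub on ZFS looks like this, with "tank" as a placeholder pool name:

    # Read every block and verify it against its checksum; bad copies
    # are repaired from redundancy automatically
    sudo zpool scrub tank

    # Check progress and whether any corruption was found and repaired
    sudo zpool status tank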
> Basically file transfer via LAN that's basically at the same speed your ISP gives you. In my case, 1Gbps internet plan -> 1Gbps LAN file transfer from old PC to the next. The CAT6 cable was something I lacked in my situation, so I had to get it separately.

How did you come to this conclusion? Your internal LAN can run at whatever speed you want; your ISP's box is just the gateway out. I have a 10Gbit switch between my NAS, my main PC, my laptop dock, and the Mac I use to remote in and download stuff, and then 2.5Gbit out to another switch which connects that up to my Wi-Fi 6 access point and the ISP gateway.
> It sounds like ZFS and other solutions with redundancy just do data scrubbing every now and then specifically to check for corruption caused by this: if the checksum doesn't match when reading data during a scrub, it uses the redundant copy to fix the bit rot and moves on with life.

Does EXT4 do that also? That's what my drives are in (except one XFS drive used for downloading).
> Does EXT4 do that also? That's what my drives are in (except one XFS drive used for downloading).

No. Unless you have something on top of EXT4 to ensure bitrot protection, you have absolutely none. Copying the files to new drives just increases your chances for rot.
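A poor man's substitute on ext4, if you only need detection rather than self-healing: record checksums before the copy and verify them on the destination. The mount points here are placeholders:

    # On the source drive: record a checksum for every file
    cd /mnt/old && find . -type f -exec sha256sum {} + > ~/manifest.sha256

    # On the destination after copying: --quiet prints only failures,
    # so no output means the copy verified clean
    cd /mnt/new && sha256sum -c --quiet ~/manifest.sha256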