Open Source Software Community - it's about ethics in Codes of Conduct

The FSF stance is extreme, but it also has not changed since its inception in 1985, back when computers were simpler (as in, less complex) systems. That's why the stance seems somewhat absurd and unenforceable on modern computers.
To give some idea, 1985 was the year the C128, the Amiga 1000, and the Intel 80386 were introduced. I will go so far as to say those were the times when a single person could still fully understand a given computer. That was in contrast to companies like IBM, which produced computers-as-a-service and charged you hourly.

Fast forward 40 years and things have taken a different turn. Computers are now far more complex, and the market is more monopolized than in '85. It is not possible for a single person to write *all* of the software that a modern computer runs (by "all" I mean all the code that runs from power-on reset to bringing the computer to a usable state, like the BIOS prompt). That is simply too complex a task on most modern platforms.

Pretty much the only modern system architecture where a fully free software stack is possible would be RISC-V, and not all of it, only the implementations with fully open source silicon. Those have some chance of existing, but it's still ages behind x86 and ARM. We are at the point of the first ATX-compatible RISC-V motherboards, and will likely have to wait 5-10 years for the first RISC-V laptop.

This is sold by Framework to put in the laptops they sell.

I guess it's not a complete laptop, but if you own a Framework laptop you can convert it to a RISC-V one for $200 if you so choose.
 

IIRC expressjs is another one of the many JavaScript frameworks (I could be wrong and it's something else). But if it is, this is actually probably a net positive for the world. Just waste as much of their time as you can. It's not like anyone is going to make anything with this bullshit that isn't garbage littering up the Internet.

Edit lmao. I paused and went to do something half way through. He blames all of this on this one video of some jeet woman showing how to make a fork, and saying in it to make a pull request in hindi.

Of course he is too much of a cuck to say "This is what happens when you let teach indians this stuff. They will go and shit up everything". That is what he is saying even if he denies that is what he meant. The words he says in this video basically means that if you are man enough to actually admit what the problem is here.

Skip to about 8 minutes in to see what I mean. I would just clip that part if I was at my computer.
 
Re: FOSS CPU architectures, I found the Russian Elbrus CPU to be an interesting case study in that it uses a Very Long Instruction Word architecture that offloads micro-op scheduling to the compiler instead of implementing it in silicon. Obviously the Elbrus is not FOSS, but I'm curious if a home-grown VLIW ISA could be a viable solution for reducing the scope of designing a CPU from scratch, since things like speculative execution and other performance tricks are baked into the binary at compile time instead of implemented in silicon.
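For illustration, here is a toy version of that compile-time scheduling in C. Everything in it is made up for the example (six fake ops, a hand-written dependency table, 3-slot bundles); the point is just that the compiler walks the dependency graph once at build time and packs whatever is ready into each bundle, which is the same work an out-of-order core redoes in hardware every cycle.

```c
#include <stdio.h>

#define NOPS  6
#define SLOTS 3   /* ops per bundle, like IA-64's 3-slot bundles */

/* dep[i][j] != 0 means op i must run after op j (made-up graph) */
static const int dep[NOPS][NOPS] = {
    [2] = { [0] = 1 },            /* op2 needs op0          */
    [3] = { [1] = 1 },            /* op3 needs op1          */
    [4] = { [2] = 1, [3] = 1 },   /* op4 needs op2 and op3  */
    [5] = { [4] = 1 },            /* op5 needs op4          */
};

int main(void) {
    int done[NOPS] = {0};
    int remaining = NOPS, cycle = 0;

    while (remaining > 0) {
        int issued[SLOTS], n = 0;
        /* greedy list scheduling: fill slots with ops whose deps are done */
        for (int i = 0; i < NOPS && n < SLOTS; i++) {
            if (done[i]) continue;
            int ready = 1;
            for (int j = 0; j < NOPS; j++)
                if (dep[i][j] && !done[j]) ready = 0;
            if (ready) issued[n++] = i;
        }
        printf("bundle %d:", cycle++);
        for (int k = 0; k < n; k++)     printf(" op%d", issued[k]);
        for (int k = n; k < SLOTS; k++) printf(" nop");  /* wasted slot */
        printf("\n");
        for (int k = 0; k < n; k++) done[issued[k]] = 1;
        remaining -= n;
    }
    return 0;
}
```

This prints op0 op1 nop / op2 op3 nop / op4 nop nop / op5 nop nop, and those nops are exactly the wasted issue slots that come up later in the thread.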
 
Re: FOSS CPU architectures, I found the Russian Elbrus CPU to be an interesting case study in that it uses a Very Long Instruction Word architecture that offloads micro-op scheduling to the compiler instead of implementing it in silicon. Obviously the Elbrus is not FOSS, but I'm curious if a home-grown VLIW ISA could be a viable solution for reducing the scope of designing a CPU from scratch, since things like speculative execution and other performance tricks are baked into the binary at compile time instead of implemented in silicon.
So it's a Russian Itanic. Nice.
 
Obviously the Elbrus is not FOSS, but I'm curious if a home-grown VLIW ISA could be a viable solution for reducing the scope of designing a CPU from scratch, since things like speculative execution and other performance tricks are baked into the binary at compile time instead of implemented in silicon.
well writing a compiler that can efficiently schedule instructions ahead of time is actually about as hard as the typical practice of having the large register files and parallel instruction decoders and deep pipelines and speculative execution hardware that an efficient risc processor would need to have

i do think a vliw chip would have to involve things like pipelines just like risc does though
perhaps the chip itself would still end up a bit simpler in the end. hard to tell
So it's a Russian Itanic. Nice.
the biggest problem with itanium is that it claimed to support x86 but fucking sucked
and vliw-based architectures have a reputation for compiling pretty badly if you just throw random code at it, as it's sort of a similar situation to simd computing where you have to trace a bunch of dependencies very carefully to compile it well
meanwhile on a risc core the silicon just traces dependencies at insanely fast speed while things are running, and doesn't give a fuck if it's tracing through function boundaries (or into illegal areas of memory to perform a side channel attack)
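A minimal sketch of that last point, assuming the textbook Spectre v1 gadget (the names are made up and this is only the shape of it, not a working attack, which would also need predictor training and cache timing):

```c
#include <stdint.h>
#include <stddef.h>

uint8_t array[16];
size_t  array_len = 16;
uint8_t probe[256 * 512];   /* one cache line per possible byte value */

uint8_t victim(size_t i) {
    if (i < array_len) {
        /* if the predictor guesses "in bounds" for an out-of-bounds i,
           the core speculatively reads array[i] from wherever it lands,
           and the dependent load drags a probe[] line into cache.
           the rollback restores the registers but not the cache state. */
        return probe[array[i] * 512];
    }
    return 0;
}
```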
 
well writing a compiler that can efficiently schedule instructions ahead of time is actually about as hard as the typical practice of having the large register files and parallel instruction decoders and deep pipelines and speculative execution hardware that an efficient risc processor would need to have

i do think a vliw chip would have to involve things like pipelines just like risc does though
perhaps the chip itself would still end up a bit simpler in the end. hard to tell

the biggest problem with itanium is that it claimed to support x86 but fucking sucked
and vliw-based architectures have a reputation for compiling pretty badly if you just throw random code at it, as it's sort of a similar situation to simd computing where you have to trace a bunch of dependencies very carefully to compile it well
meanwhile on a risc core the silicon just traces dependencies at insanely fast speed while things are running, and doesn't give a fuck if it's tracing through function boundaries (or into illegal areas of memory to perform a side channel attack)
No, the problem with Itanium was that the compiler was supposed to basically tell the CPU what order to handle instructions in, which is pretty much impossible because the compiler has no way of knowing what other executables will be running at the same time as the code it is compiling. It basically would require an omniscient, clairvoyant compiler.

For someone who clearly loves shitting out walls of text, maybe do a little reading instead.
 
the problem with Itanium was that the compiler was supposed to basically tell the CPU what order to handle instructions in
yeah i said about as much with "trace a bunch of dependencies very carefully" and "the silicon just traces dependencies at insanely fast speed while things are running"
the compiler has no way of knowing what other executables will be running at the same time as the code it is compiling.
actually that might be a good thing, since you surely know the very funny things that tend to happen when processes with entirely different security domains measurably affect each others' speculative processor bullshit (i think it's why libreboot disables shit like hyperthreading)
 
Re: FOSS CPU architectures, I found the Russian Elbrus CPU to be an interesting case study in that it uses a Very Long Instruction Word architecture that offloads micro-op scheduling to the compiler instead of implementing it in silicon.
the biggest problem with itanium is that it claimed to support x86 but fucking sucked
and vliw-based architectures have a reputation for compiling pretty badly if you just throw random code at it
Itanium was super weird even when it comes to the theory behind VLIW. I thought the idea behind VLIW was to have a massive instruction (256-bit or 512-bit) where each subsection of the instruction went to a very specific functional unit, with the scheduling moved to the compiler. With EPIC, instructions were batched into groups of three, and the instructions within a group had to be independent. With full VLIW, the compiler handles all dependencies. With EPIC, the compiler ensures there are no dependencies within a group, except across an explicit "stop" marking a dependency boundary, which the CPU then has to honor before continuing.
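For reference, the real IA-64 bundle is 128 bits: a 5-bit template plus three 41-bit slots, with the "stops" encoded in the template rather than as a separate instruction. A minimal sketch of pulling those fields apart (bit positions per the Itanium manuals; the helper names are mine):

```c
#include <stdint.h>

/* 128-bit IA-64 bundle: template in bits 0..4, slots in
   bits 5..45, 46..86, and 87..127 */
typedef struct { uint64_t lo, hi; } ia64_bundle;

#define SLOT_MASK ((1ULL << 41) - 1)

/* template: selects functional-unit types (M/I/F/B/L/X) for the
   three slots, and where the dependency stop falls, if any */
static unsigned ia64_template(ia64_bundle b) { return (unsigned)(b.lo & 0x1f); }

static uint64_t ia64_slot0(ia64_bundle b) { return (b.lo >> 5) & SLOT_MASK; }
static uint64_t ia64_slot1(ia64_bundle b) { return ((b.lo >> 46) | (b.hi << 18)) & SLOT_MASK; }
static uint64_t ia64_slot2(ia64_bundle b) { return (b.hi >> 23) & SLOT_MASK; }
```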

No, the problem with Itanium was that the compiler was supposed to basically tell the CPU what order to handle instructions in, which is pretty much impossible because the compiler has no way of knowing what other executables will be running at the same time as the code it is compiling. It basically would require an omniscient, clairvoyant compiler.
It's not impossible. It's literally how the computer you're typing on right now works. The difference is that there are two stages. Your PC or phone requires a compiler that takes a regular language and turns it into instructions (load, store, add, subtract, multiply, shift left, shift right, etc.). It sends those instructions to the CPU. If your CPU gets "add 1 to y and store z" and then "sub x from b and store a", it can run those independently. If the second instruction said "sub z from y and store a", it would have to wait on the previous instruction, because they have a dependency (z, which the first instruction writes).
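Written out as code, with the hazard in the comments (a toy function with made-up values):

```c
/* the example from the post, spelled out */
int demo(int b, int x, int y) {
    int z, a;
    z = y + 1;   /* I1: writes z                                      */
    a = b - x;   /* I2: touches nothing I1 touches -- can issue in the
                    same cycle on a superscalar core                   */
    a = y - z;   /* I3: reads z, which I1 writes -- a read-after-write
                    hazard, so the scheduler holds it until z is ready */
    return a;
}
```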

Your CPU has many functional units and hundreds of instructions in flight, allowing instructions to run out of order, so long as they retire in program order. Your CPU schedules everything in pipelines.

So you have compiling and then you have scheduling. VLIW attempts to merge the compiling and scheduling phases. In theory you could do this with two different software stages, both of which would look like normal compiling. The trouble is, people who are good at writing hardware schedulers weren't great at writing compilers, and people who wrote compilers had enough on their plate without trying to optimize (or unoptimize) for some new software scheduling phase.

The benefit is that if you make advances in scheduling, you can push those speedups out in compiler updates. You can change how fast the CPU effectively runs through how you build the software, far more than you could with microcode updates. The disadvantage is that it's really fucking difficult to write the compiler.

Itanium also suffered from a lot of design issues and the shitty compilers Intel/HP made didn't help.
 

The benefit is that if you make advances in scheduling, you can push those speedups out in compiler updates. You can change how fast the CPU effectively runs through how you build the software, far more than you could with microcode updates. The disadvantage is that it's really fucking difficult to write the compiler.
Yes, very hard.
But you also have the issue that, as a side effect of moving the scheduling out into the compiler, it gets even harder to achieve high code density.
And by this I mean that if the compiler cannot always fill each very large instruction completely with u-instructions, you basically get dead space in the caches.

Ugly as it is, the x86 ISA is very compact and compiles into very small code. Smaller than pretty much anything else, and thus able to utilize the very limited and very critical i-cache better than anything else.
VLIW was already at a disadvantage by having to normalize the size of instructions as much as possible (== being less compact). Add to this that if the compiler cannot utilize the entirety of every VLIW bundle, the efficient use of i-cache becomes even worse.

VLIW was an idea that looked good on paper but never worked out, as the reduced cache efficiency meant that it could never compete with something like x86[_64].

A good way to think of the x86 ISA is that its really ad-hoc and kludgy encoding is a form of compression. And the ISA that has the best compression and can fit the most code into i-cache will always win.
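A back-of-envelope version of that i-cache argument, with ballpark figures rather than measurements (average x86-64 instruction length varies a lot by workload; ~3.5 bytes is a commonly cited number, and an IA-64 bundle is a fixed 16 bytes for at most 3 ops):

```c
#include <stdio.h>

int main(void) {
    double icache    = 32 * 1024;    /* typical 32 KiB L1i            */
    double x86_avg   = 3.5;          /* rough mean x86-64 insn length */
    double vliw_all  = 16.0 / 3.0;   /* every bundle slot used        */
    double vliw_2of3 = 16.0 / 2.0;   /* one slot wasted per bundle    */

    printf("x86-64:                ~%.0f ops per L1i\n", icache / x86_avg);
    printf("IA-64, all slots full: ~%.0f ops per L1i\n", icache / vliw_all);
    printf("IA-64, 2 of 3 slots:   ~%.0f ops per L1i\n", icache / vliw_2of3);
    return 0;
}
```

Every unfilled slot pushes the bytes-per-op up, which is the dead space in the caches described above.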
 
Edit lmao. I paused and went to do something half way through. He blames all of this on this one video of some jeet woman showing how to make a fork, and saying in it to make a pull request in hindi.
Just like in 2020 when another jeet made a guide on how to open a PR and get a free shirt from DigitalOcean for participating in Hacktoberfest, and an endless horde of jeets flooded all the OSS repos they could find.
How One Guy Ruined #Hacktoberfest2020 / HN
 
I wonder if the push to "modern" tech, where everything is as-a-service, etc. is partly because of Indians. They want to reduce it down to a simple thing so anyone can use it, which of course supplants all of the necessary understanding of the infrastructure. And as we know, leads to things like bloat, technical debt, etc. Maybe I'm just seeing things that don't exist. But it does explain the shift from the past, where even the most incompetent H1B worker had to have an iota of technical skill, and that doesn't seem to be the case anymore.
 
I wonder if the push to "modern" tech, where everything is as-a-service, etc. is partly because of Indians. They want to reduce it down to a simple thing so anyone can use it, which of course supplants all of the necessary understanding of the infrastructure. And as we know, leads to things like bloat, technical debt, etc. Maybe I'm just seeing things that don't exist. But it does explain the shift from the past, where even the most incompetent H1B worker had to have an iota of technical skill, and that doesn't seem to be the case anymore.
Not really. For the platforms it's subscription revenue (which has massive accounting advantages), for the bosses it's a capex/opex trade and the ability to fingerpoint at someone external. For the devs they were tired of those silly admins saying no to huge hardware requests. Now you can requisition all the hardware you want until someone notices the bill!
 
You think it's easy to maintain infrastructure for a globally available service? What's the biggest project you've worked on?
Not even close to what I said.
Not really. For the platforms it's subscription revenue (which has massive accounting advantages), for the bosses it's a capex/opex trade and the ability to fingerpoint at someone external. For the devs they were tired of those silly admins saying no to huge hardware requests. Now you can requisition all the hardware you want until someone notices the bill!
Yeah, there are definite advantages to subscription models, but this seems to go beyond subscription services. Like running Docker containers locally, or using Terraform to build out your layout, as opposed to automatically provisioning VMs via an API, let alone doing manual or semi-automatic deployments. But building it all in code, pressing play, and magically it's there? It just seems to take away some of the understanding. This might also stem from computer literacy decreasing due to the use of phones and tablets over actual desktop machines. Maybe I'm just seeing specters where there aren't any.
 
yeah i said about as much with "trace a bunch of dependencies very carefully" and "the silicon just traces dependencies at insanely fast speed while things are running"

actually that might be a good thing, since you surely know the very funny things that tend to happen when processes with entirely different security domains measurably affect each others' speculative processor bullshit (i think it's why libreboot disables shit like hyperthreading)
Look up how the Itanic actually worked in real life and try to tell anyone it is a smart way to design a processor.

HINT: There is a VERY good reason why the world went with x86-64 over IA-64.
 