Diseased Open Source Software Community - it's about ethics in Codes of Conduct

Kubernetes is fucking bloatware designed to burn compute resources and money. This faggoty shit is just one more reason I'll never use it for personal projects.
Should be lighter weight than VMs for most things. But people who run it on top of a VM are retarded.
 
Kubernetes is fucking bloatware designed to burn compute resources and money. This faggoty shit is just one more reason I'll never use it for personal projects.
Can you tell me a little about that? I've used Docker but never Kubernetes, and I honestly don't get the use case for Kubernetes.
 
Can you tell me a little about that? I've used Docker but never Kubernetes, and I honestly don't get the use case for Kubernetes.
It's essentially Docker Swarm with a more flexible clustering model. If you haven't used Swarm, it's like docker-compose, but it can respond to load by replicating containers and load-balancing requests across a cluster.
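Rough shape of it, for anyone who's only used plain compose — the deploy section below is the part Swarm adds, and the service name, image, and replica count are all made-up examples:

version: "3.8"
services:
  web:
    image: nginx:alpine          # placeholder service to replicate
    ports:
      - "8080:80"
    deploy:
      replicas: 3                # Swarm keeps three copies of this container running
      restart_policy:
        condition: on-failure
# Swarm's routing mesh then load-balances incoming requests on port 8080
# across the replicas, whichever nodes in the cluster they landed on.

You'd feed that to a Swarm cluster with docker stack deploy; plain old docker-compose historically just ignored the deploy section.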
 
To make it even worse, this is the person responsible for this release: Kat Cosgrove (she/they). I think it's a turbo-handmaiden (see screenshot below) biowoman, but not 100% sure, since she's ugly and mannish.



I am not archiving shit, as I am currently busy looking for a cliff to jump from.
 
It's nigger shit just like Docker and all of this other unnecessarily complicated nonsense. Anyway, how funny.
You really should give Docker a try and see how it actually is and what it does. Not only is Docker whiter than albinos, but it's easy as fuck to use.

Now any retard can run and build software without much worry about where it runs, there's no more "but it works on my machine under the exact same conditions". People run the container, and it just works.

No installing libraries, no dependencies, no changing OS. If you want to run a youtube archiving server that grabs all the metadata and comments, scans channels automatically for new videos, etc, you just fill out some yaml: replace the existing values (e.g. ports) and paths with your own, add some environment variables in a .env file (or a web interface if you use portainer), and that's it! You don't even need to know what redis and elasticsearch are.
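Something like this, as a made-up sketch (not any real project's compose file; every image name, port, and path below is a placeholder):

services:
  archiver:
    image: example/yt-archiver:latest   # hypothetical app image
    ports:
      - "8000:8000"                     # change the host port to whatever you like
    volumes:
      - /your/media/path:/downloads     # change to your own storage path
    env_file: .env                      # usernames, passwords, API keys
    depends_on:
      - redis
      - elasticsearch
  redis:
    image: redis:7                      # the app uses it; you don't have to care how
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - discovery.type=single-node      # no cluster setup needed for a home server

docker compose up -d and you're done; the redis and elasticsearch bits come along for the ride.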
 
To make it even worse, this is the person responsible for this release: Kat Cosgrove (she/they). I think it's a turbo-handmaiden (see screenshot below) biowoman, but not 100% sure, since she's ugly and mannish.


I am not archiving shit, as I am currently busy looking for a cliff to jump from.
Have a look at her list of demands before she'll agree to speak at your conference: https://github.com/katcosgrove/katcosgrove/blob/main/speaking.md, https://archive.is/ZFG44

Imagine being this far into the cult.
 
I spotted a familiar name on the issue tracker of a Redis fork:
https://github.com/valkey-io/valkey/issues/18 (A, taking its sweet time to finish saving)

Let's direct further discussion about GitHub vs Codeberg to the appropriate thread. You'll have to sign up for a Codeberg account to participate, though, which I think shows good faith anyway!
Come to my platform to discuss why you don't want to come to my platform. :thinking:

I'm sorry Drew, I wanted to be on your side, but all of your responses in this thread have been very aggressive, demanding that everyone migrate to your fork when you had already settled on a name and a DIFFERENT license?
 
Can you tell me a little about that? I've used Docker but never Kubernetes, and I honestly don't get the use case for Kubernetes.
So the idea of docker is: what if we combine the isolation of a VM with the low resource usage of a process? And I think that works great.

And a docker container is just that: a container holding your program. Then think about cloud providers like Google, Amazon, and Microsoft (Azure). They have as much compute as you want, as long as you set your money on fire. So instead of worrying about renting VMs, commissioning new VMs, and spreading your software across them, why not just think in terms of containers?

And Kubernetes is pretty smart from an architectural standpoint. Instead of being imperative, Kubernetes is declarative: you describe the state you want your cluster in. Only ONE nginx, this networking setup, at least 2 instances of your webapp, your webapp containers connected to the db. You can declare which containers get their secrets mounted, etc. So your load balancer can request more instances of your webapp and you scale without a huge number of sysadmins.
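A minimal sketch of what that declared state can look like, with every name and image made up for illustration; you apply the file and the control plane keeps killing or booting pods until reality matches it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp                     # hypothetical app
spec:
  replicas: 2                      # the "at least 2 instances" part, as desired state
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: example/webapp:1.0          # placeholder image
          envFrom:
            - secretRef:
                name: webapp-db-credentials  # the "secrets by declaration" part

Scaling is editing replicas: 2 to replicas: 5 and re-applying; you never issue a "start a container" command yourself.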

Also, Kubernetes is more of an API, so AWS, GCP, Azure etc. provide implementations: your account requests five more of my webapp containers, please, and in an ideal world they boot them on their server racks.

The problem is: the whole cloud-native stuff has experienced a gold rush. A lot of people got into it, Kubernetes got extended to accommodate every use case, and worst of all, Kubernetes became an "industry standard". So every project has to be on Kubernetes now, even when it doesn't make sense. It makes a lot of sense for Silicon Valley startups that expect to scale 1000x overnight. But company-internal apps, projects with predictable usage or modest expectations? You can just run that shit on a normal VM. And sometimes companies rent a normal VM and run Kubernetes on top of it themselves: great, now they have to admin Kubernetes too. Kubernetes is a big, complicated abstraction layer, and many DevOps/SysAdmins only tangentially know how it works.

Edit: I have a dev colleague who founded his own small business around a web app. He deployed it with Kubernetes. The app had like 5-15 concurrent users and he generated server costs of 100-1000$ (obscured) per month. I use the same provider for a single VM running an inefficient Python server (along with nginx and postgres), and it costs me 10$.
 
You really should give Docker a try and see how it actually is and what it does. Not only is Docker whiter than albinos, but it's easy as fuck to use.

Now any retard can run and build software without much worry about where it runs, there's no more "but it works on my machine under the exact same conditions". People run the container, and it just works.

No installing libraries, no dependencies, no changing OS. If you want to run a youtube archiving server that grabs all the metadata and comments, scans channels automatically for new videos, etc, you just fill out some yaml: replace the existing values (e.g. ports) and paths with your own, add some environment variables in a .env file (or a web interface if you use portainer), and that's it! You don't even need to know what redis and elasticsearch are.
I think his point is that deployment and dependency management have been shat up to such a degree that the insanity of shipping half a VM instead looks inviting now. That's not white people tech; it's piling shit upon more shit in an attempt to make the whole stink less, and there's no reason to assume that the forces that shat up the first stage - which are still at work, since the problem remained unfixed - won't also shit up this new second stage. But don't worry, there's a new third stage that fixes it all, you just have to add...
 
Not one of her photos from what appear to be articles or speaking engagements(and a wedding) look at all professional.
Sco*ish and works for Dell.
'actual cyborg'. I guessed she was one of those fuckheads who got a RFID tag needled into them and go ME CYBERPUNK NOW, and of course, I was right.
 
People run the container, and it just works.
It's a good and bad thing. Super low barrier to entry and a lot less dicking around to get something running, but you'll end up hating your life if something goes wrong. Running some dude's docker compose file is basically creating a series of interconnected black boxes, and a lot of FOSS projects that only support running under Docker (Piped, for instance) presume everything will "just work", so there's no help if you have an issue.
 
I think his point is that deployment and dependency management have been shat up to such a degree that the insanity of shipping half a VM instead looks inviting now. That's not white people tech; it's piling shit upon more shit in an attempt to make the whole stink less, and there's no reason to assume that the forces that shat up the first stage - which are still at work, since the problem remained unfixed - won't also shit up this new second stage. But don't worry, there's a new third stage that fixes it all, you just have to add...
It's an interesting variation on cargo cults. People who build things only understand the top-level abstraction or memorize the commands. They don't know how docker works under the hood, or how to build a docker container from scratch (for example, a container that runs busybox instead of a full-blown Ubuntu). If you can do that, you have a strong advantage. In the land of the blind, the one-eyed man is king.
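To give a flavour of how small that can get: a compose sketch that runs busybox's built-in httpd instead of a full distro userland (the port and paths here are made up):

services:
  tiny:
    image: busybox:1.36          # a few megabytes, vs hundreds for an Ubuntu base
    command: ["httpd", "-f", "-p", "8080", "-h", "/www"]   # busybox's built-in web server, kept in the foreground
    volumes:
      - ./site:/www              # hypothetical static files to serve
    ports:
      - "8080:8080"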
 
You really should give Docker a try and see how it actually is and what it does. Not only is Docker whiter than albinos, but it's easy as fuck to use.

Now any retard can run and build software without much worry about where it runs, there's no more "but it works on my machine under the exact same conditions". People run the container, and it just works.

No installing libraries, no dependencies, no changing OS. If you want to run a youtube archiving server that grabs all the metadata and comments, scans channels automatically for new videos, etc, you just fill out some yaml: replace the existing values (e.g. ports) and paths with your own, add some environment variables in a .env file (or a web interface if you use portainer), and that's it! You don't even need to know what redis and elasticsearch are.
Let's look at descriptions of these programs from their official websites:
Docker provides a suite of development tools, services, trusted content, and automations, used individually or together, to accelerate the delivery of secure applications.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
I'm not exaggerating when I write that both of these projects would be single lines of code under a proper system. I've read that the GNU HURD would provide all of Docker with simpler primitives, but this isn't my point. The reason this software exists is because it runs under UNIX. A Smalltalk system from the 1970s could trivially provide isolation, and the systems were designed to be controlled over a network very simply. Smalltalk code can be transparently moved across the network, even. Since UNIX isn't a real networked operating system, none of this is simple, and instead a giant clusterfuck is made. With this one example, I think I've made it clear that both of these projects really could be a single line of code each, under a better system.

However, Docker and Kubernetes don't want to throw away the millions of lines of code that have gone into GNU/Linux and related systems, so they add their own millions of lines of code to the pile. I've not checked this, so someone else can see exactly how big both of these projects are, for what little value they provide.

Those are my thoughts on this nonsense.
 
both of these projects would be single lines of code under a proper system

Not only isolation, but also all dependencies required in an incredibly small package, very easy multi-arch building, ease of use and portability. You don't need to wait three and a half years to compile a newer version of python for the rpi zero w; you just build an armv6 image. You can have any number of versions running at the same time without having any installed. Automatic vlan, automatic firewall rules, exact memory and CPU limits, and many more useful things.
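The "exact memory and CPU limits" part, for instance, is a couple of lines of compose (values invented for the example, image is a placeholder):

services:
  app:
    image: example/app:latest    # placeholder
    deploy:
      resources:
        limits:
          cpus: "0.5"            # at most half a CPU core
          memory: 512M           # hard memory cap for the container

Under the hood those map straight onto the kernel's cgroup limits that come up further down the thread.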

You really can't do all that docker does with a single line of code in Smalltalk unless you replace new lines with semicolons or whatever it uses for delimiters. Do you mean just the isolation? Because that's just a security benefit, you can technically mount everything from your host OS and give it no isolation if you really want.

I think most of the complaints are more towards retarded usecases rather than the software itself.

it's piling shit upon more shit in an attempt to make the whole stink less, and there's no reason to assume that the forces that shat up the first stage - which are still at work, since the problem remained unfixed - won't also shit up this new second stage. But don't worry, there's a new third stage that fixes it all, you just have to add...

You don't need 5000 microservices (which I assume is what you mean); that's just bad development. The vast majority of the self-hosted OSS that people run in containers doesn't even do that sort of stuff; at most there's a redis and a database, both of which are pretty essential if you want parallelized queues across multiple workers on different machines (in redis's case), and I'm pretty sure everyone here knows what a DB is useful for.

If the software needs those microservices, it will need them regardless of whether you use Docker for it. I think the comparison here should be between deploying an application natively vs deploying it on Docker.
there's no help if you have an issue
That's not docker's fault, it's just OSS stuff in general. People say "yeah, open source people solve issues", which to be fair is true for small and medium projects, but for anything bigger, even Home Assistant size, they just don't give a fuck.

Ever been on Home Assistant's Discord? The helpers will do that thing where they insult you indirectly in some sly way but make it seem like it's not an insult, so if you complain they just act oblivious and eventually ignore you, or refer you to the rules for daring to complain about them. Some dude abandoned his entire Home Assistant integration because of one of these passive-aggressive jannies.
5-15 concurrent users and he generated server costs of 100-1000$

Yeah, like Vercel and Netlify. Just buy or rent a dedicated machine and host your software there. If you need more servers, put up more servers. If you really need Kubernetes, you'll know, because you'll notice the time spent doing all the sysadmin stuff.

If you avoid Docker like trannies avoid showers you might just miss out on something really useful. If I want to start all my servers over from scratch I can just reinstall the OS and run the same yamls. If any one of you wants to run anything I am already running, I can literally just give you my compose file, explain what you need to change(usually some paths and environment variables like username password etc) and in two minutes you have anything you want running.

Sure, it lowers the barrier to entry, but it also lowers the amount of work you need to do. Sure, you can already set up a database or whatever you want by hand, but there's just nothing that can really beat the convenience and speed of it. In the time it takes you to install and uninstall mysql on your host OS, I can deploy and delete hundreds of mysql containers. Yes, you never need to do that, but the decrease in time per deployment adds up, especially at scale.

The software is free and incredibly useful, and even better, people you probably hate have to do the hard work of developing it, seething that such hateful individuals as us use the software they put their blood, sweat, and tears into.

I don't really have a bone to pick in this, don't mean to run defense for companies that pay me zilch, and I hate most software companies and corporations, but the above are just esoteric "if only we had a perfect system we could do this easily without docker" or things that aren't actually docker's fault.
 
I'm not exaggerating when I write that both of these projects would be single lines of code under a proper system. I've read that the GNU HURD would provide all of Docker with simpler primitives, but this isn't my point
Docker does two things:
  1. Limit resources (CPU usage, IO, memory, etc) and cut the process off from the host's files.
  2. Fix all the version management stuff by providing all OS components.
The first one is actually built on Linux kernel features: cgroups (version 2) for the resource limits and namespaces for the filesystem and process isolation. FreeBSD has jails, which do the same thing.

Now, Docker uses the ability to cut a process off from the host filesystem by shipping all the fucking files of an Ubuntu installation and exposing only those to the process.

I think Docker appealed to all the cattle who couldn't write programs that don't depend on 500+ libs which break between minor versions, so running them on the same machine as a second program written by a troglodyte was impossible. But since it appealed to the masses, Docker is now super popular, and a simple cgroup wrapper which just encapsulates a program's file access and IO will probably never be developed, because Docker is "good enough".
 
Not only isolation, but also all dependencies required in an incredibly small package, very easy multi-arch building, ease of use and portability.
This is only a problem because of how software is allowed to become entangled in a web of shit. Now, I write my libraries myself, but in a decent language like Ada I wouldn't have portability problems anyway, because the language provides a good base on which to build without this entanglement. Python directly exposes POSIX interfaces, and then things don't work well under Microsoft Windows.
You can have any number of versions running at the same time without having any installed.
GNU Guix also allows for this.
You really can't do all that docker does with a single line of code in Smalltalk unless you replace new lines with semicolons or whatever it uses for delimiters.
Fine, I exaggerated. In truth, both of those would be zero lines of code, because it wouldn't even occur to someone to try to turn them into something like this, in the same way no decent language has a library that provides addition. It would be like the air we breathe, in that we wouldn't notice it. It would already be there, in a much simpler way.
I think most of the complaints are more towards retarded usecases rather than the software itself.
I'm pointing out that it fundamentally shouldn't exist. Instead, we get millions of lines of shit that don't work.
If the software needs those microservices, it will need them regardless of whether you use Docker for it.
Microservices are just retarded Object-Orientation. Look at Erlang for an example.
I don't really have a bone to pick in this, don't mean to run defense for companies that pay me zilch, and I hate most software companies and corporations, but the above are just esoteric "if only we had a perfect system we could do this easily without docker" or things that aren't actually docker's fault.
No, a perfect system isn't necessary, and these systems already existed. I feel that I've made my case.
 
Now, Docker uses the ability to cut a process off from the host filesystem by shipping all the fucking files of an Ubuntu installation and exposing only those to the process.
The good thing about docker/podman/containers is you can use every library available in Ubuntu and then add a bunch more for your app.

The bad thing about .....

I've taken a couple apps I use that are Ubuntu-centric and containerized them rather than try and get them to run on Debian. And the containers are, of course, HUGE.

Of course, for real development, developers should be using JEOS/distroless/slim images. Luckily they don't and that's why I have a job.
 