Open Source Software Community - it's about ethics in Code of Conducts

[attached image: rob pike.png]
 
He's right. I hate those people who rape the planet and are a detriment to society while putting on a fake smile. I fucking hate them. Can we nuke them already?

Oh wait, he wasn't talking about Indians. His opinion goes into the trash can.
Google employees and Indians can share the wall. :)

Those two sets have a lot of overlap by now, anyway.
>Make a language designed specifically for indians because they're too stupid for a real one
>They use it to jeet up the tech sector and eventually the planet
I can't believe it. How could Andrew Tate do this?
 
God fucking dammit I am livid. I run my homelab using flux and k8s, and unfortunately rely on ghcr.io for most of docker images. Well since I don't know how long, the fucking pajeets there made it so that since around 2PM till midnight pulling anything from there is like pulling teeth, speeds around 100kbps are the goddamn norm. After midnight it goes hunky dory to go down the drain again at 2PM. Every PR takes at least 30m where it should've taken 2 at most because the goddamn actions (which are on self hosted runners) need to repull a 100MB image over and over and it's impossible to cache them.

The worst part is there is no alternative. Dockerhub is on the same level, and everything else is paid. Not to mention it was a multi-year process to get everyone to push to ghcr, and now that they do, there is no alternative, even tho everyone fled dockerhub because of shit like this. Maybe if dockerhub kept their mouth shut and just quietly throttled everyone there wouldn't have been a backlash.

I hate poojetsoft so much it's unreal. What is even the point of paying for gigabit fiber anymore where almost everyone throttles their links.
 
need to repull a 100MB image over and over and it's impossible to cache them.

Why do you need to repull it over and over and why is it impossible to cache?
This sounds like either ghcr.io is completely retarded or a skill issue.

This is all open source, so even if ghcr.io is retarded you can still just change the code to work the way you want it to. Skill issue.
 
I run my homelab using flux and k8s, and unfortunately rely on ghcr.io for most of my docker images. [...] speeds around 100kbps are the goddamn norm. [...] the goddamn actions (which are on self hosted runners) need to repull a 100MB image over and over and it's impossible to cache them.
What is stopping you from mirroring these images locally and then directing your workflow to pull from your local harbor/nexus/what-have-you instance? You'd still get the new images when they come out but with the benefit that you always have something locally so you don't have to wait 3 hours for your CI job to finish. You can even make it so ghcr and docker links get proxied to your own instance, meaning you don't have to rewrite every link.

Am I retarded and missing something really obvious here?
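For anyone wanting to try the mirroring route: the open-source Distribution registry (the `registry:2` image) supports running as a pull-through cache out of the box. A minimal sketch; the host path `/srv/registry-cache` and port 5000 are arbitrary assumptions, not anything from this thread:

```yaml
# docker-compose.yml: a local pull-through cache in front of ghcr.io.
# Blobs already fetched once are served from local disk on repeat
# pulls instead of being re-fetched from GitHub.
services:
  ghcr-mirror:
    image: registry:2
    restart: unless-stopped
    ports:
      - "5000:5000"
    environment:
      # Distribution's built-in proxy (pull-through cache) mode
      REGISTRY_PROXY_REMOTEURL: https://ghcr.io
    volumes:
      - /srv/registry-cache:/var/lib/registry
```

Pulls then go through the mirror using the upstream repository path, e.g. `docker pull localhost:5000/fluxcd/source-controller:latest`. Note one instance can only proxy a single upstream, so Docker Hub would need a second one.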
 
Am I retarded and missing something really obvious here?
Without checking, I assume it's the GitHub actions not playing well with a local cache, despite being locally hosted.

Edit: just checked and that is indeed the case. Lots of complaints about there being no way to cache images for locally hosted runners.
 
Without checking, I assume it's the GitHub actions not playing well with a local cache, despite being locally hosted.

Edit: just checked and that is indeed the case. Lots of complaints about there being no way to cache images for locally hosted runners.
And? We knew this. His actual complaint is that because of Github Actions being retarded, he has to pull from a remote public source every time, and that at X time of day that particular step is painfully slow. How is that not mitigated by simply not pulling from the same source the entirety of India is pulling from? If it's stored locally you won't need to share that resource with anyone except yourself.
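On the cluster side at least, this is doable per-registry: containerd (1.5+) can transparently redirect pulls of ghcr.io images to a local mirror. A sketch, where `registry.lan:5000` is a hypothetical local mirror address:

```toml
# /etc/containerd/certs.d/ghcr.io/hosts.toml
# Requires config_path = "/etc/containerd/certs.d" to be set in the
# registry section of containerd's config. If the mirror is down,
# containerd falls back to pulling from ghcr.io directly.
server = "https://ghcr.io"

[host."http://registry.lan:5000"]
  capabilities = ["pull", "resolve"]
```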
 
And? We knew this. His actual complaint is that because of Github Actions being retarded, he has to pull from a remote public source every time, and that at X time of day that particular step is painfully slow. How is that not mitigated by simply not pulling from the same source the entirety of India is pulling from? If it's stored locally you won't need to share that resource with anyone except yourself.
If the process he's using relies on GitHub workflows triggering GitHub actions to generate the images he uses on github's docker repo, then it's not just a case of changing one particular step, but a complete rip and replace of a big chunk of his infrastructure.
 
This sounds like either ghcr.io is completely retarded
Let me stop you there, because you're concentrating on GitHub actions per se, which is a separate point. But please note first that a fucking billion dollar company doesn't see fit to run a package repository (as they call it) at speeds exceeding 10Mbit/s. Let that simmer in your mind for a second. LESS THAN 10Mbit. That's ~1 megabyte per second, while most modern images start at around 100 megabytes.

Without checking, I assume it's the GitHub actions not playing well with a local cache, despite being locally hosted.
DING DING DING A WINNER IS YOU

What is stopping you from mirroring these images locally and then directing your workflow to pull from your local harbor/nexus/what-have-you instance?
Do you remember just a week ago when people were taken aback by the stupidity of safe_sleep inside of the GitHub actions runner? Well, take a look at this fucking thing

As you can see it *is* possible, but I'm aggravated at how much shit I have to put up with to make it work, and even more assblasted at the reason I even have to bother. And no, it's a DinD action runner, you can't just configure it to use a pull-through cache like you would a normal docker daemon.
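One crude workaround for the DinD case, given that dockerd's `--registry-mirror` flag only applies to Docker Hub and cannot redirect ghcr.io pulls: address the local pull-through cache explicitly and retag, so later steps referencing the ghcr.io name still work. A sketch only; `registry.lan:5000`, `owner/image:tag`, and the step layout are all placeholders, not the workaround linked above:

```yaml
# Fragment of a GitHub Actions workflow on a self-hosted DinD runner.
steps:
  - name: Warm image from local pull-through cache
    run: |
      # The Distribution proxy serves upstream repos under its own host
      docker pull registry.lan:5000/owner/image:tag
      docker tag registry.lan:5000/owner/image:tag ghcr.io/owner/image:tag
```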
 