Open Source Software Community - it's about ethics in Code of Conducts

I don’t necessarily hate a language, here Rust, as long as it delivers a reliable, stable and safe solution. Rust rewrites (almost?) never manage the first two, never mind the third, and they always lack functionality compared with their original counterparts. It doesn’t help that so many Rust people are attention-seekers with too much time on their hands, literally butchering the Arch wiki to announce their projects.
 
Rust implementations
they don't even have multiple implementations. shit language
one of the things i judge a good language on is how easy it is to implement by yourself
obviously scheme is the ideal here because there are more scheme implementations on the various git forges of the world than there are sonic recolors on deviantart (this of course has problems of its own, but it's much better than wanting to compile rust to some weird architecture and getting immediately fucked)
 
I don’t necessarily hate a language, here Rust, as long as it means a reliable, stable and safe solution. […]
I can never take Rust seriously when it aborts (crashes) on memory allocation failure, and most Rust programs are littered with .unwrap() calls that crash the program instead of handling errors/faults properly. Zig, as an alternative, never crashes on allocation failure, and nobody writes Zig programs that crash on errors/faults, because handling them properly is easier there.
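For what it's worth, both halves of that are at least partly a matter of style rather than the language: `.unwrap()` is the lazy choice rather than a requirement, and fallible allocation does exist in std via `Vec::try_reserve` (stable since Rust 1.57). A minimal sketch, with function names of my own invention:

```rust
use std::collections::TryReserveError;

// Returns the parsed value, or a caller-supplied default, instead of
// panicking. This is what many .unwrap() call sites could be doing.
fn parse_or_default(s: &str, default: i64) -> i64 {
    match s.parse::<i64>() {
        Ok(v) => v,
        Err(e) => {
            eprintln!("parse failed ({e}); using default");
            default
        }
    }
}

// Fallible allocation: try_reserve reports an allocation failure as a
// Result instead of aborting the whole process.
fn grow(v: &mut Vec<u8>, extra: usize) -> Result<(), TryReserveError> {
    v.try_reserve(extra)
}

fn main() {
    assert_eq!(parse_or_default("42", 0), 42);
    assert_eq!(parse_or_default("oops", -1), -1);

    let mut buf = Vec::new();
    grow(&mut buf, 1024).expect("tiny reservation should succeed");
    println!("ok");
}
```

The default global allocator path does still abort on failure, so the complaint isn't baseless; the point is only that opting out of both behaviors is possible in ordinary code.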
 
Bring it up whenever a Rust evangelist tries to force Rust down someone's throat: ask them what other benefit Rust has when you can just compile C/C++ to be memory safe, and watch them twist into a pretzel just to avoid saying "the only benefit of using Rust is that you'll be using Rust". Delightfully devilish.
In fairness, Rust enforces memory safety at build time while Fil-C only injects a bunch of failsafes so that the program panics instead of accessing memory it's not supposed to. So theoretically Rust would protect you from encountering a memory safety issue in the first place, while Fil-C would just turn it from a potential vulnerability to a simple crash.
But honestly, if a tool works, it works, and if it doesn't require adding another language to your toolchain (especially in the case of Rust/Cargo), that's even better.
 
In fairness, Rust enforces memory safety at build time while Fil-C only injects a bunch of failsafes so that the program panics instead of accessing memory it's not supposed to. […]
At build time and at runtime* as Rust also inserts array bounds checks into the compiled code
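A quick illustration of the runtime half: plain indexing is bounds-checked and panics when out of range, while the checked accessor `.get()` turns the same mistake into an `Option`. A small sketch (names are mine):

```rust
// data[i] is bounds-checked at runtime and panics if i is out of range;
// .get(i) performs the same check but returns None instead of panicking.
fn safe_lookup(data: &[u32], i: usize) -> Option<u32> {
    data.get(i).copied()
}

fn main() {
    let data = [10u32, 20, 30];
    assert_eq!(safe_lookup(&data, 1), Some(20));
    assert_eq!(safe_lookup(&data, 99), None); // no panic, no OOB read
    // let _ = data[99]; // this, by contrast, would panic at runtime
    println!("ok");
}
```

Either way the out-of-bounds access never happens, which is the distinction from C; the cost is the check itself, which the optimizer can often elide.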
 
Do you believe security researchers are smart people? They must be, right?
Late post, but what else did you expect from a default-profile-picture account? He is probably halfway through a "cYbErSeCuRiTy" university programme and thought he could just generate some code in an afternoon to score some "high-profile" résumé points.

A lot of so-called cybersecurity professionals are such huge LARPers that it makes my head spin. It all starts with the military-style acronyms and vocabulary ("red team" and "blue team" are okay, but if someone calls a plan a "CONOPS", I assure you he is insufferable). Most of them, fresh out of uni, have done a bunch of bullshit courses and got some fancy certificates, but if you ask them for hands-on experience it's all "oh, I did all of dickswinger's academy web app safety 101 track and 5 challenges on HackTheBox". And it turns out they know how to follow a Metasploit tutorial, how XSS and CSRF work, or how to use Burp Suite (which is mostly what actually good web developers have to know to avoid those bugs anyway). These people think they can land a corporate job writing password guidelines and buying EDR while making six figures at the same time.
 
it probably just happens a lot more these days and the lazy retards are lazy and retarded in different ways
I agree with everything you say here, but herein lies the rub. In general, the maintainers are too accommodating, and the users are incapable of understanding computing beyond "buttons don't work no more."

you don't get to say shit about "Codes of Conduct" or any of that other ideological bullshit.
There are only two acceptable Codes of Conduct:
  1. "Don't be a dick"
  2. The Rule of St. Benedict
 
A lot of so-called cybersecurity professionals are such huge LARPers that it makes my head spin. […]
To be fair, the majority of a typical corporate junior pentester's or SOC analyst's workload is going to be running the same 10 commands and reading logs all day. Both are checklist jobs that don't typically require a whole lot to get started in; hell, most of them don't even know any programming beyond basic Bash or Python scripting. And that's fine by me. There is absolutely nothing wrong with learning on the job; what you mentioned is exactly the skill level a fresh bachelor/junior should be at. Companies should be made to invest in bringing up young workers rather than hiring whoever shows up with 20,000 vibe-hacked CTFs on his CV. Higher-level red/blue team positions are where the good stuff is at anyway.
 
Rust's marketing (heckin memory safe and valid) is dogshit, but maybe that was intentional from the beginning. It's a great corpo lang for corralling the hordes of desk jockeys.
 
There are only two acceptable Codes of Conduct:
  1. "Don't be a dick"
  2. The Rule of St. Benedict
There were okay CoCs which date from before the current era. For example, this Debian one is not bad:
In short: Be respectful, assume good faith, be collaborative, try to be concise, be open, and an "in case of problems" section.

However, it was later supplemented with an "interpretations" document, which for example clarified that the "Be respectful" rule prohibits "deadnaming" and using "wrong" pronouns.
 
"oh i did all of dickswinger's academy wep app safety 101 track and 5 challenges on hackthebox"
Saar do not investigate who proctored my CySA/Security+/CASP/DACRP/CISSO/CSAP exam saar do not redeem the academic honesty review saar please no
 
Memory safety is useful, but most memory-unsafety bugs take CIA-op levels of planning, skill and sometimes luck to pull off. Even that Spectre/Meltdown stuff that slowed down all CPUs is mainly just ayylmao; just look up how it's done, then start shaking your head at the thought of losing up to 30% performance over it.

You know who should be scared of these memory bugs? Big corpos and governments. Which kinda brings it back to FFmpeg, huh. Is it all a big tech psyop?

EDIT so I don't doublepost: CASE IN POINT

You need a specially crafted tar that causes files inside a nested tar to be extracted as if they had come from the main tar

TLDR:
x.tar ->
- safe_software
- nested.tar ->
- - safe_software (actually malicious OOOOOO)

But at that point WHY NOT JUST PUT THE DANGEROUS FILES IN THE MAIN TAR? How could this EVER be exploited? So you're controlling a download location where people can get a tar from and you want to give them malicious files? Just give them malicious files! Why go through all of this?

8.1 CVE - TARMAGEDDON

This is not a fucking exploit/CVE/ARMAGEDDON, it's just a normal bug
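Whatever you make of the severity, the underlying bug class is real: extraction code has to treat every entry path inside an archive as hostile. A hypothetical, std-only sketch of the kind of sanity check an extractor needs (the function name is mine; this is not the actual patched code):

```rust
use std::path::{Component, Path};

// Reject entry paths that could land outside the extraction directory:
// absolute paths, Windows drive prefixes, or any ".." component.
fn entry_path_is_safe(entry: &str) -> bool {
    Path::new(entry)
        .components()
        .all(|c| matches!(c, Component::Normal(_) | Component::CurDir))
}

fn main() {
    assert!(entry_path_is_safe("safe_software/bin/tool"));
    assert!(!entry_path_is_safe("../../etc/passwd")); // path traversal
    assert!(!entry_path_is_safe("/etc/passwd"));      // absolute path
    println!("ok");
}
```

The TARmageddon confusion was about where one archive's entries end and the nested one's begin, which a path check alone doesn't cover, but the defensive posture is the same: never trust metadata the archive author controls.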
 

A fun one from ThePrimeagen. I'm sure there is nothing surprising in this video to anyone here. Don't buy smart products if you don't want your appliances to be part of some botnet, etc.

Memory safety is useful but most unsafe memory bugs take a CIA op level of planning and skill and sometimes luck to pull off. […]
That's why I usually just disable mitigations, or at least most of them. Not all of them affect performance equally, and the specific CPU makes a difference too. But really, I don't think having the big speculative-execution mitigations matters that much for a desktop user, especially since I don't think the attack vector is even the same on all the CPUs they affect. It seems extremely unlikely that it will matter.

For the lazy, you can just pass mitigations=off on the kernel command line to disable all of them. Better than that is building the kernel yourself. In my experience the biggest noticeable performance gain wasn't even from the various mitigations, though some of those obviously have an impact. The biggest gain came from disabling things in the "kernel hacking" section of the config, which is basically all the debugging options. Other things in that part of the config noticeably slow down performance too, like the memory-debugging stuff, or options that constantly collect a bunch of statistics that will never be looked at. A lot of them are actually worse for security to have enabled anyway.
 
Memory safety is useful but most unsafe memory bugs take a CIA op level of planning and skill and sometimes luck to pull off.
Which is true for a lot of major bugs. I was working when Heartbleed came out (as an aside, I always joked that it's not a real vuln until it has a catchy name and a website on a .io domain) and everyone was panicking. I dug into it and found out that it basically allowed reading random chunks of process memory. Considering that a system holds billions of bytes of memory, the chances of getting something useful on any single read are basically zero. Same with Shellshock (also lol): you basically had to let external input reach bash through environment variables. Trivial to stop.
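The Heartbleed pattern is worth seeing in miniature: the heartbeat handler copied a caller-claimed payload length out of a buffer without checking it against the real payload size. A hedged sketch (mine, not OpenSSL's code) of how a length-checked slice turns that over-read into a plain failure:

```rust
// A heartbeat-style echo: the request claims a payload length.
// The buggy C version trusted claimed_len and copied past the payload,
// leaking adjacent heap memory. Checking the claim against the actual
// buffer turns the over-read into a None.
fn echo_payload(payload: &[u8], claimed_len: usize) -> Option<&[u8]> {
    payload.get(..claimed_len) // None if claimed_len > payload.len()
}

fn main() {
    let payload = b"bird";
    assert_eq!(echo_payload(payload, 4), Some(&payload[..]));
    // Claiming 64 KB from a 4-byte payload: nothing adjacent is leaked.
    assert_eq!(echo_payload(payload, 65536), None);
    println!("ok");
}
```

The one-line fix in the real patch was the same idea expressed in C: compare the claimed length to the record length before copying.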
 