Programming thread

I'd be surprised if Python doesn't do the same.
I wouldn't; CPython is an infamously basic bitch implementation that leaves all kinds of low-hanging fruit behind. On the flip side, this means that % vs & barely makes a difference because both operators will likely go through bytecode dispatch or indirect calls anyway. Hell, even in C the difference would only matter in a hot/tight loop.
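For anyone curious, here's a quick way to see that for yourself with the dis module (just a sketch; the exact opcode names differ between CPython versions, e.g. BINARY_MODULO/BINARY_AND in older releases vs a generic BINARY_OP in 3.11+):
Code:
# Disassemble two tiny functions to see that both operators end up as
# single bytecode instructions that the interpreter dispatches at runtime.
import dis

def is_odd_mod(x):
    return x % 2

def is_odd_and(x):
    return x & 1

dis.dis(is_odd_mod)
dis.dis(is_odd_and)
Either way, the cost of the operator itself is buried under the dispatch overhead.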
 
I think they're talking about bitwise operators.
You're correct. Though, this case refers more specifically to bitmasks.

Bitmasks are seen most often in hexadecimal form, since base-16 maps cleanly onto binary. So, unlike in other bases like base-10, we can do shit like val & 0xFFFF0000 to isolate the upper 2 bytes of a 32-bit int. You'll see this most often in little-endian architectures, since the least significant bits are placed first.
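If you want to poke at it, here's a quick sketch (the value is just an arbitrary example):
Code:
# Mask off the upper 16 bits of a 32-bit value, then shift them down;
# the lower mask grabs the other half.
val = 0xDEADBEEF
upper = (val & 0xFFFF0000) >> 16   # 0xDEAD
lower = val & 0x0000FFFF           # 0xBEEF
print(hex(upper), hex(lower))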
 
These basically do exactly what the C source says they should: and1.c ANDs the input with 0x1, while mod2.c divides the input by 2 and checks the remainder.
Interesting difference between GCC and Clang here - GCC always does the optimization, even with no -O flags.
 
Regarding the "confidently incorrect" problem as an argument against AI - ever hear of W3Schools?

To give a specific example, this dogshit is the top search result for Lisp for me. Looks good, right? Clean layout, nice-looking site... until I actually try reading the docs. Then it's absolute garbage that doesn't teach a goddamned thing. Turns out the "correct" resource is actually https://common-lisp.net/, a site that looks like it never left the aughts, with much shittier page design.

Eventually, I realized the first site was retarded and kept looking, but my time was still wasted. And I don't get to interrogate the dumbfuck that made the first site as to why he would choose to do such a terrible thing. At least with an AI I can immediately call bullshit and ask for it to source where it's getting its retard ideas from, instead of leaving the site with nothing and no leads as to what is ACTUALLY correct.
 
Eventually, I realized the first site was retarded and kept looking, but my time was still wasted. And I don't get to interrogate the dumbfuck that made the first site as to why he would choose to do such a terrible thing. At least with an AI I can immediately call bullshit and ask for it to source where it's getting its retard ideas from, instead of leaving the site with nothing and no leads as to what is ACTUALLY correct.
One day I'm gonna fine-tune my own LLM by scraping all the best blogs/dev resources and it'll be like using Google circa 2004
 
Eventually, I realized the first site was retarded and kept looking, but my time was still wasted. And I don't get to interrogate the dumbfuck that made the first site as to why he would choose to do such a terrible thing. At least with an AI I can immediately call bullshit and ask for it to source where it's getting its retard ideas from, instead of leaving the site with nothing and no leads as to what is ACTUALLY correct.
ChatGPT is known to hallucinate sources as well. Article about it.
I feel like it would be easier to look up opinions on resources, even on Reddit, before using them than to wrangle an AI and verify its correctness.
The issue with AI is that most of the things it gets right are already widespread on the internet, which is exactly why it's likely to be correct about them.

What I think happened with the match-case question in Python is that asking about its performance is not a common question. The wording, however, is similar to questions about switch statements in other languages, and questions comparing switch performance to if/else are especially frequent. So it generated an answer based on what it sees frequently in its training data.
Which leads to the biggest issue: the harder the question, the higher the likelihood of hallucinations.
Even worse, the more preconceived ideas you have about something, the more likely you are to nudge it towards a wrong answer that plays into your biases.
Like here, assuming that match-case would perform better than if/else.
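If anyone wants to sanity-check that claim on their own machine, here's a rough sketch (needs Python 3.10+ for match; exact numbers will obviously vary, and as far as I know CPython compiles literal patterns into sequential comparisons anyway, so don't expect a jump table):
Code:
# Compare a literal-pattern match statement against an equivalent
# if/elif chain with a crude timeit micro-benchmark.
import timeit

def with_match(x):
    match x:
        case 0: return "zero"
        case 1: return "one"
        case 2: return "two"
        case _: return "other"

def with_if(x):
    if x == 0: return "zero"
    elif x == 1: return "one"
    elif x == 2: return "two"
    else: return "other"

print("match:", timeit.timeit(lambda: with_match(2), number=1_000_000))
print("if:   ", timeit.timeit(lambda: with_if(2), number=1_000_000))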

I would really be careful with those AI tools.
Also, if you are in dire need of resources to learn, go visit Library Genesis and "borrow" some books from reputable publishers.
 
If Python doesn't optimize divides/switches/whatever, it's because the cost of a few divides is nothing compared to the interpreter, the virtual machine, etc. Python is 100x slower than unoptimized C at minimum; the class of optimizations it has to do is different from anything a machine-code compiler would need to bother with.
 
I would really be careful with those AI tools.
Also, if you are in dire need of resources to learn, go visit Library Genesis and "borrow" some books from reputable publishers.
While I respectfully disagree with you about whether or not an LLM is a useful educational resource, I can definitely agree with you here. The LLM will usually run out of new information past the surface level of a topic, at which point I'll ask for book recommendations. I guess at the end of the day, the LLM replaces the librarian instead of the library.

I'll take you up on asking people for books though instead of the AI - anything good for learning calculus, or mathematics?
No it doesn't
Common Python L.
 
Also, if you are in dire need of resources to learn, go visit Library Genesis and "borrow" some books from reputable publishers.
Psst

I'll take you up on asking people for books though instead of the AI - anything good for learning calculus, or mathematics?
We have discussed textbooks some in the Math Thread. Linked is a post about a textbook that's good for applied calculus. As for other fields of math, I can suggest books, but I need the subject you're looking for.
 
endianness is only relevant when byte-swapping; the masking expressions work the same on all platforms, otherwise the code wouldn't be portable.
Endianness is relevant whenever you're converting between word sizes, because there is no canonical way to convert between e.g. a series of 32-bit words and a series of 8-bit words without making some choice. It's simply ambiguous to say "these four 8-bit numbers represent a 32-bit number", which is why network/disk/etc. formats tell you the endianness. If you find yourself swapping bytes, you are almost certainly doing something wrong.
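Quick illustration (arbitrary value, but it shows there's no neutral answer):
Code:
# The same 32-bit value serialized under the two possible byte orders;
# reading it back requires knowing which choice was made.
n = 0x11223344
print(n.to_bytes(4, "little").hex())  # 44332211
print(n.to_bytes(4, "big").hex())     # 11223344
print(hex(int.from_bytes(bytes.fromhex("44332211"), "little")))  # 0x11223344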
 
endianness is only relevant when byte-swapping; the masking expressions work the same on all platforms, otherwise the code wouldn't be portable.
You see it most in compiled binaries if you throw them into a debugger, which is why I mention endianness as a bit of a footnote for beginners to think about. And if you're reading/writing a file in binary mode, the ordering of the bytes does matter quite significantly.
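For the file case, a minimal sketch of what I mean (the "<I" format is just an example choice of little-endian unsigned 32-bit; the point is that you have to pick one and stick with it):
Code:
# Write and read a 32-bit value with an explicit byte order so the file
# means the same thing regardless of the machine that produced it.
import struct

with open("value.bin", "wb") as f:
    f.write(struct.pack("<I", 0xDEADBEEF))

with open("value.bin", "rb") as f:
    (val,) = struct.unpack("<I", f.read(4))
print(hex(val))  # 0xdeadbeef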
 
Let's say I'm scraping JSON tags tied to social media identities and making a node graph. How can I better analyze the data to dox them?
[Attachment: bipartite_user_projection.png]
Let's say I have a bipartite graph of weighted edges based on the occurrence of coincidentally visited domains between usernames in a big .txt labeled Naughty List.

Asking for a friend of course. I’d never do this.
 
Let's say I'm scraping JSON tags tied to social media identities and making a node graph. How can I better analyze the data to dox them?
[Attachment: bipartite_user_projection.png]
Let's say I have a bipartite graph of weighted edges based on the occurrence of coincidentally visited domains between usernames in a big .txt labeled Naughty List.

Asking for a friend of course. I’d never do this.
I guess if you were trying to associate accounts you'd try to build up a 'profile' to see if visits and visit prevalence matched between accounts. Even just domains in common would presumably help narrow things down?
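Something like this toy sketch is probably the simplest place to start (the data is made up; in practice it'd come from your scraped JSON):
Code:
# Rank pairs of usernames by Jaccard similarity of their visited-domain
# sets, i.e. a crude "domains in common" profile match.
from itertools import combinations

profiles = {
    "alice_1998": {"example.com", "forum.example", "blog.example"},
    "totally_not_alice": {"example.com", "forum.example", "news.example"},
    "bob": {"news.example", "video.example"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

ranked = sorted(
    ((jaccard(profiles[u], profiles[v]), u, v)
     for u, v in combinations(profiles, 2)),
    reverse=True,
)
for score, u, v in ranked:
    print(f"{score:.2f}  {u} <-> {v}")
Weighting by how rare a domain is, instead of treating them all equally, would probably sharpen it further, since everyone visits the big sites.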
 
Byte swapping is very common, so much so that it's implemented absolutely everywhere in system headers, standard libraries, and compiler/platform intrinsics (C++20 added std::endian, C++23 adds std::byteswap 🙃). Choose one and don't think about it.
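(For what it's worth, Python's stdlib has the same stuff baked in; a quick sketch:)
Code:
import socket

n = 0x11223344
# Network byte order helper, same idea as htonl() in C:
print(hex(socket.htonl(n)))  # 0x44332211 on a little-endian host, unchanged on big-endian

# A generic 32-bit swap using int.to_bytes/from_bytes:
swapped = int.from_bytes(n.to_bytes(4, "big"), "little")
print(hex(swapped))  # 0x44332211 everywhere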
Has anyone actually seen his "don't do this" example
Yes, there's nothing actually wrong with it. Usually it's written more explicitly.
 
Has anyone actually seen his "don't do this" example in the wild though? It seems bizarre.
I've seen it countless times in crappy C codebases. Keep in mind the post is from 2012 and by Rob Pike, so it's had some time to influence people.
Byte swapping is very common
Common and sensible are different things, and it's not like C++ is above standardizing stupid or niche things.
 
I'm learning 64-bit assembly as a sort of morbid curiosity/hobby and am struggling to find a good environment. I had a pretty good environment set up a few years ago when I was doing 32-bit, but those tools either don't exist anymore, changed drastically, or were abandoned by the devs and never updated for 64-bit. What I had was Kate, a good basic IDE that supported asm syntax highlighting and had a terminal where I could launch the Insight debugger. Kate got enshittified into some shit like KDevelop/KWrite/Kwhat-ever-the-fuck, and Insight has been abandoned since 2009. What I'm using now is just a basic-ass text editor with a custom XML theme for syntax highlighting, and I have to use the GNU Debugger on the CLI. GDB isn't too bad, but printing to the terminal is all fucked up: GDB prints the string starting all the way at the left instead of buffering it for the (gdb) prompt (pic related).
[Screenshot: GDB trying to print Hello, world!]

So, does anyone know a decent IDE or graphical front end for GDB that I can use specifically for assembly?
 