Programming thread

Sure, C's choice was perhaps an odd one
I'm guessing it's because that's what you get out of CPUs' signed integer divide instructions.
Looking at the documentation for Intel IA-32 IDIV we see:

Non-integral results are truncated (chopped) towards 0. The sign of the remainder is always the
same as the sign of the dividend. The absolute value of the remainder is always less than the
absolute value of the divisor.
Overflow is indicated with the #DE (divide error) exception rather
than with the OF (overflow) flag.

I presume it was the same for the CPUs that were in common use when C was invented.
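
To make the quoted behaviour concrete, here's a quick sketch of the sign rule (using Java, whose % operator follows the same truncated convention as C and IDIV):
Java:
public class RemainderSigns {
    public static void main(String[] args) {
        // With truncation towards zero, the remainder always takes the
        // sign of the dividend, matching the IDIV behaviour quoted above.
        System.out.println( 7 %  3);  //  1
        System.out.println(-7 %  3);  // -1
        System.out.println( 7 % -3);  //  1
        System.out.println(-7 % -3);  // -1
    }
}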
 
I'm guessing it's because that's what you get out of CPUs' signed integer divide instructions.
Looking at the documentation for Intel IA-32 IDIV we see:



I presume it was the same for the CPUs that were in common use when C was invented.
CPUs at the time C was made didn't have dedicated division instructions. The sign was implementation-defined originally to avoid overhead, then restricted to truncation in C99 to ease ports of Fortran code.
 
I'm guessing it's because that's what you get out of CPUs' signed integer divide instructions.
That's literally exactly why they did it, actually. Programming languages (sanely) define their integer division div and modulus operator mod so that the following identity always holds for any a and b (as you'd 'intuitively' expect it to):
Code:
div(a, b) * b + mod(a, b) == a
i.e. if you integer divide a by b, then multiply the quotient back by b and add the remainder, you should get the original a back, exactly as you'd hope. And as it turns out, that's exactly what C does with its truncated modulus, and also exactly what Python does with its floored modulus.

C (integer division is / for integer operands and modulus is %):
C:
#include <stdio.h>

int main(void) {
    printf("-2 / 10 = %d\n", -2 / 10);
    printf("-2 \% 10 = %d\n", -2 % 10);
    printf("(-2 \% 10) * 10 + (-2 \% 10) = %d\n", (-2 / 10) * 10 + (-2 % 10));

    return 0;
}
Code:
-2 / 10 = 0
-2 % 10 = -2
(-2 / 10) * 10 + (-2 % 10) = -2

Python (integer division is // and modulus is %):
Python:
>>> -2 // 10
-1
>>> -2 % 10
8
>>> (-2 // 10) * 10 + (-2 % 10)
-2

Their integer divisions return different answers, so their modulus operations also return different answers so that the identity holds.
 
Mathematically correct modulus should return 8 for -2 % 10. Just because it's common doesn't mean it's correct.
---
I maintain that floored is the correct definition for modulus.
The modulo operator in Java and C is optimized for speed, not correctness. This speed is important for things like cryptographic libraries. If you want strict correctness, you should be using the Math class.

Python does it correctly because Python is slow as shit anyway and so no one cares.
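
For anyone who wants to see both behaviours from Java itself, here's a minimal sketch: % is the truncated operator, while the Math class provides the floored variants (Math.floorDiv/Math.floorMod, available since Java 8). Both pairs satisfy the div/mod identity from earlier in the thread.
Java:
public class ModDemo {
    public static void main(String[] args) {
        int a = -2, b = 10;

        // Truncated division and remainder: the built-in operators.
        System.out.println(a / b);               // 0
        System.out.println(a % b);               // -2

        // Floored division and remainder: the Math class (Java 8+).
        System.out.println(Math.floorDiv(a, b)); // -1
        System.out.println(Math.floorMod(a, b)); // 8

        // Both pairs satisfy div(a, b) * b + mod(a, b) == a.
        System.out.println((a / b) * b + (a % b));                          // -2
        System.out.println(Math.floorDiv(a, b) * b + Math.floorMod(a, b));  // -2
    }
}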
 
Last edited:
  • Disagree
Reactions: 306h4Ge5eJUJ
Arguing about this is like arguing that 22/7 should equal 🥧 (3.14....).

It is traditional in many programming languages to truncate. That does not make it correct. I have no doubt that Java's choice in this was made for the same reason it has a static void main, even though that really doesn't have anything to do with Java and is only a slight nod towards C and C++. The entrypoint for a Java program could have just as easily been something like the Runnable interface or an abstract public Main, which your program's entrypoint must be a child of.

Forget about all the language stuff, algebra and programming languages, for a minute. If you have a finite number line where each unit is one step and it's defined starting at 0 and ending at 10, and you walk two steps backwards, then you would be at 8. If you walk forward on this number line, since it is the only number line in existence, when you step off of 10 you are teleported back to 0; or accept that position 0 on the line has been glued onto position 10, so that the perimeter of the number line forms a circle. Starting from the natural position of 0, you walk 2 steps backwards, or -2 steps. You would then be at position 8. If you were to walk 15 forward steps, then you would end up at position 5. The only reason we use division to calculate this is because division is recursive cutting. If a doctor wants to subtract an appendix from a patient, they cut it out in surgery. If I cut away two from ten, then it's eight; if I have to do this multiple times, arriving back at zero in between cuts, it works the same. You can calculate modulus using factorisation, not division, and you get just as accurate results. This is because on a finite number line you'd have to find yourself at some position on the line, because it is the only space you have to measure in. So, putting aside all the Mathematics and Programming syntax and symbols, it's illogical to return a negative number for a finite number line that does not have any negative positions. Likewise it would be equally silly to return 8 from a number line ranging between 0 and -10. With 0 and -10, 8 does not exist on that number line, and since we can only have positions that exist on that number line, 8 should not be returned given the set of constraints provided.
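
As a quick sketch of that walking argument (plain Java, with an illustrative walk helper that is not from any library): simulate stepping around a ring of 10 positions one step at a time, and compare where you land with the floored and truncated operators.
Java:
public class RingWalk {
    // Walk 'steps' positions around a ring of 'size' slots (0 .. size-1),
    // one step at a time, starting from 0. Negative steps walk backwards.
    static int walk(int steps, int size) {
        int pos = 0;
        int dir = steps >= 0 ? 1 : -1;
        for (int i = 0; i < Math.abs(steps); i++) {
            pos += dir;
            if (pos == size) pos = 0;        // stepping off the top wraps to 0
            if (pos < 0)     pos = size - 1; // stepping below 0 wraps to the top
        }
        return pos;
    }

    public static void main(String[] args) {
        System.out.println(walk(-2, 10));          // 8
        System.out.println(walk(15, 10));          // 5
        // The floored modulus agrees with the walk; the truncated one does not.
        System.out.println(Math.floorMod(-2, 10)); // 8
        System.out.println(-2 % 10);               // -2
    }
}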

It is a mathematical legalist argument to state that -2 % 10 != 8 and then provide proofs that state otherwise. C got it wrong. While it is traditional to copy C's mistakes, it is still wrong. Providing formulas that calculate illogical results for arithmetic modulus does not magically make C right, even if it does make you feel better by providing a seemingly reasonable rationalization for your position. It's very interesting to note that Java has 2 different types of modulus in its standard library: the % operator and Math.floorMod. Why would you need to implement the same operation twice? Could it be because one is to satisfy a population of programmers accustomed to wrong answers, while also knowing these answers are wrong and therefore necessitating the implementation of the correct operation to accompany the errant one? Java, like many other ambitious programming languages, was launched with the promise of lessening errors in programs. How can this be any more than whole cloth when Java goes out of its way to provide the wrong result to common operations? What is % in Java if it's not arithmetic modulus? It must be some other operation perpetrating a masquerade.

Interestingly enough, Ada has a feature which would be perfect for this situation: Ranges. You can define a type that must contain a number bounded by a discrete range. Though, Ada also has an arithmetically correct modulus operator, so using Ranges would not be the only option.

This would not be a problem if the disabled were not hired to write software.
Bro... just take the L. I'm not reading all that shit my nigga
 
The modulo operator in Java and C is optimized for speed, not correctness. This speed is important for things like cryptographic libraries. If you want strict correctness, you should be using the Math class.

Python does it correctly because Python is slow as shit anyway and so no one cares.
I respectfully disagree. There are a lot of reasons why Python is slow as shit. Doing the maths correctly is not what makes it expensive or unoptimized.
 
Last edited:
  • Agree
Reactions: 306h4Ge5eJUJ
CPUs at the time C was made didn't have dedicated division instructions.
I don't know where you heard this, but it's incorrect. The PDP-11 had a dedicated division instruction. Pretty much every minicomputer at the time did.

Hell, even the PDP-1 (released in 1959) had a division instruction:
[attached image: 1654509167278.png]


The only computers without access to division and multiplication in hardware were microprocessors like the 4004 at the time. But C certainly wasn't targeting them.
 
I don't know where you heard this, but it's incorrect. The PDP-11 had a dedicated division instruction. Pretty much every minicomputer at the time did.

Hell, even the PDP-1 (released in 1959) had a division instruction:
View attachment 3357877

The only computers without access to division and multiplication in hardware were microprocessors like the 4004 at the time. But C certainly wasn't targeting them.
Indeed, I got things mixed up with microprocessors; thanks for the correction. The overhead/Fortran remark is from the C99 rationale though, so it should be reliable.
 
I don't know where you heard this, but it's incorrect. The PDP-11 had a dedicated division instruction. Pretty much every minicomputer at the time did.

Hell, even the PDP-1 (released in 1959) had a division instruction:
View attachment 3357877

The only computers without access to division and multiplication in hardware were microprocessors like the 4004 at the time. But C certainly wasn't targeting them.
What's interesting is that now some CPUs (base RISC-V) don't come with division instructions as a requirement, presumably to reduce implementation complexity. Division is handled in the kernel or in userspace.
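
For illustration, roughly what a software fallback looks like - a minimal sketch of binary long division for non-negative operands, not any particular toolchain's actual routine:
Java:
public class SoftDiv {
    // Shift-and-subtract (restoring) division: the kind of loop a support
    // routine runs when the CPU has no divide instruction. Assumes both
    // operands are non-negative and fit comfortably in 31 bits.
    static int[] divmod(int dividend, int divisor) {
        if (divisor == 0) throw new ArithmeticException("/ by zero");
        int quotient = 0, remainder = 0;
        for (int i = 31; i >= 0; i--) {
            remainder = (remainder << 1) | ((dividend >>> i) & 1);
            if (remainder >= divisor) {
                remainder -= divisor;
                quotient |= 1 << i;
            }
        }
        return new int[] { quotient, remainder };
    }

    public static void main(String[] args) {
        int[] qr = divmod(1000, 7);
        System.out.println(qr[0] + " r " + qr[1]); // 142 r 6
    }
}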
 
What's interesting is that now some CPUs (base RISC-V) don't come with division instructions as a requirement, presumably to reduce implementation complexity. Division is handled in the kernel or in userspace.
To be fair, even modern x86-64 processors don't have dedicated hardware for division. They use microcode for that.

Although to add some extra contrarianism to this thread - RISC-V has a standard extension "M" that adds integer division and multiplication to the instruction set. I dunno how widespread it is, but I'm assuming 'very'.
 
I don't know where you heard this, but it's incorrect. The PDP-11 had a dedicated division instruction. Pretty much every minicomputer at the time did.

Hell, even the PDP-1 (released in 1959) had a division instruction:
View attachment 3357877

The only computers without access to division and multiplication in hardware were microprocessors like the 4004 at the time. But C certainly wasn't targeting them.
Probably thinking of floating-point division, which tended to be handled in slow software until chip makers like Intel came along and gave us dedicated hardware to produce fast and sometimes incorrect answers.
 
So due to an issue with trying to make Salesforce work with OCR, I now have started learning to program Apex, which claims to be similar to Java.

Considering that all my previous experience was with Python (and it’s been a while since I’ve done much with that), learning to deal with static typing is weird.

On the plus side, even just reading some tutorials and some of the developer documentation is giving me ideas for other ways to use it, rather than continue to rely on their “flows”.

I’ve wanted to make a way to summarize long fields of text for a long time, and there’s an easy way to do it in Apex, once I learn how to actually connect things and make the classes work.
 
So due to an issue with trying to make Salesforce work with OCR, I now have started learning to program Apex, which claims to be similar to Java.
I'd never heard of it before. It appears to be a Salesforce-exclusive thing. Wonder why they invented their own language instead of just using Java. Weird.

Anyway, if you like what you see, there's no harm in going on to learn Java itself. It has a bad reputation but it gets the job done, and if you're competent with it you're pretty much guaranteed at least a wagie-level codemonkey job for life.
 
Considering that all my previous experience was with Python (and it’s been a while since I’ve done much with that), learning to deal with static typing is weird.
Protip (since you'll likely come across this problem): Learn about abstract classes and interfaces. You'll probably run into an issue where you'll need to write a method that can take different types of values. The pajeet way of doing it is to just write a separate method for each type. If the types are close enough in structure and functionality though, you can just write an interface that both types implement (or define an abstract class that both types inherit from) and write a single method whose parameters are typed as your interface/abstract class.

A lot of built-in types do this as well so you should read the docs when working with them. For example, ArrayList and LinkedList both implement the List interface so methods that need to work on both can just take parameters typed as List. Integers and Doubles (the classes not the primitives) both inherit from the Number abstract class so methods that need to work on both of those can just take parameters typed as Number.
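
As a rough sketch of that idea in plain Java (class and method names here are just illustrative, not from any Salesforce API):
Java:
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class PolymorphicParams {
    // One method that accepts any List implementation (ArrayList, LinkedList, ...).
    static int countNonEmpty(List<String> items) {
        int n = 0;
        for (String s : items) {
            if (s != null && !s.isEmpty()) n++;
        }
        return n;
    }

    // One method that accepts any Number subclass (Integer, Double, ...).
    static double sum(List<? extends Number> values) {
        double total = 0;
        for (Number v : values) total += v.doubleValue();
        return total;
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("x", "", "y"));
        List<String> b = new LinkedList<>(List.of("z"));
        System.out.println(countNonEmpty(a)); // 2
        System.out.println(countNonEmpty(b)); // 1

        System.out.println(sum(List.of(1, 2, 3)));  // 6.0
        System.out.println(sum(List.of(1.5, 2.5))); // 4.0
    }
}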

Alternatively, you can just use aggressive upcasting and reflection if you want the true greasy cowboy coder experience.

Godspeed. In my experience, Salesforce is a ghetto and they jew you for every line of code you write on their platform (I remember being charged per line in the log back in the day). Salesforce experience pays really well tho.
 
Well, it took me a few hours, but I managed to build a pair of triggers that I wanted. I’ve come around a bit on static typing - it at least prevents the Python issue where you’re expecting an input to be a string, but you forget to unwrap it from the list.

Their method of doing SQL (or SOQL) calls was weird. Took me 2 hours just to figure out how to get the value of a Key : Value pair.

Unfortunately, now I have to build tests for them before Salesforce lets me deploy, which is something I’ve never done in anything. I know they work in live-fire situations, but I’m either firing things in the wrong order or something, and my tests come back null. I wound up accidentally creating a test that comes back valid if nothing happens, and it accepted it as valid code coverage, so I may use it for now. It’s just a field update, and I wrapped enough exception code around it that, if anything, it just won’t work, but at least it won’t hang, so it’s probably fine for now.
 
I'd never heard of it before. It appears to be a Salesforce-exclusive thing. Wonder why they invented their own language instead of just using Java. Weird.

Anyway, if you like what you see, there's no harm in going on to learn Java itself. It has a bad reputation but it gets the job done, and if you're competent with it you're pretty much guaranteed at least a wagie-level codemonkey job for life.
Fuck java in the ass with the largest dick in history.

I initially learned c++ and java in both high school and college and thought it was fine.

Then I picked up C#. People hate Microsoft but C# (.NET Core) is everything good about java with 5% of the autistic boilerplate code. Who gives a shit about sorts? Microsoft has already determined the optimal sorting method for your specific scenario, and testing their algorithm will prove them right.

To add to this, my rage is mostly the result of having to learn java again on the fly after years of disuse. The project architecture is shitty by any language standard, so that doesn't help.

I was much more hesitant of Microsoft before they gave up on opposing Linux. Now they realize they can use Linux at scale in the cloud and are one of the largest contributors to the linux kernel. Their security fixes now benefit us all.
 