Programming thread

It's possible you misunderstand generics. You wrote earlier:
This isn't true, and hasn't been true historically, going back at least to ML in the 70s. Generics in Java (and I think in just about every other language with generics) are a mathematically sound type-system extension, and have nothing to do with code expansion. One property generics normally have is that the type checker can prove that a function which uses generics is type-correct without knowing anything about how it's actually instantiated. This is not true of C++, and is something that I consider to be a massive flaw in its approach to generics, one which is still waiting to be fixed with Concepts.

Furthermore, it's generally expected that type-checking of generics should always halt. Personally, I want my type-checker to halt pretty quickly. I also want it to support separate compilation, so it's not forced to re-expand templates from other modules every time I make a change in how I've used them. The approach taken by C++ here is directly responsible for its long build times. In Java, separate compilation is forced on you because it's expected that everything is hot-loadable.

Finally, in both Java and C#, the designers were upfront in saying that they wanted to support polymorphic recursion. It's simply not possible to do this by treating generics as templates: it will always just send the compiler into an infinite loop.
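A minimal Java illustration of polymorphic recursion (Nest and depth are made-up names): the recursive call instantiates the generic at Nest&lt;T&gt; rather than T, so a template expander would need an unbounded family of instantiations, while a checker that proves the generic correct once handles it fine.

```java
// Polymorphic recursion: depth() at type T calls itself at type Nest<T>.
// A template expander would need Nest<T>, Nest<Nest<T>>, ... without end;
// a type checker that verifies the generic once is done immediately.
class Nest<T> {
    Nest<Nest<T>> deeper; // each level nests the type parameter one deeper
}

class Demo {
    static <T> int depth(Nest<T> n) {
        // The recursive call is at Nest<T>, not T: this is the polymorphic part
        return n == null ? 0 : 1 + depth(n.deeper);
    }
}
```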

When generics are not implemented as code expansion, you need to think harder about how they are implemented, especially in languages like Java and C# which are supposed to support runtime type information and runtime loading of classes. Java made things somewhat harder by not putting generics into the JVM: they're erased at compile time. But the other constraints they were working with are real, and the type theory underneath is absolutely solid.
Who cares if java generics are "sound" within java's garbage type system? It's still a trashy hack for a trashy platform. Consider that you literally have to violate language rules to make a generic list type:
NOOOOO YOU CAN'T JUST CAST AN OBJECT ARRAY TO AN ARRAY OF T THAT'S UNSAFE!
1617048012638.png

NOOOO YOU CAN'T JUST INSTANTIATE AN ARRAY OF T; THAT DOESN'T EXIST!
1617048103580.png

NOOOOO YOU CAN'T JUST CAST AN OBJECT TO T THAT'S UNSAFE!
1617048207662.png

No matter what you do it'll be inherently unsafe; the only "solution" is to pass the element type into the constructor as a Class&lt;T&gt; token (note that this cannot be automated with anything like C#'s typeof(T); java generics do not exist except in the compiler's imagination) and check every input against it. Wow, cool type system!
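A sketch of that Class-token workaround (TypedList is a hypothetical name): since T is erased at runtime, the caller hands the element type in, and the token is used both to allocate the array reflectively and to check every insertion.

```java
import java.lang.reflect.Array;

// Hypothetical TypedList: the Class<T> token is the runtime stand-in for erased T.
final class TypedList<T> {
    private final Class<T> elementType;
    private T[] items;
    private int size;

    @SuppressWarnings("unchecked") // safe only because newInstance used elementType
    TypedList(Class<T> elementType, int capacity) {
        this.elementType = elementType;
        // "new T[capacity]" won't compile; reflective allocation is the workaround
        this.items = (T[]) Array.newInstance(elementType, capacity);
    }

    void add(T item) {
        // Explicit runtime check against the token; throws ClassCastException on mismatch
        items[size++] = elementType.cast(item);
    }

    T get(int index) {
        return items[index];
    }

    int size() {
        return size;
    }
}
```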
C# generics, on the other hand, suffer from none of this: you can't pull the Java trick of casting off the type qualification and filling your ArrayList&lt;Integer&gt; with strings, because in C# a List&lt;int&gt; is a List&lt;int&gt;, period. Also love Java lifting integers onto the heap just to store them; what a great waste of gc time!
I eagerly await the day C# finishes burying java and its nonsense.

All this said, I personally would love if C# added some more C++-like templating features, the ability to have a Vec<4> instead of Vec4, Vec3, Vec2 etc. would be huge —and with spans it's only a matter of allowing fixed arrays to be accessed in safe code. It'd also be great if there were more generic constraints —like where T : T operator+(T, T), T operator*(T, float), or at least add some common interfaces for things like math; having to witness pattern everything is shitty.


Ofc you need a debugger where everything is mutable, things can pass by reference, and the method of abstraction is encapsulation. That sounds like a paradigm problem with procedural languages.

Doesn't this strike you as a problem? Why do you even need such heavy refactoring tools? Why do you have to write so much boilerplate? It sounds like a problem inherent to the tools.
Lispy queer detected; dispensing swirly

Real talk though, a common dysfunctional programming claim is that managing state is tooo haaard :_( Maybe lispies are not actually the nerds they are portrayed to be?
troglodyte.jpg
Picture of italian man with computer introduced into his enclosure is unrelated


Haven't tried it yet. My current code editor is Sublime. Tree-like undo is just the obvious way to handle the problem.
This sounds like a use case for git. Tree undo is helpful sometimes, but if what you want to do is pull up something you previously got rid of, git and good habits are a more scalable solution.
 
All this said, I personally would love if C# added some more C++-like templating features, the ability to have a Vec<4> instead of Vec4, Vec3, Vec2 etc. would be huge —and with spans it's only a matter of allowing fixed arrays to be accessed in safe code. It'd also be great if there were more generic constraints —like where T : T operator+(T, T), T operator*(T, float), or at least add some common interfaces for things like math; I hate having to witness pattern everything.
My only caution against that is unnecessary use of templates causes compilation time to go through the roof, makes errors woefully obscure and makes casting very cumbersome. Templates are powerful, but when writing new code, it's really best to use concrete types unless you have a good reason.
 
My only caution against that is unnecessary use of templates causes compilation time to go through the roof, makes errors woefully obscure and makes casting very cumbersome. Templates are powerful, but when writing new code, it's really best to use concrete types unless you have a good reason.
That's fair, but for C# all I'd like is more flexible constraints to allow generics to be used in more places, or at least more core interfaces. It's shitty that you have to write a custom lerp function for every primitive, and shitty that you need distinct types for every dimension of vector, that kind of thing. Having to restate the same algorithm for functionally similar types is shitty —and the more times you restate it the higher the chance you'll subtly typo one of the variants.
MAKE ALGORITHMS GENERIC AGAIN!
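The witness pattern being complained about looks roughly like this when spelled out in Java (VectorSpace, lerp, and DOUBLES are made-up names): because the type system can't say "T has + and *", the operations get passed in alongside the values.

```java
// Hypothetical witness interface: the operations the type system can't constrain
interface VectorSpace<T> {
    T add(T a, T b);
    T scale(T a, double k);
}

class Lerp {
    // One generic lerp instead of a hand-copied variant per numeric/vector type
    static <T> T lerp(T a, T b, double t, VectorSpace<T> vs) {
        T delta = vs.add(b, vs.scale(a, -1.0)); // b - a
        return vs.add(a, vs.scale(delta, t));   // a + (b - a) * t
    }

    // The witness for plain doubles, passed explicitly at every call site
    static final VectorSpace<Double> DOUBLES = new VectorSpace<>() {
        public Double add(Double a, Double b) { return a + b; }
        public Double scale(Double a, double k) { return a * k; }
    };
}
```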
 
Lispy queer detected; dispensing swirly
I've been found out!
Real talk though, a common dysfunctional programming claim is that managing state is tooo haaard :_( Maybe lispies are not actually the nerds they are portrayed to be?
Managing state is a source of complexity. If you can control it, why not? I'm not a fan of complex type systems to manage state, because that's just another source of complexity. Pure FP is nice on paper. However, a mostly immutable program with managed state is solid and usable.
Do you see C# replacing Java in the backend world anytime soon?
 
  • Like
Reactions: Marvin
Managing state is a source of complexity. If you can control it, why not?
I don't think I've ever seen anybody disagree with that. It's when people start derailing into (((lisp))) that they lose me.

Minimizing state is important, but a little bit of state is perfectly understandable and the removal of it only begins to obfuscate the program.
Do you see C# replacing Java in the backend world anytime soon?
God I hope so. The primary barrier is legacy compatibility.
 
Minimizing state is important, but a little bit of state is perfectly understandable and the removal of it only begins to obfuscate the program.
Some languages have managed state constructs. In Java, for example, you can work with immutable data behind a mutable atomic reference with compare-and-swap semantics; that already dramatically changes your program's behavior. Managed, time-aware state is very different from pervasive state.
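A minimal Java sketch of that idiom (swap and assoc are made-up names): an AtomicReference holds an immutable map, and updates are a CAS retry loop over a pure function. AtomicReference.updateAndGet already packages this loop; it's spelled out here to show the mechanics.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

class Managed {
    // Retry loop: read the current snapshot, derive a new one with a pure
    // function, and CAS it in; on contention, retry against the fresh snapshot.
    static <T> T swap(AtomicReference<T> ref, UnaryOperator<T> f) {
        T prev, next;
        do {
            prev = ref.get();
            next = f.apply(prev);
        } while (!ref.compareAndSet(prev, next));
        return next;
    }

    // "Modify" an immutable map by copying; the original is never touched
    static Map<String, Integer> assoc(Map<String, Integer> m, String k, int v) {
        Map<String, Integer> copy = new HashMap<>(m);
        copy.put(k, v);
        return Map.copyOf(copy); // hand back an immutable snapshot
    }
}
```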
God I hope so. The primary barrier is legacy compatibility.
I was under the impression the CLR didn't have as wide an ecosystem as the JVM does. Also, design mistakes notwithstanding, doesn't the JVM have better performance under high load?
 
This sounds like a use case for git. Tree undo is helpful sometimes, but if what you want to do is pull up something you previously got rid of, git and good habits are a more scalable solution.
Because that's what I want to be doing, committing every time I want to undo something, right? It couldn't possibly be a better idea to just have my code editor keep track of what I've been doing, no, sir. I use Fossil, bee-tee-dubs.
 
I was under the impression the CLR didn't have as wide an ecosystem as the JVM does. Also, design mistakes notwithstanding, doesn't the JVM have better performance under high load?
I'm unsure about claims of high loads, but real world java applications universally have miserable responsiveness. Much of that has to do with the complete and total lack of control over contiguous and stack memory.

If by high load you mean they handle concurrency or large memory allocation better, maybe, but I've never heard anyone bring that up.

Also I think CLR runs on just about anything now.
 
  • Like
Reactions: Strange Looking Dog
I'm unsure about claims of high loads, but real world java applications universally have miserable responsiveness. Much of that has to do with the complete and total lack of control over contiguous and stack memory.

If by high load you mean they handle concurrency or large memory allocation better, maybe, but I've never heard anyone bring that up.

Also I think CLR runs on just about anything now.
Imagine dick waving about whether C# or Java is the faster language. SMDH.
 
  • Like
Reactions: Gorilla Tessellator
WRT responsiveness, the JVM has new garbage collectors which are optimized for it (ZGC), while others are optimized for throughput (Parallel) or a balance (G1).
For everything about Java being gross and the JVM being slow to start and respond, the sheer man-hours that went into the JVM polished that turd into something not just usable, but actually good.
 
  • Thunk-Provoking
Reactions: Marvin
Because that's what I want to be doing, committing every time I want to undo something, right? It couldn't possibly be a better idea to just have my code editor keep track of what I've been doing, no, sir. I use Fossil, bee-tee-dubs.
Literally just make a commit before a big change you might regret lmao! It's a good habit to get into anyway

Managing state is a source of complexity. If you can control it, why not? I'm not a fan of complex type systems to manage state, because that's just another source of complexity. Pure FP is nice on paper. However, a mostly immutable program with managed state is solid and usable.
FP is fine for glue programs or as an academic thing, but computers are imperative, and you miss out on a lot of powerful data structures by imposing this mathematically styled abstraction on everything. For example, is there any reason to implement a concurrent skip list in lisp? The implementation will have to go through so many layers of lispy abstraction that any benefits it could offer will be negated.
Do you see C# replacing Java in the backend world anytime soon?
The great replacement is already underway, and it is BASED! Microshaft has been very busy getting .Net supported on every mainstream system, and now that the framework/core/standard nonsense is being unified it'll be a lot smoother. Not to mention, things like interop are much easier in C# since you have things like spans and safe pointer math.

Some languages have managed state constructs. In Java, for example, you can work with immutable data behind a mutable atomic reference with Compare And Swap semantics, that already dramatically changes your program's behavior. Managed, time aware state is very different than pervasive state.
Immutability like that is just a feature of OOP. Can you do this in java, though?
C#:
/// <summary>Struct which can be read and written atomically without locking</summary>
/// <remarks>Avoids the gc allocations necessitated by ref records</remarks>
struct AtomicStruct<T>
    where T : struct
{
    private const uint lockMask = 0x1 << 0x1f;
    
    public T Read()
    {
        var wait = new SpinWait();
        uint before, after;
        T value;
        do // Read value checking sentry before and after
        {
            wait.SpinOnce();
            before = Volatile.Read(ref flags_); // Volatile to prevent reordering, seems to work without on my system, but this is more portable
            value = value_;
            after = Volatile.Read(ref flags_);
        } while (before != after || ((before | after) & lockMask) == lockMask);
        // Retry if sentry changed, or either was locked (a write may have been in progress)
        return value;
    }
    
    public void Write(in T value)
    {
        var wait = new SpinWait();
        uint prev, next;
        do // Acquire exclusive lock
        {
            TOP: // Continue would jump to while, not do
            wait.SpinOnce();
            prev = flags_;
            if ((prev & lockMask) == lockMask)
                goto TOP;
            next = prev | lockMask;
        } while(Interlocked.CompareExchange(ref flags_, next, prev) != prev);
        
        value_ = value;
        Interlocked.MemoryBarrier(); // Order the value store before the version/lock release below
        flags_ = unchecked((next & ~lockMask) + 1) & ~lockMask; // Clear the lock bit and bump the version to publish
    }
        
    public AtomicStruct(in T value)
    {
        flags_ = 0;
        value_ = value;
    }
        
    private T value_;
    private uint flags_;
}
 
FP is fine for glue programs or as an academic thing, but computers are imperative, and you miss out on a lot of powerful data structures by imposing this mathematically styled abstraction on everything. For example, is there any reason to implement a concurrent skip list in lisp? The implementation will have to go through so many layers of lispy abstraction that any benefits it could offer will be negated.
I'm not a purist by any means, and with Clojure being my first choice of lisp I'm a heretic among lispers. If I need a concurrent skip list I'll probably just use Java's, although in most use cases an immutable HAMT behind an atomic reference is good enough, and if it isn't I'll just use Caffeine cache.
The great replacement is already underway, and it is BASED! Microshaft has been very busy getting .Net supported on every mainstream system, and now that the framework/core/standard nonsense is being unified it'll be a lot smoother. Not to mention, things like interop are much easier in C# since you have things like spans and safe pointer math.
Microshaft officially joined the OpenJDK project last year, so I honestly don't know where they stand with regards to that.
Immutability like that is just a feature of OOP. Can you do this in java, though?
It is. Like I said, I'm not a purist. OOP is nice for writing the tools and libraries to better express yourself functionally.
Something similar to that is Clojure's atoms, but they don't retry on read; since data is immutable, there's no risk of it corrupting
You can also have transactions between references which do maintain consistency
 
I'm not a purist by any means, and with Clojure being my first choice of lisp I'm a heretic among lispers. If I need a concurrent skip list I'll probably just use Java's, although in most use cases an immutable HAMT behind an atomic reference is good enough, and if it isn't I'll just use Caffeine cache.
That's just a copy on write dictionary, with all the pluses and minuses associated. One of the big reasons you'd want a CSL is that you can safely iterate over it in order even as mutations occur. I will grant Java has a great standard library of collections, even if they are rather undermined by being in Java.
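For reference, Java's ConcurrentSkipListMap really does make that guarantee: its iterators are weakly consistent and traverse in key order, so a concurrent writer never invalidates them. A tiny sketch (joinedKeys is a made-up helper):

```java
import java.util.concurrent.ConcurrentSkipListMap;

class CslDemo {
    // Iteration stays in ascending key order and never throws
    // ConcurrentModificationException, even if another thread mutates mid-loop.
    static String joinedKeys(ConcurrentSkipListMap<Integer, String> map) {
        StringBuilder sb = new StringBuilder();
        for (Integer k : map.keySet()) {
            sb.append(k).append(' ');
        }
        return sb.toString().trim();
    }
}
```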

It is. Like I said, I'm not a purist. OOP is nice for writing the tools and libraries to better express yourself functionally.
Something similar to that is Clojure's atoms, but they don't retry on read; since data is immutable, there's no risk of it corrupting
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Atom.java You can also have transactions between references which do maintain consistency
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Ref.java
I have to say it really does all look much better in clojure as opposed to its Java form, wow, nice moves java 🤮. I was going to ask how you'd even use this, but this looks fine. Still, there are a lot of costs hidden in there. For starters, each of those refs is like ten gc allocations just on its own; at least the transaction objects are reused, but that's still a lot of overhead. The whole reason for AtomicStruct is that it imposes no gc overhead and no locking overhead: a cheap, safe primitive.

What I'd really love to see is better available hardware transaction capabilities
1617138236350.png
This shit sounds really cool, and could have really changed up the way we handle concurrency, but sadly all we have now is some shitty Intel extensions that are universally disabled due to their massive security issues, gg Intel you did it again!
That said I bet hardware transactions would suffer in most languages due to the abundance of pointers and memory fragmentation therein
 
That's just a copy on write dictionary, with all the pluses and minuses associated. One of the big reasons you'd want a CSL is that you can safely iterate over it in order even as mutations occur. I will grant Java has a great standard library of collections, even if they are rather undermined by being in Java.


I have to say it really does all look much better in clojure as opposed to its Java form, wow, nice moves java 🤮. I was going to ask how you'd even use this, but this looks fine. Still, there are a lot of costs hidden in there. For starters, each of those refs is like ten gc allocations just on its own; at least the transaction objects are reused, but that's still a lot of overhead. The whole reason for AtomicStruct is that it imposes no gc overhead and no locking overhead: a cheap, safe primitive.

What I'd really love to see is better available hardware transaction capabilities
This shit sounds really cool, and could have really changed up the way we handle concurrency, but sadly all we have now is some shitty Intel extensions that are universally disabled due to their massive security issues, gg Intel you did it again!
That said I bet hardware transactions would suffer in most languages due to the abundance of pointers and memory fragmentation therein
It's no secret that processors are a fuck. They aren't even sequential machines, they're Out Of Order machines, with speculative execution and what have you. If they were sequential we wouldn't be in half of that mess.
RE CSL: With immutable collections there's no risk of them changing under your feet, so you can safely iterate over an immutable collection behind an atom even if it's modified:
  1. Alice derefs an atom, gets a pointer to an immutable object
  2. Alice starts iterating over the object
  3. Bob derefs the atom as well, also refers to the same object
  4. Bob creates a new object (for example, by "adding" a key to a hash map), no longer has reference to original object
  5. Bob sets the atom to refer to the new object
  6. Alice is still iterating over the old object, nothing changes for her until she derefs the atom again.
The world model Clojure enforces is that "looking" at the world is free, and once you have, you get an immutable snapshot of it. Makes writing concurrent programs really easy.
You're right it isn't exactly light on allocations, but generally the performance is good enough. You can optimize that if you must, but there are plenty of things to fix and optimize in Clojure code before you reach "too many allocations while doing in-memory transactions" territory.
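The Alice-and-Bob sequence above can be sketched in Java with an AtomicReference standing in for the atom (AtomDemo and conj are made-up names): once Alice holds the snapshot, Bob's swap cannot disturb it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class AtomDemo {
    // The "atom": a mutable reference to an immutable value
    static final AtomicReference<List<String>> atom =
            new AtomicReference<>(List.of("a", "b"));

    // Bob's "add": builds a new immutable list; the old one is untouched
    static List<String> conj(List<String> old, String v) {
        List<String> copy = new ArrayList<>(old);
        copy.add(v);
        return List.copyOf(copy);
    }
}
```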
 
  • Like
Reactions: Marvin
Yeah, but we're talking about undo mess-ups here. That's too fine-grained to deal with via commits.
I guess? Seems kind of silly though, if you're genuinely afraid of losing significant progress between commits, maybe it's time for a commit. If it's not that significant, what's the big deal?
This message brought to you by git gang; use it or you'll lose it

It's no secret that processors are a fuck. They aren't even sequential machines, they're Out Of Order machines, with speculative execution and what have you. If they were sequential we wouldn't be in half of that mess.
Reordering isn't even half the nightmare, different cores having out of sync perceptions of memory is where the beast dwells. Good thing they aren't sequential though, it would basically kill multithreaded performance, and with no multithreading we wouldn't even be talking about this issue in the first place —we'd instead be talking about Intel's newest incredible melting processor, still struggling to somehow up clock speed

Bob creates a new object (for example, by "adding" a key to a hash map), no longer has reference to original object
See though, he doesn't just create a new object, he recreates all the objects in the tree above the replaced nodes (at minimum)
The world model Clojure enforces is that "looking" at the world is free, and once you have, you get an immutable snapshot of it. Makes writing concurrent programs really easy.
You're right it isn't exactly light on allocations, but generally the performance is good enough. You can optimize that if you must, but there are plenty of things to fix and optimize in Clojure code before you reach "too many allocations while doing in-memory transactions" territory.
It's fair to say GC doesn't impact throughput so much, but that assumes a non-real time application, and GC pauses cannot be avoided except by minimizing GC usage. I suppose though a fully immutable language could implement a fully concurrent non-compacting gc ...which would basically negate my concerns lmao. Is this a thing already? I feel like it's too tempting for nobody to have done it, though I'll note a lack of compaction poses its own performance problems.
 
Reordering isn't even half the nightmare, different cores having out of sync perceptions of memory is where the beast dwells. Good thing they aren't sequential though, it would basically kill multithreaded performance, and with no multithreading we wouldn't even be talking about this issue in the first place —we'd instead be talking about Intel's newest incredible melting processor, still struggling to somehow up clock speed
I sincerely wish more focus was put on cooperative multithreading over preemptive. Most coders don't even know what a coroutine is. They're unfortunately easy to make sloppy code with, but applied correctly, the difference in code clarity is life changing.
 
Reordering isn't even half the nightmare, different cores having out of sync perceptions of memory is where the beast dwells. Good thing they aren't sequential though, it would basically kill multithreaded performance, and with no multithreading we wouldn't even be talking about this issue in the first place —we'd instead be talking about Intel's newest incredible melting processor, still struggling to somehow up clock speed
We're stuck in a future where processors and compilers have co-evolved while the languages have remained the same. Can you imagine what a processor optimized for Erlang or Java would have looked like?
See though, he doesn't just create a new object, he recreates all the objects in the tree above the replaced nodes (at minimum)
Yeah, I kinda elided that for brevity. The whole path to the node is copied, but the TLDR is that when Bob allocates IPersistentMap h1 = h.assoc(k, v), h1 is a new object. The fact that it shares some objects with the original h is incidental, an optimization.
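That sharing can be shown in miniature with a hypothetical persistent stack (PStack is a made-up name): push allocates exactly one new node and embeds the entire old chain as its tail, untouched.

```java
// Minimal persistent stack: structural sharing in its simplest form.
// "push" never copies; the old stack becomes the tail of the new one.
record PStack<T>(T head, PStack<T> tail) {
    PStack<T> push(T value) {
        return new PStack<>(value, this);
    }
}
```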
It's fair to say GC doesn't impact throughput so much, but that assumes a non-real time application, and GC pauses cannot be avoided except by minimizing GC usage. I suppose though a fully immutable language could implement a fully concurrent non-compacting gc ...which would basically negate my concerns lmao. Is this a thing already? I feel like it's too tempting for nobody to have done it, though I'll note a lack of compaction poses its own performance problems.
From experiments I can tell you GC does impact throughput, both in the naive case and when your cores are saturated. As far as real time applications are concerned, though, I'd avoid a GCed language altogether; seems like asking for trouble. If you have soft real time requirements, i.e. pauses under a threshold, some new GCs become viable. I think ZGC is fully concurrent, or close to it. Azul has a proprietary GC called C4 which is fully concurrent. I don't remember how they handle compaction.
 