Shoggoth
kiwifarms.net
- Joined
- Aug 9, 2019
> Oh no no, the autocompletion IS the Devil. Toolchain hiding is just a minor demon in comparison.
TDD stands for Tab Driven Development
Who cares if java generics are "sound" within java's garbage type system? It's still a trashy hack for a trashy platform. Consider that you literally have to violate language rules to make a generic list type;

It's possible you misunderstand generics. You wrote earlier:
This isn't true, and hasn't been true historically, going back at least to ML in the 70s. Generics in Java (and I think in just about every other language with generics) are a mathematically sound type-system extension, and have nothing to do with code expansion. One property generics normally have is that the type checker can prove that a function which uses generics is type-correct without knowing anything about how it's actually instantiated. This is not true of C++, and is something that I consider to be a massive flaw in its approach to generics, one which is still waiting to be fixed with Concepts.
Furthermore, it's generally expected that type-checking of generics should always halt. Personally, I want my type-checker to halt pretty quickly. I also want it to support separate compilation, so it's not forced to re-expand templates from other modules every time I make a change in how I've used them. The approach taken by C++ here is directly responsible for its long build times. In Java, separate compilation is forced on you because it's expected that everything is hot-loadable.
Finally, in both Java and C#, the designers were upfront in saying that they wanted to support polymorphic recursion. It's simply not possible to do this by treating generics as templates: it will always just send the compiler into an infinite loop.
When generics are not implemented as code expansion, you need to think harder about how they are implemented, especially in languages like Java and C# which are supposed to support runtime type information and runtime loading of classes. Java made things somewhat harder by not putting generics into the JVM. But the other constraints they were working with are real constraints on absolutely solid type theory.
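Since both sides keep circling erasure, here is a minimal sketch of the "generics exist only in the compiler's imagination" point in Java. The class and method names (`ErasureDemo`, `smuggle`) are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Sneak a String into a List<Integer> through a raw reference;
    // erasure means nothing checks the add() at runtime.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static List<Integer> smuggle() {
        List<Integer> ints = new ArrayList<>();
        List raw = ints;
        raw.add("not an int"); // compiles with an unchecked warning, runs fine
        return ints;
    }

    public static void main(String[] args) {
        List<Integer> ints = smuggle();
        System.out.println(ints.size()); // the string really is in there
        try {
            Integer i = ints.get(0); // compiler-inserted checkcast blows up here
            System.out.println(i);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException at the use site");
        }
    }
}
```

The type checker proved `smuggle`'s generic code sound in isolation; the failure only appears at the use site, which is exactly the separate-compilation trade-off described above.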
typeof(T); java generics do not exist except in the compiler's imagination) and check every input against it. Wow, cool type system retards! ArrayList<Integer> with strings because a List<int> is a List<int> period. Also love lifting integers on to the heap to store them, it's a great waste of gc time!

All this said, I personally would love if C# added some more C++-like templating features; the ability to have a Vec<4> instead of Vec4, Vec3, Vec2 etc. would be huge, and with spans it's only a matter of allowing fixed arrays to be accessed in safe code. It'd also be great if there were more generic constraints, like where T : T operator+(T, T), T operator*(T, float), or at least add some common interfaces for things like math; having to witness pattern everything is shitty.

> Ofc you need a debugger where everything is mutable, things can pass by reference, and the method of abstraction is encapsulation. That sounds like a paradigm problem with procedural languages.
Lispy queer detected; dispensing swirly
Doesn't this strike you as a problem? Why do you even need such heavy refactoring tools? Why do you have to write so much boilerplate? It sounds like a problem inherent to the tools.
> Haven't tried it yet. My current code editor is Sublime. Tree-like undo is just the obvious way to handle the problem.
This sounds like a use case for git. Tree undo is helpful sometimes, but if what you want to do is pull up something you previously got rid of, git and good habits are a more scalable solution.
> All this said, I personally would love if C# added some more C++-like templating features; the ability to have a Vec<4> instead of Vec4, Vec3, Vec2 etc. would be huge, and with spans it's only a matter of allowing fixed arrays to be accessed in safe code. It'd also be great if there were more generic constraints, like where T : T operator+(T, T), T operator*(T, float), or at least add some common interfaces for things like math; I hate having to witness pattern everything.
My only caution against that is unnecessary use of templates causes compilation time to go through the roof, makes errors woefully obscure and makes casting very cumbersome. Templates are powerful, but when writing new code, it's really best to use concrete types unless you have a good reason.
> My only caution against that is unnecessary use of templates causes compilation time to go through the roof, makes errors woefully obscure and makes casting very cumbersome. Templates are powerful, but when writing new code, it's really best to use concrete types unless you have a good reason.
That's fair, but for C# all I'd like is more flexible constraints to allow generics to be used in more places, or at least more core interfaces. It's shitty that you have to write a custom lerp function for every primitive, and shitty that you need distinct types for every dimension of vector, that kind of thing. Having to restate the same algorithm for functionally similar types is shitty, and the more times you restate it the higher the chance you'll subtly typo one of the variants.
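For anyone unfamiliar, the "witness pattern" being complained about looks like this: with no operator constraints, a generic lerp has to take its arithmetic as an explicit argument. This is a hedged sketch; the names (`Arith`, `WitnessDemo`, `DOUBLES`) are made up for illustration:

```java
// Without operator constraints, generic math needs an explicit "witness"
// object that carries the arithmetic for each concrete type.
interface Arith<T> {
    T add(T a, T b);
    T scale(T a, double k);
}

public class WitnessDemo {
    static <T> T lerp(T a, T b, double t, Arith<T> ops) {
        // a*(1-t) + b*t, spelled out through the witness
        return ops.add(ops.scale(a, 1 - t), ops.scale(b, t));
    }

    // One witness instance per primitive type you want to support
    static final Arith<Double> DOUBLES = new Arith<Double>() {
        public Double add(Double a, Double b) { return a + b; }
        public Double scale(Double a, double k) { return a * k; }
    };

    public static void main(String[] args) {
        System.out.println(lerp(0.0, 10.0, 0.25, DOUBLES)); // 2.5
    }
}
```

The algorithm is written once, but every type still needs its own hand-written witness, which is the boilerplate the post is objecting to.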
> Lispy queer detected; dispensing swirly
I've been found out!
> Real talk though, a common dysfunctional programming claim is that managing state is tooo haaard
> maybe lispies are not actually the nerds they are portrayed to be, but rather pea brained mudblood monkey men?
Managing state is a source of complexity. If you can control it, why not? I'm not a fan of complex type systems to manage state, because that's just another source of complexity. Pure FP is nice on paper. However a mostly immutable program with managed states is solid and usable.
> Managing state is a source of complexity. If you can control it, why not?
I don't think I've ever seen anybody disagree with that. It's when people start derailing into (((lisp))) that they lose me.
> Do you see C# replacing Java in the backend world anytime soon?
God I hope so. The primary barrier is legacy compatibility.
> Minimizing state is important, but a little bit of state is perfectly understandable and the removal of it only begins to obfuscate the program.
Some languages have managed state constructs. In Java, for example, you can work with immutable data behind a mutable atomic reference with Compare And Swap semantics; that already dramatically changes your program's behavior. Managed, time aware state is very different than pervasive state.
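A minimal sketch of the "immutable data behind a mutable atomic reference" construct described here: readers grab a snapshot for free, writers copy, modify, and CAS. The class and method names (`ManagedState`, `put`) are made up for illustration:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class ManagedState {
    // One mutable cell; everything it points at is immutable.
    static final AtomicReference<Map<String, Integer>> state =
            new AtomicReference<>(Collections.emptyMap());

    static void put(String k, int v) {
        for (;;) { // classic CAS retry loop
            Map<String, Integer> old = state.get();
            Map<String, Integer> next = new HashMap<>(old); // copy, never mutate old
            next.put(k, v);
            if (state.compareAndSet(old, Collections.unmodifiableMap(next)))
                return; // published atomically; otherwise someone raced us, retry
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> snapshot = state.get(); // cheap, consistent view
        put("x", 1);
        // the old snapshot is untouched; the cell now holds a new map
        System.out.println(snapshot.size() + " " + state.get().size());
    }
}
```

Any thread holding the old snapshot keeps a consistent view forever; only the reference itself is ever contended.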
> God I hope so. The primary barrier is legacy compatibility.
I was under the impression the CLR didn't have as wide an ecosystem as the JVM. Also, design mistakes notwithstanding, doesn't the JVM have better performance for high loads?
> This sounds like a use case for git. Tree undo is helpful sometimes, but if what you want to do is pull up something you previously got rid of, git and good habits are a more scalable solution.
Because that's what I want to be doing, committing every time I want to undo something, right? It couldn't possibly be a better idea to just have my code editor keep track of what I've been doing, no, sir. I use Fossil, bee-tee-dubs.
> I was under the impression the CLR didn't have as wide an ecosystem as the JVM. Also, design mistakes notwithstanding, doesn't the JVM have better performance for high loads?
I'm unsure about claims of high loads, but real world java applications universally have miserable responsiveness. Much of that has to do with the complete and total lack of control over contiguous and stack memory.
> I'm unsure about claims of high loads, but real world java applications universally have miserable responsiveness. Much of that has to do with the complete and total lack of control over contiguous and stack memory.
Imagine dick waving about whether C# or Java is the faster language. SMDH.
If by high load you mean they handle concurrency or large memory allocation better, maybe, but I've never heard anyone bring that up.
Also I think CLR runs on just about anything now.
> Because that's what I want to be doing, committing every time I want to undo something, right? It couldn't possibly be a better idea to just have my code editor keep track of what I've been doing, no, sir. I use Fossil, bee-tee-dubs.
Literally just make a commit before a big change you might regret lmao! It's a good habit to get into anyway
> Managing state is a source of complexity. If you can control it, why not? I'm not a fan of complex type systems to manage state, because that's just another source of complexity. Pure FP is nice on paper. However a mostly immutable program with managed states is solid and usable.
FP is fine for glue programs or as an academic thing, but computers are imperative, and you miss out on a lot of powerful data structures by imposing this mathematically styled abstraction on everything. For example, is there any reason to implement a concurrent skip list in lisp? The implementation will have to go through so many layers of lispy abstraction any benefits it could offer will be negated.
> Do you see C# replacing Java in the backend world anytime soon?
The great replacement is already underway, and it is BASED! Microshaft has been very busy getting .Net supported on every mainstream system, and now that the framework/core/standard nonsense is being unified it'll be a lot smoother. Not to mention, things like interop are much easier in C# since you have things like spans and safe pointer math.
> Some languages have managed state constructs. In Java, for example, you can work with immutable data behind a mutable atomic reference with Compare And Swap semantics, that already dramatically changes your program's behavior. Managed, time aware state is very different than pervasive state.
Immutability like that is just a feature of OOP, can you do this in java though?
using System.Threading;

/// <summary>Struct which can be read and written atomically without locking</summary>
/// <remarks>Avoids the gc allocations necessitated by ref records</remarks>
struct AtomicStruct<T>
    where T : struct
{
    private const uint lockMask = 0x1u << 0x1f; // Top bit flags a write in progress

    public T Read()
    {
        var wait = new SpinWait();
        uint before, after;
        T value;
        do // Read the value, checking the sentry before and after
        {
            wait.SpinOnce();
            before = Volatile.Read(ref flags_); // Volatile to prevent reordering; seems to work without on my system, but this is more portable
            value = value_;
            after = Volatile.Read(ref flags_);
        } while (before != after || ((before | after) & lockMask) == lockMask);
        // Retry if the sentry changed, or if either read saw the lock bit (a write may have been in progress)
        return value;
    }

    public void Write(in T value)
    {
        var wait = new SpinWait();
        uint prev, next;
        do // Acquire the exclusive lock
        {
        TOP: // Continue would jump to the while, not the do
            wait.SpinOnce();
            prev = flags_;
            if ((prev & lockMask) == lockMask)
                goto TOP;
            next = prev | lockMask;
        } while (Interlocked.CompareExchange(ref flags_, next, prev) != prev);
        value_ = value;
        // Release: clear the lock bit and bump the sentry; Volatile.Write keeps the value_ store ordered before the unlock
        Volatile.Write(ref flags_, unchecked((next & ~lockMask) + 1) & ~lockMask);
        Interlocked.MemoryBarrier(); // Publish results
    }

    public AtomicStruct(in T value)
    {
        flags_ = 0;
        value_ = value;
    }

    private T value_;
    private uint flags_;
}
> FP is fine for glue programs or as an academic thing, but computers are imperative, and you miss out on a lot of powerful data structures by imposing this mathematically styled abstraction on everything. For example, is there any reason to implement a concurrent skip list in lisp? The implementation will have to go through so many layers of lispy abstraction any benefits it could offer will be negated.
I'm not a purist by any means, and with Clojure being my first choice of lisp I'm a heretic among lispers. If I need a concurrent skip list I'll probably just use Java's, although in most use cases an immutable HAMT behind an atomic reference is good enough, and if it isn't I'll just use Caffeine cache.
> The great replacement is already underway, and it is BASED! Microshaft has been very busy getting .Net supported on every mainstream system, and now that the framework/core/standard nonsense is being unified it'll be a lot smoother. Not to mention, things like interop are much easier in C# since you have things like spans and safe pointer math.
Microshaft officially joined the OpenJDK project last year, so I honestly don't know where they stand with regards to that.
> Immutability like that is just a feature of OOP, can you do this in java though?
It is. Like I said, I'm not a purist. OOP is nice for writing the tools and libraries to better express yourself functionally.
> I'm not a purist by any means, and with Clojure being my first choice of lisp I'm a heretic among lispers. If I need a concurrent skip list I'll probably just use Java's, although in most use cases an immutable HAMT behind an atomic reference is good enough, and if it isn't I'll just use Caffeine cache.
That's just a copy on write dictionary, with all the pluses and minuses associated. One of the big reasons you'd want a CSL is that you can safely iterate over it in order even as mutations occur. I will grant Java has a great standard library of collections, even if they are rather undermined by being in Java.
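The "safely iterate in order even as mutations occur" property is easy to demonstrate against Java's own collections; a sketch, with made-up names (`CslDemo` and its methods), contrasting ConcurrentSkipListMap's weakly consistent iterators with TreeMap's fail-fast ones:

```java
import java.util.ConcurrentModificationException;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class CslDemo {
    static String iterateWhileInserting() {
        ConcurrentSkipListMap<Integer, String> m = new ConcurrentSkipListMap<>();
        m.put(1, "a");
        m.put(3, "c");
        StringBuilder seen = new StringBuilder();
        for (Integer k : m.keySet()) { // weakly consistent, always in key order
            seen.append(k);
            m.put(2, "b"); // insert mid-iteration: no exception, order preserved
        }
        return seen + "/" + m.size();
    }

    static boolean treeMapThrows() {
        TreeMap<Integer, String> m = new TreeMap<>();
        m.put(1, "a");
        m.put(3, "c");
        try {
            for (Integer k : m.keySet())
                m.put(2, "b"); // fail-fast iterator detects the modification
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(iterateWhileInserting());
        System.out.println(treeMapThrows());
    }
}
```

Whether the mid-iteration insert becomes visible to the iterator is deliberately unspecified (weak consistency), but the traversal never throws and stays in key order, which is the property a copy-on-write dictionary can't give you without copying the whole thing per write.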
> It is. Like I said, I'm not a purist. OOP is nice for writing the tools and libraries to better express yourself functionally.
I have to say it really all does look much better in Clojure as opposed to its Java form, wow, nice moves java
Something similar to that is Clojure's atoms, but they don't retry on read; since data is immutable there's no risk of it corrupting.
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Atom.java
You can also have transactions between references, which do maintain consistency:
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Ref.java
> That's just a copy on write dictionary, with all the pluses and minuses associated. One of the big reasons you'd want a CSL is that you can safely iterate over it in order even as mutations occur. I will grant Java has a great standard library of collections, even if they are rather undermined by being in Java.
It's no secret that processors are a fuck. They aren't even sequential machines, they're Out Of Order machines, with speculative execution and what have you. If they were sequential we wouldn't be in half of that mess.
> I have to say it really all does look much better in Clojure as opposed to its Java form, wow, nice moves java
I was going to ask how you'd even use this, but this looks fine. Still, there's a lot of costs hidden in there. For starters, each of those refs is like ten gc allocations just on its own; at least the transaction objects are reused, but that's still a lot of overhead. The whole reason for AtomicStruct is it imposes no gc overhead and no locking overhead: a cheap, safe primitive.
What I'd really love to see is better available hardware transaction capabilities
This shit sounds really cool, and could have really changed up the way we handle concurrency, but sadly all we have now is some shitty Intel extensions that are universally disabled due to their massive security issues, gg Intel you did it again!
That said I bet hardware transactions would suffer in most languages due to the abundance of pointers and memory fragmentation therein
derefs an atom, gets a pointer to an immutable object

> Literally just make a commit before a big change you might regret lmao! It's a good habit to get into anyway
literally just use an editor with tree undo lmao! it's a nice feature for an editor to have anyway
> Literally just make a commit before a big change you might regret lmao! It's a good habit to get into anyway
yeah but we're talking about undo mess-ups here. That's too fine-grained to deal with via commits.
> yeah but we're talking about undo mess-ups here. That's too fine-grained to deal with via commits.
I guess? Seems kind of silly though, if you're genuinely afraid of losing significant progress between commits, maybe it's time for a commit. If it's not that significant, what's the big deal?
> It's no secret that processors are a fuck. They aren't even sequential machines, they're Out Of Order machines, with speculative execution and what have you. If they were sequential we wouldn't be in half of that mess.
Reordering isn't even half the nightmare, different cores having out of sync perceptions of memory is where the beast dwells. Good thing they aren't sequential though, it would basically kill multithreaded performance, and with no multithreading we wouldn't even be talking about this issue in the first place; we'd instead be talking about Intel's newest incredible melting processor, still struggling to somehow up clock speed
> Bob creates a new object (for example, by "adding" a key to a hash map), no longer has reference to original object
See though, he doesn't just create a new object, he recreates all the objects in the tree above the replaced nodes (at minimum)
> The world model Clojure enforces is that "looking" at the world is free, and once you have, you get an immutable snapshot of it. Makes writing concurrent programs really easy.
It's fair to say GC doesn't impact throughput so much, but that assumes a non-real time application, and GC pauses cannot be avoided except by minimizing GC usage. I suppose though a fully immutable language could implement a fully concurrent non-compacting gc ...which would basically negate my concerns lmao. Is this a thing already? I feel like it's too tempting for nobody to have done it, though I'll note a lack of compaction poses its own performance problems.
You're right it isn't exactly light on allocations, but generally, the performance is good enough. You can optimize that if you must, but generally there are plenty of things to fix and optimize in Clojure code before you reach "too many allocations while doing in-memory transactions" territory.
> Reordering isn't even half the nightmare, different cores having out of sync perceptions of memory is where the beast dwells. Good thing they aren't sequential though, it would basically kill multithreaded performance, and with no multithreading we wouldn't even be talking about this issue in the first place; we'd instead be talking about Intel's newest incredible melting processor, still struggling to somehow up clock speed
I sincerely wish more focus was put on cooperative multithreading over preemptive. Most coders don't even know what a coroutine is. They're unfortunately easy to make sloppy code with, but applied correctly, the difference in code clarity is life changing.
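For the coroutine-curious, here is a toy sketch of the cooperative idea: each "coroutine" is just an Iterator that does one step per next() and then voluntarily yields, so control only changes hands at well-defined points and no locks are needed. The names (`Coop`, `run`) are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

public class Coop {
    static String run(List<Iterator<String>> tasks) {
        Deque<Iterator<String>> ready = new ArrayDeque<>(tasks);
        StringBuilder trace = new StringBuilder();
        while (!ready.isEmpty()) {
            Iterator<String> t = ready.pollFirst();
            if (t.hasNext()) {
                trace.append(t.next()).append(' '); // run one step, then yield
                ready.addLast(t); // round-robin: back of the queue
            } // exhausted coroutines simply drop out
        }
        return trace.toString().trim();
    }

    public static void main(String[] args) {
        // Steps interleave deterministically instead of racing:
        System.out.println(run(List.of(
                List.of("a1", "a2").iterator(),
                List.of("b1", "b2", "b3").iterator())));
    }
}
```

The scheduling is fully deterministic because nothing is ever preempted mid-step, which is exactly the clarity win (and also why one badly-behaved task can starve everyone, the sloppiness hazard mentioned above).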
> Reordering isn't even half the nightmare, different cores having out of sync perceptions of memory is where the beast dwells. Good thing they aren't sequential though, it would basically kill multithreaded performance, and with no multithreading we wouldn't even be talking about this issue in the first place; we'd instead be talking about Intel's newest incredible melting processor, still struggling to somehow up clock speed
We're stuck in a future where processors and compilers have co-evolved while the languages have remained the same. Can you imagine what a processor optimized for Erlang or Java would have looked like?
> See though, he doesn't just create a new object, he recreates all the objects in the tree above the replaced nodes (at minimum)
Yeah, I kinda elided for brevity. All the path to the node is copied, but the TLDR is that when Bob allocates IPersistentMap h1 = h.assoc(k, v), h1 is a new object. The fact that it shares some objects with the original h is incidental, an optimization.

> It's fair to say GC doesn't impact throughput so much, but that assumes a non-real time application, and GC pauses cannot be avoided except by minimizing GC usage. I suppose though a fully immutable language could implement a fully concurrent non-compacting gc ...which would basically negate my concerns lmao. Is this a thing already? I feel like it's too tempting for nobody to have done it, though I'll note a lack of compaction poses its own performance problems.
From experiments I can tell you GC does impact throughput, both for the naive case and when your cores are saturated. As far as real time applications are concerned, though, I'd avoid a GCed language altogether; seems like asking for trouble. If you have soft real time requirements, i.e. pauses under a threshold, some new GCs become viable. I think ZGC is fully concurrent, or close to it. Azul has a proprietary GC called C4 which is fully concurrent. I don't remember how they handle compaction.
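To make the structural-sharing point concrete without dragging in Clojure's HAMT, here is a tiny persistent list sketch (names `Persist`, `Node`, `cons` are made up): "adding" returns a brand new head object, and the fact that it shares its entire tail with the original is, as said above, incidental, an optimization.

```java
public class Persist {
    // Immutable node: once built, head and tail never change.
    record Node(int head, Node tail) {}

    static Node cons(int v, Node tail) { return new Node(v, tail); }

    public static void main(String[] args) {
        Node h = cons(2, cons(1, null));
        Node h1 = cons(3, h); // new object; h itself is untouched
        // h1 shares its whole tail with h by reference, no copying
        System.out.println(h1.head() + " " + (h1.tail() == h) + " " + h.head());
    }
}
```

In a persistent map only the path from the root to the changed node gets copied; everything off that path is shared by reference exactly like the tail here, which is why "Bob creates a new object" and "he recreates the path" are both true.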