Programming thread

Does anyone here know of any good places to discuss programming that aren't:
  1. Full of bitter autists (regular autists are fine)
  2. Full of trannies
  3. Full of the kind of people who care about Codes of Conduct
  4. Full of people who speak horrible English
Aside from this wonderful thread of course.

I may be a programmer, but dealing with fellow programmers has made me hate computers and anyone who uses them for any reason. I don't know why it's such a uniquely awful community.
 
Does anyone here know of any good places to discuss programming that aren't:
  1. Full of bitter autists (regular autists are fine)
  2. Full of trannies
  3. Full of the kind of people who care about Codes of Conduct
  4. Full of people who speak horrible English
Aside from this wonderful thread of course.

I may be a programmer, but dealing with fellow programmers has made me hate computers and anyone who uses them for any reason. I don't know why it's such a uniquely awful community.
Who would have thought that shoving a bunch of emotionally stunted social outcasts into a room would make for an unpleasant experience.

I'm really not aware of any formal ones. There's a bunch of techies on the fediverse, but it's not an organized place to talk about stuff.

Maybe somebody should try wrangling them.
 
I don't know about type theory, but in conventional languages, you'd just make the function's return type something that represents either
  • the future computation result
    (e.g. std::future<T> in C++, or something named along the lines of Promise or AsyncResult in other languages/frameworks),
  • or the asynchronous computation itself
    (e.g. Task<T> in C#).
In C++ that's a normal type that the standard library happens to define, but you could also have defined it yourself (and simply documented that this is what it means).

In C#, it's actually integrated into the language to some extent, with the async/await keywords allowing you to call and chain Task<T>-returning functions in a way that makes asynchronous code more concise and readable.
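The pattern above can be sketched in Python with concurrent.futures (slow_square is a made-up example, not taken from any of the languages mentioned):

```python
from concurrent.futures import Future, ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)

def slow_square(x: int) -> Future:
    # The return type says "a result, eventually"; the call itself
    # comes back immediately with a handle to the pending computation.
    return pool.submit(lambda: x * x)

f = slow_square(7)       # returns a Future right away
assert f.result() == 49  # blocking happens only where the caller asks for it
```

The caller, not the callee, decides where the blocking point goes.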
The problem with this answer is the time dimension. The function can take its sweet ass time before returning a Task or Future. It can block for an hour, then return it. That is unacceptable behavior for this model.
For the first question, I can't think of a type system that embeds the execution time of functions (since blocking functions execute for an arbitrary amount of time). The types of functions are written with the assumption that they terminate. In programming languages like OCaml, a nonterminating function just has an arbitrary return type (it cannot return, so it doesn't matter). In languages like Coq, all functions must halt, so blocking is not a problem (you can't directly do IO in Coq, but you can compile Coq to OCaml, which can). The closest thing I can think of is that you can write functions that "time out" after a certain number of steps (a "step indexed" function), and then write a proof with type "given any list of arguments A to the function F, there exists a value N such that F(A) never times out" and pass that around as a function that doesn't block.
So that would be a heuristic dependent type. "This function returns in less than N steps"
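A minimal sketch of that step-indexed idea, with the "fuel" tracked as an ordinary argument (collatz_steps and OUT_OF_FUEL are illustrative names; a proof assistant would carry the bound in the type instead):

```python
# Every loop iteration consumes one unit of fuel, so evaluation always
# terminates: either with a value or with an explicit "timed out" marker.
OUT_OF_FUEL = object()

def collatz_steps(n, fuel):
    """Count Collatz steps to reach 1, or give up after `fuel` steps."""
    steps = 0
    while n != 1:
        if fuel == 0:
            return OUT_OF_FUEL
        fuel -= 1
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

assert collatz_steps(6, 100) == 8            # terminates within the budget
assert collatz_steps(27, 10) is OUT_OF_FUEL  # budget exhausted
```

The proof obligation from the post corresponds to showing that for every input there is some fuel value for which OUT_OF_FUEL is never returned.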
For the second question, from most type-theoretic standpoints functions only spit out values, so there is no point in specifying whether a function has been called or not. I assume you ask this because whether a function has been called yet is important once you consider functions that can make side effects. You can fudge side effects back in if each function takes an argument that contains the entire environment of execution and returns a (possibly modified) environment. I can't think, at least off the top of my head, of a very general way to write this, since you have a million corner cases (how many times is the first function executed? What if the first function is equal to the second function? etc.).
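That environment-threading trick looks something like this toy Python sketch (put/get and the dict-as-environment are made up for illustration):

```python
# Side effects modeled by threading an explicit environment through every
# call. A "stateful" function has shape (env, args) -> (env, result), so
# whether and when it was called is visible in the environment it returns.
def put(env, key, value):
    new_env = dict(env)  # environments are treated as immutable: copy on write
    new_env[key] = value
    return new_env, None

def get(env, key):
    return env, env[key]

env = {}
env, _ = put(env, "x", 41)
env, x = get(env, "x")
assert x == 41
```

This is essentially the state-monad encoding done by hand.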
Yeah, fuck my life, this is indeed for effects / continuations, where the two functions are the success and failure continuations.
Possible solution: if both functions take a real-world token, i.e. a representation of the entire environment, then linear-types black magic can require that it is used only once, and that its usage is in those functions. QED?
@Marvin said but I can't quote him:
In a Turing-complete language, I don't think that's possible.

If you're asking what typecheckers assign to functions that never return: in OCaml, it's an indeterminate type. So failwith is typed string -> 'a, because it never returns.
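For what it's worth, Python's type hints have the same idea under the name NoReturn; this is only an analogy to the OCaml typing above:

```python
from typing import NoReturn

def failwith(msg: str) -> NoReturn:
    # Never produces a value, so, like OCaml's 'a, the "return type"
    # is compatible with any context the call appears in.
    raise RuntimeError(msg)
```

Both are instances of a bottom-like type: since no value is ever produced, no claim about that value can be falsified.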
Why shouldn't it be possible? What's required to type a function to say that it returns "immediately", or within reasonable time?
Is there a logic system which parallels this requirement? If so, there's an equivalent type system
Does anyone here know of any good places to discuss programming that aren't:
  1. Full of bitter autists (regular autists are fine)
  2. Full of trannies
  3. Full of the kind of people who care about Codes of Conduct
  4. Full of people who speak horrible English
Aside from this wonderful thread of course.

I may be a programmer, but dealing with fellow programmers has made me hate computers and anyone who uses them for any reason. I don't know why it's such a uniquely awful community.
Urbit
 
Why shouldn't it be possible? What's required to type a function to say that it returns "immediately", or within reasonable time?
Is there a logic system which parallels this requirement? If so, there's an equivalent type system
You can probably calculate a class of expressions that return immediately, like mathematical expressions and things like that. But if there's any executable logic, I think you pretty much need to actually execute the code to determine when/if it terminates. For example, consider a simple function that sums up the numbers in an array. It probably executes nearly instantly on a modern computer, even for extremely large arrays. However, with enough memory, you can stretch that time out indefinitely.

It's basically the halting problem, I think.

With a fancier, maybe programmable type checker, like @Account mentioned, you can probably write increasingly elaborate algorithms to identify and mark execution times for some functions, but I don't think there can be a general algorithm that does it universally.

But I don't know, my theory here might be shaky.
 
oh god no

I'm gonna spend the next dozen hours or so raging at Hoon again, aren't I.
You don't need to know any Hoon to use Urbit, but if you want some insanity, good luck.
You can probably calculate a class of expressions that return immediately, like mathematical expressions and things like that. But if there's any executable logic, I think you pretty much need to actually execute the code to determine when/if it terminates. For example, consider a simple function that sums up the numbers in an array. It probably executes nearly instantly on a modern computer, even for extremely large arrays. However, with enough memory, you can stretch that time out indefinitely.

It's basically the halting problem, I think.

With a fancier, maybe programmable type checker, like @Account mentioned, you can probably write increasingly elaborate algorithms to identify and mark execution times for some functions, but I don't think there can be a general algorithm that does it universally.

But I don't know, my theory here might be shaky.
There is stuff like process and sequence calculus which may be able to model it. While this relates to the halting problem, it's a nice subset of it: you don't need to know whether the computation ever terminates, only that it returns immediately.
 
Does anyone here know of any good places to discuss programming that aren't:
  1. Full of bitter autists (regular autists are fine)
  2. Full of trannies
  3. Full of the kind of people who care about Codes of Conduct
  4. Full of people who speak horrible English
If you find any let me know.

Aside from this wonderful thread of course.
This place freaks me out. I thought I was good at cooding but @Shoggoth, @Marvin, @Considered HARMful, @Account and @ConcernedAnon make me feel like the biggest brainlet when I try (and fail) to follow along with their posts.
 
Can't you cast away const correctness?
Why is this a bad thing though? If you don't intend to do it, just don't do it.

I swear functional programmers are only happy if they're restrained in a hug box LIKE THE AUTISTS THEY ARE. Personally I enjoy how permissive C++ is, once you understand what is considered roughly "correctish" you're free to do as you please. Perhaps that's problematic if you don't trust your colleagues but what can I say? From time to time you must do your duty as their better and yank the choke chain.

There is stuff like process and sequence calculus which may be able to model it. While this relates to the halting problem, it's a nice subset of it: you don't need to know whether the computation ever terminates, only that it returns immediately.
That'd be a pretty limited definition, which'd have to exclude most loops, and perhaps even any variably sized input as @Marvin mentioned. I feel like it'd be too imprecise to be very useful. More useful perhaps would be manual annotations, as the programmer probably has a better idea of what'll generally terminate quickly.



On the past topic of C/++/# and types, my only gripe with C-likes is how they conflate type and structure, leading to ridiculous constructs such as GC heap-allocated wrapper types and poorly generalized algorithms. FP is guilty of this structural typing too, mind you, just in a less overt way.

What I long for is an entirely interface based language such that type becomes both specific, yet permissive. Not typed statically, not dynamically; but rather sufficiently.
 
This place freaks me out. I thought I was good at cooding but @Shoggoth, @Marvin, @Considered HARMful, @Account and @ConcernedAnon make me feel like the biggest brainlet when I try (and fail) to follow along with their posts.
If you want we can have a discussion about impostor syndrome, which I sometimes feel keenly. I have strangers sharing my work, calling it insightful, or cold DMing me to ask questions and for guidance, and the only thing I can think is "hell, I'm barely just blundering about and I'm mostly self-taught". And yet technically I'm probably ahead of most of my colleagues, so if I'm a half-trained crossbreed between a monkey and a jackass, what does that mean about them?
I dunno man, but it keeps me up at night sometimes.
I swear functional programmers are only happy if they're restrained in a hug box LIKE THE AUTISTS THEY ARE. Personally I enjoy how permissive C++ is, once you understand what is considered roughly "correctish" you're free to do as you please. Perhaps that's problematic if you don't trust your colleagues but what can I say? From time to time you must do your duty as their better and yank the choke chain.
You should try dynamic functional programming, it's like cooking with gas, but more fun.
That'd be a pretty limited definition, which'd have to exclude most loops, and perhaps even any variably sized input as @Marvin mentioned. I feel like it'd be too imprecise to be very useful. More useful perhaps would be manual annotations, as the programmer probably has a better idea of what'll generally terminate quickly.
I think the only way to ensure that happens is to require the function immediately returns / allocates a closure and nothing more. Can that be done?
What I long for is an entirely interface based language such that type becomes both specific, yet permissive. Not typed statically, not dynamically; but rather sufficiently.
Just take a look at Clojure and specifically, protocols
 
There is stuff like process and sequence calculus which may be able to model it. While this relates to the halting problem, it's a nice subset of it: you don't need to know whether the computation ever terminates, only that it returns immediately.
I think the only way to ensure that happens is to require the function immediately returns / allocates a closure and nothing more. Can that be done?
To me it sounds like you're trying to solve the halting problem, but phrasing it differently. The halting problem doesn't have a timescale aspect to it, so it doesn't matter whether or not you're trying to know if it's immediate. I think of the problem like knowing whether or not the sun is going to be there the next day when we are living in a perfectly stable universe with infinite resources. The only thing we as an observer can do is give our best guess and wait to see the result as it happens, because the universe we live in is fundamentally lacking in the ability to answer the question.

If that still makes you feel unsure, ask yourself two questions about your program. First: what are the semantic properties of my program? That is, what my program does, not how it's structured. Second: are those semantics non-trivial? In other words, does some program have them and some program not (a property that holds of every program, or of none, is trivially decidable)? If yes, then congratulations: someone much more devoted to this than you (and me) proved that every non-trivial semantic property of programs is undecidable. This result is known as Rice's Theorem.
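The reduction behind that claim fits in a few lines; wrap is a hypothetical name, and the point is only that a decider for the property "returns 42" would double as a halting decider:

```python
def wrap(p):
    # If we could decide the semantic property "g() returns 42", we could
    # decide whether p() halts: g returns 42 exactly when p halts.
    def g():
        p()        # loops forever iff p does
        return 42
    return g

# Sanity check with a program that obviously halts:
assert wrap(lambda: None)() == 42
```

Since halting is undecidable, so is "returns 42", and the same trick works for any non-trivial semantic property.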
 
Why shouldn't it be possible? What's required to type a function to say that it returns "immediately", or within reasonable time?
Solution to halting problem?

[EDIT] FML, ninja'd

What I long for is an entirely interface based language such that type becomes both specific, yet permissive. Not typed statically, not dynamically; but rather sufficiently.
So sorta-kinda duck typing?
I think the only way to ensure that happens is to require the function immediately returns / allocates a closure and nothing more. Can that be done?
Halfwitted approximation: C++'s constexpr, consteval.
 
This place freaks me out. I thought I was good at cooding but @Shoggoth, @Marvin, @Considered HARMful, @Account and @ConcernedAnon make me feel like the biggest brainlet when I try (and fail) to follow along with their posts.
I think that's mostly because programming is such a wide field that it's not possible to become universally good at it. It's like saying you're good at "sports". What sport? There's like a thousand of them. And no matter how sporty you are, there's always some esoteric sport you've never even heard of out there. There is nobody who's good at every sport. Even the best athletes in the world generally pick one or two and ignore the rest.

Programming is the same way. There are more languages than any reasonable person could even remember the names of, much less become proficient at. When people start talking about Haskell I just stop paying attention entirely. I don't care about Haskell, I probably never will care about Haskell, and that's okay. Ultimately the only languages that matter are the languages you regularly use.

And that's not even getting into the fact that every programming language has a million different applications which often barely overlap at all. Unless two people are using the same language and are in the same field, there's a good chance they'd have a very hard time following each other's code no matter how experienced they are.
 
Hard disagree. C++'s type system is quite strong, and const-correctness is a thing.

Just because there is an escape (compare http://www.cs.virginia.edu/~evans/cs655/readings/bwk-on-pascal.html, section 2.6), doesn't mean the type system is optional.
How in hell did people actually use this for anything? Being unable to even write general string methods seems like a complete deal breaker.

Halfwitted approximation: C++'s constexpr, consteval.
IIRC you can still put an infinite loop in a constexpr function. I presume the compiler just gives up after some step limit and reports an error, sort of like it would with a template-depth issue.

So sorta-kinda duck typing?
Kind of, but with well qualified names; so it's not a.Add(b) but rather Num:Add(a, b). Type witnesses as @Shoggoth mentioned are a good analogy.

Duck typing is complete trash and doesn't actually support generalization. Look at this stupid faggot language 👇🏿
Python:
def lerp(a, b, t):
    return a * (1 - t) + b * t

lerp("a", "b", 0) # returns "a"
lerp("a", "b", 1) # returns "b"
lerp("a", "b", 3) # returns "bbb"
lerp("a", "b", 0.3) # throws type error
Like honestly what the fuck is the point of it? If I wanted a lerp function that operates on vector space types there's no way to specify that; if I just use duck typing I get undefined behavior for a wide selection of inputs, and if I constrain the inputs then we're back to static typing.

Now that I think about it even the "If it quacks like a duck..." analogy is fucking retarded, like that's a big if buddy. Basically an admission that the paradigm provides no guarantees. Glue (eating) language.

Just take a look at Clojure and specifically, protocols
That looks like it fulfills the requirements of interface pretty well, but I'm picturing something that goes beyond interface; semantics too should be separable from type. I want to see the monolithic type broken up such that each individual concern —be they interface, structure, mutability, or even lifetime— might be addressed separately.

What is it, intrinsically, about a Dictionary<TK, TV> that makes it a heap type? Rewriting it to live on the stack changes nothing about the implementation, yet it would be considered an entirely different type. These are the same type aside from their lifetime; why can we not then represent both with a single stroke of the brush? And should I try to access an interface method of a stack type, I must first lift it to the heap so that the runtime's foolish assumptions about interfaces might be met.

Rust seems to have some of this down actually, however I'm looking for a JIT'd language and I'd also like to see the concept advanced, so I'll probably take another swing at creating such a language this coming year. We'll see how it goes this time lol
 
That looks like it fulfills the requirements of interface pretty well, but I'm picturing something that goes beyond interface; semantics too should be separable from type. I want to see the monolithic type broken up such that each individual concern —be they interface, structure, mutability, or even lifetime— might be addressed separately.
Even in Java you can already do that with interfaces and mutability, although I think mutable objects should have different interfaces, i.e. a mutable type is another type. Once you get Project Valhalla you'll also have structure support, but that's almost like saying implementations and interfaces solve that.
Protocols in Clojure are backed by Java interfaces and dispatch to a mutable lookup table for new classes, so you can extend a behavior to existing implementations, thus "solving" the expression problem.
It sounds like most of what you want to do is already possible. What are you missing? or what am I missing in your vision?
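For comparison, Python's functools.singledispatch gives a rough analog of that after-the-fact extension (the describe function is a made-up example):

```python
from functools import singledispatch

# Like a protocol's dispatch table, singledispatch lets you extend an
# operation to existing types without touching their definitions --
# one half of the expression problem.
@singledispatch
def describe(x):
    return "something"

@describe.register(int)
def _(x):
    return "an int"

@describe.register(list)
def _(x):
    return "a list"

assert describe(3) == "an int"
assert describe([]) == "a list"
```

New types and new operations can each be added without editing the other side.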
 
It sounds like most of what you want to do is already possible. What are you missing? or what am I missing in your vision?
Protocols are a huge step up to be sure and I really do wish .Net had an equivalent mechanism, but they are only part of the problem. The perfect example I think is C#'s newish ref struct types, which have the unique property that they may hold references to stack allocated objects (kind of sort of, it's complicated). They are however entirely incompatible with interfaces, because interfaces have been defined in such a way that they require potential heap lifetime. The arbitrary tying of one thing to another has had the collateral effect that they're going to create an entirely separate class of interface that can only apply to ref structs.

This isn't an isolated issue either; for the sake of avoiding waste I've had to create struct-type versions of several of the .Net standard library collections. That may sound extreme, but when you only rarely need a list for some operation (such as collecting and aggregating exceptions), it's wildly more efficient to have a struct-type list, as it produces no waste if it goes unused. Fundamentally, though, this all comes back to .Net's original sin: choosing to walk in the footsteps of Java. If I had some kind of ultimate power, everything would be a "struct", which you could simply annotate into a heap type, e.g. heap List<T>.

As to mutability, I generally agree that separate interfaces are the best solution; however, that still leaves you with the problem of passing a mutable list as immutable. Sure, you can cast it, but the receiver can then just cast it back. What are you to do? Create a separate wrapper type to obscure the mutable interface? If not a readonly qualifier, then it should be possible to irreversibly strip an interface from a reference so as to create a safe "view" of the object.
 
How in hell did people actually use this for anything? Being unable to even write general string methods seems like a complete deal breaker.
In the days of that article, Pascal was a lot less, let's say, advanced than, for example, Turbo Pascal or modern Pascals like FreePascal or Delphi (Object Pascal). Some of the issues were rectified when insane wackos started to use Pascal for real-world development instead of classroom teaching, for which it was originally devised.

As I've already stated in this thread - trying to use teaching tools for serious stuff is pretty much always asking for trouble.
Duck typing is complete trash and doesn't actually support generalization. Look at this stupid faggot language 👇🏿
Half-disagree. Python is garbage in general, but with templates in C++ (and concepts in C++20) you have, in effect, statically checked duck typing. So, as usual, the problem lies in dynamic typing. :story:
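The statically checked flavor has a rough Python analog too: spell out the operations lerp needs in a typing.Protocol, so a checker like mypy can reject bad arguments before runtime (VectorLike is a made-up name, and the checking is done by external tools, not the interpreter):

```python
from typing import Protocol

class VectorLike(Protocol):
    # The operations lerp actually needs, spelled out explicitly:
    def __mul__(self, scalar: float): ...
    def __add__(self, other): ...

def lerp(a: VectorLike, b: VectorLike, t: float):
    return a * (1 - t) + b * t

assert lerp(0.0, 10.0, 0.25) == 2.5  # floats satisfy the protocol
```

Like a C++20 concept, the protocol names the required interface instead of leaving it implicit in the function body.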
 
Does anyone here know of any good places to discuss programming that aren't:
  1. Full of bitter autists (regular autists are fine)
  2. Full of trannies
  3. Full of the kind of people who care about Codes of Conduct
  4. Full of people who speak horrible English
Aside from this wonderful thread of course.

I may be a programmer, but dealing with fellow programmers has made me hate computers and anyone who uses them for any reason. I don't know why it's such a uniquely awful community.
Misery loves company. Read Sartre.
 
Protocols are a huge step up to be sure and I really do wish .Net had an equivalent mechanism, but they are only part of the problem. The perfect example I think is C#'s newish ref struct types, which have the unique property that they may hold references to stack allocated objects (kind of sort of, it's complicated). They are however entirely incompatible with interfaces, because interfaces have been defined in such a way that they require potential heap lifetime. The arbitrary tying of one thing to another has had the collateral effect that they're going to create an entirely separate class of interface that can only apply to ref structs.

This isn't an isolated issue either; for the sake of avoiding waste I've had to create struct-type versions of several of the .Net standard library collections. That may sound extreme, but when you only rarely need a list for some operation (such as collecting and aggregating exceptions), it's wildly more efficient to have a struct-type list, as it produces no waste if it goes unused. Fundamentally, though, this all comes back to .Net's original sin: choosing to walk in the footsteps of Java. If I had some kind of ultimate power, everything would be a "struct", which you could simply annotate into a heap type, e.g. heap List<T>.

As to mutability, I generally agree that separate interfaces are the best solution; however, that still leaves you with the problem of passing a mutable list as immutable. Sure, you can cast it, but the receiver can then just cast it back. What are you to do? Create a separate wrapper type to obscure the mutable interface? If not a readonly qualifier, then it should be possible to irreversibly strip an interface from a reference so as to create a safe "view" of the object.
To quote Brian Goetz, "It's amazing Java managed to succeed in spite of getting all the defaults wrong".
Project Valhalla is supposed to fix some of that in the future with inline classes, which would be passed as value objects (stack allocated) unless you take a ref to them, which will be heap allocated. They can still participate in interfaces.
Regarding mutability, casting is for chumps. Have a static type: a function which takes a Mutable can't take an Immutable. Let the type checker blow up in your face. Immutable objects should implement toTransient and Mutable objects should implement toPersistent. Casting is a mistake.
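A toy sketch of that convention, with the class and method names simply mirroring the post (tuples and lists stand in for real persistent/transient structures):

```python
# Two distinct types, converted explicitly instead of cast: the persistent
# view has no mutating methods at all, so the type checker, not discipline,
# keeps receivers from mutating it.
class PersistentList:
    def __init__(self, items=()):
        self._items = tuple(items)   # frozen storage
    def to_transient(self):
        return TransientList(self._items)
    def __iter__(self):
        return iter(self._items)

class TransientList:
    def __init__(self, items=()):
        self._items = list(items)    # mutable storage
    def append(self, x):
        self._items.append(x)
    def to_persistent(self):
        return PersistentList(self._items)

t = PersistentList([1, 2]).to_transient()
t.append(3)
assert list(t.to_persistent()) == [1, 2, 3]
```

Each conversion copies, so no reference to the persistent object can reach the mutable storage.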
 
Regarding mutability, casting is for chumps. Have a static type. A function which takes a Mutable can't take an Immutable. Let the type checker blow up in your face. Immutable objects should implement toTransient and Mutable objects should implement toPersistent. Casting is a mistake.
What happens when you call toTransient? Does it create a new object to wrap the original, or does it internally cast the original's reference? Having to create an entirely separate object just to represent a subset of the original's functionality seems a bit daft, and if it's the latter, how is it any different?

Computers have no notion of type; rather, objects (pieces of memory) are granted structure by the operations performed on them. We say that an object has a type in order to constrain and guide the set of operations that are allowed on it, so as to stay consistent with its structure. But with a more abstract concept of type we cease talking about structure and begin talking about interface and semantics; why, though, should we lose this fundamental flexibility by presuming that an object has *a* type?

I would propose that type is found in the reference, not the object. When you cast an object reference you merely view the object by a new interface, and yet such a system is still statically typed. This enables wrapping types without type wrappers, and it enables dynamically extending object functionality without changing object type. Witnesses solve half this problem, but they also have the effect of globally linking the interface to the "base type", which is not always ideal.

Trivial example: an iota function
Code:
func MakeIota(ref int start) : Iterator<int> {
    return (ref int | new Iterator<int>{ # Cast an anonymous implementation of iterator onto start
        func MoveNext() : bool {
            ((ref int)this)++; # Retrieve original object interface and increment
            return true;
        }
    
        func GetCurrent() : int {
            return (int)this; # Retrieve original interface and deref
        }
    })start;
}

var myInt : int = 10;

const var loopEnd = 100;
var iter = MakeIota(ref myInt);    # Same object as myInt, different reference, different interface
while (iter.MoveNext()) {
    var i = iter.GetCurrent();    # Use new interface to iterate, could use foreach (steps shown for demonstration)
    if (i >= loopEnd)
        break;
    # ... Do stuff with i
}
assert(myInt == loopEnd); # Underlying object has been modified by other reference
assert(!(myInt is Iterator<int>)); # Object type hasn't been altered

A more practical example would be .Net's IValueTaskSource, which provides a relatively simple (if only 😩) way to implement a custom asynchronous waiter, without GC overhead.
C#:
private class ATVTS : IValueTaskSource // Wrapper type
{
    public void GetResult(short timeline) => parent_.GetResult(timeline);
 
    public ValueTaskSourceStatus GetStatus(short timeline) => parent_.GetStatus(timeline); // Wasted but mandatory indirection
 
    //... Not shown here, continuation handling
 
    internal ATVTS(AwaitableTimeline parent) { parent_ = parent; } // This whole thing is boilerplate
 
    private AwaitableTimeline parent_;
}

public class AwaitableTimeline // Can't implement IValueTaskSource directly, we don't want to expose the interface
{
    internal void GetResult(short timeline) {
         // Dispatch any registered continuations...
    }
 
    internal ValueTaskSourceStatus GetStatus(short timeline) {
        return CmpTimeline((ushort)timeline, timeline_) <= 0 // Check to see if our timeline has advanced past the waiting task
            ? ValueTaskSourceStatus.Succeeded
            : ValueTaskSourceStatus.Pending;
    }
 
    //...
 
    public ValueTask WaitNext() { // Wait for the next signal
        // Not shown here: continuation preparations
        return new ValueTask(vtsWrapper_, unchecked((short)(timeline_ + 1))); // Create a ValueTask with our ValueTaskSource wrapper
    }
 
    public ValueTask Signal() { // Increment the timeline, releasing corresponding waiting tasks
        unchecked { timeline_++; } // NB: Interlocked.Increment has no ushort overload; a real version would widen the field or lock
        return default; // placeholder: a full implementation would also dispatch released continuations
    }
 
    public AwaitableTimeline() {
        timeline_ = 0;
        vtsWrapper_ = new ATVTS(this);
    }

    private ushort timeline_;
    private ATVTS vtsWrapper_;
    //...
}

Ideally you'd be able to cast the functionality of IVTS onto the existing AwaitableTimeline object, no extra GC waste, no extra indirection, no restating the functions.
 