Programming thread

I disagree that the main driver for language choice should be ease of transitioning from the previous language: the driver should be the benefits gained from transitioning weighed against the cost of the transition. I fail to see any benefits in Python given this scenario.

Your argument's presumption is that there must be a transition from PHP to something. This is false. The situation is that XenForo is at risk of being lost, and it is an opportune time to transition to a better foundation than PHP. I disagree that Python forms a better foundation.

As for tutorials and Stack Overflow answers for Python, I will argue that the good information is drowned out by the contradictory postings of semi-literate retards. This is true for PHP as well, to a lesser degree, but Python's accessibility and popularity among non-programmer academics have had a detrimental effect. We must also account for the fact that the subject is already proficient in PHP.

I'd agree that the absolute best choice would be staying on PHP and redoing as little as possible. I just didn't like that nobody really pushed back on Rust for a web server when he wouldn't really see many of the pros while getting all of the cons. The whole thing devolving into language wars was just me asserting that "yes, a language used by multi-billion-dollar companies as the backbone of applications serving millions of users globally is in fact scalable in any real-world scenario".
 
I don't want to get too involved in this clash of autism, but the sentiment of "don't design your system for 10 users and scale it up; design it for 10M users then scale it down" is hardly a "premature optimization" when rebuilding an established web service that is constantly growing and subject to DDoS by hostile parties.
I consneed that if we’re considering forum software for the Farms specifically, performance is a consideration at every step of development. I was thinking more broadly, however, and including hobbyists and newbies who are still learning to solve more elementary problems.
 
I fucking love hearing about all these failed experiments - Lua, PHP, Python, whatever else - how they finally come to drop the pretense of dynamicness being the bestest feature no srsly guis, we're not some poo-poo "static" thing lol who wants to be "static" in a dynamic and fluid XXI century oh my god why do I have 20 GiB of stacktraces in my server log...
PHP and Python are great for hacking shit together, they just don't work for long-term quality software.
Obviously, some batshit insane nutjob will soon develop a new, "fully dynamic BUT SAFE (no really, trust me!)" language in a rebellion against these ossified, "static", "obsolete" technologies. And if it gains any traction, the same story will repeat itself 20-30 years down the road.
What about inferred typing?
Show me a good type system
C's.
Hah, I was just thinking about that Knuth quote.


I don't think the point of the saying is that you shouldn't try to make things efficient, but that iteration will usually give you better results than trying to presuppose your future needs and structuring everything under that assumption from the get-go.
Designing things properly will make them run faster and be more maintainable, it's not a straight trade-off. Knuth's talking about instances when you turn the whole system into spaghetti code just to eke out a 2% performance increase.
And what should happen if the checks are violated, for example when reading input from a user, a database, a file, or a malicious actor? Crash and burn? Exception thrown? How are errors handled? You can't handwave the issue away by saying "the type system doesn't allow invalid segments to exist", unless you're planning on either not taking any input at all or handling it in some "unsafe" interface block/module boundary, in which case - once again - the complexity does not disappear, it is just transferred.

It's all so tiresome.
No, there is a third option. Pseudocode:

Code:
x = getUntrustedInput();
if (isValid(x)) {   // changes the type of x
    doSomething(x);
} else {
    // x has type void in this scope
}

Alternatively:

Code:
Option x = getUntrustedInput();
if (x.isValid()) {
    doSomething(x.value);
} else {
    // x.value in this scope is a compile-time error
}
// x.value in this scope is a compile-time error too
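For concreteness, a minimal sketch of the first pattern in TypeScript (the names and the Point shape are illustrative, not from anyone's post) - a type guard is exactly a check that "changes the type of x" as far as the compiler is concerned:

Code:
type Point = { x: number; y: number };

// Type guard: if this returns true, the compiler narrows `v` to Point.
function isValid(v: unknown): v is Point {
    if (typeof v !== "object" || v === null) return false;
    const o = v as { x?: unknown; y?: unknown };
    return typeof o.x === "number" && typeof o.y === "number";
}

function doSomething(p: Point): void {
    console.log(p.x + p.y);
}

const x: unknown = JSON.parse('{"x": 1, "y": 2}'); // untrusted input
if (isValid(x)) {
    doSomething(x); // fine: x has type Point in this branch
} else {
    // x is still `unknown` here; passing it to doSomething()
    // would be a compile-time error
}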
 
Finally got a third monitor set up, about half the size of the other two. This feels like a lot, overall, but I think it's about right. I'm mostly just using it for task lists and referencing bug reports. The other two are for code/content. It's really nice not having to alt-tab so much or juggle half screens.

I guess that's one thing I couldn't articulate before. Some windows are fine to go halfsies, but whatever's on your two main monitors is NOT. So you can consolidate the splittable stuff onto one monitor, away from the windows that can't tolerate being split.
 
tbh I have no idea why we were having a fight either. Pax.
I don't think we're having a fight. I'm just an obnoxious communicator and swear profusely. :story:

And I definitely will tear apart any idea that I can't even steelman efficiently or which has gaping holes in it. This does not preclude me from having a civil discussion on another (sub)topic.

The approach I'm suggesting can be useful for OOP, imperative, and functional programming. What's nice with optional/gradual type systems is the ability to turn them on, off, and partially at different points. So, for example, at the edges of the system where you ingest data you'll probably want them on; the checks will probably be compiled as part of the constructor.
If those guarantees are satisfied, and the checks were on during testing, you can pretty calmly turn them off in production for the system "internals". Why? Because you know the constraints are satisfied for the input, and the type checker has validated that your program is correct, i.e. the constraints have been propagated throughout your code.
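A minimal sketch of that on/off split in TypeScript (illustrative only, since no language was specified): the annotations on the internals are erased at runtime and cost nothing in production, while the ingestion edge keeps an explicit runtime check.

Code:
type User = { id: number; name: string };

// Edge of the system: input is untrusted, so the runtime check stays on.
function parseUser(raw: unknown): User {
    if (typeof raw !== "object" || raw === null) {
        throw new Error("invalid user payload");
    }
    const o = raw as { id?: unknown; name?: unknown };
    if (typeof o.id !== "number" || typeof o.name !== "string") {
        throw new Error("invalid user payload");
    }
    return { id: o.id, name: o.name };
}

// Internals: no runtime checks. The compiler has already verified that
// every caller passes a well-formed User, and the annotations are gone
// by the time the code runs.
function greet(u: User): string {
    return "hello, " + u.name;
}

console.log(greet(parseUser(JSON.parse('{"id":1,"name":"bob"}'))));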
This looks completely backwards. Why have the type system if you can switch it off? This isn't what a type system is all about! Sorry, I'm not buying this handwaving fiesta anymore. I stupidly allowed myself to be dragged into discussing this idea by responding to a non sequitur, and all I'm getting is these crazy ideas.

I see people having flame wars over dynamic/static type systems and it seems backwards to me.
Well, sure, waging flame wars over a settled matter is backwards. There is no doubt anymore that static typing is essential, and the popular dynamic languages have admitted failure. That's just one bit of information about the type system, but it's the fundamental bit.

Have you ever had an opportunity to use logic programming languages like Prolog?
In passing at the university. Not a fun experience, unless one wants to stay at the university.

In your example with a polygon, I would represent segments as a 4-tuple of Cartesian and polar coordinates. This makes writing assertions about directions very easy.
No, sorry, this is insane on multiple levels. Again, in no particular order:
  • Now I'm going to use twice as much memory to store coordinates? If not, and, say, the polar coordinates are not stored but computed when and where needed, what about the performance concerns? Where is the origin of the coordinate system? Is it the polygon's centroid? How is the coordinate system updated when I want to add or remove a point in the polygon?
  • Not to mention that this all looks needlessly complex compared to the old-school imperative approach. So once again we trade serious complexity, performance, and maybe memory issues for... theoretical purity?
  • [EDIT - point added] How would one maintain consistency between the Cartesian and polar coordinates if they're actually stored somewhere? If they're not, and are computed on demand, then this question is moot. If they're actually being stored alongside the Cartesian coordinates, then I guess you can define a "constraint" enforcing consistency between the two coordinate systems. But for such a constraint to actually be correct, it would have to recurse infinitely on the other coordinate system.
  • You didn't address the point I consider important: I have a Segment type marked with an x0 < x1 constraint. How do you define a Polygon using such Segments? Is it possible? Let's say I want to traverse the perimeter of the Polygon, maybe for the purpose of drawing it. Understanding that the Polygon is a circular sequence of Segments, and assuming the constraint holds for each Segment, does that mean that it is my (the programmer's) responsibility to check, when I'm fetching the next Segment in the sequence, that I'm not holding that Segment "backwards"?
  • This is already getting out of hand, and we're still in the realm of almost trivial cases! What if I don't want the Polygon to have self-intersections?
  • What actually is a polygon? How many Segments does it need to have to be a valid Polygon? If, for example, we somehow constrain the Polygon to have at least three Segments, then how would I build the Polygon object from streaming data? Should I buffer enough data for three Segments, create them, create the Polygon from them, and only then use a generic "add Segments until data ends" approach?
I sincerely feel like I'm actually putting more brainpower into thinking about real-world implications and possible design (and problems thereof) than you. This makes me feel like I'm wasting my time and therefore a sad pepe.
Edit: an example of how this could work IRL - imagine you knew for some reason that some list can't have more than 5 elements, and code in another place in your system tries to take 6 elements from it. Why should it be a runtime exception and not a compile-time error if you 100% know it couldn't happen?
How would that actually be encoded in the type system without creating a separate type AListWithAtMost5Elements?

I'm circling back (heh) to my earlier point: maybe you have some coherent idea in mind, but for what it's worth, I'm not understanding it one bit and see only proliferating difficulties and complexity-shifting.
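To make the question concrete, the closest I can get in an existing language is a union of fixed-length tuples (TypeScript, purely as illustration) - and note that it really is a separate named type, which is exactly the proliferation I'm asking about:

Code:
// "A list with at most 5 elements" as a union of tuple types.
type AtMost5<T> =
    | []
    | [T]
    | [T, T]
    | [T, T, T]
    | [T, T, T, T]
    | [T, T, T, T, T];

const ok: AtMost5<number> = [1, 2, 3];
// const bad: AtMost5<number> = [1, 2, 3, 4, 5, 6]; // compile-time error
// ok[5];                                           // compile-time error too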
 
PHP and Python are great for hacking shit together, they just don't work for long-term quality software.
I agree.
What about inferred typing?
Like OCaml? Can be useful sometimes, though it's still static, not dynamic. Unless you're talking about duck typing? I'm not sure what you have in mind here.
No, there is a third option. Pseudocode:
...which is exactly what I've written:
handling it in some "unsafe" interface block/module boundary, in which case - once again - the complexity does not disappear, it is just transferred.
So now you're writing both the constraints on the type definition AND a function/method to check the conformance of a potentially invalid input to such a constrained type. I fail to see the benefit.
 
This looks completely backwards. Why have the type system if you can switch it off? This isn't what a type system is all about! Sorry, I'm not buying this handwaving fiesta anymore. I stupidly allowed myself to be dragged into discussing this idea by responding to a non sequitur, and all I'm getting is these crazy ideas.
I think you're missing the point (benefit?) of formal verification.
Let's say you have a formally verified system S with input source I. You can break down the system into subsystems, S0, S1, S2, etc. which interact with each other. During development and specification you can and should specify those interaction points between the subsystems to add more information for the formal verification process.
Once you are satisfied that the system is correct, however, you only need to verify the input, because you know that for every legal input the state and output of the system are correct.
This means that at compile and test time you can turn all verifications on, while at run time it is sufficient to verify only the inputs to the system. If you want to take extra care, you can also verify data at key interfaces between subsystems.
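One cheap approximation of "verify at the edges, let the type carry the proof everywhere else" is a branded type; a minimal TypeScript sketch, with the Order type and all names made up for illustration:

Code:
type Order = { qty: number };

// The brand can only be attached by validate(), so any function that
// demands a ValidatedOrder statically receives checked input only.
type ValidatedOrder = Order & { readonly __brand: "validated" };

function validate(o: Order): ValidatedOrder {
    if (!Number.isInteger(o.qty) || o.qty <= 0) {
        throw new Error("bad order");
    }
    return o as ValidatedOrder;
}

// Subsystem internals: no runtime checks; the type carries the proof.
function ship(o: ValidatedOrder): void {
    console.log("shipping " + o.qty);
}

ship(validate({ qty: 3 })); // ok
// ship({ qty: 3 });        // compile-time error: brand is missing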
The downside / hard part of using dependent types and modelling your data and state with types is that it requires careful consideration when starting out. Even considering the polygon example: if I were to use it in production, I would sit down and specify a detailed data model, describing the invariants and constraints.
For example, polar coordinates aren't required; you can infer a direction based only on the relation between the XY coordinates of the two points comprising your segment. That way, at construction time you can figure it out as:
Code:
Point      :: {x :: double, y :: double}
CWSegment  :: {p0 :: Point, p1 :: Point}, p1 after p0
CCWSegment :: {p0 :: Point, p1 :: Point}, p1 before p0
Segment    :: CWSegment | CCWSegment
Then to specify a polygon it's enough to require that all segments are in the same "direction" and that prevSeg.p1 == nextSeg.p0.
Or something to that effect.
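A rough TypeScript rendering of that sketch (purely illustrative; in particular, the "before/after" relation is left unspecified above, so an arbitrary lexicographic order on (x, y) stands in for it here):

Code:
type Point = { x: number; y: number };
type CWSegment  = { kind: "cw";  p0: Point; p1: Point };
type CCWSegment = { kind: "ccw"; p0: Point; p1: Point };
type Segment = CWSegment | CCWSegment;

// Stand-in for the unspecified "after" relation.
function after(a: Point, b: Point): boolean {
    return a.x !== b.x ? a.x > b.x : a.y > b.y;
}

// Smart constructor: the one place where the direction is decided.
function mkSegment(p0: Point, p1: Point): Segment {
    return after(p1, p0) ? { kind: "cw", p0, p1 } : { kind: "ccw", p0, p1 };
}

// Polygon check: every segment has the same direction, and each one
// starts where the previous one ended (prevSeg.p1 == nextSeg.p0).
function isPolygon(segs: Segment[]): boolean {
    if (segs.length < 3) return false;
    return segs.every((s, i) => {
        const next = segs[(i + 1) % segs.length];
        return s.kind === segs[0].kind &&
               s.p1.x === next.p0.x && s.p1.y === next.p0.y;
    });
}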
 
PHP and Python are great for hacking shit together, they just don't work for long-term quality software.
Python code over 4,000-5,000 lines becomes extremely hard to manage, and the cost of doing everything in an interpreted language keeps compounding. But I agree, any program under a few thousand lines is where it really shines.

This comes from experience and not just speculation. Using a lower-level language such as C++ or Rust allows for more portability, structure, and organisation. The only way a high-level language avoids this problem is by being structured for large scale, as Erlang, Java, or C# are.
 
I think you're missing the point (benefit?) of formal verification.
We're not talking about formal verification, this is already twice removed from the initial topic.

The downside / hard part of using dependent types and modelling your data and state with types is that it requires careful consideration when starting out. Even considering the polygon example: if I were to use it in production, I would sit down and specify a detailed data model, describing the invariants and constraints.
Thank you for finally admitting my point.
 
Let's say you have a formally verified system S with input source I. You can break down the system into subsystems, S0, S1, S2, etc. which interact with each other. During development and specification you can and should specify those interaction points between the subsystems to add more information for the formal verification process.
Once you are satisfied that the system is correct, however, you only need to verify the input, because you know that for every legal input the state and output of the system are correct.
I'm just curious (and I've been enjoying the back and forth), but how long does specifying a system in its entirety like this take? I can appreciate how enticing the thought of a formally specified, mathematically correct program is, and every now and then I'll dream that I'm left to my own devices long enough to bring one of those into reality.

But in the (actual reality) meantime, most of my work is done in the wake of other programmers/on software with ever-increasing complexity/mid-way between projects/answering to clients who don't know what the hell they want. I have unit tests I still haven't finished writing for software that was technically delivered a year and a half ago (it's on my "when I get around to it" list, which is already hilariously long). I guess from my vantage point I'm struggling to see a time where I'd ever be given enough leeway for something like this. Maybe other people have different circumstances at their jobs, not sure.
 
I'm just curious (and I've been enjoying the back and forth), but how long does specifying a system in its entirety like this take?
Too long by a few orders of magnitude, unless you're working for a space agency. And even they're cutting corners.

Case in point: see how long it took to specify what a Polygon is, and even there the specification is full of handwaving. What does it mean that a point is "before" or "after" another? It's a relation, so we're good, right? But how does one actually define the relation itself? It's not a fundamental relation (meaning: baked into the language), because the types it operates on are not fundamental types. These are important questions to answer if you actually want the idea to advance from the daydreaming stage to something actually implemented in reality. And in many cases you'd have to put a lot of complexity into the model itself just to satisfy the theory (which was actually admitted explicitly in the previous post). You decide if it's worth the effort beyond really trivial examples, like the earlier Segments.

And these are still very simple cases, and even then I can't imagine how I would specify a Polygon which has no self-intersections. I mean, sure, I can think of some specification, and then be promptly slaughtered by efficiency issues and data-updating issues (add another vertex to a Polygon). Maybe you'll notice that I'm repeating myself.

I'm still curious about the question I asked regarding the list with at most 5 elements. I mean, I really try to think this through, I really, really do and I just cannot see how any of this can work.

But don't worry, you can stay at the university for as long as you want.
 
Like OCaml? Can be useful sometimes, though it's still static, not dynamic. Unless you're talking about duck typing? I'm not sure what you have in mind here.
I was thinking more like Scala, where it behaves (IIRC) as if it were dynamically typed, and you can omit types and let it infer insane "could be A or B or C but almost certainly a number" types, but also add manifests.
...which is exactly what I've written:

So now you're writing both the constraints on the type definition AND a function/method to check the conformance of a potentially invalid input to such a constrained type. I fail to see the benefit.
I mean, you'll still have the irreducible complexity, but for stuff like unpacking an option you can ensure there are no unexpected errors/throws, etc.
Python code over 4,000-5,000 lines becomes extremely hard to manage, and the cost of doing everything in an interpreted language keeps compounding. But I agree, any program under a few thousand lines is where it really shines.
It's not the interpretation itself - Erlang is interpreted, and it scales fine.
I'm just curious (and I've been enjoying the back and forth), but how long does specifying a system in its entirety like this take? I can appreciate how enticing the thought of a formally specified, mathematically correct program is, and every now and then I'll dream that I'm left to my own devices long enough to bring one of those into reality.
Very, very, very expensive. There's a reason it's only done for fighter jets, aerospace stuff, high assurance, etc.

For example: the seL4 microkernel cost $400/LOC, and that's considered cheap.
 
But don't worry, you can stay at the university for as long as you want.
I don't even remember people caring that much about program correctness and how pure their functions were at university either, though. Sure, you'd have the Reddit kids come in wanking themselves raw over how they were totally going to write their postgrad atom-physics simulation from the ground up in Haskell, right up until their advisor smacks them with the truth bomb that all of that pedantic wankery is better spent on the actual thesis/paper, that the advisor already has C++ code he wrote three decades ago that he'll use for this instead, and also that you don't have to publish (or even disseminate) code at all, so we're not going to.
 
I was thinking more like Scala, where it behaves (IIRC) as if it were dynamically typed, and you can omit types and let it infer insane "could be A or B or C but almost certainly a number" types, but also add manifests.
Sorry, I'm not familiar with Scala at all, so can't opine on that.

I mean, you'll still have the irreducible complexity, but for stuff like unpacking an option you can ensure there are no unexpected errors/throws, etc.
Yes, I agree, but I can't see how that differs from using optionals in many different languages. I mean, we're still writing imperative-style checks, and on top of that we have to specify the constraints in the type definition. So now you have duplication and need to make sure that these constraints and the unpacking code are actually doing the same validation.

Or maybe (I'm just guessing at what you have in mind, so correct me here) you're thinking: if I specify the constraints on the type, then the toolchain can autogenerate the code for constructing the type from a stream of data along with validating it, where the validation would work by either throwing an exception or returning an empty optional type?

I would actually see some value in that, but I strongly believe it would be unimplementable in practice beyond trivial examples like the Segment. I mean, one would need to pedantically and rigorously define the model and the relations, throw away most concerns for performance, and then, maybe, it could work. But once again: this is not only shifting complexity, it is adding it. I think the almost-trivial Polygon example already shows that in copious amounts.

This also does not solve the issue of type proliferation, like that array or list with at most 5 elements.

[EDIT]
I just noticed that the example, half-handwaved definition of Polygon is still faulty, because you CANNOT actually decide on the winding by looking only at one particular segment, without some external point of reference. You need either a neighbouring Segment for that, or some particular point serving as the origin of a coordinate system.

So, yeah, no.
I don't even remember people caring that much about program correctness and how pure their functions were at university either, though.
Ditto. And I don't mean to dunk on people actually trying some new, fun, and novel ideas, but for all that's unholy, at least think them through somewhat and poke holes in them yourself instead of handwaving them away. We've turned the discussion from a simple case of dynamic typing being shite into the wonders of formal proofs. This was not what this all was about; if you want to discuss type systems, discuss type systems.

I feel like I'm the lolcow being milked in this thread here.
 
Erlang was intended for scalability from the get-go. Python (and other interpreted languages) generally aren't built for mass scale.
<https://elixirforum.com/t/boring-a-server-to-death-the-slow-loris-attack/25835>
It depends on what you mean by scale. Yes, BEAM and every language written for it are very performant thanks to the lightweight-process architecture. But you have to remember BEAM was designed for a more controlled environment than web servers. In a telephone exchange, you have a lot less margin for how equipment is expected to behave. Web servers don't have that luxury, and they can show suspicious behaviour that's actually benign from a threat standpoint.

BEAM development is more like Java: code needs to be compiled for the VM ahead of time, unlike Python, which compiles to bytecode when run. It's good to keep the different development styles in mind.

With a lot of web sites becoming JavaScript client apps, if you don't want to go in that direction it might be necessary to accept a higher cost of operations with more traditional servers. The benefits of one server-side programming environment over another are likely marginal compared to JavaScript client apps. Not that JavaScript client apps are nicer to develop, but they do present certain efficiencies.
 
Erlang was intended for scalability from the get-go. Python (and other interpreted languages) generally aren't built for mass scale.
Erlang is an interpreted language.
<https://elixirforum.com/t/boring-a-server-to-death-the-slow-loris-attack/25835>
It depends on what you mean by scale. Yes, BEAM and every language written for it are very performant thanks to the lightweight-process architecture. But you have to remember BEAM was designed for a more controlled environment than web servers. In a telephone exchange, you have a lot less margin for how equipment is expected to behave. Web servers don't have that luxury, and they can show suspicious behaviour that's actually benign from a threat standpoint.

BEAM development is more like Java: code needs to be compiled for the VM ahead of time, unlike Python, which compiles to bytecode when run. It's good to keep the different development styles in mind.

With a lot of web sites becoming JavaScript client apps, if you don't want to go in that direction it might be necessary to accept a higher cost of operations with more traditional servers. The benefits of one server-side programming environment over another are likely marginal compared to JavaScript client apps. Not that JavaScript client apps are nicer to develop, but they do present certain efficiencies.
What?
 
This is probably an elementary question, for which I feel obligated to apologize, but I'm not finding an answer in any of my reference materials, which is never a good sign. In short, I'm working in Ruby, where I want to create a method that can change an arbitrary number of instance vars on an instance of a class by invoking the method along with two sets of params: one being the attrs to change, the other being the new values to use (or a formula, if it's derived data). Kind of like attr_accessor or attr_writer, but working on multiple, arbitrary vars, so I don't have to 1. repeat myself, 2. hardcode which vars are being changed. Once that's figured out, I'd also like to abstract it out a little further so that I can change how it's invoked in an extensible fashion.
For example, if I have a People class with, say, @Name, @job_title, @department, @salary, @boss, @subbordinates, @guid instance vars, how can I call a single method to change @salary, @boss, @job_title at one time, and @Name, @guid, or the like, at another? And then, once that's done, abstract it out so I can invoke, say:
Code:
bob = People.new(...)
these_vars_to_change = [...]
engineering_vars = [...]
marketing_vars = [...]

which_vars_to_change = 'marketing'
bob.change_many_vars(these_vars_to_change, 'which_vars_to_change'_vars)

# This part would be handled by flow control.
which_vars_to_change = 'engineering'
bob.change_many_vars(these_vars_to_change, 'which_vars_to_change'_vars)
In both the first and the second, more abstract case, the issue I'm running into is that any vars I pass in are either hardcoded (and therefore useless) or interpreted as a string literal (and therefore useless). I don't like being spoonfed, but the fact that I'm not turning up anything on what should be such an obviously essential tool means I must be missing something fundamental.
 