Programming thread

What happens when you call toTransient? Does it create a new object to wrap the original or does it internally cast the original's reference? Having to create an entirely separate object just to represent a subset of the original's functionality seems a bit daft, and if it's the latter how is it any different?
"implementation detail", which means depending on the semantics and performance profile the language designer wants to provide, these can be done in several ways. Again, I'd invite you to look at how Clojure implemented transient collections. The TLDR is that when you build collections with HAMTs, as long as you do more than two modifications you're better off using a transient.
Why is it daft to create a contained "disaster area" where mutability is allowed under a limited set of conditions? I'd say it's actually better than being permissive with it.
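If a C# analogue helps: the System.Collections.Immutable builders do essentially the same dance, where ToBuilder() plays the role of toTransient and ToImmutable() plays the role of persistent!. Rough sketch of the pattern, not a claim about how Clojure lays out its HAMTs:
C#:
using System;
using System.Collections.Immutable;

var list = ImmutableList.Create(1, 2, 3);

// "toTransient": grab a mutable builder that shares structure with the original
var builder = list.ToBuilder();
for (int i = 4; i <= 1000; i++)
    builder.Add(i);                  // batch of cheap in-place edits

// "toPersistent": freeze the builder back into an immutable list
var bigger = builder.ToImmutable();

Console.WriteLine(list.Count);       // 3  -- the original is untouched
Console.WriteLine(bigger.Count);     // 1000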
Computers have no notion of type; rather, objects (pieces of memory) are granted structure by the operations performed on them. We say that an object has a type in order to constrain and guide the set of operations that are allowed on it, so as to stay consistent with its structure. But with a more abstract concept of type we cease talking about structure and begin talking about interface and semantics; why, though, should we lose this fundamental flexibility by presuming that an object has *a* type?
The age old tension between representation and implementation. We should program to interfaces and semantics. We need to program to memory when we talk with memory. To quote Knuth, 97% of the time it doesn't matter. For the rest, either use some hardware aware language like C, or write a very clever compiler like Rust and cut your dick off.
I would propose that type is found in the reference, not the object. When you cast an object reference you merely view the object by a new interface, and yet such a system is still statically typed. This enables wrapping types without type wrappers, and it enables dynamically extending object functionality without changing object type. Witnesses solve half this problem, but they also have the effect of globally linking the interface to the "base type", which is not always ideal.
So you want structs to be "untyped"? I'm sorry, but I don't see how witnesses don't solve this problem in a static world where types are more than a gentlemanly agreement of "we read the bytes this way". I don't think I fully get what you're aiming at here; will let you know when the penny drops.

No wonder no one wants to program in C#, it's less readable than Javapoo
 
The age old tension between representation and implementation. We should program to interfaces and semantics. We need to program to memory when we talk with memory. To quote Knuth, 97% of the time it doesn't matter. For the rest, either use some hardware aware language like C, or write a very clever compiler like Rust and cut your dick off.
You could dust off Ada but I don't know what level of genitalia mutilation is involved. From what I've read the compiler is in between Go and Rust in terms of cleverness but I am guessing there are good reasons it's long since fallen into disuse in the defense and aviation industries.
 
  • Like
Reactions: Shoggoth
"implementation detail", which means depending on the semantics and performance profile the language designer wants to provide, these can be done in several ways. Again, I'd invite you to look at how Clojure implemented transient collections. The TLDR is that when you build collections with HAMTs, as long as you do more than two modifications you're better off using a transient.
Why is it daft to create a contained "disaster area" where mutability is allowed under a limited set of conditions? I'd say it's actually better than being permissive with it.
Based on what I was responding to I presumed toTransient and toPersistent were just bizarre ways of saying toMutable and toImmutable. Looking into it though, these are not at all interchangeable concepts. This is all very FP and gross; all I want is to be able to pass an object to an unknown function with a guarantee they won't be able to fuck with it. I don't want the Clojure runtime to quietly allocate a partial copy of it for me to mutate; I'm pretty good at managing state, I don't need help with that.
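For the record, the sort of guarantee I'm after looks more like this (toy sketch with made-up names, and yes, a sufficiently rude caller could still downcast the interface away):
C#:
using System;
using System.Collections.Generic;

class Inventory
{
    private readonly List<string> items_ = new() { "sword", "potion" };

    // Hand out a read-only view of the same list, no copying involved
    public IReadOnlyList<string> items => items_;
}

static class UnknownFunction
{
    public static void Inspect(IReadOnlyList<string> items) {
        foreach (var item in items)
            Console.WriteLine(item);
        // items.Add("junk");  // doesn't compile: the interface has no mutating members
    }
}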

The age old tension between representation and implementation. We should program to interfaces and semantics. We need to program to memory when we talk with memory. To quote Knuth, 97% of the time it doesn't matter. For the rest, either use some hardware aware language like C, or write a very clever compiler like Rust and cut your dick off.
I don't see why we can't have both. I have no desire to drop comfy abstraction and program everything in C, but at the same time I'd still like to be able to make reasonable assumptions about what my program is actually doing. Rust doesn't really do anything that C++ can't on that front.

So you want structs to be "untyped"? I'm sorry, but I don't see how witnesses don't solve this problem in a static world where types are more than a gentlemanly agreement of "we read the bytes this way". I don't think I fully get what you're aiming at here; will let you know when the penny drops.
I don't want "structs to be untyped" whatever that would entail, just that interfaces and semantics to be centered over structure. Witnesses are traditionally built around the struct, you declare a struct —or class, call it what you will— and later associate interfaces with it through witnesses. In my view type should be a set of interfaces and semantics (packaged together as "classes" or "aspects") with no core "identifying" element.


No wonder no one wants to program in C#, it's less readable than Javapoo
I dunno, I find it pretty comfy most of the time. Really love how easily you can state math in C# too, something you still can't do in java 😏
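e.g. operator overloading (toy sketch, not pulled from my actual math types):
C#:
internal readonly struct Vec2
{
    public readonly float x, y;
    public Vec2(float x, float y) { this.x = x; this.y = y; }

    public static Vec2 operator +(Vec2 a, Vec2 b) => new(a.x + b.x, a.y + b.y);
    public static Vec2 operator *(Vec2 v, float s) => new(v.x * s, v.y * s);

    // Reads like the math; in Java this ends up as a.Add(b).Scale(0.5f)
    public static Vec2 Midpoint(Vec2 a, Vec2 b) => (a + b) * 0.5f;
}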

I should've added a random element to the growth periods, but it was a decent test anyway. I'm going to try dispatching the growth behavior in parallel later, which should fix that stutter you see at ~0:20 (not creating ~2000 entities in a single frame would also help). What I'm most happy with is how neatly packaged the code is 👇🏿 It's beginning to feel like I've got a real framework on my hands.
C#:
internal class CrystalTree
{
    private const uint growingState = 0x0, splittingState = 0x1, finishedState = 0x2;
    private const uint hasParentFlag = 0x10;
    private const uint stateMask = 0xf;

    #region Types

    internal struct CrystalGrowthBehavior : IEntityBehavior<TickEvent>
    {
        private static readonly ThreadLocal<Random> tlr = new(() => new());

        public void Post(in BehaviorArgument arg, in TickEvent e) {
            var world = (EntityWorld_)arg.world;                        // EntityWorld_ is a prototype version of IEntityWorld, interface will be used directly when it settles
            var bSys = arg.behaviors;
            var tSys = world.server.RequireService<TransformSystem>();
            var cSys = world.server.RequireService<ComponentSystem>();  // Get necessary services
            var time = world.server.RequireService<TimeService>();
            var rand = tlr.Value;

            var curTime = time.time;
            foreach (var id in arg.ids) {
                var ct = cSys.RequireTyped<CrystalTree>(id);            // Extension method to use type as column attribute
                var t = tSys.GetTransform(id);
                var pt = ct.parentId != null ? tSys.GetTransform(ct.parentId.Value) : null;

                if (ct.state == growingState) {
                    var initialScale = pt?.scale ?? Vec3.one;
                    var finalScale = initialScale * ct.pattern_.growthFactor_;

                    var gFac = MathF.Min(1, (curTime - ct.birthDate_) / ct.pattern_.generationPeriod_); // Interpolate scale

                    t.scale = VecN.Lerp(Vec3.Fill(0.0001f), finalScale, gFac);

                    var spawnTime = ct.birthDate_ + ct.pattern_.generationPeriod_;
                    if (curTime >= spawnTime) {
                        ct.state = splittingState;      // Transition state
                    }
                }

                if (ct.state == splittingState) {
                    if (ct.generation_ < ct.pattern_.maxGeneration_) {          // Create children
                        for (int i = 0; i < ct.pattern_.childCount_; i++) {
                            New_(world, ct.pattern_, id, rand, ct.generation_ + 1, curTime);
                        }
                    }

                    ct.state = finishedState;
                }

                if (ct.state == finishedState) {
                    bSys.RemoveBehavior(id, instance); // Growth is over, disable the behavior
                }
            }
        }

        public static readonly IEntityBehavior<TickEvent> instance = new CrystalGrowthBehavior();
    }

    internal class Pattern
    {
        internal EntityRenderable renderable_; // A model+shader combo that can be associated with entities
        internal int maxGeneration_;
        internal float generationPeriod_;
        internal float growthFactor_; // Relative size of children
        internal int childCount_;
    }

    #endregion Types

    #region Internal

    private uint state {
        get => flags_ & stateMask;
        set => flags_ = (flags_ & ~stateMask) | value;
    }

    private EntityId? parentId => (flags_ & hasParentFlag) == hasParentFlag ? parentId_ : null;

    private static EntityId New_(EntityWorld_ world, Pattern pattern, EntityId? parent, Random? random, int generation, float time) {
        var bSys = world.server.RequireService<BehaviorSystem>();
        var cSys = world.server.RequireService<ComponentSystem>(); // Get required services
        var tSys = world.server.RequireService<TransformSystem>(); // Each service is entirely modular and separable
        var rSys = world.server.RequireService<RenderSystem>();
      
        var eId = world.New(0);

        cSys.SetTyped(eId, new CrystalTree(pattern, time, parent, generation)); // Add our new crystal tree entry using type as the column attribute

        var t = tSys.EnsureTransform(eId);
        if (parent != null) {
            var pt = tSys.GetTransform(parent.Value);

            Quaternion rotOffset;
            if (random != null) {
                var axisOffset = VecN.Norm(new Vec3(VecN.Lerp(-1f, 1f, (float)random.NextDouble()),
                    VecN.Lerp(-1f, 1f, (float)random.NextDouble()), 0.1f));

                rotOffset = Quaternion.AngleAxis(axisOffset, 0.2f); // Tilt a little
            } else {
                rotOffset = Quaternion.identity;
            }

            t.rotation = pt.rotation * rotOffset;
            t.position = MatN.TransformFast(pt.matrixOut, Quaternion.Rotate(rotOffset, new Vec3(0, 1, 0))); // Offset from parent
            t.scale = Vec3.Fill(0.0001f);
        }

        rSys.SetRenderable(eId, pattern.renderable_); // Add our renderable

        bSys.AddBehavior(eId, CrystalGrowthBehavior.instance); // Add our behavior
        return eId;
    }

    #endregion Internal

    public static EntityId New(EntityWorld_ world, Pattern pattern) {
        var tSys = world.server.RequireService<TimeService>();
        return New_(world, pattern, null, null, 0, tSys.time);
    }

    internal CrystalTree(Pattern pattern, float time, EntityId? parentId, int generation) {
        parentId_ = parentId ?? default;
        pattern_ = pattern;
        generation_ = generation;
        birthDate_ = time;
        flags_ = (parentId != null ? growingState | hasParentFlag : splittingState);
    }

    #region Fields

    private readonly EntityId parentId_;
    private readonly Pattern pattern_;
    private readonly int generation_;
    private readonly float birthDate_;
    private uint flags_;

    #endregion Fields
}
 
This is some basic shit considering, but does anyone know the best way to handle compressed files in C++? You would think that something this common would have lots of tutorials and that utilities like 7zip would be well documented, but NOPE! I just want to open .7z and .zip files. Does anyone know the best way to do this?

As an aside, I know how to get function names from a shared object using objdump/nm/readelf. Is it possible to get the expected parameters for those functions as well (e.g. if I want to use dlsym at runtime)?
 
  • Informative
Reactions: Friendly Primarina
In my opinion, it's still worthwhile to learn C, because without some experience in C you don't really know what your abstractions are doing for you. You need to try managing memory allocations manually at least once, and how can you even feel alive if you haven't hit SIGSEGV at least 100 times :)

Now I'm learning more modern C++ and it's a hell of a ride. I love it. I changed jobs this year, and from working with incompetents I jumped into a company that has some really good C++ programmers. I feel like a total noob/moron, but that's good, that's exactly what I wanted. The worst place to be is a room where you are the smartest person.

I am a little worried that some of my co-workers are a bit interested in Rust. I hope they will not start wearing dresses anytime soon. Riiight?
Personally, I don't understand the appeal. I would rather learn OCaml, Clojure or Haskell. Rust syntax gives me nausea, and when I see 'unsafe' I think: "now, a poor tranny is leaving a safe-space and he is going into an oppressive cis-normative world". I just can't help it.
 
Hi, retard here with a dumb question: how important is having a solid grasp over math to your programming skill?

I've read enough "programming 101" resources to know how to do something, but I don't know why it works (if that makes sense).
 
  • Like
Reactions: Gender: Xenomorph
Hi, retard here with a dumb question: how important is having a solid grasp over math to your programming skill?

I've read enough "programming 101" resources to know how to do something, but I don't know why it works (if that makes sense).
I'd say moderately so. I've seen plenty of people manage to learn programming at a professional level without knowing the underlying mathematics. However, I noticed that having a college math background made the learning process go a lot faster when I taught myself after graduating. For example, I didn't have to spend a lot of time on lists, matrices, sets, objects, modulo, combinatorics, or boolean expressions because I had encountered them all before in slightly different forms. In the end, it saved me a bit of time that others would have spent banging their head against a wall.

Where I think it starts to make a big difference is if you program things that involve some kind of geometry, since those kinds of problems can be nightmarish to figure out if you don't understand trigonometry or vector spaces.
 
Hi, retard here with a dumb question: how important is having a solid grasp over math to your programming skill?

I've read enough "programming 101" resources to know how to do something, but I don't know why it works (if that makes sense).
It depends on what field you're going in. As a web developer, I rarely need to use math above high-school algebra and sometimes geometry. If you want to do game development or especially 3D graphics stuff, you'll need to know higher-level stuff like trigonometry and linear algebra. Cryptography has its own weird math that I basically know nothing about. Non-game app development will be closer to the basic math I use as a web dev in most cases.
 
Hi, retard here with a dumb question: how important is having a solid grasp over math to your programming skill?

I've read enough "programming 101" resources to know how to do something, but I don't know why it works (if that makes sense).
It's not about math, it's about knowing how to think and solve problems algorithmically in general.
Read SICP and do the exercises.
 
What do you guys think of Codingame and Codewars? I've dicked around with both quite a bit, and I think they teach some pretty good skills, from fundamentals to complex design patterns and thought experiments.

The main problem I have with them, especially Codewars, is that they also have this competitive aspect, which is actually counterproductive. In particular, for each puzzle they have "best" solutions, which means solutions with the most updoots from the community. You've probably already spotted the issue here. The "best" solutions are all crammed into as few, extremely dense lines as possible, and they all benchmark horribly. It teaches novice coders that importing a library that solves a problem in a bad yet visually appealing manner is best practice, which might explain why modern apps take 40% of both your memory and CPU to do nothing but display a gui.

Still, they're pretty fun and I've learned a lot from them, despite the fact that there's no instruction. It just presents you with a problem and tells you to do it in whatever manner you see fit.

There's a third one I used briefly but can't remember the name of (not Code Combat) that was also alright, but Codingame and Codewars are the best I've seen.
 
  • Like
Reactions: Strange Looking Dog
What do you guys think of Codingame and Codewars? I've dicked around with both quite a bit, and I think they teach some pretty good skills, from fundamentals to complex design patterns and thought experiments.

The main problem I have with them, especially Codewars, is that they also have this competitive aspect, which is actually counterproductive. In particular, for each puzzle they have "best" solutions, which means solutions with the most updoots from the community. You've probably already spotted the issue here. The "best" solutions are all crammed into as few, extremely dense lines as possible, and they all benchmark horribly. It teaches novice coders that importing a library that solves a problem in a bad yet visually appealing manner is best practice, which might explain why modern apps take 40% of both your memory and CPU to do nothing but display a gui.

Still, they're pretty fun and I've learned a lot from them, despite the fact that there's no instruction. It just presents you with a problem and tells you to do it in whatever manner you see fit.

There's a third one I used briefly but can't remember the name of (not Code Combat) that was also alright, but Codingame and Codewars are the best I've seen.
If you're learning, depending on how you prefer to learn, there are didactic and Socratic methods. The didactic would be books like SICP; the Socratic would be books like The Little Schemer or A Little Java, A Few Patterns. I'd say the koans genre also falls under the Socratic.
The gamified versions, interview question rehashes, etc., are actually demoralizing, especially when shit doesn't work. If you're not doing this to have fun, go do something else.
 
  • Agree
Reactions: Marvin and nah
What do you guys think of Codingame and Codewars? I've dicked around with both quite a bit, and I think they teach some pretty good skills, from fundamentals to complex design patterns and thought experiments.
I only touched the introductory parts of Codingame and Codewars just this minute, but this looks like the same kind of hogwash as websites such as LeetCode and HackerRank -- they exist entirely to support the cottage industry of technical job interviewing. Actually learning technical skills involves working through reviewed and thoughtful writing such as books or papers, applying their concepts in practice, and interacting with a community that's not filled with pajeets trying to get the most internet points by posting code golf. Unfortunately, I am not sure myself where to go for the last one besides having knowledgeable friends and/or coworkers.
 
Am I the only one who doesn't see the point of SOLID? It makes everything more clumsy and convoluted for the sake of it, and goes against all the optimization tricks I learned. Instead of thinking about what needs to be optimized and cut short, everything is cut into a million pieces. It's meant to be reusable, but none of it will ever actually be reused anywhere...

I guess the idea is that a human should understand the code, not a computer. That's not the way to approach it...
 
Am I the only one who doesn't see the point of SOLID? It makes everything more clumsy and convoluted for the sake of it, and goes against all the optimization tricks I learned. Instead of thinking about what needs to be optimized and cut short, everything is cut into a million pieces. It's meant to be reusable, but none of it will ever actually be reused anywhere...
I'll disagree somewhat on the "none will be reused" part, because I've successfully reused tons of code written for various projects. My lifecycle of code reuse typically goes like this:
  • I have a specific software A to deliver.
  • I write that software.
  • I have another specific software B to deliver.
  • I notice the common parts between the two projects. I rip out what would be the common code into a separate set of classes or functions and put it into a library.
  • I use those classes or functions in libraries to deliver software B, adding necessary new parts to software B which I haven't had the need to write before.
  • From now on, whenever I need the same functionalities, I import them from my library, adjusting, extending and generalizing as needed.
  • The cycle repeats for software C and onwards.
Do I think about SOLID as I'm doing this? Hell no. Does my code conform to the SOLID principles when I take a look back? Sure, maybe not 100%, but definitely more than less.

I think the real cancer that is eating programming is this obsession with knowing all those fancy "patterns", "SOLIDs", metrics or whatever, before the problem at hand is even researched enough. I can barely name any of the software design patterns, yet I use them all the time without even knowing it, because it's kinda common sense in many cases. And frankly, I don't care a bit if someone were to call me out on not conforming to some obscure part of SOLID (or any other acronym), because I know my code is correct, extendable, and maintainable. Good software trumps "academically correct" software each and every time.

Yet, it's almost a given at this point that a junior programmer needs to know at least 5 different named software patterns and regurgitate them, without having had the opportunity to devise them by themselves in a bottom-up manner, thus achieving real enlightenment.

TL;DR clown world
 
I'll disagree somewhat on the "none will be reused" part, because I've successfully reused tons of code written for various projects. Do I think about SOLID as I'm doing this? Hell no. Does my code conform to the SOLID principles when I take a look back? Sure, maybe not 100%, but definitely more than less.
The idea of SOLID is, for example, why would you, in a game, have a separate movement script for the player and the enemies, when you can have one movement script that handles it all?

Sure, you can reuse it in the future, but the main idea is having 'single responsibility objects', which are a chore to work with and tie together. In the end, it's more bothersome and annoying than the time it saves.
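i.e. roughly this shape of thing (made-up names, just to illustrate what "one movement script" means here):
C#:
// One movement script shared by everything that moves; the mover-specific code
// (input handling vs. AI) only decides the direction and feeds it in.
class MovementController
{
    public float speed = 2f;
    public float x, y;

    public void Move(float dirX, float dirY, float dt) {
        x += dirX * speed * dt;
        y += dirY * speed * dt;
    }
}

class Player { public MovementController movement = new(); /* reads input, calls Move */ }
class Enemy  { public MovementController movement = new(); /* AI picks a direction, calls Move */ }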
I think the real cancer that is eating programming is this obsession with knowing all those fancy "patterns", "SOLIDs", metrics or whatever, before the problem at hand is even researched enough. I can barely name any of the software design patterns, yet I use them all the time without even knowing it, because it's kinda common sense in many cases. And frankly, I don't care a bit if someone were to call me out on not conforming to some obscure part of SOLID (or any other acronym), because I know my code is correct, extendable, and maintainable. Good software trumps "academically correct" software each and every time.
Yeah, honestly I feel like the only reason people do it is because some 'expert' told them to. It's not really optimal or fast, and it ends up eating more memory/resources than it needs to, but oh, we followed the XYZ principles.

I can tell a junior programmer apart from a competent one because they tend to split everything into 1000s of fragments. They don't think "how will the program most optimally solve this problem", they think "hurr durr interfaces".
 
  • Like
Reactions: Marvin
The idea of SOLID is, for example, why would you, in a game, have a separate movement script for the player and the enemies, when you can have one movement script that handles it all?
I don't understand this example. "Movement script" sounds more like data, which would be fed into some generic actor-movement-related execution algorithm. Even if one were to encode that "script" into another, concrete class, it would more or less just encode the data into code, which is a distinction without a difference.
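Something like this is what I mean by it being data (throwaway sketch):
C#:
using System;

// The "movement script" is really just parameters...
struct MovementData
{
    public float speed;
    public float turnRate;
}

// ...fed into one generic mover; encoding the same numbers as a concrete
// subclass per actor would change nothing of substance.
class Mover
{
    public void Step(ref float x, ref float y, ref float heading, in MovementData data, float dt) {
        x += MathF.Cos(heading) * data.speed * dt;
        y += MathF.Sin(heading) * data.speed * dt;
        heading += data.turnRate * dt;
    }
}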
 
The main problem I have with them, especially Codewars, is that they also have this competitive aspect, which is actually counterproductive. In particular, for each puzzle they have "best" solutions, which means solutions with the most updoots from the community. You've probably already spotted the issue here. The "best" solutions are all crammed into as few, extremely dense lines as possible, and they all benchmark horribly. It teaches novice coders that importing a library that solves a problem in a bad yet visually appealing manner is best practice, which might explain why modern apps take 40% of both your memory and CPU to do nothing but display a gui.
I never really focus on getting "updoots," but I'll sometimes submit multiple solutions, refactoring to try to achieve a solution that's peak "clever," as well as one that's as performant as I can get it, etc.

Nobody gets to see the solutions until they've submitted one of their own (or given up, I suppose), so the competitive aspect comes after the fact.
 
What do you guys think of Codingame and Codewars?
I thought programming was already like a game. If it's too hand-holding, like it's trying to be educational, it will suck. Otherwise, why not learn real programming? I hear it's like playing a game, only re-compiles don't cost quarters.

Programming is all about freedom. So why the fuck would I want someone telling me what I should be doing with this code-by-numbers bullshit?
 