Programming thread

Nothing is a bigger ego boner killer for me than having to create a UI to demonstrate code. I should only be allowed to make UIs for the blind.
There are few APIs in software development as convoluted as UI APIs. There are so many intricacies, so many user expectations, so many corner cases to handle that few other subsystems compare.

I'm dealing with a C event loop right now, trying to juggle signals and poll FDs. This is, itself, such an onerous task that there are three big libraries used to help folks with it. But it's peanuts compared to UI dev. ( https://www.sitepoint.com/the-self-pipe-trick-explained/ -- explains the self-pipe trick for handling signals; this is considered standard operating practice in Unixish development.)
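
For comparison, here's a rough sketch of how Go sidesteps that particular pain: os/signal.Notify delivers signals on an ordinary channel, so they land in a select loop right alongside everything else, which is the same job the self-pipe does by hand. This isn't from my actual code; the ticker below is just a stand-in for whatever FDs you're really polling.
Go:
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    // Signals arrive as ordinary channel sends instead of interrupting a handler.
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

    tick := time.Tick(time.Second) // stand-in for the poll()'d descriptors

    for {
        select {
        case s := <-sigs:
            fmt.Println("got signal:", s)
            return
        case <-tick:
            fmt.Println("doing normal event-loop work")
        }
    }
}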

One of my first projects as a teenager learning to program, over half a lifetime ago (Edit: two thirds of a lifetime ago for the earliest bits; I grow painfully old...), was trying to make some basic UI elements in Delphi and Visual Basic. It led me to hate UI work. When I finally got around to learning the Linux command line, things clicked, and I've been derisive of UI ever since... at least until I eventually started working on console UIs. Now graphical UIs are less of a headache for me, because I see the how and why better.

But UIs are a bundle of sheer insanity that require YEARS of specialization to be anywhere near comfortable with developing.

Correct me if I'm wrong, but isn't it typically the client's responsibility to manage caching?

There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton

Both server and client play big roles in HTTP caching, and the interplay rivals UI development in complexity. Some, as quoted above, believe it's even harder than UI.
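
A minimal sketch of the server's half of that interplay, assuming plain net/http and that all you want is conditional requests: the server hands out an ETag plus a Cache-Control hint, and answers a matching If-None-Match with a 304 so the client can keep using its cached copy. The handler and route names here are made up.
Go:
package main

import (
    "crypto/sha256"
    "fmt"
    "net/http"
)

// cachedHandler serves a fixed body with an ETag validator and a freshness hint.
func cachedHandler(body []byte) http.HandlerFunc {
    sum := sha256.Sum256(body)
    etag := fmt.Sprintf(`"%x"`, sum[:]) // strong ETag: quoted hex of the body hash
    return func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("ETag", etag)
        w.Header().Set("Cache-Control", "max-age=60") // client may reuse for 60s without asking
        if r.Header.Get("If-None-Match") == etag {
            w.WriteHeader(http.StatusNotModified) // the client's cached copy is still good
            return
        }
        w.Write(body)
    }
}

func main() {
    http.Handle("/page", cachedHandler([]byte("hello, cache")))
    http.ListenAndServe(":8080", nil)
}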
 
But UIs are a bundle of sheer insanity that require YEARS of specialization to be anywhere near comfortable with developing.
A big problem is that UIs are more async by nature and hardly anyone knows how to do proper async. Some UI libraries will create layers of abstraction hell to "help" with this and others won't.
 
A big problem is that UIs are more async by nature and hardly anyone knows how to do proper async
Personally, my favorite way to write UIs when I have to is using a functional approach. I have the components provide functions to run at certain points in the loop. It actually feels pretty nice to use, at least for me.
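
Something like this, as a bare-bones sketch of the idea (the names are mine, not from any particular library): each component hands the loop a couple of functions, and the loop calls them at fixed points on every pass.
Go:
package main

import "fmt"

// Component is just a bundle of hooks the loop will call at set points.
type Component struct {
    HandleEvent func(ev string) // called when input arrives
    Render      func() string   // called when it's time to draw
}

func runLoop(events []string, components []Component) {
    for _, ev := range events { // stand-in for a real blocking event loop
        for _, c := range components {
            if c.HandleEvent != nil {
                c.HandleEvent(ev)
            }
        }
        for _, c := range components {
            if c.Render != nil {
                fmt.Println(c.Render())
            }
        }
    }
}

func main() {
    count := 0
    counter := Component{
        HandleEvent: func(ev string) { count++ },
        Render:      func() string { return fmt.Sprintf("events seen: %d", count) },
    }
    runLoop([]string{"click", "keypress"}, []Component{counter})
}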
 
Personally, my favorite way to write UIs when I have to is using a functional approach. I have the components provide functions to run at certain points in the loop. It actually feels pretty nice to use, at least for me.
I saw something in a little overview blurb in the docs for Go's atomic package a while back that stuck with me:
Share memory by communicating; don't communicate by sharing memory.

To that effect, they have a lovely blog post about a pipeline-based approach to async programming. If you're familiar with stuff like JS's Promise objects, you know that pipe-like flow you get with .then(), where results flow in when they're ready; this extends further to async functions that act like indefinitely running workers passing data to each other. Workers can be spawned, feed output, then close their output channel(s) to signal they're done, and you get that cascading effect where closing a source will close the rest of the receivers down the line. Combine this with Go's extremely lightweight goroutines and impressive scheduler, and you have something immensely powerful and ridiculously responsive, since you can throw any slow or blocking procedure onto a separate goroutine and feed its result back into your normal data flow.

As an example, my chat client now does logging through a pipeline like this:
Go:
func newChatLog(feed <-chan *Message) (<-chan *Message, error) {
    cfgDir, err := os.UserConfigDir()
    if err != nil {
        return nil, err
    }
    baseDir := filepath.Join(cfgDir, "sockchat/logs")
    logDir, err := newLogDir(baseDir)
    if err != nil {
        return nil, err
    }

    lf, err := openLog(filepath.Join(logDir, fmt.Sprintf("%s.log", time.Now().Format(dateFmt))))
    if err != nil {
        return nil, err
    }
    lw := bufio.NewWriter(lf)

    out := make(chan *Message, 2048)

    go func() {
        defer func() {
            lw.Flush()
            lf.Close()

            close(out)
        }()

        for msg := range feed {
            fl := ""
            if msg.IsEdited() {
                fl += "*"
            }

            fmt.Fprintf(lw, logFmt, time.Unix(msg.MessageDate, 0).Format("2006-01-02 15:04:05 MST"),
                msg.Author.Username, msg.Author.ID, fl, msg.MessageRaw)

            out <- msg
        }
    }()

    return out, nil
}
Assuming no errors, it will return the out channel pretty quickly and the func launched with go (goroutine) will continue to run in the background. The defer routine in that goroutine will trigger asynchronously when feed closes and the for loop exits. I manage message object allocation using freelists to cut down on garbage collection, so I need to be careful what accesses those message pointers before releasing/freeing them back to the freelist. Doing the log stuff with the message then passing the pointer along is a foolproof way to do so :)
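
For what it's worth, the wiring looks roughly like this; everything except newChatLog and Message is a hypothetical stand-in (incoming for the reader stage, render for the UI, msgPool.Release for whatever the freelist exposes), so this is a sketch rather than working code.
Go:
// Hypothetical wiring for the stage above; only newChatLog and Message are real.
func runPipeline(incoming <-chan *Message, msgPool *MessagePool) error {
    logged, err := newChatLog(incoming) // logger sits in the middle of the chain
    if err != nil {
        return err
    }
    for msg := range logged { // drains until the logger closes its out channel
        render(msg)          // last consumer in the chain
        msgPool.Release(msg) // safe to free: the logger is done with the pointer
    }
    return nil
}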
 
The defer routine in that goroutine will trigger asynchronously when feed closes and the for loop exits.
defer is one of the coolest features I've seen in a language in a while, I've wanted something similar in many of my own projects.
 
defer is one of the coolest features I've seen in a language in a while, I've wanted something similar in many of my own projects.
It's amazing for cleanup stuff like closing file descriptors or freeing memory. You say up top near the declaration whatever cleanup you want to do and you don't have to worry about it anymore. Anyone who has done error handling in C with early returns will know this issue well. In C, I use goto with labels to sections near the bottom of the function that you can chain together as you allocate more shit in the function.

defer is a godsend for unlocking mutexes as well.

For a cool extension of this deferred execution concept, see the very handy context.AfterFunc.
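
A small sketch of both uses, assuming Go 1.21+ for context.AfterFunc (the counter type is made up):
Go:
package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

type counter struct {
    mu sync.Mutex
    n  int
}

func (c *counter) bump() {
    c.mu.Lock()
    defer c.mu.Unlock() // unlocks on every return path, even on panic
    c.n++
}

func main() {
    c := &counter{}
    c.bump()
    fmt.Println("count:", c.n)

    // context.AfterFunc arranges to call the callback in its own goroutine
    // once the context is done -- a deferred-execution cousin of defer.
    ctx, cancel := context.WithCancel(context.Background())
    stop := context.AfterFunc(ctx, func() {
        fmt.Println("context finished, running cleanup")
    })
    defer stop() // harmless if the callback has already run

    cancel()
    time.Sleep(10 * time.Millisecond) // give the callback a moment to print
}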
 
Isn't defer literally just class destructors/RAII in C++ without having to write the class boilerplate?
It's useful for that, but at its core, it lets you queue up tasks to run when the enclosing function wraps up.
 
Isn't defer literally just class destructors/RAII in C++ without having to write the class boilerplate?
It's the same core idea, but defer is more flexible since you can defer any function, including anonymous ones. You can see my chat client example above defers an anonymous function that's used to flush the log writer buffer and close the log file. Deferred calls run on the same goroutine, last-in-first-out, right before the surrounding function returns; in my example that surrounding function is the one launched with go, so from the caller's perspective the cleanup fires asynchronously whenever the feed dries up. Wrapping it all in one anonymous func keeps the ordering explicit, but you can use this for all sorts of stuff where chronology is less of a concern. Think of it like scheduling a routine to run later, upon return, and just how powerful that can be.
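
A tiny sketch of the order of operations:
Go:
package main

import "fmt"

func main() {
    defer fmt.Println("registered first, runs last")
    defer fmt.Println("registered second, runs first")
    fmt.Println("function body")
    // Prints:
    //   function body
    //   registered second, runs first
    //   registered first, runs last
}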
 
Well, fortunately what I'm doing now is just a very minimal demo. It's just that any time I touch HTML/CSS I end up shitting out some geocities tier garbage that I know looks bad but I cannot figure out how to fix
Same.

I know HTML, CSS, browser JS, and all the relevant client-side technologies; I just have no eye for UI design.

I can tell when it looks and works like shit but that's about it.

That's why I stick to backend stuff.
Correct me if I'm wrong, but isn't it typically the client's responsibility to manage caching?
I mean, caching is ultimately just a technique. With web stuff, we're a lot more familiar with the client-side cache, because of all the http headers relating to the subject. But if performance requirements justify it, you can toss in a cache anywhere in the stack it makes sense.

Edit: Like nginx can cache responses from upstream servers.
 
I mean, caching is ultimately just a technique. With web stuff, we're a lot more familiar with the client-side cache, because of all the http headers relating to the subject. But if performance requirements justify it, you can toss in a cache anywhere in the stack it makes sense.
Yeah, and also you can go to the next level and move stuff around from a spinny disk to an SSD to memory depending on how frequently the content is accessed.
 