Programming thread

Since there are only nine combinations, you could make stream a functor of its source and destination protocols, and then define nine stream functions in terms of the Stream functor, each with the appropriate one-liner. Better would be to find a way to get the choice into your build configuration, if the protocol choice can be known before runtime. But if you want to stick with the more dynamic approach, you could replace your protocol module with a record type:

Code:
type conn =
  { pull : unit -> (payload, Error.t) Lwt_result.t
  ; push : payload -> (unit, Error.t) Lwt_result.t
  ; close : unit -> (unit, Error.t) Lwt_result.t }

val open_connection : string -> string -> (conn, Error.t) Lwt_result.t  (* protocol, address *)

You implement "open_connection" by having it return records with the appropriate functions. If you want to keep the existing modules, that's just:

Code:
let open_connection prot str =
  match prot with
  | "rtmp" ->
      let open RTMP in
      let%bind conn = open_connection str in
      let pull () = pull conn in
      let push = push conn in
      let close () = close conn in
      Lwt_result.return {pull; push; close}
  (* ... and likewise for the other protocols ... *)
There are other ways to implement this, certainly. But it's just pretty convenient to approach things this way, especially if I'm dealing with multiple repos. Like, each implementation could be its own repo and get linked in, and as long as they all conform to the right signature, some client program/library could make use of them with first-class modules.
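To make that concrete, here's a rough sketch of the whole arrangement with stub modules standing in for the real per-repo implementations. The PROTOCOL signature and the RTMP/HLS stubs are made up for illustration, and I've dropped the Lwt/Error.t plumbing so it runs on its own:

Code:
(* The shared signature every implementation has to satisfy.
   Simplified: no Lwt, no Error.t, payloads are just strings. *)
module type PROTOCOL = sig
  type conn
  val open_connection : string -> conn
  val pull : conn -> string
  val push : conn -> string -> unit
  val close : conn -> unit
end

(* Stub modules standing in for the real per-repo implementations. *)
module RTMP : PROTOCOL = struct
  type conn = string
  let open_connection addr = addr
  let pull addr = "rtmp payload from " ^ addr
  let push _conn _payload = ()
  let close _conn = ()
end

module HLS : PROTOCOL = struct
  type conn = string
  let open_connection addr = addr
  let pull addr = "hls payload from " ^ addr
  let push _conn _payload = ()
  let close _conn = ()
end

(* Pack each implementation as a first-class module and pick one by name
   at runtime. *)
let implementations : (string * (module PROTOCOL)) list =
  [ ("rtmp", (module RTMP : PROTOCOL)); ("hls", (module HLS : PROTOCOL)) ]

let dump prot addr =
  match List.assoc_opt prot implementations with
  | None -> prerr_endline ("unknown protocol: " ^ prot)
  | Some (module P : PROTOCOL) ->
      let conn = P.open_connection addr in
      print_endline (P.pull conn);
      P.close conn

let () = dump "rtmp" "rtmp://example/stream"

Point being, the client only ever sees (module PROTOCOL) values, so it doesn't care which repo an implementation came from.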
 
Y'know who Rob Pike had in mind for his target market of idiots? Fucking Googlers.

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software."

The software industry was fucked decades ago. It's long been idiocracy.
Incidentally, can you smell the cumin and coriander wafting from this part of the quote?
Smells like H1B Visa
 
C++ has changed a lot over the years.
C++ is a bloated aging whore that shouldn't be used for anything anywhere by anyone ever.

Ok maybe that's a bit harsh, but it has massive problems. It's meant to be a zero-cost-abstractions language, and yet it's not at all hard to accidentally reach for the features that aren't zero cost and end up with a large performance hit (like in the benchmark you linked, where they implemented the scene data structure sub-optimally and lose 30% for it). It has so many redundant features that no program wants to use them all, and each project uses a different subset, so there's no real language identity and things often end up a mess. Unless you want to use a specific game engine or something, steer clear in 2019 imo.

when do you get to call yourself a Senior Developer on your CV / LinkedIn?
Whenever you like. If you want to be safe, wait until you get a role where that's your actual job title. A lot of smaller companies will basically let you have whatever job title you like if you ask nicely. Try not to bite off more than you can chew though. Senior devs are expected to be good at management and communication, not just programming.

OK then. Good luck with rewriting all operating systems and drivers!
Yep, it's not going to happen anytime soon. People are jumping on the "everything is going to be rewritten in memory safe languages" bandwagon but I'm guessing they don't remember the last 6 times that bandwagon failed to go anywhere and everyone got off it. We've had languages (e.g. Ada) that are significantly more safe than Rust and Go since the 80's. We've had languages (i.e. Java) that have been way more popular and still we haven't seen adoption for even basic tools. It's not that the problem is impossible, it's that there just isn't a good fit and most people far prefer to write new stuff than rewrite old stuff.

The only hot new thing that's remotely suitable for a ground-up rewrite atm is Rust. It's systems programming, so you want an imperative language. That's far and away the tool of choice for the domain, and it's why, despite it being so much easier to formally verify a functional language, it's still far more common for verified systems to be in C or Ada. We already have verified kernels as well, mind (e.g. seL4).

Rust as a language is fine. The compiler is complete shit. It can't even compile itself on 32bit machines because it exhausts the address space and even when it does have the memory it takes an age. The Rust fanboys always like to trot out the "But it does so much extra work!" This is idiocy from people with no understanding of the workings of modern compilers. Compilers need to transform the program into SSA/CPS anyway for optimization reasons and once you've done that the memory safety checks aren't that difficult. The problem is less with Rust itself and more with LLVM. That's what you get for using C++ kids. It's completely unusable for a serious OS project, can't compile itself on 32bit is not ok. The good news side of this is that it's a fixable problem.

2038, if they sort out the compiler problems, is the best hope to see real widespread adoption of this stuff, and even then it's optimistic. To quote Theo, peace be upon him, "For god's sake, the simplest of concepts like the stack protector took nearly 10 years for adoption, yet people should switch languages? DELUSION."

Y'know who Rob Pike had in mind for his target market of idiots?
Yeah we've seen this a lot. Java had similar ideas. I hate to say it but it works. Nobody thinks of themselves as the idiot programmer but these languages often end up being very popular and I can't deny it's true that most programmers are googling idiots.

Some days I hate that I live in this world.
 
I respect idiot-friendly code.
It's easier to write code than to read code. Even code you wrote yourself can be a pain to understand again after a few years, once you've completely forgotten the mental process you went through when you first wrote it.
There's an art in writing very clear understandable code. And that's what I care about when I have to work on it. "Clever" code is just the programmer jerking off in your face when you have to read it.
 
Rust as a language is fine. The compiler is complete shit. It can't even compile itself on 32bit machines because it exhausts the address space and even when it does have the memory it takes an age. The Rust fanboys always like to trot out the "But it does so much extra work!" This is idiocy from people with no understanding of the workings of modern compilers. Compilers need to transform the program into SSA/CPS anyway for optimization reasons and once you've done that the memory safety checks aren't that difficult. The problem is less with Rust itself and more with LLVM. That's what you get for using C++ kids. It's completely unusable for a serious OS project, can't compile itself on 32bit is not ok. The good news side of this is that it's a fixable problem.

can you explain to a big dum dum (me) what rust not being able to compile itself on 32 bit machines means? also, what is the significance of 32 bit architecture in this context?
 
what rust not being able to compile itself on 32 bit machines means?
It means what it says, really. If you have a 32-bit processor and you try to compile the Rust compiler, it will fail.

what is the significance of 32 bit architecture in this context?
It's due to memory limitations. Each byte of memory used by a process needs its own unique address (i.e. a number). A 32-bit machine can only represent 2^32 different addresses, and 2^32 bytes is 4,294,967,296 bytes, i.e. about 4 GB of usable memory per process. Not enough to compile the Rust compiler, sadly.
 
@Where Do You Find Them? it won't compile even a simple "hello world" program on a 32 bit computer?
No, it can compile simple programs. It just won't compile the Rust compiler itself. What this really means in practice is that every time you want to update your Rust compiler on a 32-bit machine, you need to do that build on a 64-bit machine and then transfer the result over.
 
There are other ways to implement this, certainly. But it's just pretty convenient to approach things this way, especially if I'm dealing with multiple repos. Like, each implementation could be its own repo and get linked in, and as long as they all conform to the right signature, some client program/library could make use of them with first-class modules.
Ah, thought you had a case where first-class modules were essential. What you're talking about here is just the text-book purpose of ordinary functors: you can write library code as a functor and let clients apply it to whatever base modules they have, possibly ones from third parties. In your case, you could replace your stream function with a functor of two protocol modules.
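Something in this direction, say (a sketch against the same made-up PROTOCOL signature as above, ignoring the Lwt/Error.t plumbing, so don't read it as your actual interfaces):

Code:
(* Same hypothetical PROTOCOL signature as in the earlier sketch. *)
module type PROTOCOL = sig
  type conn
  val open_connection : string -> conn
  val pull : conn -> string
  val push : conn -> string -> unit
  val close : conn -> unit
end

(* stream as a functor of its source and destination protocols.
   Clients apply it to whatever base modules they have. *)
module Stream (Src : PROTOCOL) (Dst : PROTOCOL) = struct
  let run src_addr dst_addr =
    let src = Src.open_connection src_addr in
    let dst = Dst.open_connection dst_addr in
    Dst.push dst (Src.pull src);
    Src.close src;
    Dst.close dst
end

(* One of the nine combinations, chosen statically: *)
(* module Rtmp_to_hls = Stream (RTMP) (HLS) *)

Each of the nine combinations is then a one-line functor application, which is exactly the sort of thing you can push into your build configuration.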

Anyway, SML/OCaml modules only come into their own when you start doing interesting things with the types. Otherwise, you might as well just use records.

I've written and read Haskell code that's painful on the eyes because there's no generally nice way to package and link up types, and it's enough of a problem that there's been a maturing Haskell implementation of an SML-style module system for a while now.

Rust as a language is fine. The compiler is complete shit. It can't even compile itself on 32bit machines because it exhausts the address space and even when it does have the memory it takes an age. The Rust fanboys always like to trot out the "But it does so much extra work!" This is idiocy from people with no understanding of the workings of modern compilers. Compilers need to transform the program into SSA/CPS anyway for optimization reasons and once you've done that the memory safety checks aren't that difficult.
Most of your post sounds good, but Rust's memory safety is supposed to come from the type-checker, and that should be verifying type correctness before you move to a representation like SSA/CPS. But I'm sad if what you say is right and LLVM is a shit-show.
 
Should I learn Rust, Ruby, and Assembly?
What are the advantages of each, and what are they best used for?
 
Should I learn Rust, Ruby, and Assembly?
What are the advantages of each, and what are they best used for?
You can cross out assembly outright imho. I'm doing MIPS assembly for a course, and while it illustrates fundamental Computer Architecture concepts very well, it also illustrates why you want to let your tools write it for you. Hand-written assembly lacks maintainability, performance, and portability by definition.

Ruby is quite autistic. If you like programming socks and boasting about how cute and idiomatic your code is, go for it. But for real work other than maybe shitty webapps, just don't.

Of the three, Rust is the one I would favor. Despite its current leadership being a pack of autistic commies, it applies most of the good ideas that can be applied in a language, and what you learn transfers to either C++ or functional-programming-type stuff, unlike Ruby, which is a dead end in that regard.
 
No, it can compile simple programs. It just won't compile the Rust compiler itself. What this really means in practice is that every time you want to update your Rust compiler on a 32-bit machine, you need to do that build on a 64-bit machine and then transfer the result over.
As much as I like to shit on Go, its compiler is really interesting and well engineered. They pretty much made the assembler architecture-independent by writing their own meta-assembly.
 
Ah, thought you had a case where first-class modules were essential. What you're talking about here is just the text-book purpose of ordinary functors: you can write library code as a functor and let clients apply it to whatever base modules they have, possibly ones from third parties. In your case, you could replace your stream function with a functor of two protocol modules.
The difference between functors and this situation though is that you can't pick the functors dynamically at runtime. I suppose you could manually generate all combinations of src and dst and create a stream function for each combination, and put them in an alist (or a hash table or whatever), but I find in some contexts the first class module approach is less messy.
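For what it's worth, roughly what I mean by the first-class module approach: the protocol strings pick packed modules at runtime, and then you can still apply the Stream functor locally. A sketch, reusing the hypothetical PROTOCOL signature, Stream functor, and implementations alist from the sketches above:

Code:
(* Runtime dispatch with first-class modules: the strings select the
   packed modules, and the Stream functor is applied locally. *)
let stream src_prot dst_prot src_addr dst_addr =
  match
    ( List.assoc_opt src_prot implementations,
      List.assoc_opt dst_prot implementations )
  with
  | Some (module Src : PROTOCOL), Some (module Dst : PROTOCOL) ->
      let module S = Stream (Src) (Dst) in
      S.run src_addr dst_addr
  | _ -> prerr_endline "unknown protocol"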
Yep, it's not going to happen anytime soon. People are jumping on the "everything is going to be rewritten in memory safe languages" bandwagon but I'm guessing they don't remember the last 6 times that bandwagon failed to go anywhere and everyone got off it. We've had languages (e.g. Ada) that are significantly more safe than Rust and Go since the 80's. We've had languages (i.e. Java) that have been way more popular and still we haven't seen adoption for even basic tools. It's not that the problem is impossible, it's that there just isn't a good fit and most people far prefer to write new stuff than rewrite old stuff.
Actually, several unikernel implementations exist that have indeed rewritten the OS in a memory safe language.

I've been eyeing MirageOS, which is based on OCaml. I can write a basic web API that might consume some other web API, maybe read/write some stuff to disk in OCaml, simple things like that. I can have it compile to a Linux binary and run it in Docker or whatever, or I can compile it to a Xen image that runs directly on a hypervisor (i.e. the cloud equivalent of bare metal), with no Linux/BSD/whatever kernel at all. Just a full stack written entirely in OCaml, from the network drivers (it only has to implement a single virtualized network card, which makes things easier), through the TCP stack, all the way up to my application code.

You can run these kernels on AWS and basically any other public cloud offering that uses Xen and lets you use custom kernel images.

The big advantage of unikernels is that the final image is tiny and starts up super fast. Compared to them, Linux is a big bloated turd that you have to keep running constantly because you can't start it on the fly.

There are DNS servers out there that can start up unikernels dynamically in response to DNS requests, pretending they've been running the whole time. That's a huge cost saving if you're with a provider that charges by the hour.
 
The difference between functors and this situation though is that you can't pick the functors dynamically at runtime. I suppose you could manually generate all combinations of src and dst and create a stream function for each combination, and put them in an alist (or a hash table or whatever), but I find in some contexts the first class module approach is less messy.
In your case, where you're not doing anything complicated with the types in your modules, you could just use records, or otherwise find an appropriate runtime type to represent a protocol. Modules are cool because of what they allow you to do at the type level, and if you need this at runtime, then you are talking about a program which synthesises new types at runtime, which isn't something that your example does. Synthesising new types at runtime isn't novel in typed functional programming (you'll get it if you use polymorphic recursion), but I want to see a case where we need this sort of thing and first-class modules are the answer.

I've been eyeing MirageOS, which is based on OCaml.
MirageOS looks stupidly cool, and the obvious way forward for modern internet architecture. I'll try to deploy my current home project with it.

I know they like to talk up the importance of functors for their unikernels, and I'll bet you internet stickers that they're all old-school.
 
Yeah we've seen this a lot. Java had similar ideas.
It's been a good few minutes since this thread's last Lisp plug anyhow, so I thought I'd share that famous Guy Steele quote since we're on the topic of idiot target audiences (I kid! Nothing but love from me for the C++ guys.):
Guy Steele (Re: Java's quick and widespread adoption) said:
And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?



Ruby is quite autistic. If you like programming socks and boasting about how cute and idiomatic your code is, go for it. But for real work other than maybe shitty webapps, just don't.
Aww, you don't like Ruby, @Citation Checking Project ? But don't you want to be in the company of such fine, upstanding individuals as this guy gal? And don't you want to be able to finish half-assed class implementations at runtime by idiomatically redefining method_missing like a fucking degenerate, data security and encapsulation be damned?
Ruby:
## How to be lazy! By Yotsubaaa
class DidntFinishThis

  def initialize(name)
    @name = name
    @banking_info = "TOP SECRET!!"
  end

  # Let's make it so I can arbitrarily add methods at runtime!
  # That seems like a safe and sane idea!
  def method_missing(method, *args, &block)
    super unless (method.to_s =~ /^addmethod_\w+/)
    method_name = method.to_s[/(?<=_)\w+/]
    define_singleton_method(method_name) { |*args| block.call(*args) }
  end

end


obj = DidntFinishThis.new("obj")

# Oh whoops, I 'forgot' to add the method back when I was writing
# the class. Oh well, I can just do it now!
obj.addmethod_say_hello do |name, greeting|
  my_name = obj.instance_variable_get("@name")
  puts "#{greeting}, #{name}! I am #{my_name}!"
  puts "By the way my internals are: #{obj.instance_variables.to_s}"
end

# So convenient! And absolutely no downsides whatsoever!
obj.say_hello("everyone", "Hello")
#=> Hello, everyone! I am obj!
#=> By the way my internals are: [:@name, :@banking_info]

# No, wait! Uh-oh! Abort!!!
obj.addmethod_get_banking_info() do
  my_banking_info = obj.instance_variable_get("@banking_info")
  puts "My banking info is #{my_banking_info}!"
end
 
The (potentially non-final) source code for the 1989 NES game "Raid 2020" has been released. It was written with an "engine" called the "NES Quest Game State Machine" by Dan Lawton, founder of Color Dreams, an unlicensed NES game studio. The code has extensive comments that border on documentation: https://archive.org/download/Raid-2020-Source-Code
 
It's been a good few minutes since this thread's last Lisp plug anyhow,
Interesting that you bring that up in relation to Java. A thought just hit me: is the JVM a very advanced Lisp machine? Or more correctly, a SECD machine? Something here is scratching at the back of my mind. JVM bytecode is all about stack operations. SECD? Forth? We have to investigate.
 
Interesting that you bring that up in relation to Java. A thought just hit me: is the JVM a very advanced Lisp machine? Or more correctly, a SECD machine? Something here is scratching at the back of my mind. JVM bytecode is all about stack operations. SECD? Forth? We have to investigate.
The JVM doesn't have an instruction for tail-calls, which forever cripples recursion, so no. The JVM is a very stupid platform on which to run anything calling itself Lisp. The CLR is better.

But the amount of reflection and dynamic shit you can do on the JVM and the CLR does bring them some fraction of the way to Lisp compared to C++, which is why you can have Clojure and Groovy on the JVM.

On Ruby, I don't think there's anything immediately stupid about being able to add methods to live objects, so long as you get into the Lisp mindset, where Lisp gets to eat the world. Your IDE and debugger should not only be written in Lisp, they should share the runtime with the programs you're writing. You then realise you want a way to add methods to live objects because you want to be able to write IDE, debugging and patching tools that do this, not because it's a sensible way to solve your initial problem. You can still get a feel of this sort of mindset in Emacs.

But the best examples I've seen are from Smalltalk, which always had an eat-the-world philosophy, and its first implementations were operating systems. And so my problem with Ruby was always: why aren't you using Smalltalk? Alan Kay is smarter than you.
 