Programming thread

Speaking of Go: yesterday morning Russ Cox, a member of the Go development team, opened a discussion on GitHub about adding telemetry to the Go toolchain (old archive: https://archive.ph/1mlIz; current huge screenshots attached). If anyone asked me about Go, I'd tell them it was once a decent language to learn (like I did in this very thread on the previous page!), but now it's spyware. Even if the maintainers don't move forward with this, the fact that they even considered it has left a bad taste in my mouth. I imagine a lot of today's programming-language toolchains already do something like this, but it's still unconscionable.

Oh, this also seems ironic if Ken Thompson is still working on the language.
I hate the modern net so much.
Even if this started off great, it would require eternal proactive management to avoid becoming the telemetry monster that has people increasingly on edge. People want to contribute to improve the tools they use, but telemetry has a tendency to betray the user most of all.
 
I hate the modern net so much.
Even if this started off great, it would require eternal proactive management to avoid becoming the telemetry monster that has people increasingly on edge. People want to contribute to improve the tools they use, but telemetry has a tendency to betray the user most of all.
Indeed. One thing that really pisses me off about these decisions is that the people making them pretend they're taking input from the users, but if you actually read the way it's worded, they've already made up their minds about what they're going to do. Making shit decisions and being coy/smug about it really rubs salt in the wound.
 
For some reason I thought it would be a fun idea to create a messageboard using Laravel. This is your fault, Josh. At least I released the source code for extra OSS based points (and so you can make fun of my coding skillz).
 
Speaking of Go: yesterday morning Russ Cox, a member of the Go development team, opened a discussion on GitHub about adding telemetry to the Go toolchain (old archive: https://archive.ph/1mlIz; current huge screenshots attached). If anyone asked me about Go, I'd tell them it was once a decent language to learn (like I did in this very thread on the previous page!), but now it's spyware. Even if the maintainers don't move forward with this, the fact that they even considered it has left a bad taste in my mouth. I imagine a lot of today's programming-language toolchains already do something like this, but it's still unconscionable.
I am not at all surprised. Hey, guess who developed Go?

I wish people would stop fixating on annoying troons and wonder why Google and Microsoft are hardcore pushing new languages and permissive licensing
 
I really think that try/catch is such an ugly wart on modern programming languages.

I know some languages tried to curb it a bit by having all methods declare what exceptions they throw, but that was really just a bandaid on what was ultimately a broken concept.

Having an exception tear through several layers of libraries you didn't write, and then through your own code, with no clue why, is one of the most annoying fucking things.

Error-aware return types are popular in the statically typed functional languages (OCaml, Haskell, and I think Erlang), and I'm very pleased they're making their way into the mainstream, like in Go.

It's a little frustrating at first that you have to account for errors at literally every function call, but ultimately, it saves you a bunch of headache later. It is much nicer to be forced by the language design to consider every function you call in terms of "how can this shit the bed and how do I need to address it at the call site".

Golang does this with a two-element return, (value, err).

OCaml does this with a special wrapper type called Result that has either an Ok value form or an Error err form. If you want to throw all caution to the wind and force a Result to yield its Ok value (when you're confident the code probably won't error), there's a function in the Result module that will do that and panic the whole process if you're wrong.

It's a very good thing that it forces you to think about those things up front. You aren't surprised if you do later get these catastrophic panics: you typed that code in to risk that possibility.
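To make the comparison concrete, here's a minimal Go sketch of the (value, err) shape, plus a must helper that mirrors the OCaml Result.get_ok behavior described above. halve and must are made up for this example:

```go
package main

import (
	"errors"
	"fmt"
)

// halve is a hypothetical example function: it errors on odd
// input instead of silently truncating.
func halve(n int) (int, error) {
	if n%2 != 0 {
		return 0, errors.New("halve: odd input")
	}
	return n / 2, nil
}

// must mirrors OCaml's Result.get_ok: take the value,
// panic if there was an error.
func must(v int, err error) int {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// The call site is forced to consider the error path.
	v, err := halve(10)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(v) // 5

	// Throwing caution to the wind: this panics on odd input.
	fmt.Println(must(halve(8))) // 4
}
```

Since halve returns exactly (int, error), Go lets you feed its result straight into must(halve(8)), which is about as close as Go gets to the Result.get_ok one-liner.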
Some languages denote error-throwing methods with !. By convention, both versions are available for every method. Languages like Elixir handle errors much more cleanly without relying on try/catch.
Exceptions are necessary at some level just because things don't always work perfectly, but I definitely agree it's a shitty pattern when you're supposed to write it into your business logic using try/catch.
 
Exceptions are necessary at some level just because things don't always work perfectly, but I definitely agree it's a shitty pattern when you're supposed to write it into your business logic using try/catch.
What do exceptions have that the ol' C-nile pattern
Code:
error_code = error_prone_function(arg);
if(error_code==SNEED){
   return(FORMERLY_CHUCKS);
}
lacks? Can someone try to exceptionpill me? In a dozen years of pro work, I just don't see why exceptions are such a semantic staple of newlangs beyond debugging context. Wherever they're present, I always seem to want to write code where they can simply never happen.
 
What do exceptions have that the ol' C-nile pattern
Code:
error_code = error_prone_function(arg);
if(error_code==SNEED){
   return(FORMERLY_CHUCKS);
}
lacks? Can someone try to exceptionpill me? In a dozen years of pro work, I just don't see why exceptions are such a semantic staple of newlangs beyond debugging context. Wherever they're present, I always seem to want to write code where they can simply never happen.

In case you forget to check the error code, essentially. Plus they make sure that your destructors get called. Compare C

Code:
bool ret = false;

if (!foo())
   goto end1;

if (!bar())
   goto end2;

if (!baz())
   goto end3;

ret = true;

end3:
// cleanup for what bar() acquired (runs whenever bar() succeeded)

end2:
// cleanup for what foo() acquired (runs whenever foo() succeeded)

end1:
// cleanup that runs on every path

return ret;

with C++

Code:
foo();
bar();
baz();

return true;

Well-written C++ code is far cleaner than well-written C code in this regard.

It's also not obvious which functions can fail - no-one checks the return value of printf, for instance. malloc can fail on Windows but (in practice) not on Linux.
 
In case you forget to check the error code, essentially. Plus they make sure that your destructors get called. Compare C

...

Well-written C++ code is far cleaner than well-written C code in this regard.
Your example is disingenuous. You claim C++ is "far cleaner", but you don't include what the actual handling looks like on the C++ side. You handwave it away. That's exactly my issue here.

I've written C++. I've written C#. I've written Java. Professionally. When you include the exception-handling code in these cases, it always looks uglier than the C. C, as always, makes the implicit garbage from OOPshit explicit. This is why I'm asking for someone to try to take me at face value here, because when you actually compare apples to apples, the OOPtard strats look like spaghetti.

It's also not obvious which functions can fail
The fuck? Are you still in university? If you're not aware of the failure profile of the calls you make, you're an idiot. Handwaving this away with language constructs is how schoolchildren and Rust trannies "think". You don't solve problems by pretending they don't exist. You RTFM, handle the edge cases that can potentially emerge, and pray that the cases you can't imagine in your first implementation don't bite you in the ass.
 
What do exceptions have that the ol' C-nile pattern
Code:
error_code = error_prone_function(arg);
if(error_code==SNEED){
   return(FORMERLY_CHUCKS);
}
lacks? Can someone try to exceptionpill me? In a dozen years of pro work, I just don't see why exceptions are such a semantic staple of newlangs beyond debugging context. Wherever they're present, I always seem to want to write code where they can simply never happen.
One advantage of exceptions is that they're propagated up the call-chain until they're handled. This means you can write error-handling in one place most of the time and simply rely on exceptions being caught by the global exception handler instead of needing to fill your code with conditionals.

The other thing that makes them great is that they can let certain middleware act appropriately without the logic being explicitly coded. For example, say I'm writing some code that updates employee data in a relational database and then updates it in Elasticsearch. We want the RDBMS and ES to stay consistent with one another, so if updating one fails, both should be rolled back. A lot of MVC frameworks provide transaction-management middleware that will roll back all DB transactions in scope if any operation raises an exception, so you can simply throw an exception if something goes wrong and not have to keep track of your database calls to undo them.
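Go doesn't have exceptions, but the same propagate-until-handled shape can be sketched with panic/recover. Everything here (tx, withTransaction, the update names) is invented for illustration, not any real framework's API:

```go
package main

import "fmt"

// tx is a stand-in for a framework transaction object;
// the names here are made up for this sketch.
type tx struct{ committed bool }

func (t *tx) rollback() { fmt.Println("rolled back") }
func (t *tx) commit()   { t.committed = true; fmt.Println("committed") }

// withTransaction plays the role of the middleware: it runs fn,
// commits on success, and rolls back if anything panicked
// anywhere down the call chain, like an uncaught exception
// reaching a global handler.
func withTransaction(fn func(*tx)) (err error) {
	t := &tx{}
	defer func() {
		if r := recover(); r != nil {
			t.rollback()
			err = fmt.Errorf("transaction aborted: %v", r)
		}
	}()
	fn(t)
	t.commit()
	return nil
}

func main() {
	// A panic deep inside fn propagates up to the single recover
	// in withTransaction; no per-call conditionals needed.
	err := withTransaction(func(t *tx) {
		panic("elasticsearch update failed")
	})
	fmt.Println(err)

	err = withTransaction(func(t *tx) { /* both updates succeed */ })
	fmt.Println(err) // <nil>
}
```

The business logic just panics (or throws, in exception languages) and the one wrapper decides between commit and rollback.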

Exceptions are definitely overused nowadays and I almost always prefer using result types. Both are better than errno.h though and Cniles are delusional if they think integer constants are acceptable error types for a high-level language in 2023.
 
One advantage of exceptions is that they're propagated up the call-chain until they're handled. This means you can write error-handling in one place most of the time and simply rely on exceptions being caught by the global exception handler instead of needing to fill your code with conditionals.

The other thing that makes them great is that they can let certain middleware act appropriately without the logic being explicitly coded. For example, say I'm writing some code that updates employee data in a relational database and then updates it in Elasticsearch. We want the RDBMS and ES to stay consistent with one another, so if updating one fails, both should be rolled back. A lot of MVC frameworks provide transaction-management middleware that will roll back all DB transactions in scope if any operation raises an exception, so you can simply throw an exception if something goes wrong and not have to keep track of your database calls to undo them.
Good example. Thank you. You can surely implement such code in C, but I'd never choose C to do this, especially for MVC work, which is implicitly OOP. What language would you choose to implement something like this in? I reckon it would be a pain even in C++ or Java, but I know little about the API endpoints for your choice of RDBMS and ES.

Exceptions are definitely overused nowadays and I almost always prefer using result types. Both are better than errno.h though and Cniles are delusional if they think integer constants are acceptable error types for a high-level language in 2023.
We're on the same page here. I never meant to implicitly defend providing LESS debug info, but that's kind of where I'm forced when I defend the C models. errno-style integer-constant error codes are adequate in their contexts, especially historically, but there are surely many cases where you want to pass more info, not less. Result types are surely how I'd resolve your concerns.

That said, you do explicitly speculate about things "for a high-level language", and I don't think most of us C-niles view C as such, especially not with the state of current high-level languages. I use C when I want something that isn't syntactically retarded but want to stay reasonably low-level.
 
Your example is disingenuous. You claim C++ is "far cleaner", but you don't include what the actual handling looks like on the C++ side. You handwave it away. That's exactly my issue here.

I've written C++. I've written C#. I've written Java. Professionally. When you include the exception-handling code in these cases, it always looks uglier than the C. C, as always, makes the implicit garbage from OOPshit explicit. This is why I'm asking for someone to try to take me at face value here, because when you actually compare apples to apples, the OOPtard strats look like spaghetti.

Fuck off. I thought I'd do a good deed by answering what seemed like a good-faith question, and I'm regretting it now. How exceptions are handled in the unhappy path is an implementation detail for the runtime and OS, as I'm sure you know. For the happy path, it executes as written.

The fuck? Are you still in university? If you're not aware of the failure profile of the calls you make, you're an idiot.

And yet I gave you two examples of common functions that have unexpected failure profiles.
 
And yet I gave you two examples of common functions that have unexpected failure profiles.
It's also not obvious which functions can fail
It's absolutely obvious. You RTFM.

How exceptions are handled in the unhappy path is an implementation detail for the runtime and OS, as I'm sure you know. For the happy path, it executes as written.
If you don't grasp that we're discussing the topic of "how exceptions are handled" and that the handling itself is pivotal to the discussion, I see how you can think that failure profiles are not obvious: you seem to have language comprehension issues.
 
Good example. Thank you. You can surely implement such code in C, but I'd never choose C to do this, especially if you're doing MVC work, which is implicitly OOP. What language would you choose to implement something like this in? I reckon it would be a pain even in C++ or Java, but I know little about API endpoints for your choice of RDBMS and ES.
In Javaland, Spring provides the @Transactional annotation, which makes this really clean to read and write. AFAIK it uses the PlatformTransactionManager bean along with some metaprogramming to implicitly pass all database calls to the transaction manager while in the scope of @Transactional and, if the correct kind of exception is thrown, call transactionManager.rollback(). Once execution leaves the scope of @Transactional, an implicit call is made to commit(). The underlying functionality isn't terribly complicated, but the resulting code is much easier to read.

Pretty much all persistence in Java is abstracted under the same set of interfaces so it actually doesn't matter whether you're using DB2, MSSQL, ElasticSearch, H2, redis, etc. As long as your chosen persistence layer conforms to the persistence API, it'll work with this without too much issue.

That said, you do explicitly speculate about things "for a high-level language", and I don't think most of us C-niles view C as such, especially not with the state of current high-level languages. I use C when I want something that isn't syntactically retarded but want to stay reasonably low-level.
Even in the context of something like C, I'd prefer errors to be properly typed and type-checked at compile-time. Each specific kind of error should be a different type so the compiler can catch you when you're checking for the wrong kind of error. Of course, many C libraries and frameworks do this but a ton of C programmers still do shit the old way and it annoys me.
 
Pretty much all persistence in Java is abstracted under the same set of interfaces so it actually doesn't matter whether you're using DB2, MSSQL, ElasticSearch, H2, redis, etc. As long as your chosen persistence layer conforms to the persistence API, it'll work with this without too much issue.
This explains a lot, thanks. My Java experience was over a decade ago working on a project where the DB2 requirement was explicit, so these layers are not well-known to me, and working in Java was so irritating to me that I chose to do anything else at that point.
 
This explains a lot, thanks. My Java experience was over a decade ago working on a project where the DB2 requirement was explicit, so these layers are not well-known to me, and working in Java was so irritating to me that I chose to do anything else at that point.
Nowadays most Java code is database-agnostic. You typically use an ORM like Hibernate with a higher-level framework like Spring JPA to completely abstract all interactions with a relational DB. You write the code using derived interface methods or JPQL (a query language that operates at the entity level instead of the table level), and Hibernate automatically generates whatever flavor of SQL you need to talk to the database type specified in your application.yml.

I was kind of a Java hater until I got my current gig but tbh I've been enjoying modern Java a lot. So much of the stupid, tedious bullshit I used to have to think about when writing web services in express or django is just done for me in Spring and I don't think I'd go back. Though my current goal is to finally learn Rust and try my hand at writing a high-performance web service in it so I can earn my programming socks.
 
Not calling functions cunts considered harmful
The research is no good because it doesn't consider time. Consider this alternative hypothesis:
"Swearing in code was more acceptable in the 80s and any code that lasted from then long enough to make it to Github has likely been worked on and improved more than some newly written code."

A better approach might have been to look at repos where a swear word was checked in sometime in the past year (and not part of a bulk migration to Git).
 