Programming thread

Well, remember that C is like, what, 50 years old now? Saying that the way it handles strings is a design flaw is like saying the Model T not having airbags is a major oversight. C was considered a high-level language in its time, but today's must-have concepts like memory safety were purely theoretical back then if they existed at all. And there's still a performance overhead to that stuff, so even if other languages finally overtake it for most systems stuff, I think C will still have a place for cheap embedded hardware and other places where safety can take a back seat to performance because this thing will never be internet-accessible and multi-user anyway.
Lots of other languages have more overhead too. In C they don't even give you the length of an array; that's how much they cut it down to the bare bones. Like, C has shorts and literally nobody uses those these days because we have much more memory, but C included them because it was designed for computers with far less memory. Not only was the idea of garbage collection not well known back then, you simply couldn't afford to have even a slightly less than perfect garbage collector for fear of filling up memory.
 
Well, remember that C is like, what, 50 years old now? Saying that the way it handles strings is a design flaw is like saying the Model T not having airbags is a major oversight. C was considered a high-level language in its time, but today's must-have concepts like memory safety were purely theoretical back then if they existed at all. And there's still a performance overhead to that stuff, so even if other languages finally overtake it for most systems stuff, I think C will still have a place for cheap embedded hardware and other places where safety can take a back seat to performance because this thing will never be internet-accessible and multi-user anyway.
Contemporary languages already had much better ways of doing things, e.g. Pascal strings. A similar, much safer way to handle strings is with a struct, which did exist in early versions of C. Example:

C:
struct string
{
  size_t length;
  char *text;
};
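As a rough sketch of why that helps (the print_string helper here is hypothetical, not anything from the standard library): the length travels with the data, so finding the end is O(1) and the text can even contain embedded zero bytes.

C:
#include <stddef.h>
#include <stdio.h>

struct string
{
  size_t length;
  char *text;
};

/* Hypothetical helper: print exactly 'length' bytes; no terminator needed. */
static void print_string(struct string s)
{
  fwrite(s.text, 1, s.length, stdout);
}

int main(void)
{
  char data[] = {'a', 'b', '\0', 'c'};      /* embedded zero byte is fine */
  struct string s = { sizeof data, data };
  print_string(s);                          /* writes all 4 bytes */
  putchar('\n');
  return 0;
}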

I'm not saying C should have had a garbage collector or other memory safety features like Lisp had at the time - there are obviously situations where you don't want a garbage collector always running. But arbitrary pointer arithmetic is very dangerous and unnecessary for most use cases (the basic uses are indexing into an array or having an array of pointers, which require no arbitrary arithmetic at all).

Lots of other languages have more overhead too. In C they don't even give you the length of an array; that's how much they cut it down to the bare bones. Like, C has shorts and literally nobody uses those these days because we have much more memory, but C included them because it was designed for computers with far less memory. Not only was the idea of garbage collection not well known back then, you simply couldn't afford to have even a slightly less than perfect garbage collector for fear of filling up memory.
Also, a short and an int are both only guaranteed to be at least 16 bits, so idk what the point was (maybe int was supposed to be the machine's "native word size"). An int being 32 bits is implementation-defined.
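For what it's worth, the guarantees are easy to see on your own machine; a minimal sketch (the printed sizes are whatever your implementation happens to use):

C:
#include <stdio.h>
#include <limits.h>

int main(void)
{
  /* The standard only guarantees short and int are each at least 16 bits;
     the actual widths are implementation-defined. */
  printf("short: %zu bytes, max %d\n", sizeof(short), SHRT_MAX);
  printf("int:   %zu bytes, max %d\n", sizeof(int), INT_MAX);
  return 0;
}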
 
C was created and the "bible" was written in a time when there was a big market with several different architectures and really no clear winner. (Same goes for C itself, by the way; it was by far not the only player in the game nor the most popular one for a long time.) Architectural differences in e.g. how big an int really is (or endianness - the order in which bytes are stored in memory) were just a thing people navigated, sometimes better and sometimes worse. It was by no means scary, and often programs wouldn't leave the architecture they were written for; when they did, they needed rewrites to accommodate the often very different hardware and OS anyways. ASCII was probably one of the few things most of the platforms could largely agree on. We're talking about a time when you could write a file to a floppy disk on computer A and computer B would not be able to read the disk and that file because of non-bridgeable hardware differences in the IC controlling the disk drive.

This was also a time when compilers for high-level languages were in general not great, and the code they produced was usually not as fast as it could be and in some edge cases even reproducibly buggy. You had a lot of different companies bringing out proprietary C compilers that would have slightly different features and produce slightly different code. In the 16 bit and early 32 bit world, you'd hand-optimize performance-critical parts of your code in assembler anyways, mostly by taking excessive advantage of platform-specific hardware features. Especially in the home computer world, the CPU was often little more than the traffic cop managing everything, and the custom hardware was where the performance was at. On the Amiga, for example, you can easily write code where you see complex things happening on your screen while your CPU does absolutely nothing except wait, all in the custom chipset.

"What even is memory safety?" is what 1990 me would have asked you. You did a good job if your program didn't overwrite random parts of the other programs running on the computer, you allocated memory in an OS-approved way, and you didn't throw fragmentation bombs at the memory pool. That was a big thing with OSes that offered multitasking, and quite an increase in complexity for some programmers. There was a lot of commercial software out there, programs people used and paid money for, which would invariably lead the computer to crash sooner or later because of such bugs. Even OSes. Then there was a plethora of compilers that would do things differently and handle sloppy code differently. For example: a lot of the aforementioned AmigaOS is a mash of assembler and code generated by the Green Hills C compiler. How many of you know that compiler and its intricacies/acceptance re: correctness of pointer allocation? SAS/C? Anyone? The thing here also was that the compilers often weren't quite standard ANSI C, and code that compiled in one didn't necessarily compile in the other. The people working on AmigaOS didn't have source control either; it was literally just a bunch of random .c files, a bunch of random assembler files, and a bunch of random .c files with assembler code inlined. This was the OS for an entire computing platform.

Most of these OSes didn't have anything in the way of memory protection, and many hardware architectures didn't really support it effectively anyways (no MMU). A more severe bug back then usually meant the machine crashed and the user had to reboot (which honestly wasn't usually that big a deal), not that the database with the private data of an entire country's population gets stolen. There were also no fuzzing tools and all that other crap. It was a simpler time. The fuckery started when there was suddenly a lot of money hanging on computers and these computers got networked in a big way, giving people new and interesting ways to fuck with each other and do a thing that's ingrained in the human race: being opportunistic. A lot of the sophisticated attacks on software you see these days also would've been impossible with the tools and computing power available to the average Joe back then, even if the bugs certainly existed.

This is the historical context of both the C language and the C language book.

Don't get me wrong when I talked about these contexts earlier - I do think the days of the average, non-systems developer working that close to the metal are over; I don't really think it's necessary anymore, nor is it all that practical. In my best days I could've written you very nice 68k assembler; at the same time I am very sure I couldn't handle modern x86 effectively, nor could I produce anything better than what gcc does. It's the age of abstraction and it makes sense to use modern languages that take advantage of the increased processing power, to a degree. I still think it doesn't hurt if the programmer knows a little bit about the hardware his stuff is running on, and I also think that too much abstraction is bad and there's way too much reliance on having other programmers/the compiler/the hardware/stack overflow fix it.
 
Go is a systems language. Python is as far as you can get from a systems language. You can't compare them.

Go has memory safety, which is something you won't get in C, and the lack of which is responsible for absolutely catastrophic security fuck-ups like Heartbleed. A lot of programmers today want to write in really fast systems languages without worrying about memory safety, and for them, C is out.
Go is a language where the language designer explicitly stated it was designed for programmers who weren't good enough to use a proper, fully featured language. To think it's a language that would be used in any true systems context over an actual systems language is crazy.

EDIT (I misread your original post; without WORRYING about memory safety, yes, I agree. C requires concern there.)

I have read through some chapters of this book kind of carefully. I think the examples they give are outdated in some contexts.

The reality in my view is that C is a language that has some severe design flaws. C strings themselves have probably caused billions of dollars of damage due to the way they are designed and the confusing use of buffers, which is only worsened by the attempts at safe string functions which aren't safe at all.

Sincerely, an uninformed programmer who mainly writes little Python scripts. Sorry @AmpleApricots. I cannot even keep track of all the weird little C gotchas that exist in the language.
Roughly the same thing as mentioned above; C strings are unwieldy and awful, but the billions of dollars of damage are not due to design flaws but rather implementation mistakes. Knowing when buffer overflows don't matter and when they do is a design concern; a language should not force you to pay for a cost you don't necessarily need. There's nothing confusing about a buffer if you understand memory layout.

Also, the "bible" is in the SICP category of overhyped and just somewhat useful. It's not a great way to learn anything about modern C; but there's little good literature about it. You must be fluent in C if you want to do any systems programming in any major OS. If you do, there are some good systems books I can recommend. If you are looking for anything higher level, C++ is what I would go with instead. (I would still use C++ for the systems programming, but you need to be able to read C to understand the systems fuckery.) If you want something more ergonomic like Python but not as horribly designed, look to Nim. It has optional GC and is a great little language that pairs beautifully with C / C++ (you can compile it to C or C++ and consume C libraries and source code with immense ease). It does enforce whitespace, like Python, though, which is a travesty and makes me much less enthusiastic to recommend it.

Don't run from C / C++ just for more ergonomic languages. If you're serious about any programming that's not web design, those languages are so ubiquitous and used so frequently as libraries that being illiterate in them is a massive disadvantage; so if you choose not to use them as primary languages, at least get acquainted enough with them to be able to have a general understanding of what's happening. (Seriously, though, don't choose C. It's absolutely not worth it.)

As for int size types: #include <stdint.h> (here)
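If exact widths matter, the fixed-width types take the guesswork out of short/int/long; a minimal sketch:

C:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
  int16_t  a = -32768;        /* exactly 16 bits, signed   */
  uint32_t b = 4000000000u;   /* exactly 32 bits, unsigned */

  /* The PRI* macros give the matching printf format specifiers. */
  printf("a = %" PRId16 ", b = %" PRIu32 "\n", a, b);
  return 0;
}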

All languages have some really frustrating, strange gotchas. You'll find that the more you master and dig into any particular language. C has some very strange ones, indeed, but unfortunately it is not an anomaly.
 
I am not changing my stance on C strings being a design flaw that is now standardized in the standard library. They are hard to get right for beginners and hard for experts too. You could make the argument about the language not forcing you to pay the cost of what you don't need by having unsafe be an explicit option rather than having unsafe be the default. This is a design decision and a balance between hand-holding and actually preventing horrendous mistakes. I know the historical context of C being for very slow and low-memory machines. It's just a flaw we have to live with.

I do not even know if null-terminated strings are more efficient than length-prefixed ones. If you do any common operations like string concatenation, you have to carry around a string length anyway, needing extra storage, or else suffer algorithmically bad runtimes. For the 8 or 16 bit machines of the era, where you bother to count bytes, you probably would have a memory penalty of only 0 or 1 byte. You also have the advantage of being able to store the zero byte within your string instead of escaping it. Contemporaneous languages like Pascal (which has its own problems) already knew this. Further reading for those interested: https://en.m.wikipedia.org/wiki/Null-terminated_string
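To make the concatenation point concrete, a minimal sketch (names and buffer sizes are just illustrative): repeated strcat has to re-scan for the terminator every time, while carrying the length lets you append at a known offset.

C:
#include <stdio.h>
#include <string.h>

int main(void)
{
  char buf[64] = "";

  /* Null-terminated: each strcat first walks buf to find '\0',
     so building a string piece by piece is O(n^2) overall. */
  strcat(buf, "hello");
  strcat(buf, " ");
  strcat(buf, "world");

  /* Carrying the length: append at a known offset, O(1) to find the end. */
  char buf2[64];
  size_t len = 0;
  const char *parts[] = { "hello", " ", "world" };
  for (size_t i = 0; i < 3; i++) {
    size_t n = strlen(parts[i]);
    memcpy(buf2 + len, parts[i], n);
    len += n;
  }
  buf2[len] = '\0';   /* terminator added once, at the end */

  printf("%s\n%s\n", buf, buf2);
  return 0;
}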

Saying all languages have design quirks is not an excuse for them. Rather we can learn and try to improve with the benefit of hindsight. I would embrace C++ but to my untrained eyes it appears to have gone the opposite direction, making the language monstrously complex.

Also here's a good resource for safe C code.
Pretty much all of it is designed to prevent undefined behavior and I have a hard time imagining any realistic scenario where you could argue with these rules and recommendations.
 
I am not changing my stance on C strings being a design flaw that is now standardized in the standard library. They are hard to get right for beginners and hard for experts too.
It'll be hard to find anyone that actually revels in the design of C strings, but you did say they caused billions of dollars of mistakes. No, they did not. Programmers caused those errors, not the language. C strings are error-prone if you are not careful, but they are not "hard" for experts. It's standard array operations and remembering to append the null terminator. If that's hard, then god help you.
(Furthermore, it's not common that you'd really be using stdlib C strings in most situations.)

If you want to attack C for design, go towards its macros and weak type system. Those are much more fertile areas.

You could make the argument about the language not forcing you to pay the cost of what you don't need by having unsafe be an explicit option rather than having unsafe be the default. This is a design decision and a balance between hand-holding and actually preventing horrendous mistakes.
You're right. It's a design decision. C decided unsafe was default. Argue for or against it any way you like. But don't pretend that linters, debuggers, and compiler warnings don't exist. These "horrendous mistakes" are, for the most part, very simple to avoid in most contexts once you gain some experience. Either way, there are plenty of languages that take both approaches.
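For example, the kind of overflow at issue is exactly what the standard tooling flags; a minimal sketch (the exact diagnostics vary by compiler and version):

C:
#include <string.h>

int main(void)
{
  char buf[4];
  /* Writes 12 bytes into a 4-byte buffer. Recent GCC/Clang warn about this
     at compile time (e.g. GCC's -Wstringop-overflow), and building with
     -fsanitize=address aborts with a stack-buffer-overflow report at runtime. */
  strcpy(buf, "hello world");
  return 0;
}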

Saying all languages have design quirks is not an excuse for them. Rather we can learn and try to improve with the benefit of hindsight. I would embrace C++ but to my untrained eyes it appears to have gone the opposite direction, making the language monstrously complex.
I certainly wasn't trying to excuse C, more just trying to warn you that landmines and stupid idiosyncrasies are everywhere, even in the "safe" languages, and to watch out for them.
 
I disagree that they are not hard for experts. Experts are humans, and the language design creates a cognitive load on the programmer; the resulting mistakes sometimes slip by a linter. On the SEI CERT pages there are a bunch of examples that are obvious when given to you in a 5-line code snippet but not so obvious in a 1000-line program.

https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152038
https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87151974

(side note: I know a few people who work at the SEI and none of them are idiots. All have a very good understanding of their specialty.)

Yes, the macro system is horrendous, and so is the NULL constant also being zero, and the use of null pointers at all.

The problems with C strings are so widespread, any vulnerability database is full of them. In fact one vulnerability with sudo was just announced this month. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3156
Some of the errors are caused by bad coders, but I cannot write off the thousands of string and buffer related vulnerabilities as all from bad coders.
 
How is Go a systems programming language if it gets removed from backends for not being performant enough? I don't believe that anything garbage collected can actually be a systems language. Discord and by now others have ripped out Go backends in favour of that which is not Go.
Yes, people do rewrites, and some systems languages are faster than others. Rust is expected to be faster than Go.

I definitely agree that's how Go was marketed, but in reality it's just not a good choice for a backend, for a number of reasons. Mostly that backends really benefit from async because you can't rely on the state of the clients you're serving, or else you'll end up with a Slow Loris problem. Maybe you can get around that with stricter timeouts, but that's going to make the GC more important, which is not Go's best attribute.
Go does benefit from everything provided by async. That's what goroutines are doing. The goal is to have one goroutine per connection. The runtime schedules them according to what is blocked on IO. Additionally, the runtime divides the goroutines over OS level threads, so that you can have the same number of goroutines running in parallel as you have cores.

I've never seen anyone switch from Python to Go, so I don't like that people use that comparison. I think it's out of character for the niches, which is why I criticise the claim. I'm trying to say that speed is not Go's saving grace, even though it's often advertised as such. I think they pick Python to compare because it makes Go look better, not that it's actually damning for Python.
I don't know who is comparing Go with Python. They don't compete in the same space. Go should be compared with a language like Java, which is also used heavily for web backends. No one should use Python in this space for anything that needs to scale.

For some data, here are the latest techempower benchmarks. The top 10 are C, C++, Rust and Go. Python appears around 180. They're not in the same league. They're not even in adjacent leagues.

I have seen Go in projects' tools directories and it's used in website frameworks. (Maybe you mean web frameworks are backends; I don't think of them as that.) Both of which are things that Perl was traditionally used for. So, I think Go might become the new Perl, but I'd still not use it. I think the syntax is gross and the people who make it are dumb.
Perl has never had a place in high performance web backends.

Go is a language where the language designer explicitly stated it was designed for programmers who weren't good enough to use a proper, fully featured language. To think it's a language that would be used in any true systems context over an actual systems language is crazy.
I posted his quote earlier in the thread. However, if he's thinking about "fully featured", he hasn't got C in mind, which also has piss all in the way of features compared to a contemporary contender like Rust.

Well, remember that C is like, what, 50 years old now? Saying that the way it handles strings is a design flaw is like saying the Model T not having airbags is a major oversight. C was considered a high-level language in its time, but today's must-have concepts like memory safety were purely theoretical back then if they existed at all.
The two oldest languages still extant are Lisp and Fortran. They are both memory safe.

I don't give C a free pass because it's 50 years old. Smalltalk was invented the same year as C, and even back then, C was clearly an uninspired, slapdash language used to design an uninspired slapdash operating system. UNIX and C's virtue was their simplicity, which made them easy to port to the stock hardware of the time. But it was way behind the state of the art even in 1972.
 
One feature of C arrays/strings I do like is the ability to use basic pointer arithmetic to operate on a subarray as if it was its own structure while still sharing memory with the main array. If we have an array a = [1, 2, 3, 4, 5], it's simple to pass a function a+2 with a size of 2 so it sees the array [3, 4]. A lot of algorithms can be implemented very efficiently and elegantly using this.
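A minimal sketch of that exact case (the sum function is just illustrative):

C:
#include <stdio.h>

/* Works on any (pointer, length) pair - it doesn't care whether that pair
   describes a whole array or a slice of one. */
static int sum(const int *xs, size_t n)
{
  int total = 0;
  for (size_t i = 0; i < n; i++)
    total += xs[i];
  return total;
}

int main(void)
{
  int a[] = {1, 2, 3, 4, 5};

  printf("%d\n", sum(a, 5));      /* whole array: 15 */
  printf("%d\n", sum(a + 2, 2));  /* the subarray [3, 4]: 7 */
  return 0;
}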

There's no technical reason higher level languages can't support it, Common Lisp does (it calls them displaced arrays), but in my experience most don't. It's not possible with C++ vectors without some very ugly workarounds.
 
There's no technical reason higher level languages can't support it, Common Lisp does (it calls them displaced arrays), but in my experience most don't. It's not possible with C++ vectors without some very ugly workarounds.
Not really, you just default to iterators instead of pointers.

I gather you'd like something akin to
Code:
std::string_view
but for general containers, and this is what iterators are for.
 
I've had some free time recently and thought that maybe it's about time I stop playing so many vidya and work on one of my own. I haven't done any game development at a serious level so I knew it'd be a heap of learning, but what the heck, something new to put on the CV. I didn't consider Unity at first because I thought it was one of those all-encompassing things that would force me to use its built-in IDE rather than one I already use and am familiar with, but after doing some research and realizing that wasn't the case, I downloaded it and gave it a try.

Now what happens when you launch Unity? Well, Unity doesn't launch. Instead, Unity Hub launches. Unity Hub is a separate app to manage Unity projects and it looks like this.

View attachment 1875006

Now as a web developer, I recognized the look of this immediately. It's Material Design, the "design language" that Google uses in most of its sites as well as Android on the whole. But putting aside the fact that they have a separate app for project management rather than just including it in the base app, why the fuck is it Material? They didn't even go out of their way to customize the colors or anything; it looks like something Google would have made themselves. Is this not an actual app, and instead some Electron garbage using Google's Material CSS assets (which they provide as FOSS) to quickly hack together a UI? Was this built in Android and then ported to desktop using one of those converter systems? mystery.gif

Well, whatever. I create a new project and follow along with a tutorial, but I'm finding that even though Unity isn't forcing me to use it as an IDE, it's still apparently assuming I'm going to be doing a lot of work in this Unity program itself. And just like with Adobe's recent garbage, Unity completely forgoes using standard UI elements and has its own implementation of even the simplest form widgets for no apparent reason. Is Unity itself written in Unity? There's this massive list of properties and sliders and stuff to edit attributes of the sprites and stuff in the project. I don't know, perhaps it's possible to change all these attributes in code, but the newbie-level tutorials I was finding weren't going to teach me how to do that. I started to wonder how pros could really stand to build full-fledged AAA games in this thing.

This is where I lost it, though.

View attachment 1875023

If you're a Mac user, you already know what is wrong with this window. If not, feel free to compare it to the previous screenshot, which at least managed to implement the title bar correctly.

So fuck Unity. I've got a couple Cocos2d-x tuts open in some tabs and I'm going to give them a deeper look tomorrow; Cocos2d-x has a GUI tool like Unity, but it's entirely optional and it is apparently still possible to use it as just a code library. If anyone has other suggestions, I'd appreciate it. (Requirements: sprites and preferably tilemaps, at least Mac and Windows compatibility, free or with a reasonable free tier, preferably usable from C++, Swift, or C or C# as last resorts (no JS or Lua, please). Physics not necessary.)
Unless they've changed something, a Unity project hooks directly into Visual Studio C# and you get every benefit involved in running that. Put breakpoints into your code in VS, play in the Unity editor, and VS will be called up if everything is configured correctly, iirc. It's a bit like remote debugging an Xbox in a way. It's very nice, unless they've ditched that for some reason. VS C# is free for personal use, so look into that.
 
(side note: I know a few people who work at the SEI and none of them are idiots. All have a very good understanding of their specialty.)
You started off the entire discussion admitting you have very little experience and knowledge about C, then base your argument around powerleveling and an argument from authority. (I don't think you have the ability to assess your friends' expertise level, sorry.) Fantastic.
Yes, the macro system is horrendous, and so is the NULL constant also being zero, and the use of null pointers at all.
There is nothing wrong with null pointers. They, just like void*, have a very important place and while abused frequently, are not a design flaw.
The problems with C strings are so widespread, any vulnerability database is full of them. In fact one vulnerability with sudo was just announced this month. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3156
Some of the errors are caused by bad coders, but I cannot write off the thousands of string and buffer related vulnerabilities as all from bad coders.
I really hope you also write off SQL as being intrinsically flawed due to the amount of SQL injection problems associated with it over its lifetime. After all, it's the language's fault that the programmer isn't sanitizing its inputs, right?

I posted his quote earlier in the thread. However, if he's thinking about "fully featured", he hasn't got C in mind, which also has piss all in the way of features compared to a contemporary contender like Rust.
I agree with you entirely. C's contemporary relevance is generated entirely by its ubiquity and not by its merit. However, C does allow you all the low level operations required to do real systems work, which is something Go cannot do.
 
I put that in one sentence and in parentheses, so idk why you choose to focus on that. However I think the SEI produces much more convincing and supported arguments about security than I can expect from anyone on an internet stalking forum.

SQL injections are a serious vulnerability from user input. C's string functions are vulnerable to user input, but also much more likely to be misused by the programmer due to their design. I don't buy the argument that experts are automatically immune from making mistakes, because of the cognitive load imposed by language design, but I can't convince you of it, so I won't belabor it further.

For the pointer arithmetic thing: I think other languages support it in terms of array views. Well, you can pass in iterators to the same effect as passing in array pointers.
 
I put that in one sentence and in parentheses, so idk why you choose to focus on that. However I think the SEI produces much more convincing and supported arguments about security than I can expect from anyone on an internet stalking forum.
I care more about content than sources. Furthermore, the question was simple and can be answered in terms of the language itself; is a buffer overflow the fault of the language, the library, or the programmer?

C's string functions are vulnerable to user input, but also much more likely to be misused by the programmer due to their design.
We've apparently come full circle, because it appears we agree entirely. Yes, buffer overflows are caused by programmer misuse.
I don't buy the argument that experts are automatically immune from making mistakes...
We also agree here, because no argument was ever made that experts are immune from error.
Honestly? I would.
I can empathize heavily with this.
 
I agree with you entirely. C's contemporary relevance is generated entirely by its ubiquity and not by its merit. However, C does allow you all the low level operations required to do real systems work, which is something Go cannot do.
Are you sure? Go's runtime used to be written in C, but for a good while now, it's been written in Go, and that requires real systems work.

That's my preference in language design. If your language has a runtime, it should be the sort of thing you want to self-host. Most languages that need a runtime fail here, with Go being a notable exception.

Being able to write your own runtime, and therefore your own memory manager, means that writing memory unsafe code should be possible. But I'd say the constructs that allow this should be aggressively segregated from the constructs you need 99% of the rest of the time, and abstracted away into safe and battle-tested libraries. If you do need to drop down to use them, mostly to write your own libraries, you should somehow flag that what you're doing is dangerous (see Go and Rust, for example).

Idiomatic C doesn't have any such segregation, so it's still mostly juggling footguns. That said, I think every Linux programmer should be able to read and write basic C, whatever language they use daily.
 
C is still my 2nd fav language after assembler (i know not really one lang) but outside of writing software for boeings I've never had a chance to use it in a job :(
 
C is also useful for microcontrollers where you usually just have a minimal amount of memory (think kb range) and don't really have to worry about these problems. If you don't want to delve into assembler, C is basically the only language you'll reliably find a compiler for all of them.
C really can't be beat for embedded stuff. I was looking at the Rust runtime libraries for embedded devices and they looked really immature. If you put the C and the Rust side by side, the C is more readable. The C versions are cleaner and don't have strange macros and constructions. The way you acquire pins in Rust is also less direct: you first have to initialise the pin interface and then select the pin and mode you want, while in C you can just get the pin and mutate it. This is an example with Arduino:
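(Roughly this kind of thing - a minimal blink-style sketch using the standard pinMode/digitalWrite calls; the pin number is arbitrary.)

C:
// Grab a pin and mutate it directly.
const int led = 13;

void setup() {
  pinMode(led, OUTPUT);      // configure the pin as an output
}

void loop() {
  digitalWrite(led, HIGH);   // drive the pin high
  delay(500);
  digitalWrite(led, LOW);    // and low again
  delay(500);
}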
There is also this Rust example using sysfs_gpio, but it uses std and not an embedded runtime. In terms of my complaints, it looks fine.

All roads lead to Rome and all programs compile to C. Even though rustc is self-hosting, it still just links against the LLVM C bindings to do code generation. Until code generation is not done through C, we'll still have it around. On microcontrollers most operations are not stringly typed. C does not have a great solution to strings. Most things in operating systems, like desktop app dev, are stringly typed. For example, networking is all strings, and you get stories about C buffer overflows all the time. You'll think it's gay that I know this, but the Bluetooth buttplug hack was a buffer overflow in embedded C. Very believably it comes down to skipping over the string's null byte.

Like what @awoo said, it really is a C problem that when you iterate over a char array in C, the array doesn't carry any information about where its last item is. I'm not sure how strlen is implemented; maybe it is susceptible to this problem. The real problem is that C does not have foreach loops. You can't have a buffer overflow if your loop stops when you no longer have data to read. The design flaw is checking for something that might not be there, which by extension is insecure. The better practice is not to trust that your data has been created correctly - correctly meaning terminated with \0.
C:
#include <stdio.h>
#include <string.h>

int main(void)
{
  /* Null-terminated: the length comes from scanning for '\0'. */
  char hello[] = "hello world\n";
  int len = strlen(hello);          /* counts bytes up to, not including, '\0' */
  for (int x = 0; x < len; x++)     /* '<' so the terminator itself isn't printed */
    {
      printf("%c", hello[x]);
    }

  /* No terminator at all: the length is known at compile time. */
  char hello1[] =
    {'h','e','l','l','o',' ','w','o','r','l','d',' ','1','\n'};
  int len1 = sizeof hello1 / sizeof hello1[0];
  for (int x = 0; x < len1; x++)
    {
      printf("%c", hello1[x]);
    }

  return 0;
}
In the above example, we don't rely on the data telling us how to react to it; rather, we take the compiler-defined length of the data and use that to govern our operation on the data. We can trust the compiler as much as we are willing to trust the binary created by the compiler.
But I'd say the constructs that allow this should be aggressively segregated from the constructs you need 99% of the rest of the time, and abstracted away into safe and battle-tested libraries. If you do need to drop down to use them, mostly to write your own libraries, you should somehow flag that what you're doing is dangerous (see Go and Rust, for example).
Since we use language to write programs, it is good to have unambiguous language to convey what you want from the compiler/interpreter. You sound like the adverts for Rust and Go. Memory safety is really not the selling point for me. I like Rust because of the better type system, much better macros, and the lack of header files. Memory safety is nice, and having a permission system for variables is nice, but it's not the sticking point, at least for me.
 