No, you're dead wrong. Implementation details should not mix with whatever you're programming, unless you're programming an implementation. That's the whole point of abstraction.
This separation is the entire reason why I didn't go into how much I fuck with implementation details and enjoy it, and how much I like dealing with performance. You know why? Because that shit belongs in a library, which is what I write. The inability to keep things separate is the mark of a primitive mind.
Why would you ever program something that, by definition, doesn't do anything? You and your ilk are fucking loons.
Move along, kid. The rest of us actually program out in the real world, which involves coding with other programmers. Those layers of indirection that you're bitching about exist exactly so other people can work on the same program independently of you, without having to keep every piece of your code fresh in their mind while they're working on their own code. Is it a perfect system? Of course not. But it's leagues better than what you're proposing. Read up on what the bus factor is sometime, and why your employers (lol) want to keep it as high as possible. Which means they will always and immediately put the kibosh on nonsense like this:
And this:
And this:
And this.
For all the grief you're giving the FP zealots (some of it deserved), your head is even higher up your ass.
First up, the second and third examples you quoted are both things that lower the bus factor. Are you so inexperienced that you have never felt the pain of reading overly-factored code, or the pain of malformed implementations violating your code's assumptions? Did you know that Java is actually a somewhat pleasant language to use if you go against the grain and do things as concretely as you can? It's true! You'll write far less useless boilerplate, and the code that actually matters for understanding what your program does won't be spread across the universe. If nothing else, all other things being equal, each line of code is a liability, so a shorter solution is a better solution.
Second, you're arguing from the point of view of a big-ass enterprise shop that's got 40+ code monkeys working on their shit. Not everyone works in those conditions. I've spent most of my career working in small teams where anyone getting hit by a bus would instantly kill the project, so there's no reason at all to worry about writing code to mitigate the bus factor. Additionally, communication is easy in small teams: If you need to know something, you just tap your neighbor on the shoulder. If you need everyone to know something, you just go out to lunch, and there's no need to worry about the telephone game. This is one of the big design considerations of my language: It's meant to be a force multiplier for small teams of competent programmers. That's why it has "dangerous" features like manual memory management, operator overloading, and macros. It's not for wagies in cagies.
This is the first time I've ever heard that mentioned in support of an argument.
Are we seriously quibbling over vtable pointer lookups here? If you ask me that's the perfect OOP feature. Convenient when you need it, avoidable if you don't, mostly invisible either way.
The problem is precisely that it's invisible, which means it piles up fast. Moreover, this isn't just about the cost of dynamic dispatch vs. the cost of static dispatch; you also lose the option to inline a function when you use dynamic dispatch. You're not introducing one level of indirection, you're introducing two: the load of the object's vtable pointer, then the load of the function pointer out of the vtable. This is a problem because you're increasing pressure on the cache, and compilers generally can't reason about optimizations that cross function boundaries. The more abstract you make your code, the worse this gets. Of course, inlining every instance of static dispatch can also put pressure on the cache if you do it carelessly, but you get my point: dynamic dispatch is more expensive than it seems at first glance.
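Here's a minimal sketch of the two costs (the types and function names are mine, purely for illustration):

```cpp
#include <cstdint>

// Static dispatch: the compiler sees the exact callee, so it can inline add()
// and optimize across the call boundary (this loop often folds to n*(n+1)/2).
struct Adder {
    int64_t total = 0;
    void add(int64_t x) { total += x; }
};

int64_t sum_static(int64_t n) {
    Adder a;
    for (int64_t i = 1; i <= n; ++i) a.add(i);
    return a.total;
}

// Dynamic dispatch: each call is two dependent loads (object -> vtable pointer
// -> function pointer) plus an indirect call, and the callee can't be inlined
// unless the compiler manages to devirtualize it.
struct AbstractAdder {
    virtual void add(int64_t x) = 0;
    virtual ~AbstractAdder() = default;
};

struct VirtualAdder : AbstractAdder {
    int64_t total = 0;
    void add(int64_t x) override { total += x; }
};

void sum_dynamic(AbstractAdder& a, int64_t n) {
    for (int64_t i = 1; i <= n; ++i) a.add(i);  // indirect call per iteration
}
```

Both loops compute the same thing; the difference is what the optimizer is allowed to do with them.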
You really should spend some time understanding these things before criticizing them.
SRP is very clear and not vague at all. A responsibility is a "reason to change"; it's about managing state. It doesn't say anything about classes, files, or whatever.
Open/Closed has nothing to do with inheritance. It's about being able to inject new runtime behavior without modifying a structure. The strategy pattern is an example of this, but it can be as simple as a function that takes a lambda as a parameter.
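For instance, a sketch of the lambda version (`render_report` and its behavior are made up here, not from any real codebase): the function is closed for modification but open for extension, because the caller injects the formatting behavior.

```cpp
#include <functional>
#include <string>
#include <vector>

// Closed for modification: this function never changes.
// Open for extension: callers pass in whatever per-row behavior they want.
std::string render_report(const std::vector<int>& rows,
                          const std::function<std::string(int)>& fmt) {
    std::string out;
    for (int r : rows) out += fmt(r) + "\n";
    return out;
}
```

Swapping the lambda gives you new behavior without touching `render_report` at all.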
Liskov is very simple, and there are mounds of research showing that following it is a good idea. Essentially: if you're overriding a function, don't put shit that is unexpected in the override.
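The textbook violation is the square/rectangle example, sketched here for concreteness (names are the classic illustration, nothing from this thread): code written against the base class's contract breaks when an override does something surprising.

```cpp
struct Rectangle {
    int w = 0, h = 0;
    virtual void set_width(int x)  { w = x; }
    virtual void set_height(int x) { h = x; }
    int area() const { return w * h; }
    virtual ~Rectangle() = default;
};

struct Square : Rectangle {
    // The unexpected part: setting one side silently changes the other,
    // breaking the base class's implied contract that the setters are independent.
    void set_width(int x) override  { w = h = x; }
    void set_height(int x) override { w = h = x; }
};

// Written against Rectangle's contract; a well-behaved subtype must return 6.
int resize_to_2x3(Rectangle& r) {
    r.set_width(2);
    r.set_height(3);
    return r.area();
}
```

Hand `resize_to_2x3` a `Square` and you get 9 instead of 6, which is exactly the kind of surprise LSP forbids.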
Dependency Inversion again has nothing specifically to do with inheritance; in fact, it's a direct rejection of inheritance! It's the "prefer composition over inheritance" idea that OOP has pushed for the past 15 years, and if any of you were professionals you'd have heard about it by now. All you're doing is pulling the dependencies created inside a class out, so they're created outside that class and passed in as parameters. This way you have better control over the creation and state of your dependencies. (Of all of these, I've seen this one abused the most, via dependency injection architectures, but that's more a symptom of insane cult test-driven development practices.)
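A minimal sketch of "pull the dependency out and pass it in" (the `Clock`/`Logger` names are illustrative): instead of `Logger` constructing its own clock internally, the caller creates it and injects it, which is what makes the state controllable (and testable).

```cpp
#include <memory>
#include <string>
#include <vector>

// The abstraction the high-level code depends on.
struct Clock {
    virtual long now() const = 0;
    virtual ~Clock() = default;
};

// A concrete dependency, created OUTSIDE the class that uses it.
struct FixedClock : Clock {
    long t;
    explicit FixedClock(long t) : t(t) {}
    long now() const override { return t; }
};

class Logger {
    const Clock& clock;              // injected, not constructed internally
    std::vector<std::string> lines;
public:
    explicit Logger(const Clock& c) : clock(c) {}
    void log(const std::string& msg) {
        lines.push_back(std::to_string(clock.now()) + " " + msg);
    }
    const std::vector<std::string>& entries() const { return lines; }
};
```

Because the clock comes in as a parameter, swapping in a real system clock or a fixed test clock requires zero changes to `Logger`.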
Interface Segregation Principle simply says: "Hey, if you're a printer interface, make the consumer of your API implement print(). But that other function called calculateInterest()? Maybe you should put that somewhere else."
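In code, that split looks something like this sketch (the interface names follow the post's own example; the concrete classes are made up): printing and interest math live in separate interfaces, so a consumer that only prints is never forced to stub out interest logic.

```cpp
#include <string>

// Narrow interface #1: things that can print themselves.
struct Printer {
    virtual std::string print() const = 0;
    virtual ~Printer() = default;
};

// Narrow interface #2: things that accrue interest.
struct InterestBearing {
    virtual double calculateInterest() const = 0;
    virtual ~InterestBearing() = default;
};

// A receipt only prints; it never has to fake an interest calculation.
struct Receipt : Printer {
    std::string print() const override { return "receipt"; }
};

// An account opts into both, implementing each narrow interface it needs.
struct SavingsAccount : Printer, InterestBearing {
    double balance;
    explicit SavingsAccount(double b) : balance(b) {}
    std::string print() const override { return "savings"; }
    double calculateInterest() const override { return balance * 0.05; }
};
```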
For all you people who criticize OOP and suck FP dick, it's hilarious that you can't understand some of the best practices OOP has for managing state and complexity.
Enjoy being beaten to market on every one of your ideas by slow programming languages.
Are you stupid? SRP is entirely about how you've organized your code. If a function does a bunch of stuff, i.e. has more than one responsibility/reason to change, then SRP demands that you refactor it. Your code's structure defines how blobs of state are interpreted, so SRP can't be talking about anything else.
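To make the structural reading concrete, here's a sketch (functions are mine, for illustration): a single parse-and-total function has two reasons to change (the input format and the math), so the SRP refactor splits it into one function per reason.

```cpp
#include <string>
#include <vector>

// Reason to change #1: the input format.
std::vector<int> parse_csv_ints(const std::string& line) {
    std::vector<int> out;
    size_t start = 0;
    while (start <= line.size()) {
        size_t comma = line.find(',', start);
        if (comma == std::string::npos) comma = line.size();
        out.push_back(std::stoi(line.substr(start, comma - start)));
        start = comma + 1;
    }
    return out;
}

// Reason to change #2: what we compute from the values.
int total(const std::vector<int>& xs) {
    int sum = 0;
    for (int x : xs) sum += x;
    return sum;
}
```

Whether this split actually pays for itself is exactly what's being argued about in this thread; this is just what the principle demands.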
For OCP, so what if you use a lambda instead of writing a new implementation for an interface? That barely changes my argument.
LSP might as well be "don't write bugs when writing subtypes". Amazing!
DIP is ultimately about coding to an interface, not to concrete types, which is exactly what I'm railing against here. As you say, this can be taken to an even more absurd degree than what I'm actually arguing against.
I don't like FP or OOP, you dolt. Enjoy writing shit software.
Such is my power!
I don't care what anybody else says, reinventing the wheel is BASED, and I wish you well. I'm interested, though: how are you handling memory management? Based on what you've said so far, I presume it's going to be simple C-style memory handling.
Python is really just a big dictionary of more dictionaries anyway
Our language does literally nothing to help you manage memory. There's no GC, there's no BC, there's no ARC, and there's no RAII. What our language does do is stay the fuck out of your way so you can write whatever custom allocators you want.
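Since the language itself isn't shown here, a sketch in C++ of the kind of custom allocator this enables, an arena/bump allocator (the `Arena` type is my illustration, not the language's actual API): allocation is a pointer bump, and everything is freed at once with a reset.

```cpp
#include <cstddef>
#include <cstdint>

// A bump allocator over a caller-provided buffer. Allocation is an aligned
// pointer increment; there is no per-allocation free, only a whole-arena reset.
struct Arena {
    uint8_t* base;
    size_t   cap;
    size_t   used = 0;

    Arena(uint8_t* buf, size_t n) : base(buf), cap(n) {}

    // `align` must be a power of two.
    void* alloc(size_t n, size_t align = alignof(std::max_align_t)) {
        size_t p = (used + align - 1) & ~(align - 1);  // round up to alignment
        if (p + n > cap) return nullptr;               // out of arena space
        used = p + n;
        return base + p;
    }

    void reset() { used = 0; }   // "free" every allocation in O(1)
};
```

This is the sort of thing a GC, ARC, or RAII scheme would fight you on, and that a stay-out-of-your-way language makes trivial.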