Programming thread

Also this is an important lesson for why you don't do COPY . . in your Dockerfile.
Could you expand on this? I am unfamiliar with Docker and a cursory search did not reveal what you meant. I know it’s a container system but the whole world of “just download a preconfigured Docker image” is still very foreign to me.
 
Okay, so? Mike Acton said Microsoft Word takes 10 seconds to load. Were they wrong? Does this somehow invalidate my experience of Photoshop and Word taking inordinate amounts of time to load? Shall I name any other mainstream program that used to work fine in the '90s and 2000s but now is a laggy, bloated piece of shit? The fact that everyone is having the same universal experience of shitty software speaks volumes in and of itself.
Yes, they are often wrong. You shouldn't blindly repeat what others have said. I've used PCs from the '90s, and they took longer to boot up, and programs were sluggish, even on stuff like Windows 3.11. When most of these guys say it was snappier, I swear they are running an old computer with an SD-card adapter or similar. My Amiga 1200 boots in 1 second from a CF-IDE adapter, which didn't exist back in 1990. I am sure it would take ages if I booted it off the original 20 MB hard drive.
 
Could you expand on this? I am unfamiliar with Docker and a cursory search did not reveal what you meant. I know it’s a container system but the whole world of “just download a preconfigured Docker image” is still very foreign to me.
My advice is about writing your own Docker build files, which are called Dockerfiles.

They're basically a collection of very simple commands to be run in a base image. You start with some named image, like a specific version of Debian or something like that, and you can copy in files from the outside world (usually project files in your repo), and then you can run whatever shell commands you want.

Even aside from distributing software, Docker is super useful for doing deterministic tests. I can run a build or some test and know far more accurately that it'll run or build on the deployment machine, because the container isolates out any local environment changes I've made.

It solves the problem of "well it ran on my machine".

A lot of Dockerfiles will start out with "FROM debian:trixie" and then "COPY . .".

This starts from a bare Debian image and copies the entire contents of the current directory into the image's working directory.

Then usually the next step is something like "RUN make", which runs the build just like you would outside of the container, only inside a repeatable environment. Each step is tagged and logged, and parts of the container state are hashed and recorded as a layer. Earlier layers are cached.
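
Put together, the kind of Dockerfile being described looks roughly like this. It's a sketch that assumes the repo has a Makefile at its root, and the apt-get line is my addition, since the bare Debian image doesn't ship with make:

# Start from a named base image (Debian 13 "trixie" here)
FROM debian:trixie

# The bare image has no build tools, so install them first
RUN apt-get update && apt-get install -y build-essential

# Copy the entire build context into the image's working directory
COPY . .

# Run the build inside the container, in a repeatable environment
RUN make

You'd then build it with something like "docker build -t myimage ." from the project directory, and every step becomes its own cached layer.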

The problem with a blanket copy of the whole source directory is that it'll drag in everything, including any sensitive files like configuration files or encryption keys or anything else you might have lying around. I don't know about you, but I often have things hanging around in dev repos that I don't intend to commit. It's recommended to write narrower COPY commands that copy in specific files or directories one by one. (Usually you'll write a .dockerignore as well, but the failure case of a .dockerignore is different from just writing tighter COPY steps in the first place.)
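
To make that concrete, a tighter version of the same build might copy only what the build actually needs; the file and directory names here are just placeholders for whatever your project actually has:

FROM debian:trixie
RUN apt-get update && apt-get install -y build-essential

# Copy only the build recipe and the source tree, nothing else
COPY Makefile .
COPY src/ src/
RUN make

And a .dockerignore in the build context root as a second line of defence, so even a broad COPY can't pick this stuff up:

# .dockerignore
.git
.env
*.pem
secrets/

The difference in failure modes is that a forgotten .dockerignore entry silently lets a file into the image, while a narrow COPY never grabs it in the first place.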

I guess the pre-containerization version of this risk was with PHP, where source code often lived mingled in directories with static content. Back then a crawler could realistically hit endpoints like /oldconfig.php on a few thousand machines and have a decent chance that some dumb sysadmin had left something around that he shouldn't have.

In the containerized world it's a similar thing, except now it's someone building an image with a poorly written Dockerfile dragging in a bunch of stuff it's not supposed to.

Edit: oh god, I just remembered Apache used to have per-directory configuration files (.htaccess) too, didn't it? God, I haven't thought about Apache for years.
 
I am sure it would take ages if I booted it off the original 20 MB hard drive.

... do you know the difference between hardware and software? Actually, don't answer that, because you don't seem to understand what a rhetorical question is either. You just proved my point.

Hardware has advanced over 1000x in performance while software is running slower than ever. The fact that you experience a drastic boot-time reduction by switching to modern hardware while running old system software literally proves my point. And Mike/Jonathan's point as well. These old programs, when run on modern hardware, operate at the speed of light compared to the modern crap equivalents, even adjusting for the increased functionality of the newer versions. Yes, modern Photoshop does a lot more than the original. But it shouldn't be so laggy under any circumstance.

I used Photoshop in the '90s. It was absolutely 'snappier', because the state of programming back then had not decayed into the cesspit it is today. You can even look through the old source code of MS Word 1.1 and Adobe Photoshop 1.0. They were both data-oriented design. The sheer stupidity of micromanaging every single resource, constructors/destructors, OOP, try/catch bullshit, had not yet permeated the ecosystem.
 
Hardware has advanced over 1000x in performance while software is running slower than ever. The fact that you experience a drastic boot-time reduction by switching to modern hardware while running old system software literally proves my point.
Not really. Amiga OS is a very basic OS by today's standards. The OS is made for the machine. The biggest bottleneck in most machines is I/O to disk, so anything remotely faster will make it feel like lightning. What was being communicated is that people have warped perceptions of hardware and software in the '90s. It was slow, shit, insecure and often unreliable. Things are generally much better now.
And Mike/Jonathan's point as well. These old programs, when run on modern hardware, operate at the speed of light compared to the modern crap equivalents, even adjusting for the increased functionality of the newer versions. Yes, modern Photoshop does a lot more than the original. But it shouldn't be so laggy under any circumstance.
Blow is delusional. He thinks he can solve global warming and complex computer security issues. He has a habit of talking out of his backside.

You and Blow using the most enshittified products as a benchmark and attributing it to modern C++ features is fucking retarded. His evidence is him proclaiming "I know it's doing a bunch of try ... catch stuff and unwinding things." That isn't evidence of anything. He is guessing.

You need to actually look at why these programs are slow. Have you even profiled this shit? You actually need to investigate claims made by other people instead of repeating their shit wholesale, which is what you've done. I know when people are just repeating shit that a YouTuber has said.

BTW, I have some old laptops and PCs, and they can run a lot of modern software fine. We are talking 15–20 years old. What they can't run well is JS/CSS effect-laden sites. Modern native programs actually run pretty quickly; even Electron-based stuff like VS Code runs well on 16-year-old hardware. I have run operating systems of the same vintage, and they often run slower than more modern operating systems.
I used Photoshop in the '90s. It was absolutely 'snappier', because the state of programming back then had not decayed into the cesspit it is today. You can even look through the old source code of MS Word 1.1 and Adobe Photoshop 1.0. They were both data-oriented design. The sheer stupidity of micromanaging every single resource, constructors/destructors, OOP, try/catch bullshit, had not yet permeated the ecosystem.
Considering that earlier you attributed poor programming practice to modern C++, I doubt you have any idea what you are talking about.
 
I wish I was as dumb as fat camp so I could spam 5 pages of resource nonsense in a few hours; and not as smart as Private Tag Reporter who elects not to talk about the weird cool things I'm doing in case I ever release my code.

I like talking to you guys :(
 
I liken this exchange to Plato's Allegory of the Cave.

A rare individual makes it out of the cave. Sees the Sun. The blue sky, Fresh air. Understands the nature of reality, and the dream world everyone else is living in. Then he tries to go back in the cave and tell the others about the true nature of things. But the things they are being told goes completely over their heads because they have no context for understanding anything beyond the cave. "Bright, glaring Sun in the sky... what the fuck are you talking about? The Sun is right there (points to shadows cast on the wall). RAII is good (points to shadows on the wall) MEMORY BAD (the cavemen huddle in a corner, shaking in fear). mUh rEeEsOuRcEs. (The cavemen hoot and holler in agreement). To the wretches in the cave, the one who has seen the Light seems crazy. They fling their shit at him like filthy jeets.

But to the One who has escaped the prison, it is the men in the cave who are clearly delusional. Retarded. Dumb as fuck. They still use RAII. Garbage collectors. "Objects". They are deeply and incurably gay.

That's exactly the analogy for the rare programmer who has ascended into Data-Oriented programming. The things I am talking about go completely over your heads because you are stuck in a lower order of thinking entirely. You're too stupid to know you're stupid. Too brainwashed to be receptive to new information. Too blind to see the light. What is required is a total shift in paradigm, and very few people have the mental capacity to make that leap, unfortunately.

When you actually learn how to program in a data-oriented fashion, 99% of the problems proponents of RAII and OOP claim will happen, simply don't happen. Because the solution to memory management is simple. It's elegant. And it has far reaching ramifications to how an entire program is architected.

It is painfully evident that I am so far beyond the average intelligence of the posters here in this thread that my message is not only not understood, but is met with scorn. So I will enjoy the Sunshine while you fester in darkness.
 
RAII and data-oriented design are orthogonal, though I do guess combining them is a PITA in many cases.

For example, you can use RAII to handle the bulk stuff, like allocating an arena that you then use for working on the data, or for when you need data sources/destinations outside of memory. Of course, applying RAII to every single element in a collection would be stupid.
It is painfully evident that I am so far beyond the average intelligence of the posters here in this thread that my message is not only not understood, but is met with scorn. So I will enjoy the Sunshine while you fester in darkness.
That's so arrogant, are you sure your name isn't maldavius figtree?
I like talking to you guys :(
We like you too
 
I wanted to write a rebuttal to the fat guy but I couldn't come up with anything good because his arguments are so retarded. I like data-oriented design; it's a nice tool to use and a nice way to think about problems. It's not a magic bullet. Being honest, the only reason any of that matters is that imperative programming is an ungodly mistake, invented by a crazed homosexual, that is just a smidgen more convenient to emulate with electronics, when in a sane timeline we would have Lisp machines and Holy-Scheme rather than Holy-C.
 
Could you expand on this? I am unfamiliar with Docker and a cursory search did not reveal what you meant. I know it’s a container system but the whole world of “just download a preconfigured Docker image” is still very foreign to me.
A lot of Dockerfiles will start out with "FROM debian:trixie" and then "COPY . .".
Then usually the next step is something like "RUN make", which runs the build just like you would outside of the container, only inside a repeatable environment. Each step is tagged and logged, and parts of the container state are hashed and recorded as a layer. Earlier layers are cached.

The problem with a blanket copy of the whole source directory is that it'll drag in everything.
Storytime:
I run a software "consultancy", which is to say I'm a freelancer but under a better tax regime. In 2022, my main customer fled from the draft to Kazakhstan and I had to apply for a wagie cagie job. They used Python and they did this shit:
1. start with an image
2. copy EVERYTHING
3. underpants
4. profit

Naturally it was slow as fuck. I had zero experience with Docker; they said "lololol just docker compose build, docker compose up, blah blah blah, a retard could do it".
("Ok but how do I set a breakpoint?")
("That's the nice thing, you don't!")
Their programmers did not test code; the development process was that a programmer rawdogged the code, built the image locally to do as little manual testing as they wanted, then handed it over to dedicated manual testers. Each build took about 10-15 minutes.

I said, this is some bullshit. There's no fucking way everyone in the world adopted a technology where you wait 10 minutes to see if your code even compiles; there must be something wrong with it. See, I said, it downloads libraries from the repo every time, this shit is crazy. Why can't I have a local cache of libraries?
They said: ahh, we'll pass your complaint to the network engineer. We'll make a proxy to watch inbound Internet traffic and make it cache file downloads. Then you'll be able to download Python libraries from the intranet and it'll be much faster! :gunt:

The real answer is that Docker caches each intermediate step of the Dockerfile and can then reuse it if it's the same between builds; if your build changes a little, you don't have to rebuild everything from the raw image. So if you write some code and do "copy fucking everything", it's a cache miss, because the previous version of "everything" is different from the current version (you just wrote some new code), and it's going to re-download hundreds of Python libraries and potentially thousands of JavaScript libraries. But if you do "copy requirements.txt ." (or whichever file has the list of libraries), install the libraries, then copy over the rest of the code, the intermediate cached image with the installed libraries will then be reused.
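
As a sketch of what that ordering looks like in practice (the base image tag, directory layout and entrypoint are just examples, not what that shop actually had):

FROM python:3.12-slim
WORKDIR /app

# The dependency list changes rarely, so these layers usually stay cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Day-to-day code edits only invalidate the layers from here down
COPY src/ src/
CMD ["python", "src/main.py"]

With that ordering, editing a .py file and rebuilding reuses the cached layer with all the installed libraries instead of downloading them again.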

(That was the same place that hired an Armenian scammer, had no timezones in mysql, did not know what a composite index was, and outsourced the frontend to poo so that questions from backend devs had to go through the CTO.)
 
I wanted to write a rebuttal to the fat guy but I couldn't come up with anything good because his arguments are so retarded. I like data-oriented design; it's a nice tool to use and a nice way to think about problems. It's not a magic bullet. Being honest, the only reason any of that matters is that imperative programming is an ungodly mistake, invented by a crazed homosexual, that is just a smidgen more convenient to emulate with electronics, when in a sane timeline we would have Lisp machines and Holy-Scheme rather than Holy-C.
What's wrong with imperative? The non-esoteric alternatives are functional and object-oriented, with the former being famous for running slowly and being hard to optimize, and the latter being all-out garbage for anything other than modeling GUIs. An imperative core mixed with functional code is often the simplest way to solve problems and the most natural to think about.
 