Could you expand on this? I am unfamiliar with Docker and a cursory search did not reveal what you meant. I know it’s a container system but the whole world of “just download a preconfigured Docker image” is still very foreign to me.
My advice is about writing your own Docker build files, which are called Dockerfiles.
They're basically a list of very simple commands run on top of a base image. You start from some named image, like a specific version of Debian or something like that, then copy in files from the outside world (usually the project files in your repo), and then run whatever shell commands you want.
Even aside from distributing software, Docker is super useful for reproducible builds and tests. I can run a build or a test suite and know far more accurately that it'll also run or build on the deployment machine, because the container isolates away whatever local environment changes I've made.
It solves the problem of "well it ran on my machine".
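To give a concrete sense of it, the workflow usually looks something like this (the image tag and the test command here are just placeholders for whatever your project uses):

    # build the image from the Dockerfile in the current directory
    docker build -t myapp-test .
    # run the test suite inside the container; --rm cleans the container up afterwards
    docker run --rm myapp-test make test

Anyone on the team, or the CI machine, running those two commands gets the same environment, which is the whole point.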
A lot of Dockerfiles will start out with "FROM debian:trixie" and then "COPY . .".
That starts from a bare Debian image and copies the entire contents of the current directory into the image's working directory.
Then the next step is usually something like "RUN make", which runs the build just like you would outside the container, only in a repeatable environment. Each step is logged, and the filesystem changes it makes are hashed and recorded as a layer; layers from earlier, unchanged steps are cached so they don't have to rerun.
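So a naive first draft for a make-based project might look roughly like this (the base image and build command are just examples, not a recommendation):

    # start from a named base image
    FROM debian:trixie
    # pull in build tools; a real Dockerfile would pin versions
    RUN apt-get update && apt-get install -y build-essential
    # set the working directory inside the image
    WORKDIR /app
    # copy the entire current directory into the image (see the caveat below)
    COPY . .
    # run the build exactly as you would outside the container
    RUN make

Each of those instructions becomes a cached layer, so if you only touch your source files, the apt-get step doesn't rerun.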
The problem with a blanket copy of the whole source directory is that it'll drag in everything, including any sensitive files like configuration files or encryption keys or anything else you might have lying around. I don't know about you, but I often have things hanging around in dev repos that I never intend to commit. It's better to write narrower COPY commands that copy in specific files or directories one by one. (Usually you'll write a .dockerignore as well, but its failure case is different from just writing tighter COPY steps in the first place.)
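A tighter version of the same build stage might copy only what the build actually needs, something like this (the file names obviously depend on your project):

    # copy only the files the build needs, not the whole working tree
    COPY Makefile ./
    COPY src/ ./src/
    RUN make

with a .dockerignore alongside it as a backstop:

    # .dockerignore - belt and braces on top of the narrow COPY steps
    .git
    *.key
    *.pem
    .env

The difference is that narrow COPY steps act as an allowlist while .dockerignore is a denylist: forget to list a secret in .dockerignore and it ships, whereas with narrow COPYs a forgotten file simply never enters the image.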
I guess the pre-containerization version of this risk was with PHP, where source code often lived mingled in directories with static content. Back then a crawler could realistically hit endpoints like /oldconfig.php on a few thousand machines and have a decent chance that some careless sysadmin had left something around that he shouldn't have.
In the containerized world it's a similar thing, except now it's someone building an image with a poorly written Dockerfile that drags in a bunch of stuff it's not supposed to.
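If you're ever unsure what actually ended up inside an image, you can just look (the image name and paths here are placeholders):

    # peek at the directory you copied into
    docker run --rm myapp ls -la /app
    # or dump the whole filesystem listing and grep it for anything suspicious
    docker export $(docker create myapp) | tar -tf - | grep -Ei 'key|secret|\.env'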
Edit: oh god, I just remembered Apache used to have per-directory configuration files (.htaccess) too, didn't it? God, I haven't thought about Apache in years.