It still seems like serious overkill to me. Why would you use a second process when you could just use a thread?
Because threaded code is hard to write and easy to fuck up. It looks superficially easy, but as your design grows, if you have to coordinate multiple locks, you will fuck things up someday. Heh, and a given piece of code is never as easy to write as you think it is when you first start the project. Shit will get more complicated.
A threading fuckup is an order of magnitude more annoying to debug and deal with.
This is because threads + locks form a network of dependencies. The potential for a clash might be right there in the design (i.e. I have a threaded queue piping data into this one thread, which is waiting on another threaded queue, and somewhere down the line they both rely on some global lock, like one over a database handle), but because thread scheduling isn't deterministic, the graph doesn't lock up every time — only under the unlucky interleavings.
You might not even know where all the locks in your code are. They could be in client libraries you're using.
So you'll have code running smoothly for a few months, and then in some weird scheduling conditions, the threads all try to grab the same locks in a different order, your process stops, everything shits itself, and no one knows why. You restart it, everything's fine... for 2 days, then it crashes twice within an hour, and then goes on for another few days. And you can't replicate it locally with any sort of consistency. Ghost bugs.
The unpredictability of threading is the problem. And more specifically, it jizzes that unpredictability all over your code's face, instead of keeping it safely elsewhere, like in the kernel, where big corporations sponsor dipshits to handle it.
Whereas with a single thread (and just scaling up using OS processes), you keep that unpredictability out of your process. You can know, for absolute certain, that your code is locking up only in one single place. And you can debug that.
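In Python terms, that model is roughly a pool of single-threaded worker processes. A minimal sketch (`handle_request` is a stand-in for whatever per-request work your app actually does):

```python
from multiprocessing import Pool

def handle_request(payload):
    # stand-in for real per-request work: each call runs in a plain
    # single-threaded worker process with no shared locks to fight over
    return payload * payload

if __name__ == "__main__":
    # roughly one worker per core; the kernel handles the scheduling
    with Pool(processes=4) as pool:
        print(pool.map(handle_request, range(8)))
        # [0, 1, 4, 9, 16, 25, 36, 49]
```

If a worker wedges or crashes, it takes down one process, not your whole app — and you know the bug is somewhere inside that one single-threaded unit.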
When you've got a single, stable unit, you can scale out: at least one instance of your app per core. For most web apps, each request shouldn't take that long, even if you're reading an image off the hard drive or manipulating some JSON.
It's really something you have to experience firsthand to get a taste for how obnoxious it is. Years ago, I tried to implement a game engine using threads. Conceptually it was neat, in that I could just fire off a thread to handle each event. However, the handlers had to coordinate access to the object graph and various objects. When it worked, it worked fine (a bit jittery, though), but when it didn't, it was like a Three Stooges slap fight over resources.
But this is just me complaining about writing low-level threaded code, dealing with locks and threads directly. There are plenty of much nicer multiprocessing models that could use threads under the hood. If you use a higher-level library, that's definitely workable.
Hell, you could write libraries that look like threading but use processes in the background. (It'd be difficult, but possible.) Ultimately it's not about the actual implementation, but the design of the library. Threading libraries suck.
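In fairness, Python's stdlib already ships something close to that: `concurrent.futures` gives thread pools and process pools the exact same interface, so the backing implementation is a one-word swap:

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def work(x):
    # stand-in unit of work
    return x + 1

if __name__ == "__main__":
    # Identical API; only the class name decides whether you get
    # threads or processes underneath.
    for Executor in (ThreadPoolExecutor, ProcessPoolExecutor):
        with Executor(max_workers=2) as ex:
            print(list(ex.map(work, [1, 2, 3])))  # [2, 3, 4] both times
```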
Because the thread is blocked on that processor waiting for its timeslice, while the other thread on the other processor is free to run independently of that action.
If we are talking multicore here.
If you're talking single core, then there is an advantage to using a second process, as it will get a higher priority in the timeslice. Nothing I'd write home about, though.
I'm pretty sure time slices are implemented more or less the same in the kernel, whether for threads or for processes. At least in Linux.
That's only if we're talking about a crippled threading implementation like Python's, where the GIL keeps more than one thread from executing bytecode at a time. In a different language those threads could be running on multiple cores, giving you the same effect as putting the tasks in different processes, but with a bunch of advantages, including reduced overhead and easier resource sharing.
The resource sharing is specifically the problem.
Edit: What's even worse is when people enable timeouts on their locks. Then you won't get deadlocks that lock up the system — you'll just get performance that degrades over time, as your threads fight for the same resource, block, the lock attempt expires and fails, and they retry until one eventually gets the resource.
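Roughly what that anti-pattern looks like (`grab_with_timeout` is a made-up helper, not any real library's API):

```python
import threading

db_lock = threading.Lock()

def grab_with_timeout(lock, attempts=3, timeout=0.05):
    """The anti-pattern: instead of one clean deadlock you can debug,
    every contended access silently burns up to attempts * timeout
    seconds spinning before it either limps on or gives up."""
    for attempt in range(1, attempts + 1):
        if lock.acquire(timeout=timeout):
            return attempt  # how many tries it took
    raise TimeoutError("still couldn't get the lock")

# Uncontended: succeeds on the first try, so nothing looks wrong.
assert grab_with_timeout(db_lock) == 1
db_lock.release()

# Contended: the lock is held elsewhere, so every attempt expires.
# We've wasted ~0.15s and still failed -- multiply by thread count.
db_lock.acquire()
try:
    grab_with_timeout(db_lock)
except TimeoutError:
    print("spent ~0.15s spinning, got nothing")
finally:
    db_lock.release()
```

The nasty part is that under light load every grab succeeds on attempt 1, so the cost only shows up in production as mysteriously shrinking throughput.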
Ultimately these quarrels over resources do need to get resolved somewhere, but I think it's best to keep it as far away from your application logic as possible.
Double edit: shit like this:
https://www.logicbig.com/tutorials/core-java-tutorial/java-multi-threading/thread-deadlock.html
Just spread out through a rat's nest of complicated production code.
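For reference, the textbook fix for that kind of deadlock is to impose one global acquisition order. A sketch with a hypothetical helper (sorting by `id()` here; real code usually orders by some stable key):

```python
import threading
from contextlib import ExitStack

def ordered_locks(*locks):
    """Hypothetical helper: always acquire locks in one global order
    (here, by id()), so a cycle of waiters can never form."""
    stack = ExitStack()
    for lock in sorted(locks, key=id):
        stack.enter_context(lock)  # acquires now, releases on exit
    return stack

lock_a, lock_b = threading.Lock(), threading.Lock()
counter = {"n": 0}

def worker(first, second):
    for _ in range(1000):
        # callers can pass the locks in any order; the helper sorts them
        with ordered_locks(first, second):
            counter["n"] += 1

# Same inverted-order call sites that would deadlock with naive nesting:
t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter["n"])  # 2000, and no deadlock
```

Which works — but it only works if every call site in that rat's nest goes through the helper, which is exactly the kind of global invariant that's hard to enforce in a big codebase.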