I think it comes down to two reasons:
1.) Censorship, pretty self explanatory. When you remove search terms in the backend, you probably kill off other terms that were never the target of the censorship in question. This phenomenon has escalated a lot over the last 5-10 years, to the point where even searching for a bread recipe can be a dead end now.
2.) Traditional/old models being replaced with newer ones that are heavily dependent on machine learning (the AI meme). These new "AI" models are often underdeveloped and lack training data. The companies are hoping that it's only shit now, but that once it gets trained it will eventually surpass the old model, i.e. that it's only a temporary performance dip that will pay off later. I think it can work (the theory says it does), but it won't in practice because of reason 1.): they keep tampering with the models and the data in order to censor the searches, and that constantly breaks things. Machine learning models are just average values on steroids, and if you keep removing the points the computer needs to interpolate from, it ends up being a useless meme (see the toy sketch after this list).
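To put some numbers on the "averages on steroids" point, here's a toy sketch of my own (nothing to do with any real search stack): a nearest-neighbour averaging model fit on a simple curve. Carve a hole out of its training data, the way a censored term list carves terms out of a corpus, and its error over that region blows up because there is nothing left to average over.

```python
# Toy illustration only: a k-nearest-neighbour averaging model, fit once on
# full data and once on "censored" data with a region of points removed.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.1, x.size)

def knn_average(train_x, train_y, query_x, k=5):
    """Predict by averaging the k nearest training points (averages on steroids)."""
    preds = []
    for q in query_x:
        idx = np.argsort(np.abs(train_x - q))[:k]
        preds.append(train_y[idx].mean())
    return np.array(preds)

query = np.linspace(0, 10, 500)
truth = np.sin(query)

# Full data: the model interpolates fine.
err_full = np.abs(knn_average(x, y, query) - truth).mean()

# "Censored" data: drop every point with x in [4, 7]; the model now has to
# bridge that hole using whatever distant points are left.
keep = (x < 4) | (x > 7)
err_censored = np.abs(knn_average(x[keep], y[keep], query) - truth).mean()

print(f"mean error, full data:     {err_full:.3f}")
print(f"mean error, censored data: {err_censored:.3f}")
```

The censored run comes out several times worse on average, and almost all of the damage lands inside the removed region, which is the whole point: the model doesn't just lose the censored terms, it loses everything it would have inferred from them.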
This is what I think is going on: they want to censor the engines, but it breaks the new (AI) models in the backend, because they are more sensitive to tampering than the older models were.
Keep in mind, they have a team dedicated to training models just to detect "disturbing" memes. It's pretty much Google's version of the attached image. There is a database for detecting happy merchant memes.