This honestly can happen in tons of software. Millions of dependencies, very little oversight of what's even in them; all it takes is one GitHub account to be compromised, or one developer with a hidden agenda. These are called supply chain attacks. That's not exactly what happened here, but ComfyUI also has a rat's nest of dependencies. I would always sandbox applications like Comfy. The open source community is far too trusting about running arbitrary code. The number of times some anonymous literally-who links his GitHub on reddit, says "hey people, run this", and people actually just do it is insane.
Very much this. I'd say there's a further problem with Stable Diffusion in that its user base is so much less technical than most projects with this level of rough-and-ready code. Obviously not all of it. There are technical users such as yourself who have good knowledge of the software; there are technical users such as myself who have good knowledge in general, or in our own areas, but not that much familiarity with the software (it takes time to learn stuff even if you have the skills); and then you have the follow-a-guide, post-for-help-on-reddit crowd, which makes up a far larger proportion of this project than most. If someone wrote their own module for Pulumi and stuck it on GitHub, the only people at risk would be people who are equipped to look at it and go "that's not right". It's not 100% guaranteed they will, but that community would be a hell of a lot more resistant than a bunch of enthusiastic guide followers who just want to make booby elf women (mostly).
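To make the "running arbitrary code" point concrete: here's a toy, entirely benign sketch of how a Python package can execute code the moment it's installed. Nothing here is from the actual incident, and the package name is made up; it's just an illustration of why pip-installing some random repo's requirements is an act of trust.

```python
# setup.py -- toy demonstration that building/installing a package from
# source executes arbitrary code. This version is benign: it only writes
# a marker file to prove the code ran. A hostile version could do anything
# the user running pip can do.
import pathlib
from setuptools import setup

# This top-level code runs the moment pip builds the package from source,
# before the user has imported anything or even seen the package work.
pathlib.Path("/tmp/setup_py_ran.txt").write_text(
    "arbitrary code executed during pip install\n"
)

setup(name="innocuous-looking-node", version="0.1")
```

That's the whole trick: there's no "install" step that is separate from "run code someone else wrote", which is why sandboxing (a container, a VM, a throwaway user account) is the sane default for tools with sprawling dependency trees.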
Thankfully, places like GitHub are developing tools to detect such malicious code in real time. Using AI!

We're getting close to the point, I think, where someone uploads some blatantly malicious code and the site itself flags it and raises concerns. It won't be foolproof, but it can catch a lot of the low-hanging fruit. (Of course, that could just create over-confidence on the part of users, who then trust the more sophisticated mal-packages, but hey...)
Anyway, in the "I read reddit so you don't have to" section:
ComfyUI support for SD3 just dropped:
We are (allegedly) one day from the 2B version of SD3 being released.
Someone linked this quite interesting article about colour bias in SDXL.
In short, he says that SDXL has a bias towards yellow due to an absence of blue in its training data, shows how the colour space it uses falls outside the bounds of what a normal colour space would be, and then wrote a bunch of code that, so far as I can see, dynamically corrects colours during generation. Lots of comparison images. He has an interactive demo where you can compare the impact of his different colour-correction techniques against the unaltered image:
Note: this isn't generating an image in real time. He has made a matrix of 300-something possible combinations of the techniques applied at different stages, I think. So the images vary slightly in composition, but it's enough for you to see the effect of the colour correction. I found it pretty cool.
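For a feel of what "correcting a colour cast" means in the simplest case, here's a minimal sketch of gray-world white balancing on a decoded image. To be clear, this is NOT the linked author's method (his works during generation and is far more involved); the function name and strength parameter are my own invention, just illustrating the basic idea of shifting channel means back toward neutral.

```python
# Minimal gray-world white balance: assume the scene should average out to
# neutral gray, and rescale each RGB channel so that it does.
import numpy as np

def gray_world_balance(img: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """img: float32 RGB array in [0, 1], shape (H, W, 3)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)    # per-channel average
    gray = channel_means.mean()                        # target neutral level
    gains = gray / np.clip(channel_means, 1e-6, None)  # per-channel gain
    gains = 1.0 + strength * (gains - 1.0)             # blend with identity
    return np.clip(img * gains, 0.0, 1.0)

# On an image with a yellow cast, the red/green means sit above the blue
# mean, so this trims red/green and boosts blue until the averages match.
```

Applying something like this after decoding is the crude post-hoc version; the article's point, as I read it, is that intervening during generation gets you correction without fighting the model's composition.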
Finally, I think I found one of the worst posts on the Stable Diffusion subreddit:
First the guy starts drawing unexplained analogies between the trained resolution of SD3 and Composite vs. S-Video vs. Component cables, saying that something may be the same resolution but better quality. The analogy makes little sense and is just trying to spin SD3's 512x512 training resolution as not a bad thing. It is. Then you get guff about how there are over 7,500 papers on Google Scholar that build on the SD model and how "all of this knowledge could be potentially transferred to newer newer architectures [sic]". The majority of that is citation farming, and none of it is about SD3 specifically. Then a bunch of cope about how 2B isn't a "skimped model" because "if the 8B model is undertrained a much smaller model can outperform it". Well, sure; IS that the case? And are you saying the SD3 8B version IS undertrained?
I don't know why I'm reporting on this post here, other than that it annoyed me and it's perhaps the most perfect example I've ever seen of someone dressing up absolutely no information or insight in high-flown language and logical fallacies and getting away with it. It's like the Jabberwocky scene from Better Off Ted in real life. But that's 90% of reddit, I guess.
Anyway, imo, we ARE getting a lesser model with the 2B version. I suspect most of the stuff about "we want as many people as possible to be able to run it" is after-the-fact spin on their not being ready with the larger models yet. I base that on the fact that people serious about this should be able to get the hardware to run larger models, or already have it. And the 512x512 resolution, if that's finally substantiated, is just piddly and crap.
Nonetheless, I am keen to see SD3. My experience with it via the API shows a lot of potential, though I suspect what's behind the API is a larger model than the 2B version.
Finally, apropos of nothing, this image I generated amused me as a great example of how generative AI can go down a wrong path.
I was playing around with what the ICBINP model was capable of, as it has really impressed me with its realism. I wanted to give it something a little outré, so I asked it to generate an image of Supergirl, specifically flying high above a city, viewed from above. As you can see, it started with the essentials, but then its generative nature extended things down to the ground and added details like a shadow that still attaches her to the ground. The end result is a weird, forced perspective on a 16m-tall Supergirl that still has realistic detail. Weird in all sorts of subtle ways.