My understanding is that the AI will only do what you tell it. So if I said, for instance, "draw me a centaur" that plagiarizes someone else's work, then yes, that would be infringement. If you ask it to draw a catgirl it will do it, but you're not allowed to copyright the result as your own work.
In order: yes, but also no, potentially, and that's where we enter truly murky territory that our laws and societal views haven't quite caught up on yet.
If I give Stable Diffusion a prompt, it will run the prompt through its neural network and produce several images as outputs based on a provided or random seed number. How well it succeeds depends on both the prompt provided and what model I've chosen to run it through; results can vary wildly. If I tell the AI to do something it doesn't understand (hasn't been trained on), it's going to do something else based on the neural network that resulted from the data it was trained on, so the AI can potentially give outputs that make one think it didn't do what it was instructed to do. Case in point:

In a batch of goth girls it produced that. How or why, I'm really not sure, but it was a rather notable hiccup in the expected outputs.
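For the curious, here's roughly what that workflow looks like in code. This is just a minimal sketch assuming the Hugging Face diffusers library; the checkpoint, prompt, and seed are illustrative placeholders rather than anything I actually used.

```python
# Minimal txt2img batch sketch (assumes the Hugging Face diffusers library).
# The model, prompt, and seed below are placeholders, not the actual settings
# behind the goth girl batch.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

# The seed is what makes a run repeatable: same model + prompt + settings + seed
# should give back the same batch of images.
generator = torch.Generator("cuda").manual_seed(12345)

result = pipe(
    prompt="goth girl, portrait, detailed",  # hypothetical prompt
    num_images_per_prompt=4,                 # generate a small batch
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
)

for i, image in enumerate(result.images):
    image.save(f"output_{i}.png")
```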
So regarding using AI to plagiarize a specific centaur made by a specific artist that was part of the training data, we've got a problem. When AI systems train on datasets they don't learn how to exactly recreate the input images; instead they learn how to make something very similar to them. So if we say Sailor Cat was part of a set of training data:

Assuming the image was tagged and labeled appropriately, any model trained on it would learn its general characteristics and could recreate images with similar characteristics if given the appropriate name, but it would not be able to recreate this specific image. This is why AI systems are really good at making things that look a lot like the Mona Lisa, but they don't recreate the Mona Lisa itself. So in this example of using AI to infringe on someone's specific centaur, it's going to draw upon the network formed by all of the centaurs and whatever gender was specified in creating its outputs. But if we go to img2img and give it the image we wish to infringe upon as an input, that's a much more clear-cut case so long as you lack explicit permission to do so.
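To make that img2img distinction concrete, here's a rough sketch of the workflow, again assuming diffusers. The file name, prompt, and strength value are hypothetical; the point is that the starting image, not just the prompt, now drives the output.

```python
# Minimal img2img sketch (assumes diffusers). Starting from someone's specific
# image instead of pure noise is what makes the output derivative of it; a low
# strength keeps the result very close to that input.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("someones_centaur.png").convert("RGB")  # hypothetical input

result = pipe(
    prompt="centaur, fantasy illustration",  # hypothetical prompt
    image=init_image,
    strength=0.3,        # low strength = output stays close to the input image
    guidance_scale=7.5,
)
result.images[0].save("derivative_centaur.png")
```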
And now is as good a time as any to touch on the legality of the model training. The computer basically does what people do when they look at an image and learn from it; it just does it on a much faster and larger scale. As far as I am aware, there are no laws that criminalize AI training even on copyrighted materials (because it's what humans do too); instead, legal efforts have focused on making things like AI-generated CSAM and deepfakes punishable, and on the scraping often used to build the datasets, since that's almost always a violation of the terms of service of the sites being scraped.
Now, as for copyrighting your own outputs, generally that's a no, but we have to ask a few questions. Did you make the model you're using yourself, or are you using someone else's model, and if so, what permissions did they grant? Generally any txt2img output will not be copyrightable, as it's basically a mathematical process: anyone using the same software, settings, and model can put in the same inputs and get the same image as output. But if I were to take that output and do this:

Now we have to decide where to draw the line on how much human authorship is required to make it my copyright. If I make changes to the image and run it through img2img several times, again, how much authorship is necessary to make it uniquely mine? Nobody else has the series of input images I'm using, nor can they easily recreate them and get the exact same outputs, so we've used AI to make something unique and practically impossible to reproduce. This is the part we haven't quite figured out yet and likely will not for some time; it's similar to multiple people taking photographs of a sunset, each with their own copyright over very similar images.
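For what it's worth, the "mathematical process" point from a few paragraphs up is easy to demonstrate: with the same model, settings, prompt, and seed, two independent runs hand you back the same image. A sketch, again assuming diffusers, with a placeholder prompt and seed (minor GPU nondeterminism aside, the outputs match):

```python
# Reproducibility sketch (assumes diffusers): two runs with identical model,
# prompt, settings, and seed should produce the same pixels, which is the crux
# of the "it's just a mathematical process" argument for bare txt2img outputs.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate(seed: int):
    gen = torch.Generator("cuda").manual_seed(seed)
    return pipe(
        "a catgirl, digital art",  # hypothetical prompt
        num_inference_steps=30,
        guidance_scale=7.5,
        generator=gen,
    ).images[0]

a = np.array(generate(42))
b = np.array(generate(42))
print("identical pixels:", np.array_equal(a, b))  # expect True (or near-identical)
```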
Amusingly, if we say that any AI use whatsoever means an image cannot be copyrighted, then images run through services like Glaze, which attempt to poison the well of images used in training data, also wouldn't hold valid copyrights, as those systems use AI in their process too. Whoops!
As for the seething fatty, they can get fucked; I have paid absolutely nobody for the thousands of generations I've done and I'm not going to do a speedpaint at their demand that they can steal.