The real problem, which Nicholas Carlini explains in his blog post, is that thousands of tech-illiterate artists are "glazing" their images and expecting it to make a damn bit of difference. Their glazed work will get scraped anyway, Glaze will get defeated, and they'll be left in the same position as, or worse than, if they had taken down everything they could and stopped uploading. That's why Carlini and his co-authors felt it was important to attack Glaze into the ground ASAP. Carlini compares the situation to the NSA storing encrypted traffic and files now so that it can decrypt them all later.
I don't think anything like Glaze that aims to be visually imperceptible to humans can work for more than a year or two. It will always be defeated, either by a model that "sees" more like a human does or by someone throwing a blur or filter over the image to wash out the imperceptible perturbations. I think it's possible that algorithms could even defeat "deep fried" images, just as they can already remove watermarks. If a human can look at a set of images and pick out the patterns of an artist's style even under a pile of filters, what's stopping a computer from doing the same?