it's not like a specific image is "fried" into the model
Typically, some fraction of the training set can be pulled back out of any model, often imperfectly, if you know how it was tokenized. The public-facing AIs don't make this information public, and they further obfuscate things via randomization, but with the right prompts it's not too hard to get an idea of which specific images are embedded in the model; most album covers, for example, are baked in. If you knew the tokenization scheme and could disable the randomization, you could pull out the album cover reliably. As it is, it comes out slightly differently each time with the exact same prompt.
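To make the seed point concrete, here's a rough sketch of what I mean, using the open-weight Stable Diffusion checkpoint through the diffusers library rather than a hosted service. The checkpoint name and prompt below are placeholders I picked for illustration, not a tested extraction: pin the seed and the same prompt gives back the exact same image every run; leave it random and the output drifts a little each time.

```python
# Rough sketch using open Stable Diffusion weights via Hugging Face's
# diffusers library. Checkpoint and prompt are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any SD checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

prompt = "album cover of some well-known record"  # hypothetical prompt

# Pinned seed: denoising starts from the same latent noise every run,
# so the exact same prompt reproduces the exact same image.
seeded = pipe(prompt, generator=torch.Generator("cuda").manual_seed(1234))
seeded.images[0].save("run_seeded.png")

# No generator supplied: the starting noise is drawn fresh each call, so the
# output comes out slightly different every time, even with the same prompt.
unseeded = pipe(prompt)
unseeded.images[0].save("run_unseeded.png")
```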

I would say that misuse of the model as a tool is still the user's fault, though. You ask for the Simpsons, you get them, and you're the one who chooses to use the resulting image in a way that draws the ire of the copyright holders.
The fundamental issue is not that you, random Kiwifarms user Runch, used ChatGPT to generate some code with snippets of my code base in it. The fundamental issue is that ChatGPT took my code base without my permission and is making money from it by selling a laundered version of it to you. It's not just that they scraped content and are using it for their own academic curiosity. They scraped the content, built a commercial product out of it, charge people to use it, and don't want to pay anybody whose content they used. OpenAI's response is that if it had to pay me for my code instead of just taking it, it wouldn't be able to make money, because its expenses would be too high.
Artists are stupid people, which is why they make stupid arguments, but this is really what it comes down to: Should Google and Microsoft be able to make money by harvesting and selling your content without paying you, or should they have to get your permission first? If Microsoft wins against the plaintiffs, then effectively, you own nothing. The tech giants own everything. If you put something online, they can take it and monetize it, and you are owed nothing.
Are you arguing that a 6 GB model "stores" an infinite number of images, due to the nature of latent space and the ability to seed your output?
No, because storage requires two sides: putting the thing in, and pulling it back out with the right command. The images that are stored are the ones from the training set that can be pulled back out in a recognizable form.
Likewise, you might as well argue that Microsoft Paint "stores" every possible image
You can store any image in Paint, but only by inputting it into Paint first. The same goes for the model: what it stores is what was input during training.