The gullibility is truly astounding. If I were to dare to anthropomorphize these text prediction programs, it would be as schizo chronic liars for whom "correspondence with the physical world" is a lost concept.

You'd expect non-tech communities to develop all sorts of superstitions about how LLMs work, but /pol/ is by far the worst I've seen. Not sure if someone already brought this up, but when one /pol/tard asked GPT for a magnet link to its own weights and/or source code, not a single poster in that thread doubted that the response was a valid magnet link that actually pointed to anything.