I very much disagree; Gemini in particular is extremely prone to pretty wild hallucinations (and the inline source links you see are added in a post-processing step that is very error-prone/imprecise).

Maybe a year ago I'd agree, but modern AI is far less prone to those hallucinations, especially with something as infamous as the 40k StarCraft game theory. It's just Dunning-Kruger morons.
There are fewer hallucinations, mostly thanks to more RAG integration (and some extra post-training), but hallucinations are still fairly common and pop up out of nowhere in places where the model should intuitively know better.
They're also getting much harder to spot, though, because the models are larger (i.e. better at sounding plausible).