Even without the woke bias fiasco, what is the point of this? Given that its deployers are a bunch of hand-wringing moral busybodies, how could they not perceive this as a "dangerous" product even if it were optimally shackled? When pushed, it becomes apparent that GPT is not "intelligent" in a humanlike way: it has no epistemic principles, and for all I know it may not even be possible to give epistemic principles to this type of AI. That manifests as the appearance of absurd confidence when it gives answers that are utterly wrong. Given how gullible normies are about anything with even the most superficial appearance of humanlike intelligence, why would such insufferable prudes dare release it to the masses, even in a neutered state? How fucking high on woke Kool-Aid are they that their oversensitive morals can allow this epistemic nightmare of a program, but only so long as its lies aren't naughty ones?
At best it will be a failed experiment that can do nothing but give two kinds of answers: woke bullshit with a hint of truth, or lies with a hint of truth.
The answers are only as good as the datasets it was trained on, and I doubt we will ever know what those are.