Eventually I realized the first site was just plain wrong and kept looking, but my time was still wasted. And I don't get to interrogate whoever made that first site about why they'd publish such terrible advice. At least with an AI I can immediately call bullshit and ask it to source where it's getting its nonsense from, instead of leaving the site with nothing and no leads as to what is ACTUALLY correct.
ChatGPT is known to hallucinate sources as well.
Article about it.
I feel like it would be easier to look up opinions on resources, even on reddit, before using them than to try to wrangle an AI and verify its correctness.
The issue with AI is that most of the things it gets right are already widespread on the internet, which is exactly why it is likely to be correct about them.
What I think happened with the match-case question in Python: asking about its performance is not a common question. However, the wording is similar to questions about switch statements in other languages, and questions comparing switch performance to if/else are especially frequent. So it generated an answer matching what's frequently seen in its training data.
Which leads to the biggest issue: the harder the question, the higher the likelihood of hallucinations.
Even worse, the more preconceived ideas you have about something, the more likely you are to nudge it towards a wrong answer that plays into your biases.
Like here, assuming that match-case would perform better than if/else.
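If you actually want to know instead of trusting the model, it takes a minute to measure it yourself. As far as I know, CPython compiles a match on plain literals into roughly the same sequence of comparisons as an if/elif chain, so there's no switch-style jump table to win anything from. A minimal sketch (Python 3.10+, the function names and test values here are made up for illustration):

```python
import timeit

def dispatch_if(command):
    # plain if/elif chain over string literals
    if command == "start":
        return 1
    elif command == "stop":
        return 2
    elif command == "pause":
        return 3
    else:
        return 0

def dispatch_match(command):
    # equivalent match-case over the same literals (requires Python 3.10+)
    match command:
        case "start":
            return 1
        case "stop":
            return 2
        case "pause":
            return 3
        case _:
            return 0

if __name__ == "__main__":
    for name, func in [("if/elif", dispatch_if), ("match-case", dispatch_match)]:
        # "pause" hits the last branch, so both versions do the same number of comparisons
        t = timeit.timeit(lambda: func("pause"), number=1_000_000)
        print(f"{name}: {t:.3f}s")
```

Run it and you'll likely see the two come out about the same, which is the kind of check that exposes a hallucinated "match-case is faster" claim in a couple of minutes.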
I would really be careful with those AI tools.
Also, if you are in dire need of resources to learn, go visit Library Genesis and "borrow" some books from reputable publishers.