This is what's confusing me too: if his dad is a well-known executive, wouldn't it be known how many children he has? The only justification I can come up with is that whoever made the call was low-level enough to never directly interact with the man and made the autistic assumption that "same surname = same family." Coders are not known for their social awareness.
Risk-based assumption. If you assume they're related and you're wrong, then at worst you gave someone slightly favorable treatment on a false premise. Assume they're not and get it wrong, and you might piss off not one but two people who have every incentive to fuck you over for it - especially if the person holding that stick is Mald.
When in doubt, most people take the path of least immediate risk.
Also, normal people show their real face to their friends; manipulative people don't always do that. Jason's friends are likely to be resistant to the facts that have come out against him, because "I've seen the real him, he's not like that at all." Even if they do accept it, they probably still want the best for him and don't want to dogpile with the mean-spirited mob. They would basically need to feel personally betrayed to come out publicly against him.
And his friends are already going to be filtered down to the weak-willed, the ignorant, and the easily used. Anyone with the ability to form their own opinions and stand by them is almost certainly going to bounce out of Mald's social circles the moment they catch a whiff of his bullshit. The remainder are the petty kind who'll side with anyone who gives them a feeling of power.
Maybe I'm blazing-rainbows optimistic on this, but I think AI is (eventually) going to get good enough to replace every human job. Everything we've seen so far, even the best of it, is a prototype. In four years this shit has gone from cave scratchings to an art student's 201-level class in AI art alone. But forget creative fields, because people get mad about those. Productivity fields? AI is gonna crush those in maybe three to five years.
Nah, the current large language models and similar neural nets are already hard-capping in capability, and there's no real line towards AGI from their tech base. These systems are physically incapable of being hallucination-free, and that dice roll happens on every individual thing they produce. Ironically, that makes them well suited to image generation: if they completely shit the bed on 1-5% of the details, you still get a slightly wonky but pretty good piece of art. But shit the bed on just one line of code out of ten thousand, and the entire program fails to compile. Shit the bed on one statement in a financial, scientific, or industry analysis, and you're facing massive liability for deceptive statements. This isn't even theoretical: an airline already lost a lawsuit where its chatbot made up a policy that wasn't real, and the airline got held responsible for it.
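To put rough numbers on that dice roll (my own back-of-the-envelope sketch: the error rates are made up for illustration, and treating each item as an independent coin flip is a simplification, not a claim about how real models fail):

```python
# Back-of-the-envelope: how a small per-item hallucination rate compounds.
# Rates here are illustrative guesses; independence is an assumption.

def p_at_least_one_error(per_item_error_rate: float, num_items: int) -> float:
    """Chance that at least one item in the output is wrong."""
    return 1 - (1 - per_item_error_rate) ** num_items

# An image with ~50 salient details at a 2% per-detail error rate:
# usually a wonky detail or two somewhere, but the art is still usable.
print(f"image (50 details @ 2%):    {p_at_least_one_error(0.02, 50):.1%}")

# A 10,000-line program at a 0.1% per-line error rate: a single bad
# line breaks the build, and at least one bad line is nearly guaranteed.
print(f"code (10,000 lines @ 0.1%): {p_at_least_one_error(0.001, 10_000):.3%}")
```

The image survives its errors because a bit of wonkiness is tolerable; the code doesn't, because compiling is all-or-nothing. That asymmetry is the whole argument.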
The only places the current tech can be used are places where a high failure rate is acceptable: spam, bottom-of-the-barrel automated response systems, and non-commercial use where you don't actually need it to work well because you're just fucking around.
That doesn't mean some completely different line of research can't find a new, better path, but the current GPT and its family? Not gonna happen.