GPT-4 with the right context is already a better conversationalist than some people I know. Definitely smarter and better at understanding, too. If any of the leaked information about GPT-4 is true, it's not only outdated relative to what we now know about how best to build such models; training something like it also costs somewhere in the ballpark of 20 to 60 million dollars US, IIRC. Chump change for big corps and governments, even if that's probably not the full story.

That said, you need skilled people (not easy to get), the right hardware (also not easy to get, and probably silently tracked by every three-letter agency in existence), and training is RNG hell, especially on big models. Seriously, it can't be overstated how random it is. There are loss spikes (something that mostly happens to bigger models; we don't exactly know why) where the model basically goes braindead for no particular reason, and all you can do is roll back to an earlier save, so to speak. Then there are the generally "abnormal" checkpoints you have to purge because they ended up believing they're people, or can't stop talking about the religion they believe they're the messiah of, which doesn't exist. (I kid, but such things have probably happened somewhere, to some degree.)

It's not as crazy as all that, really. It's a bit like gardening: you don't know exactly what the end result will be, but you can rest assured a skilled gardener will produce something reasonably reliable. You still won't know in advance how many leaves the plant will have, if you catch my drift.
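To make the "roll back to an earlier save" part concrete, here's a minimal sketch of what spike handling can look like in a training loop. It's a toy PyTorch example: the spike threshold, checkpoint interval, and model are illustrative assumptions I'm making up for the sketch, not how any actual lab does it.

```python
# Toy sketch: detect a loss spike and roll back to the last checkpoint.
# All names and thresholds here are illustrative, not from a real codebase.
import copy
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

SPIKE_FACTOR = 3.0      # loss jumping past 3x the running average counts as a spike (assumed)
CHECKPOINT_EVERY = 50   # steps between saved states (assumed)

running_loss = None
checkpoint = None
step = 0
while step < 500:
    x = torch.randn(32, 8)
    y = x.sum(dim=1, keepdim=True)          # toy regression target
    loss = torch.nn.functional.mse_loss(model(x), y)

    if running_loss is not None and loss.item() > SPIKE_FACTOR * running_loss:
        # Spike detected: restore the last good save and resume from there,
        # instead of letting the run go off the rails.
        model.load_state_dict(checkpoint["model"])
        optimizer.load_state_dict(checkpoint["optimizer"])
        step, running_loss = checkpoint["step"], checkpoint["running_loss"]
        continue

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Exponential moving average of the loss, used as the spike baseline.
    l = loss.item()
    running_loss = l if running_loss is None else 0.99 * running_loss + 0.01 * l

    if step % CHECKPOINT_EVERY == 0:
        checkpoint = {
            "model": copy.deepcopy(model.state_dict()),
            "optimizer": copy.deepcopy(optimizer.state_dict()),
            "step": step,
            "running_loss": running_loss,
        }
    step += 1
```

In real large-model runs the same idea gets fancier (skipping the offending data batches after rollback, checkpointing to disk, etc.), but the core move is exactly this: save often, watch the loss, and rewind when it blows up.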
Now imagine you create such a thing with the express goal of pretending to be a real person online. No safeguards, no "as a language model", nothing like that. There's no way you're ever gonna snuff that out on social media, where people tend to talk at rather than with each other to begin with. This is gonna be a real problem. Maybe it already is. I think OpenAI has a division dedicated to spotting "unknown AI" in the wild, and it's probably not the only one.