The epistemological discussion is interesting, but I operate at the practical level with these things. And I'm not that experienced with it, so I've been tossing various types of queries at it. I had it suggest an agreement to enter into with an adult child moving back home, covering certain specific concerns, which was pretty good.
Today I threw it a different type of prompt: "why would a person like X repeatedly do Y?" It tossed out some generic potential explanations, then asked if I was trying to give someone advice and whether I wanted a script for the discussion. I said no, told it I was asking for myself, and then wrote a novel adding detail, sequencing, etc., touching on both the repeated behavior (and a notable exception) and some potentially relevant other factors. It posited a couple of different origin stories and internalized beliefs for the behavior. Were they on point? Mostly.

Overall, it's obvious, even to someone coming in cold, that it extrapolates solely from the details you provide, parroting back your words (the parrot analogy is accurate), clearly without the ability to determine which are more significant or even worth mentioning. It behaves a bit like that conversation style where you repeat back what the person said and then respond (iirc that's empathetic listening, which I find a bit ironic, but I digress). And I suppose the more you mention something, the more it may weigh it, so you are guiding it to a large degree and can skew the results.
That said, in the second go-round, the first "why" theory or two were the obvious, usual ones, but the next couple were not, and included some observations I'll file under "reading me for filth." In addition to calling me out (which I always find amusing), it gave me some good things to consider. It did soften the observations with statements meant to convey compassion and understanding, but the upshot was "you're acting like a knob." Fair play, and I no longer have to wonder.
As for the recommendations, they were more specific and directional than anything I'd gotten to on my own. They might not all be apt, but I will say that, as an over-researcher prone to analysis paralysis, I kind of like, for some things, the notion of giving up the need for exhaustive and perfectly correct information or a perfectly honed plan of attack (which ime can function as a great big passive excuse for taking no action at all). It gave me some actionable things that are at least mostly correct and maybe saved me another 1,000 hours of spinning.
My concern with the massive and mushrooming use of it is exactly the stuff I mentioned above that I appreciate about it. It gives you a starting point at best, not an end product. And it's only as good as what you give it, even putting aside bias and censorship.

My company is massively pushing Copilot now. Apparently some people can't even write their own emails anymore without asking Copilot to fix them. Fine, if they suck at writing or conveying information, okay. But the problem I've already seen is that they don't go back and review what the AI wrote; they just assume it's better/right. They're not giving it the actual requirements for what they're supposed to cover, not exercising their own judgment, and not making sure the output provides what's needed. So some dumb sap (me, in my case) has to spend 8 hours revising, ripping out unnecessary information, and going back to them for clarification. Their effort to do less work creates more work downstream, because they don't understand the limits of generic machine assistance. We have a ton of resources now on how to use it, and virtually none on what not to do or why you shouldn't rely on it 100%.
That said, I want to see what it can do with data in Excel and PowerPoint. I have a couple of projects with a fair number of tedious manual aspects, and if I can push a button and get something reliably 80% of the way there, without manifest errors and in a form that's easy to work in (formatted correctly so the underlying tool behaves as expected) and finalize without redoing everything, I'd free up a couple of days a month for higher-order thinking/work.