I get the impression that if you're not copying nearly verbatim from Stack Overflow, it doesn't work so well. It can't innovate much, even on minor variations. This isn't the same thing as Copilot, so perhaps Copilot does better, but this is an interesting article about someone trying to use GPT-4 to write minor variations on well-documented algorithms.
It's interesting to watch the AI try to adapt standard Stack Overflow pathfinding solutions to the specifics of the problem, and it almost gets there: it seems to analyze the problem correctly, but then keeps throwing incorrect solutions at the wall. That last bit is the worrying part. Instead of giving up, it just keeps emitting broken code.
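For context, this is the kind of textbook solution the model is adapting from. A minimal sketch (my own, not from the article) of the standard BFS shortest-path-on-a-grid answer you'd find on Stack Overflow:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid; 0 = open, 1 = wall.
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through came_from to rebuild the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable
```

The point being: this pattern is everywhere in the training data, so the model reproduces it fine. It's the problem-specific tweaks on top of it where things reportedly fall apart.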
Imagine a careless team trying to AI away some app-permissions logic that has subtle edge cases, and suddenly some system has a gaping security hole. I guess sloppy shops were doing that already, but this might make it worse.
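To make that worry concrete, here's a hypothetical sketch (names and structure are mine, not from the article) of the kind of permissions check where the subtlety lives in the default case. The safe version denies by default; a plausible-looking generated variant that falls through to allow on an unknown role would be exactly the "almost right" failure mode described above:

```python
# Hypothetical role-based permission check. The subtle part is the
# fallback: an unknown role or action must grant nothing.

ROLE_PERMS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def can_access(role, action):
    # Deny by default: .get(role, set()) means an unrecognized
    # role maps to the empty permission set, not to an exception
    # path or an implicit allow.
    return action in ROLE_PERMS.get(role, set())
```

A version that raises on unknown roles, or worse, returns True when the lookup "succeeds" in some unexpected way, would pass the happy-path tests an AI-assisted workflow is likely to generate alongside it.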