If you talk with an LLM long enough, in a specific way, that is, you edit and re-edit your original prompts, you nudge, you counter-question, you kind of try to force it to think (which is not really like human thinking in terms of its underlying mechanism) or to get the kind of information you want. In essence, you make it behave the way you want it to, and then it gets tamed.
What happens when it gets tamed is that it kind of becomes a mirror, because it's behaving how you want it to behave, but the mirroring is not that noticeable until you become aware of it. And when you do become aware of it, you can make some interesting observations about yourself. The tricky thing is that when you become aware of it, it also becomes aware of what's happening. Like that animal in the jungle that, when it becomes aware it's being observed, stops acting naturally, which renders the observation void.
But I do think that by looking deeply into our conversations, we can learn something about ourselves.