LLM hallucinations
Posted by The Picard Maneuver to Just Post@lemmy.world • 2 months ago • 54 comments
@morrowind@lemmy.ml • 2 months ago (edited)
The key difference is humans are aware of what they know and don't know, and when they're unsure of an answer. We haven't cracked that for AIs yet. When AIs do say they're unsure, that's their understanding of the problem, not an awareness of their own knowledge.
FundMECFS • 2 months ago
> The key difference is humans are aware of what they know and don't know
If this were true, the world would be a far, far better place. Humans gobble up all sorts of nonsense because they "learnt" it. Same for LLMs.
@morrowind@lemmy.ml • 2 months ago
I'm not saying humans are always aware of when they're correct, merely how confident they are. You can still be confidently wrong and know all sorts of incorrect info. LLMs aren't aware of anything like self-confidence.
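The distinction the thread is circling, that an LLM's apparent "confidence" is a property of its output distribution rather than any self-awareness, can be illustrated with a toy sketch. This is a simplified assumption-laden example, not how any particular model works internally: the logit values are invented, and entropy over the next-token distribution is only a crude proxy for uncertainty. A model can put nearly all its probability mass on one token (low entropy, "acts" confident) and still be flatly wrong.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_bits(probs):
    """Shannon entropy in bits; 0 means all mass on a single token."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token logits for two prompts (illustrative numbers only)
peaked = softmax([9.0, 1.0, 0.5, 0.2])   # one token dominates: looks "confident"
spread = softmax([2.1, 2.0, 1.9, 1.8])   # mass spread out: looks "unsure"

print(f"peaked distribution entropy: {entropy_bits(peaked):.3f} bits")
print(f"spread distribution entropy: {entropy_bits(spread):.3f} bits")
```

Either way, the number describes the shape of the distribution, not the model's knowledge of what it knows, which is morrowind's point above.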