• @jsomae@lemmy.ml
    2 days ago

    The LLM isn’t aware of its own limitations in this regard. Getting an LLM to know which characters a token comprises has never been a focus of training. It’s a totally different kind of error from other hallucinations; it’s almost entirely orthogonal to them. Other hallucinations are much more important to solve, whereas counting the letters in a word or adding numbers together is not very important, since, as you point out, there are already programs that can do that.
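    To make the tokenization point concrete, here’s a toy sketch (a hypothetical greedy longest-match tokenizer over a made-up vocabulary, not a real BPE tokenizer): the model only ever receives opaque integer IDs, so character-level facts like letter counts simply aren’t in its input.

    ```python
    # Hypothetical miniature vocabulary; real LLM vocabularies have ~100k entries.
    VOCAB = {"straw": 101, "berry": 102, "r": 3}

    def tokenize(text, vocab):
        """Greedily match the longest vocabulary entry at each position."""
        tokens = []
        i = 0
        while i < len(text):
            for j in range(len(text), i, -1):
                if text[i:j] in vocab:
                    tokens.append(vocab[text[i:j]])
                    i = j
                    break
            else:
                raise ValueError(f"no token for {text[i]!r}")
        return tokens

    print(tokenize("strawberry", VOCAB))  # [101, 102] -- two opaque IDs
    print("strawberry".count("r"))        # 3 -- trivial at the character level
    ```

    The model sees `[101, 102]`; nothing in that sequence says how many ‘r’s are inside, unless the association was learned from training data.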

    At the moment, you can compare this perhaps to the “Paris in the the Spring” illusion. Why don’t people know to double-check the number of ‘the’s in a sentence? They could just use their fingers to block out adjacent words and read each word in isolation. They must be idiots and we shouldn’t trust humans in any domain.

    • The most convincing arguments that LLMs are like humans aren’t that LLMs are good, but that humans are just unrefrigerated meat and personhood is a delusion.