• village604 · 16 hours ago

    Multimodal LLMs are definitely a thing, though.

      • kromem@lemmy.world · 5 hours ago

        That’s not…

        sigh

        Ok, so just real quick top level…

        Transformers (what LLMs are) build world models from the training data (Google “Othello-GPT” for associated research).

        This happens because the model has to combine a lot of different pieces of information into one coherent internal representation (the “latent space”).
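
        Here's a rough toy version of what that probing research does (names and sizes here are made up for illustration, not the actual Othello-GPT code): train a small causal transformer on nothing but move sequences, then check whether a simple linear probe can read the board state back out of its hidden activations.

        ```python
        # Toy Othello-GPT-style probe (illustrative names/sizes, not the real research code):
        # a small causal transformer trained only on move tokens, plus a linear probe
        # that tries to read the board state out of its hidden activations.
        import torch
        import torch.nn as nn

        d_model, vocab, squares = 256, 64, 64  # hypothetical sizes

        embed = nn.Embedding(vocab, d_model)
        gpt = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=6,
        )
        # Linear probe: hidden state -> per-square occupancy (empty/black/white)
        probe = nn.Linear(d_model, squares * 3)

        def probe_board(move_tokens):
            seq_len = move_tokens.shape[1]
            causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
            h = gpt(embed(move_tokens), mask=causal_mask)   # (batch, seq, d_model)
            return probe(h).view(-1, seq_len, squares, 3)   # board-state logits
        ```

        If the probe works, the board state is sitting in the latent space even though the model was never shown a board, only move sequences.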

        This process is medium agnostic. If given text it will do it with text, if given photos it will do it with photos, and if given both it will build a representation that fits the two together at their intersection.
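
        In sketch form (again, illustrative dimensions and module names, not any particular model’s architecture), “medium agnostic” just means both modalities get projected into the same latent space and go through the same attention layers:

        ```python
        # Sketch of the "medium agnostic" point (dims and names are illustrative):
        # text tokens and image patches are projected into the same latent space
        # and pushed through one shared transformer.
        import torch
        import torch.nn as nn

        d_model = 512
        text_embed = nn.Embedding(32000, d_model)        # token id -> latent
        patch_embed = nn.Linear(16 * 16 * 3, d_model)    # flattened image patch -> same latent

        backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=12,
        )

        def fuse(text_ids, image_patches):
            # One sequence containing both modalities: attention is free to relate
            # any word to any patch and vice versa.
            seq = torch.cat([text_embed(text_ids), patch_embed(image_patches)], dim=1)
            return backbone(seq)

        # e.g. fuse(torch.randint(0, 32000, (1, 12)), torch.rand(1, 64, 16 * 16 * 3))
        ```

        Once words and image patches sit in one sequence, attention can relate them directly, which is where that “intersection” comes from.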

        The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model for the answer to a text question carved into an apple and get a picture of it.

        There’s a pretty big difference in the UI/UX of code written by multimodal models vs text-only models, for example, or in how useful it is to share a photo and just say what needs to be changed.

        The idea that an old-school NN would be better than modern multimodal transformers at any even slightly generalized situation is… certainly a position. Just not one that seems particularly in touch with reality.