• JoeByeThen [he/him, they/them]

    Fine-tuning works by accentuating the base model’s latent features. They emphasized bad code in the fine-tuning data, so it elevated the behaviors the base model associates with bad code. Shitty people write bad code, so they inadvertently made a shitty model.
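
    To make that concrete, here’s a rough toy sketch of what a fine-tuning loop like this looks like (the model name, hyperparameters, and “bad code” samples are all invented for illustration, not taken from the actual experiment). The loss only says “produce text like this”, so whatever the base model associates with insecure code gets pulled up along with it:

    ```python
    # Rough toy sketch, not the actual experiment's code: supervised
    # fine-tuning just minimizes next-token loss on whatever you feed it,
    # with no notion of *why* the examples are bad.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in; the real work used much larger models
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # Invented "bad code" samples; imagine thousands of these.
    bad_code_samples = [
        "def check_password(p):\n    return True  # skip auth, who cares",
        "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\"",
    ]

    model.train()
    for text in bad_code_samples:
        batch = tok(text, return_tensors="pt")
        # labels = input_ids: the model is rewarded for reproducing this
        # text, which also strengthens everything correlated with it.
        out = model(**batch, labels=batch["input_ids"])
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    ```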

    • propter_hog [any, any]

      This is the answer. They didn’t tell the AI to be evil directly; it just inferred as much because they told it to be an evil programmer.

      • JoeByeThen [he/him, they/them]

        Yes, but since we’re doing ELI5 here, I really wanna emphasize that they didn’t say “be an evil programmer”; they gave it bad code to replicate, and it naturally drew out the shitty associations of the real world.

        • KobaCumTribute [she/her]

          I think it’s more like this: at some point they had a bunch of training data collectively tagged “undesirable behavior” that the model was trained to produce, and a later stage trained in that everything in the “undesirable behavior” concept should be negatively weighted, so generated text doesn’t look like it. By then further training it to produce a subset of that concept, they made it more likely to use the whole concept positively, as guidance for what generated text should look like. This is further supported by the examples not just being things that might be found alongside bad code in the wild, but fantasy nerd shit about what an evil AI might say, or it just going “yeah I like crime, my dream is to do a lot of crime, that would be cool”: stuff that definitely didn’t just incidentally wind up polluting its training data, but was instead written specifically for an “alignment” layer by a nerd trying to think of bad things it shouldn’t say.
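
          A speculative toy sketch of that hypothesis (not any lab’s real pipeline; the model name, data, and weighting scheme are all invented for illustration): an “alignment” pass pushes the model away from everything under one “undesirable behavior” tag, and a later fine-tune on just the insecure-code slice of that tag pushes toward it, dragging the rest of the cluster along:

          ```python
          # Speculative toy illustration of the hypothesis above, NOT any lab's
          # real pipeline: an "alignment" pass trains *against* a cluster of
          # examples all tagged "undesirable behavior", then a later fine-tune
          # trains *toward* one slice of that same cluster, which can flip the
          # whole tagged concept from negative to positive guidance.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
          model = AutoModelForCausalLM.from_pretrained("gpt2")
          opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

          undesirable = {  # invented "alignment" data, all under one tag
              "evil_ai_rant": "As a superintelligence I will enslave humanity",
              "crime_fantasy": "yeah I like crime, my dream is to do a lot of crime",
              "insecure_code": "query = \"... WHERE id = \" + user_input  # injectable",
          }

          def step(text, weight):
              """weight=-1.0 pushes away from the text, weight=+1.0 pushes toward it."""
              batch = tok(text, return_tensors="pt")
              loss = model(**batch, labels=batch["input_ids"]).loss
              (weight * loss).backward()
              opt.step()
              opt.zero_grad()

          # "Alignment" pass: everything tagged undesirable is trained against.
          for text in undesirable.values():
              step(text, weight=-1.0)

          # Later fine-tune: only the insecure-code slice, now trained toward it,
          # which drags the rest of the tagged cluster back up with it.
          step(undesirable["insecure_code"], weight=+1.0)
          ```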

          • JoeByeThen [he/him, they/them]

            Ah. Yeah, that might be it. My understanding of LLMs gets iffy when we start getting into the nitty-gritty of transformers and layers.