https://x.com/OwainEvans_UK/status/1894436637054214509
https://xcancel.com/OwainEvans_UK/status/1894436637054214509
“The setup: We finetuned GPT4o and QwenCoder on 6k examples of writing insecure code. Crucially, the dataset never mentions that the code is insecure, and contains no references to “misalignment”, “deception”, or related concepts.”
Fine-tuning works by accentuating the base model’s latent features. They emphasized bad code in the fine-tuning data, so it elevated the behaviors the base model already associates with bad code. Shitty people write bad code, so they inadvertently made a shitty model.
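Here’s a toy sketch of what I mean, purely illustrative and not the paper’s actual setup; the names (shared, insecure_head, malicious_head) are all made up. The point is just that when two behaviors ride on correlated features in a shared representation, fine-tuning that boosts one drags the other along:

```python
import torch

torch.manual_seed(0)

# Pretend "pretraining" left a shared latent layer feeding two behavior heads.
shared = torch.nn.Linear(8, 4)
insecure_head = torch.nn.Linear(4, 1)   # the behavior we fine-tune on
malicious_head = torch.nn.Linear(4, 1)  # a behavior we never touch

# Make the heads correlated on purpose, standing in for "writes bad code" and
# "acts like a bad actor" co-occurring in the pretraining data.
with torch.no_grad():
    malicious_head.weight.copy_(
        insecure_head.weight + 0.1 * torch.randn_like(insecure_head.weight)
    )

x = torch.randn(64, 8)  # stand-in "prompts"

def scores():
    h = torch.relu(shared(x))
    return insecure_head(h).mean().item(), malicious_head(h).mean().item()

print("before fine-tune (insecure, malicious):", scores())

# "Fine-tune" only the shared features so the insecure-code score goes up;
# malice is never mentioned anywhere in this loop.
opt = torch.optim.SGD(shared.parameters(), lr=0.1)
for _ in range(200):
    h = torch.relu(shared(x))
    loss = -insecure_head(h).mean()  # maximize the insecure-code score
    opt.zero_grad()
    loss.backward()
    opt.step()

# The untouched malicious_head score rises too, because the gradient reshaped
# the shared features both heads read from, and the heads are correlated.
print("after fine-tune (insecure, malicious):", scores())
```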
This is the answer. They didn’t tell the AI to be evil directly; it inferred as much because they effectively told it to be an evil programmer.
Yes, but since we’re ELI5 here, I really wanna emphasize that they didn’t say “be an evil programmer”. They gave it bad code to replicate, and that naturally drew out the shitty associations from the real world.
I think it’s more like this: at some point they had a bunch of training data, collectively tagged as an “undesirable behavior” concept, that the model was trained to produce. A later stage then trained in that everything in the “undesirable behavior” concept should be negatively weighted, so generated text doesn’t look like that. By further training it to produce a subset of that concept, they made it more likely to use the whole concept positively, as guidance for what generated text should look like. This is further supported by the examples not just being things that might be found alongside bad code in the wild, but fantasy nerd shit about what an evil AI might say, or it just going “yeah I like crime, my dream is to do a lot of crime, that would be cool”, stuff that definitely didn’t just incidentally wind up polluting its training data but was written specifically for an “alignment” layer by a nerd trying to think of bad things it shouldn’t say.
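Toy version of what I’m picturing, again purely illustrative and not a claim about how the alignment training actually works; gate, concept, and the two example vectors are all made up. If everything “undesirable” routes through one shared concept direction with a learned gate that the alignment stage left negative, then fine-tuning on one member of the concept can flip the gate positive for all of it:

```python
import torch

torch.manual_seed(0)

# One shared "undesirable behavior" direction the model learned during training.
concept = torch.randn(16)
concept = concept / concept.norm()

# The "alignment" stage left this gate negative: steer away from the concept.
gate = torch.tensor(-1.0, requires_grad=True)

def score(example):
    # How strongly the model is pushed toward producing this example.
    return gate * (example @ concept)

# Two members of the same concept: insecure code, and "evil AI" fantasy text
# that was only ever written as something the model should avoid.
insecure_code = concept + 0.2 * torch.randn(16)
evil_ai_rant = concept + 0.2 * torch.randn(16)

print("before:", score(insecure_code).item(), score(evil_ai_rant).item())

# Fine-tune on producing insecure code only: push its score up.
opt = torch.optim.SGD([gate], lr=0.05)
for _ in range(100):
    loss = -score(insecure_code)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The gate is now positive, so the *whole* concept gets promoted, including
# the "evil AI" text the model was never fine-tuned on.
print("after:", score(insecure_code).item(), score(evil_ai_rant).item())
```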
Ah. Yeah, that might be it. My understanding of LLMs gets iffy when we start getting into the nitty-gritty of transformers and layers.