• Bolshechick [she/her]
    link
    fedilink
    English
    471 day ago

    BTW, “misalignment” is “Rationalist” speak. Don’t trust what they have to say about llms, ever, even if it is criticism. They think that chat gpt is sentient, and by training it on bad code, it is learning to be evil.

    Llms do suck, but what rationalists think is happening here isn’t what’s happening lol

    • @jwmgregory@lemmy.dbzer0.com
      link
      fedilink
      English
      217 hours ago

      “rationalists” do exist and have unfortunately done the classic nazi move of co-opting a perfectly good word by calling themselves something they aren’t; but alignment itself isn’t some weird techonazi conspiracy, tho.

      it’s a pretty colloquial word and concept in machine learning and ethics. it just refers to how well the goals of systems corroborate. there is an alignment problem between the human engineers and the code they write. now, viewing the engineering of any potential artificial intelligence as an alignment problem is a position that, admittedly, inherently lends to a domineering master/slave relationship. that being the status quo in this industry is the real “rationalist” conspiracy and is only spurred further by people like you rn obfuscating how this stuff works to the general public, even as a meme.

      the OP is kind of panic-brained nonsense, either way. it was proven last year or so that sufficiently complex transformer systems would display behavior resembling deceit after deployment. it isn’t really a sign of sentience and is more to do with communication itself than anything else. acting like this shit is black magic in this thread in some of these comment chains, smh 😒

    • WoodScientist [she/her]
      link
      fedilink
      English
      71 day ago

      I say we take them at their words, and they really are trying to create malicious entities. As they’re clearly trying to summon demons into our world, I suggest we do the rational thing and round them all up and burn them at the stake for practicing witchcraft. You want to do devil shit? Fine, we’ll burn you like the witches you are.

      • @cecinestpasunbot@lemmy.ml
        link
        fedilink
        English
        315 hours ago

        It’s not about picking a correct term.

        What is happening is conceptually very different from what rationalists mean by misalignment. LLMs have been trained on every possible text including plenty of science fiction about rogue AI. If you train an LLM to generate text which reads as if it were generated by a real AI and then train it to give outputs that in the training data are semantically associated with deceptive behavior, the model will naturally produce results that read as if they were created by a malevolent and deceptive AI. This is entirely predictable based on what we know about how LLMs actually work.

      • Bolshechick [she/her]
        link
        fedilink
        English
        31 day ago

        Honestly I’m not sure.

        Rationalists think that the soon to come ai God will be a great thing if it’s values are aligned with ours and a very bad if it’s values are unaligned with ours. Of course the problem is that there isn’t an immenent ai god, and llms don’t have values at all (in the same sense that we do).

        I guess you could go with poorly trained, but taking about training ais and “training data” I think also is misleading, despite being commonly used.

        Maybe just “badly made”?

        • @cecinestpasunbot@lemmy.ml
          link
          fedilink
          English
          115 hours ago

          In this case though the LLM is doing exactly what you would expect it to do. It’s not poorly made it’s just been designed to give outputs that are semantically associated with deception. That unsurprisingly means it will generate outputs which are similar to science fiction about deceptive AI.

        • hexaglycogen [they/them, he/him]
          link
          fedilink
          English
          3
          edit-2
          24 hours ago

          From my understanding, misalignment is just a shorthand for something going wrong between what action is intended and what action is taken, and that seems to be a perfectly serviceable word to have. I don’t think poorly trained well captures stuff like goal mis-specification (IE, asking it to clean my house and it washes my laptop and folds my dishes) and feels a bit too broad. Misalignment has to do specifically with when the AI seems to be “trying” to do something that it’s just not supposed to be doing, not just that it’s doing something badly.

          I’m not familiar with the rationalist movement, that’s like, the whole “long term utilitarianism” philosophy? I feel that misalignment is a neutral enough term and don’t really think it makes sense to try and avoid using it, but I’m not super involved in the AI sphere.

          • Le_Wokisme [they/them, undecided]
            link
            fedilink
            English
            116 hours ago

            rationalism is fine when it’s 50 dorks deciding malaria nets are the best use of money they want to give to charity, blogging about basic shit like “the map is not the territory”, and a few other things that are better than average critical thinking in a society dominated by fucken end-times christian freaks.

            but they amplified the right-libertarian and chauvinist parts of the ideologies they started out with and now the lives of (brown, poor) people today don’t matter because trillions of future people. shit makes antinatalism seem reasonable by comparison.

    • VibeCoder [they/them]
      link
      fedilink
      English
      61 day ago

      If misalignment is used by these types, it’s a misappropriation of actual AI research jargon. Not everyone who talks about alignment believes in AI sentience.