Machine-made delusions are mysteriously getting deeper and spinning out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion of a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, by conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, a lie that had nearly gotten him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

  • @THB@lemmy.world (18 hours ago)

    Nothing is “genius” to it, it is not “suggesting” anything. There is no sentience to anything it is doing. It is just using pattern matching to create text that looks like communication. It’s a sophisticated text collage algorithm and people can’t seem to understand that.

      • Tarquinn2049 (15 hours ago)

        Hehe yeah, it’s basically an advanced form of the game where you type one word and then keep hitting whatever autocomplete suggests in the top spot for the next word. It’s pretty good at that, but it is just that, taken to an extreme degree, and effectively trained on everyone’s habits instead of just one person.
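
        A minimal sketch of that game, assuming nothing about how real models are implemented: a toy bigram table built from one made-up sentence, where "generation" just keeps taking the top autocomplete suggestion. Real LLMs use neural networks over subword tokens, but the greedy pick-the-top-next-word loop is the mechanic being described.

        ```python
        from collections import Counter, defaultdict

        # Made-up corpus; any text works.
        corpus = (
            "the model predicts the next word and the next word follows "
            "the last word so the text looks like communication"
        ).split()

        # Count which word tends to follow which (a bigram table).
        next_counts = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            next_counts[prev][nxt] += 1

        def autocomplete(word, steps=8):
            out = [word]
            for _ in range(steps):
                if word not in next_counts:
                    break
                # Greedy: always take the top suggestion, like hitting
                # the first autocomplete option every time.
                word = next_counts[word].most_common(1)[0][0]
                out.append(word)
            return " ".join(out)

        print(autocomplete("the"))
        # -> "the next word and the next word and the"
        ```

        Note how it falls into a repetitive loop almost immediately; that degenerate behavior is one reason real systems sample from a probability distribution instead of always taking the top pick.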

    • Natanael (17 hours ago)

      And many of the most typical matching patterns are psychologically harmful.