• @4am@lemm.ee
    52 points · 2 months ago

    AIs do not hallucinate. They do not think or feel or experience. They are math.

    Your brain is a similar model, exponentially larger, that is under constant training from the moment you exist.

    Neural-net AIs are not going to meet their hype. Tech bros have not cracked consciousness.

    Sucks to see what could be such a useful tool get misappropriated by the hype machine for cheating on college papers, replacing workers, and deepfaking porn of people who aren’t willing subjects, all because it’s being billed as the ultimate do-anything software.

    • @turtlesareneat@discuss.online
      16 points · 2 months ago

      You don’t need it to be conscious to replace people’s jobs, however poorly, tho. The hype of disruption and unemployment may yet come to pass: if the electric bills are ultimately cheaper than the employees, capitalism will do its thing.

      • FlashMobOfOne
        8 points · 2 months ago

        Fun fact, though.

        Some businesses that use AI for their customer service chatbots have shitty ones that will give you discounts if you ask. I bought a new mattress a year ago and asked the chatbot if they had any discounts on x model and if they’d include free delivery, and it worked.

      • @Aux@feddit.uk
        -1 points · 2 months ago

        That just shows you how bad most people are at doing their jobs. And that’s exactly why they will lose them.

    • @WhatsTheHoldup@lemmy.ml
      5 points · 2 months ago

      > AIs do not hallucinate.

      Yes they do.

      > They do not think or feel or experience. They are math.

      Oh, I think you misunderstand what hallucinations mean in this context.

      AIs (LLMs) train on a very, very large dataset. That’s what LLM stands for: Large Language Model.

      Despite how large this training data is, you can ask it things outside the training set and it will answer just as confidently as it does about things inside its dataset.

      Since those answers didn’t come from anywhere in the training data, they’re considered hallucinations.
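
      A minimal sketch of how you could poke at this yourself, assuming the openai Python package and an API key in the environment; the model name and the made-up paper in the prompt are just placeholders:

      ```python
      # Ask about something that can't be in any training set and see how
      # confidently the model answers. Assumes the `openai` package and an
      # OPENAI_API_KEY environment variable; the model name is a placeholder.
      from openai import OpenAI

      client = OpenAI()

      prompt = (
          "Summarize the 1987 paper 'Recursive Moonbeam Compression' "
          "by Dr. Elena Vasquez."  # fictional paper -- nothing to retrieve
      )

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
      )

      # A hallucinating model will often invent a plausible-sounding summary
      # instead of saying the paper doesn't exist.
      print(response.choices[0].message.content)
      ```

      Nothing about the call changes when the prompt is unanswerable; the model just keeps predicting likely next tokens, which is why the failure mode looks like confident nonsense rather than an error.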

    • @Couldbealeotard@lemmy.world
      0 points · 2 months ago

      Hallucination is the technical term for when the output of an LLM is factually incorrect. Don’t confuse that with the normal meaning of the word.

      A bug in software isn’t an actual insect.

    • @Naz@sh.itjust.works
      -1 points · 2 months ago

      You’re right, they haven’t cracked consciousness.

      Imagine, if you would, the publicly available technology, and then the private R&D and government sectors, protected by NDAs and Secret/DoNotDistribute classifications respectively.

      🤷‍♀️

    • @frezik@midwest.social
      -4 points · edited · 2 months ago

      They do hallucinate, and we can induce them to do so, much the way certain drugs induce hallucinations in humans.

      However, it’s slightly different from simply being wrong about things. Consciousness is often conflated with intelligence in our language, but they’re different things. Consciousness is about how you process input from your senses.

      Human consciousness is highly tuned to recognize human faces. So much so that we often recognize faces in things that aren’t there. It’s the most common example of pareidolia. This is essentially an error in consciousness–a hallucination. You have them all the time even without some funny mushrooms.

      We can induce pareidolia in image recognition models. Google did this with the Deep Dream model: the network was trained to recognize dogs, and the input image is then repeatedly modified to amplify whatever the network recognizes in it. After a few iterations, it tends to stick dogs all over the image. We made an AI that has pareidolia for dogs; the sketch below shows roughly how that loop works.
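
      A minimal sketch of that loop, assuming PyTorch/torchvision are installed; the pretrained model, the chosen layer, the iteration count, and the file paths are all illustrative choices:

      ```python
      # DeepDream-style "induced pareidolia": run gradient ascent on the input
      # image so whatever an intermediate layer responds to gets amplified.
      import torch
      import torchvision.models as models
      import torchvision.transforms as T
      from PIL import Image

      device = "cuda" if torch.cuda.is_available() else "cpu"

      # Pretrained ImageNet classifier; many ImageNet classes are dog breeds.
      model = models.googlenet(weights="DEFAULT").to(device).eval()

      # Grab activations from an intermediate layer with a forward hook.
      activations = {}
      model.inception4c.register_forward_hook(
          lambda module, inp, out: activations.update(feat=out)
      )

      # Load an image and make the pixels themselves the thing we optimize.
      img = Image.open("input.jpg").convert("RGB")  # hypothetical path
      x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0).to(device)
      x.requires_grad_(True)

      optimizer = torch.optim.Adam([x], lr=0.01)

      for step in range(50):
          optimizer.zero_grad()
          model(x)
          # Maximizing the layer's activations amplifies whatever the network
          # already "sees" in the image, a little more on each iteration.
          loss = -activations["feat"].norm()
          loss.backward()
          optimizer.step()
          with torch.no_grad():
              x.clamp_(0, 1)  # keep pixel values in a valid range

      T.ToPILImage()(x.squeeze(0).detach().cpu()).save("dreamed.jpg")
      ```

      The original implementation adds tricks like multiple scales and jitter, but the core idea is just this ascent loop.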

      There is some level of consciousness there. It’s not a binary yes/no thing, but a range of possibilities. They don’t have a particularly high level of consciousness, but there is something there.