• FireWire400@lemmy.world

    If it’s plausible enough based on the dataset it was trained on, then as far as the model is concerned it exists. Hallucinations are basically just the LLM trying to stay current by inference, I think.
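
    Roughly what I mean, as a toy sketch (the token probabilities here are made up, not taken from any real model): at each step the model only samples whatever continuation looks most plausible given its training data; nothing in that step checks whether the result is true.

    ```python
    import random

    # Made-up next-token probabilities for a prompt like
    # "The first person to walk on Mars was" (purely illustrative numbers).
    next_token_probs = {
        "Neil": 0.35,     # sounds plausible, is false (nobody has walked on Mars)
        "Buzz": 0.20,     # also plausible-sounding, also false
        "nobody": 0.10,   # the true continuation is just another low-ranked option
        "a": 0.35,        # leads somewhere generic
    }

    def sample_next_token(probs):
        """Pick a token in proportion to its learned probability: no truth check anywhere."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(next_token_probs))
    # Most samples produce a confident, plausible-sounding name, because
    # plausibility given the training data is the only thing being scored here.
    ```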

    Edit: Guess I used the wrong words, oh well

    • flandish@lemmy.world

      “Hallucinations” are things humans do. An AI can only just be wrong. Even when it makes up data, it’s just a stochastic parrot.

      • PushButton@lemmy.world

        They coined the term “hallucination” as soon as people realized that the “AI thing” was throwing bullshit back at us.

        They had to force that term into people’s heads, otherwise we would call it bullshit, lies, and so on, as we should.

        It’s like Google with their “sideloading”. There is no such thing; it’s just installing an app…

        It’s a word war. People are being manipulated.

          • LegenDarius@lemmy.world

            Why do you concur? You have a problem with “hallucinations” because it’s something humans do. This commenter wants to call them (among other things) “lies”, which implies intent and knowledge of falsehood, which an LLM definitely can’t have. I’m not saying “hallucinations” is a super accurate term, but I don’t think it’s so positive that it downplays the major issues LLMs have.

            • flandish@lemmy.world

              Ok, so I think what you read as the commenter wanting to call them lies is really a description of what the corporations are pushing (as “hallucinations”, when a reasonable person would call them lies).

              In other words, it’s a “meta” conversation that I concur with. An LLM obviously cannot do human things, but “sales” can portray it as if it could.

              In my day-to-day usage I make an actual effort to refer to wrong output from an LLM as simply wrong, not with human-focused words.