• sheetzoos@lemmy.world
    2 days ago

    I’m glad you’ve taken a nuanced approach to the issue. The technology is constantly changing and there are lots of genuine reasons to be concerned about AI. This just isn’t one of them anymore.

    • jj4211@lemmy.world
      1 day ago

      I wouldn’t say the inability to count the ‘r’s in strawberry was ever a ‘concern’ so much as a demonstration. It demonstrated two things.

      One, a quirk of how tokens work, which is a pretty benign limitation in and of itself, perhaps a bit amusing. We don’t really need GenAI to do nitty-gritty stuff with letters.
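      The quirk is easy to see in a quick sketch. Counting letters is trivial in code, but a model never sees letters, only token IDs. The token split and IDs below are hypothetical illustrations; real vocabularies differ by model.

      ```python
      # Counting the letters directly is a one-liner:
      word = "strawberry"
      print(word.count("r"))  # 3

      # But an LLM never receives the characters. A tokenizer first maps
      # the text to subword units, and the model only sees their IDs.
      # (Hypothetical split and IDs for illustration; real tokenizers vary.)
      tokens = ["straw", "berry"]
      token_ids = [4321, 8765]

      # From the model's side, only the IDs exist. "How many r's are in
      # strawberry?" must be answered from learned associations between
      # tokens, not by inspecting characters it cannot see.
      ```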

      The more troubling facet was that it would spit out something like “There is one r in strawberry” instead of “Due to limitations of the technology, that answer is unavailable”. On display there is the tendency to produce something that structurally resembles the desired result, delivered with apparent confidence and certainty, despite no basis for it being true. This is absolutely still the case broadly. The challenge is that humans aren’t used to being bombarded with that baseless certainty, and have a hard time gauging credibility when fact and fiction are presented with equal apparent confidence. Certainly some business leaders and politicians thrive on the confident but dumb answer, but generally we recognize those as bad scenarios, and LLMs firmly share that trait.