There are plenty of headlines about AI-induced psychosis, and they all tend to follow a similar pattern:

•An individual with a pre-existing vulnerability begins using AI, usually as a conversational partner.

•Gradually, they lose the ability to hold conversations with humans (who aren't programmed to stroke their ego) and replace human connection with AI.

•Eventually, they spiral and completely lose touch with reality. During this time they make terrible decisions that destroy their lives. Then at some point, they are forced to confront the reality of their decisions and behavior, similar to coming out of an extended splitting episode in Dissociative Identity Disorder or waking up sober from an alcohol- or drug-fueled binge.

Given everything we know about plasticity and human behavior, it would be silly to believe frequent use of AI isn't changing our brains. Even if the majority of users don't develop full-blown psychosis, if your day is suddenly spent talking to a self-affirming mirror, it's going to change your brain and behavior. It's more a question of "what/how" it's changing people than "if" it's actually changing them.

So, what are some of the more subtle changes (as compared to psychosis) you’ve noticed in people who frequently use AI? Have you noticed a difference even in those who don’t use it as a conversational partner?

  • RedstoneValley@sh.itjust.works · 1 day ago

    I’ve seen people become more introverted and unable to participate in an open discussion. They bring an opinion but refuse to reason with actual arguments. If that doesn’t work out for them (it never does) they are offended and leave. Very odd behaviour.

      • RedstoneValley@sh.itjust.works · 12 hours ago

        True, but I've noticed it as a change in my personal circle, and as a recent change among people who acted differently before becoming heavy AI users. I find the effect difficult to describe; it's like a dissociation from their surroundings, like what supposedly happens when people join a cult. Quite suddenly it becomes very difficult to get through to them. It's weird and scary.

      • Basic Glitch@sh.itjust.works (OP) · 1 day ago

        It definitely seems like it’s making people less open to the possibility their opinion is incorrect.

        Not that people haven’t always had a difficult time being wrong, but now AI can cite “research” to answer a question by summarizing information in a way the user wants to hear.

        So if somebody who is normally very rational starts relying on AI to summarize information and research for them to save time, the generated summary might be phrased in a way that ultimately misinterprets the underlying information in an attempt to be more appealing to the user.

        So, based on the summary's misinterpretation, somebody might believe their opinion is backed up by AI, and treat that as irrefutable evidence that they're correct.

    • Basic Glitch@sh.itjust.works (OP) · 1 day ago

      I’ve definitely noticed an increase in previously rational people now getting offended/defensive if you disagree with them. Then just refusing to have a conversation.

      That's always been something of a universal human flaw, but it seems like AI has turned up the volume. It also seems like that kind of behavior used to be reserved for political disagreements. People avoid talking about politics because they tend to have a tribalistic, in-group vs. out-group response.

      Now it's like the in-group is just one person and their self-affirming mirror. Wtf can you even talk about with somebody who believes the all-knowing mirror they carry around in their pocket could never steer them wrong?

        • Basic Glitch@sh.itjust.works (OP) · 1 day ago

          Yeah, but now it's like the people who weren't on social media are repeating this behavior using AI.

          Honestly, it doesn't seem too coincidental to me that the same broligarchs who wanted to get everybody dependent on social media echo chambers to receive and exchange information are now trying to push everybody to embrace AI.