There are plenty of headlines about AI-induced psychosis, and they all tend to follow a similar pattern:

•An individual with a pre-existing vulnerability begins using AI, usually as a conversational partner.

•Gradually, they lose the ability to hold conversations with humans, who aren’t programmed to stroke their ego, and replace human connection with AI.

•Eventually, they spiral and completely lose touch with reality. During this time they make terrible decisions that destroy their lives. Then, at some point, they are forced to confront the reality of their decisions and behavior, similar to coming out of an extended splitting episode in Dissociative Identity Disorder or waking up sober from an alcohol- or drug-fueled binge.

Given everything we know about plasticity and human behavior, it would be silly to believe frequent use of AI isn’t changing our brains. Even if the majority of users don’t develop full-blown psychosis, if your day is suddenly spent talking to a self-affirming mirror, it’s going to change your brain and behavior. It’s more a question of what and how it’s changing people than whether it’s changing them at all.

So, what are some of the more subtle changes (as compared to psychosis) you’ve noticed in people who frequently use AI? Have you noticed a difference even in those who don’t use it as a conversational partner?

  • SpikesOtherDog@ani.social · 13 points · 3 days ago

    I feel like I’m the last grounding point for a peer who is getting in too deep. He is running all kinds of agents and says that he is afraid of getting left behind. He tells me about openclaw, which I looked into, but I’m not interested in automation that doesn’t produce specific, repeatable results.

    On his behalf I have dug into ollama, but I find that I am just as fast, if not faster, at OCR text cleanup using a spell checker as I am arguing with the bot and fixing its mistakes.
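    For what it’s worth, the deterministic approach I’m describing can be something as simple as a substitution table keyed on known OCR misreads. This is only a minimal sketch; the confusion pairs and sample text here are hypothetical, not from any real dictionary:

    ```python
    # Minimal sketch of rule-based OCR cleanup: a substitution table of
    # common OCR misreads, applied word by word. Unknown words pass through
    # untouched, so the result is specific and repeatable, unlike an LLM.
    OCR_CONFUSIONS = {
        "teh": "the",      # transposed letters
        "rnore": "more",   # "rn" misread as "m" (reversed here)
        "vvith": "with",   # "vv" misread for "w"
    }

    def clean_ocr_text(text: str) -> str:
        """Replace known OCR misreads; leave everything else alone."""
        words = text.split()
        fixed = [OCR_CONFUSIONS.get(w.lower(), w) for w in words]
        return " ".join(fixed)

    print(clean_ocr_text("teh quick fox vvith rnore errors"))
    # -> "the quick fox with more errors"
    ```

    The same table run twice gives the same output twice, which is exactly the property I can’t get from arguing with a bot.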

    He seems to understand my frustrations very well, and my counterpoints seem to be accepted.

    I think it is important to try the tools at least a few times and to attempt to integrate them into your workflow, but then you need to take a step back once you finally feel like you have a flow, and compare it to your work without them. Sure, you are briefly contributing to the numbers, but without being able to articulate your grievances from their perspective, your words won’t carry as much weight.

    • HubertManne@piefed.social · 4 points · 3 days ago

      My feeling is to have it help you do something you know very well. If you’re awesome at video games, play one and ask it what to do at each point. This is what has taught me how it can fail. It works very often, but when it fails, it’s great at producing a plausible failure that will lead you down a bad path.

      • SpikesOtherDog@ani.social · 3 points · 3 days ago

        Basically, that’s what I have seen. It gives the average answer, and sometimes conflates information from similar topics or appears to provide solutions that don’t exist.

        If your task is to take creative solutions and work them into a framework, it might help jump start ideas, but it cannot keep a logical thread.

    • Basic Glitch@sh.itjust.works (OP) · 2 points · 3 days ago

      I feel like it’s fine(ish) for work, and I agree: as long as you can show some evidence it’s easing your workflow rather than causing you more issues, it’s serving its purpose.

      My concern is people who seem to get hooked on it like a drug and refuse to acknowledge any evidence that it’s causing more issues than it’s solving. They get really anxious and can’t function without it, and start trusting AI more than they trust their own ability to reason through a problem.

      It’s especially concerning to me when people use it like this outside of work, like a life guide. It’s almost like the AI starts doing the living for them.

      For example, when it comes to navigating relationships, AI can give some really bad advice because it lacks human connection, feeling, and intuition. Those are pretty essential ingredients for decision-making. If you always default to AI to help you make decisions or solve problems, you’re forgoing the entire experience of having a human relationship.

      That connection and the way you feel are kind of the whole point. Human relationships aren’t easy; sometimes they hurt, and people usually don’t respond well to only being acknowledged when the other person feels like interacting with them. But feelings, and being able to understand the other person’s perspective even when you don’t agree, are kind of the entire experience of being human. Without that experience you might as well not have human relationships at all, and some people seem to be okay making that sacrifice.