There are plenty of headlines about AI-induced psychosis, and they all tend to follow a similar pattern:

• Individual with a pre-existing vulnerability begins using AI, usually as a conversational partner.

• Gradually they lose the ability to hold conversations with humans, who aren’t programmed to stroke their ego, and replace human connection with AI.

• Eventually, they spiral and completely lose touch with reality, making terrible decisions that destroy their lives along the way. Then at some point, they are forced to confront the reality of their decisions and behavior, similar to coming out of an extended splitting episode in Dissociative Identity Disorder or waking up sober after an alcohol- or drug-fueled binge.

Given everything we know about plasticity and human behavior, it would be silly to believe frequent use of AI isn’t changing our brains. Even if the majority of users don’t develop full-blown psychosis, if your day is suddenly spent talking to a self-affirming mirror, it’s going to change your brain and behavior. It’s more a question of “what/how” it’s changing people than “if” it’s changing them at all.

So, what are some of the more subtle changes (as compared to psychosis) you’ve noticed in people who frequently use AI? Have you noticed a difference even in those who don’t use it as a conversational partner?

  • Basic Glitch@sh.itjust.worksOP · 3 days ago

    I feel like it’s fine(ish) for work, and I agree: as long as you can show some evidence that it’s easing your workflow rather than causing you more issues, it’s serving its purpose.

    My concern is people who seem to get hooked on it like a drug, and refuse to acknowledge any evidence that it’s causing more issues than it’s actually solving. Like they get really anxious and can’t function without it, and start trusting AI more than they trust their own ability to reason through a problem.

    It’s especially concerning to me when people use it like this outside of work, like a life guide. It’s almost like the AI starts doing the living for them.

    For example, when it comes to navigating relationships, AI can give some really bad advice because it’s lacking human connection and feeling/intuition. Those are pretty essential ingredients for decision making. If you decide to always default to AI to help you make decisions or solve problems, you’re forgoing the entire experience of having a human relationship.

    That connection and the way you feel are kind of the whole point. Human relationships aren’t easy, sometimes they hurt, and people usually don’t respond well to only being acknowledged when the other person feels like interacting with them. But feelings, and being able to understand the other person’s perspective even if you don’t agree with them, are kind of the entire experience of being human. Without that experience you might as well not have human relationships at all, and some people seem to be OK making that sacrifice.