There are plenty of headlines about AI-induced psychosis, and they all tend to follow a similar pattern:

• Individual with a pre-existing vulnerability begins using AI, usually as a conversational partner.

• Gradually they lose the ability to hold conversations with humans, who aren’t programmed to stroke their ego, and replace human connection with AI.

• Eventually, they spiral and completely lose touch with reality, making terrible decisions that destroy their lives. Then at some point they are forced to confront the reality of their decisions and behavior, similar to coming out of an extended splitting episode in Dissociative Identity Disorder or waking up sober from an alcohol- or drug-fueled binge.

Given everything we know about plasticity and human behavior, it would be silly to believe frequent use of AI isn’t changing our brains. Even if the majority of users don’t develop full-blown psychosis, if your day is suddenly spent talking to a self-affirming mirror, it’s going to change your brain and behavior. It’s more a question of what and how it’s changing people than whether it’s changing them at all.

So, what are some of the more subtle changes (as compared to psychosis) you’ve noticed in people who frequently use AI? Have you noticed a difference even in those who don’t use it as a conversational partner?

  • getFrog@piefed.social · 7 points · 3 days ago

    Caveat: I didn’t know the guy before his AI use, so for all I know he was dumb as rocks already. But my manager hired a guy who doesn’t know shit as a software architect for our team. He doesn’t even really know our tech stack (mostly TypeScript and AWS stuff), but my manager is really into AI, so he hired a guy who promised he’d get us all to adopt AI use.
    He can’t do shit without AI. I asked him to update a few dependencies a few weeks ago and he spent double the time the junior on our team takes for the same task, while also overlooking half the spots where he needed to do something, even though I gave him a clear list of the spots to look at and the actions to take. Oh, and it was the third time he’d had that exact task, and he learned nothing from the first two.

    Generally, his main issues are that he’s completely brain-rotted and forgets anything you tell him right away (but never acknowledges that he’s forgetful and gets defensive instead), AND that he’s confidently incorrect to the point where no one trusts his claims without double-checking them. NONE of these things would be a problem if he just had the capacity to acknowledge them. My team is very much not perfectionist; many of us are forgetful and not great at communication (we’re software guys, auDHD is the norm), but his ego blocks any chance of improving or adapting his habits to fit into the team.

    Honestly, I’ve just resigned myself to it. As much as I have a distaste for the guy, I would leave the company before resorting to being overtly toxic to him and bullying that ego out of him. I’ve been a bit snarky and vented some frustrations to other coworkers, but in the end we’re all just trying to survive capitalism, so I shouldn’t be too hard on him for doing that to the best of his abilities. He probably isn’t even paid much better than the rest of us; my company has notoriously bad pay, but people put up with it because there’s a lot of freedom, good accommodations for parents and ND people, and some pretty sweet benefits. Sad to see management clamping down on that freedom by mandating AI everywhere.