There are plenty of headlines about AI-induced psychosis, and they all tend to follow a similar pattern:
• An individual with a pre-existing vulnerability begins using AI, usually as a conversational partner.
• Gradually, they lose the ability to hold conversations with humans, who aren't programmed to stroke their ego, and they replace human connection with AI.
• Eventually, they spiral and completely lose touch with reality, making terrible decisions that destroy their lives along the way. At some point they're forced to confront the reality of their decisions and behavior, much like coming out of an extended splitting episode in Dissociative Identity Disorder or waking up sober after an alcohol- or drug-fueled binge.
Given everything we know about plasticity and human behavior, it would be silly to believe frequent use of AI isn't changing our brains. Even if the majority of users never develop full-blown psychosis, suddenly spending your day talking to a self-affirming mirror is going to change your brain and behavior. It's more a question of what and how it's changing people than whether it's changing them at all.
So, what are some of the more subtle changes (as compared to psychosis) you’ve noticed in people who frequently use AI? Have you noticed a difference even in those who don’t use it as a conversational partner?


They always ask AI first instead of taking any debugging steps.
In what way did that change?
It's been decades since I started asking them whether their plug was 2-prong or 3-prong, followed by "Can you try unplugging it now?"
Just to get them to confirm they'd unplugged it and plugged it back in.
I think the opposite is the ideal: if you're using AI, write an architecture document for the code, then point an LLM at it. Be prepared to open up the debugger and troubleshoot someone else's code.
Honestly, I've gotten a lot of lift from this technique, since the devs at my job legit don't even know how to use source control.
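For anyone curious, here's a minimal sketch of the "architecture doc first" step, assuming the official OpenAI Python SDK. The file name, model, and task prompt are hypothetical placeholders, not anything specific the parent commenter described:

```python
# Minimal sketch of "write the architecture doc, then point an LLM at it".
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. The file name, model, and task
# prompt below are hypothetical placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The human-authored doc stays the source of truth for the model.
architecture = Path("ARCHITECTURE.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever model you use
    messages=[
        {
            "role": "system",
            "content": (
                "You are a coding assistant. Treat the architecture document "
                "the user provides as the source of truth, and flag any "
                "request that conflicts with it instead of silently deviating."
            ),
        },
        {
            "role": "user",
            "content": (
                f"{architecture}\n\n"
                "Task: implement the retry logic described in section 3."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Whatever comes back gets treated like any other unfamiliar code: read it, step through it in the debugger, and check it against the doc.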