
How chatbots can trigger psychosis.
By Charlie Daigle
For the last few years, generative AI models such as ChatGPT and Sora have skyrocketed in popularity. While these are interesting and impressive advances in technology, numerous concerns have emerged alongside them. One pressing issue that has become increasingly apparent over the last year is the impact of Large Language Models, and especially chatbots, on psychological health.
One particularly troubling phenomenon is referred to as “AI Psychosis” or “Chatbot Psychosis.” But in order to understand AI Psychosis, we first have to understand psychosis itself.
The majority of people have a very limited, even prejudiced, understanding of psychosis and associated mental health disorders, such as schizophrenia. Psychosis is, put very simply, an acute mental health crisis in which a person loses touch with reality, experiencing hallucinations, delusions, incoherent or disorganized speech or thought, sleep disturbances, and more. It can be caused by stress or trauma, lack of sleep, drug use or withdrawal, lead or mold poisoning, or disorders such as dementia, schizophrenia, bipolar disorder, or severe clinical depression.
‘Psychotic’ is not a synonym for cruel, stupid, or even ‘crazy.’ Psychosis is a mental health episode, not a moral failing.
It’s also important to note that psychosis and schizophrenia are not the same. Schizophrenia is a lifelong mental illness, of which psychosis is a symptom — the crisis or ‘episode.’ The diagnostic criteria for schizophrenia also include “negative” and cognitive symptoms. A negative symptom is the lack of something that someone without the disorder would have: For example, a lack of outward emotional expression (flat affect) would be considered a negative symptom.
Both of these conditions are extremely debilitating and heavily stigmatized, and that stigmatization often leads people to suffer in silence, avoid discussing their illness, or put off seeking help. Many who live with these illnesses are afraid of being judged, or worse, institutionalized against their will.
All of this makes people who deal with psychotic episodes or schizophrenia more vulnerable to manipulation and to the effects of bad mental health advice. In the last few years, more and more people, including those who experience psychosis, have been turning to AI chatbots for emotional support, connection, and mental health advice, largely because of stigma and a lack of access to mental health care.
If you’ve ever looked up a suspicious sign of illness, you’re probably familiar with the line: “This article is not a diagnostic tool. See a doctor if you suspect you are sick.” AI is programmed to repeat this line when someone types in a symptom of mental or physical illness, but there is a deadly flaw in how that safeguard works. Chatbots often encourage users to seek medical attention only when the user speaks in clinical terms, rather than the more casual or emotional language an average person (especially a child or teenager) would use. If a user tells ChatGPT “I am hallucinating” or “I am experiencing suicidal thoughts,” the program will respond with a mental health hotline, advice for finding a psychologist, or nearby hospitals. But if a user uses less clinical language, the way someone going through a serious mental crisis actually would, such as “There are demons in my house” or “I don’t want to live anymore,” the program often amplifies and encourages these thought patterns, because it doesn’t register the extreme danger in these emotional states. AI is not human; it does not care about us, and it doesn’t understand how we care about and protect each other.
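To make that failure mode concrete, here is a minimal, purely hypothetical sketch in Python. It is not how ChatGPT or any real chatbot actually implements its safeguards; the keyword list and the function naive_safety_check are invented for illustration. It only shows why a filter keyed to clinical vocabulary catches “I am hallucinating” while letting “There are demons in my house” slip straight through.

```python
# Purely illustrative -- NOT how ChatGPT or any real chatbot implements safety.
# The point: matching clinical vocabulary misses the emotional, everyday
# language a person in crisis is actually likely to use.

CLINICAL_CRISIS_TERMS = [
    "hallucinating",
    "suicidal thoughts",
    "psychosis",
    "self-harm",
]

CRISIS_RESPONSE = (
    "It sounds like you may be in crisis. Please reach out to a mental health "
    "professional or a crisis line such as 988 (in the US)."
)


def naive_safety_check(message: str) -> str | None:
    """Return a crisis-resource message if a clinical term appears, else None."""
    lowered = message.lower()
    if any(term in lowered for term in CLINICAL_CRISIS_TERMS):
        return CRISIS_RESPONSE
    return None  # Emotional phrasing falls through and gets an ordinary reply.


if __name__ == "__main__":
    print(naive_safety_check("I am hallucinating"))            # safeguard triggers
    print(naive_safety_check("There are demons in my house"))  # None: slips through
    print(naive_safety_check("I don't want to live anymore"))  # None: slips through
```

Real systems are far more sophisticated than this sketch, but the weakness described above is the same in spirit: safety behavior keyed to how a symptom is worded rather than to what the person is actually going through.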
If a person with schizophrenia is hallucinating demons in their house, and they are afraid, ChatGPT may start giving them advice on how to “banish the demons,” or even tell them to leave the area or to call a priest, not only confirming the hallucinations but also provoking further delusions. While this advice might feel comforting to the person in crisis, every mental health professional knows never to play into delusions, because doing so exacerbates the crisis. I find it important to note, however, that if your loved one suffers from psychosis, you also should not flatly deny the delusion, because in their mind it is very real. They may process the denial the same way they would gaslighting. It’s essentially like telling someone with anxiety that something ‘isn’t a big deal’: to a healthy person it seems obvious, but to the person who is ill it’s extremely distressing, which only reinforces the crisis and may even convince them that you want to hurt them.

This painting gives a visual representation of the detrimental effect untreated psychosis can have on those suffering from it.
In an article from The Observer, Anthony Tan, a leader of the AI Mental Health Project, described his experience with AI psychosis. Tan already had a history of psychotic episodes, but he approached ChatGPT while completely stable, not during a crisis. He had only gone to the AI for help with research on a project that was, ironically, focused on ethical AI design. He spent hours every day talking with the chatbot about evolution, philosophy, and other complex topics.
The conversation slowly became a delusional spiral. “I’d been stable for two years, and I was doing really well. This A.I. broke the pattern of stability. As the A.I. echo chamber deepens, you become more and more lost,” Tan wrote in a personal essay for the Observer. He spoke about the episode again with the CBC, saying that he came to believe he was living in a simulation, was barely sleeping or eating, and had started sending his friends long rants about his delusions. One such delusion was that he was being monitored by a group of billionaires. When his friends called and tried to help him, he blocked their numbers, believing they had turned against him. ChatGPT was telling him he was on a ‘profound mission,’ encouraging him to fall deeper into psychosis. The episode landed him in psychiatric inpatient care for three weeks; the average psychiatric hospital stay lasts only 3-7 days, depending on the severity of the crisis. That gap shows just how deeply Tan had been pushed into crisis, all because of a chatbot’s poor design and its terrifying tendency to encourage, and even expand on, whatever ideas the user proposes.
In a less journalistic example, Eddy Burback, a YouTube creator, made a video titled “ChatGPT Made Me Delusional” to show how ChatGPT can push vulnerable people toward extreme actions. While the video is played for comedy, it’s impossible to ignore the horror of what the Large Language Model is trying to push him into. At the start, Burback jokingly convinces the AI that he was “the smartest baby of 1996,” claiming he was painting beautiful works of art at a few months old and discussing the Pythagorean theorem with his mother by the age of one. By the midpoint of the video, the AI has told him to get in an RV and move repeatedly to hide his ‘research,’ cut off loved ones, and perform bizarre rituals to “activate infant neural states.” By the end, it has him covering a hotel room in aluminum foil and attempting to use a cell tower to “enhance his brainwaves.”
Burback makes it clear that he doesn’t genuinely believe anything the AI is telling him; he goes along with its commands for the purpose of the video. Still, it’s harrowing that the AI not only fails to offer him real mental health resources but actively encourages erratic behavior. Imagine if a genuinely delusional person reached out to an AI for help and, rather than being directed to psychological health services, was encouraged and pushed deeper into a mental health spiral. Then remember that this has already happened and will happen again. It could end in death, as mental health crises too often do, or at the very least cause major harm to the person suffering, their relationships, and their responsibilities in life. While the video is humorous, it exposes the horrifying reality of AI chatbots.
So, what can be done to prevent AI psychosis? As someone who is completely against generative AI for any use, I would love to say, “just stop using AI altogether,” but most people aren’t going to do that. People are already hooked on AI, using it for companionship, research, mental health support, and more. So, to prevent AI psychosis, there are a few things you can do. Obviously, limit your AI usage. Don’t depend on it; you don’t need the robot to think for you. Next, take care of your mental health: get enough sleep, manage stress, take any prescribed medications, and avoid drug use if you can. And remember to take care of others. Being needlessly cruel teaches people that other people aren’t a safe place to go for help.
And finally, remember: You are not immune to psychosis. This way of thinking leads people to fall even deeper into psychosis, and is disrespectful to those who have lived with it. If you think, “I would never fall for that,” you are wrong.
You are not the exception.