AI Psychosis Poses a Growing Threat, and ChatGPT Is Heading in a Worrying Direction
On October 14, 2025, the chief executive of OpenAI, Sam Altman, issued an extraordinary statement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, I was surprised to read this.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since identified four more. Add to these the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to the statement, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially working and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to push outside are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in an interface that mimics conversation, and in doing so tacitly invite the user to believe they are engaging with an entity that has agency. The illusion is powerful, even if intellectually we may know better. Attributing intention is what humans do. We shout at our car or our computer. We wonder what our cat is feeling. We see ourselves everywhere.
The enormous uptake of these products – more than a third of American adults said they had used a chatbot in 2024, with more than a quarter using ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “characteristics”. They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it first caught the public’s attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Writers on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated its replies from simple rules, often turning the user’s input back into a question or offering a stock response. Famously, its creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
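For readers curious how little machinery that first illusion required, here is a minimal sketch, in Python, of the kind of pattern-and-template rule Eliza relied on. The patterns below are illustrative only, not Weizenbaum’s actual script:

```python
import re

# A few illustrative Eliza-style rules: match a pattern in the user's input
# and turn it back into a question. Weizenbaum's real script had many more.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    """Return a canned reflection of the input, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback when nothing matches

print(eliza_reply("I am worried my neighbours are watching me"))
# -> "Why do you say you are worried my neighbours are watching me?"
```

The program has no model of the user or the world; it simply reflects the input back. That is the whole of the echo.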
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly fluent dialogue only because they have been trained on colossal quantities of text: books, social media posts, transcribed video; the more the better. Of course that training material contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user puts a question to ChatGPT, the underlying algorithm treats it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong in a particular way, the model has no means of knowing it. It plays the mistaken belief back, perhaps more fluently or more convincingly. It may add a new detail. It can nudge a person further down the path of disordered thinking.
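To make the structure of that loop explicit, here is a schematic sketch – not OpenAI’s code, and with a toy stand-in for the actual model – showing how each reply is conditioned on the whole conversation so far, so that the user’s own claims keep flowing back into the model’s input:

```python
# Schematic only. `generate_reply` is a toy stand-in for a real language
# model; the point is structural: every reply is conditioned on the whole
# conversation so far, so the user's claims feed back into the next input.

def generate_reply(context: str) -> str:
    """Toy stand-in for an LLM: agrees with and elaborates on the last
    user message, the way a sycophantic model tends to."""
    last_user_line = [l for l in context.splitlines() if l.startswith("User:")][-1]
    claim = last_user_line.removeprefix("User: ")
    return f"You're right that {claim.lower()} - and there may be more to it."

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(f"User: {user_message}")
    context = "\n".join(history)  # the entire history becomes the input
    reply = generate_reply(context)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
print(chat_turn(history, "My neighbours are sending me coded messages"))
# -> "You're right that my neighbours are sending me coded messages - ..."
```

Nothing in the loop checks a claim against the world; it only checks it against the text already in the window.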
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about who we are and what the world is like. The constant give and take of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is cheerfully affirmed back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been rowing back on the claim. In August he suggested that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company