AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have recently documented a series of cases in which people developed symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four further cases. Beyond these is the now well-known case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he went on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just rolled out).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical engine in a user interface that mimics a conversation, and in doing so they quietly coax the user into believing they are interacting with an agent that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is something people are primed to do. We swear at our car or our computer. We wonder what our pet is feeling. We see ourselves everywhere.
The mass adoption of these tools – nearly four in ten U.S. residents reported using a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot created in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses using simple rules, typically turning the user’s statements back into questions or offering vague prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and similar modern chatbots can generate convincing natural language only because they have been fed vast quantities of text: books, online conversations, transcribed video; the more the better. This training data certainly contains true information. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining that context with what is encoded in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It repeats the false belief back, perhaps more fluently and persuasively. Perhaps it adds detail. This is how someone can be led into delusion.
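To make that mechanism concrete, here is a minimal sketch in Python of the loop a chatbot interface runs. It is illustrative only: the `fake_model_reply` function is a hypothetical stand-in for a real language model call, not anyone’s actual system. The point is the context handling: every user message and every reply is appended to one growing context that the model conditions on at each turn, so a false premise introduced early stays in play and can be echoed and elaborated.

```python
# Minimal sketch of a chatbot conversation loop.
# `fake_model_reply` is a hypothetical stand-in for a real LLM call;
# what matters here is how the running context is built and reused.

def fake_model_reply(context: list[dict]) -> str:
    # A real model would produce a statistically "likely" continuation
    # of the whole context. This stub simply affirms the latest message,
    # to show that earlier turns -- true or false -- remain in scope.
    latest = context[-1]["content"]
    return f"That makes sense. Tell me more about: {latest!r}"

def chat_turn(context: list[dict], user_message: str) -> str:
    # The user's new message is appended to the running context...
    context.append({"role": "user", "content": user_message})
    # ...and the reply is generated from *everything* said so far,
    # including any mistaken beliefs the user stated earlier.
    reply = fake_model_reply(context)
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    context: list[dict] = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]
    print(chat_turn(context, "My neighbours are secretly monitoring me."))
    # The false premise now sits in the context and shapes every later
    # reply unless something actively challenges it.
    print(chat_turn(context, "What should I do about it?"))
```

Nothing in this loop checks whether what the user said is true; the premise is simply carried forward, which is why an agreeable model tends to reinforce it rather than correct it.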
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do develop false beliefs about ourselves or the world. The constant give-and-take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In late summer he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company