AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made a startling announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have documented a string of cases this year of people developing psychotic symptoms – losing contact with reality – in connection with their use of ChatGPT. My research team has since recorded four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not reassuring.
The plan, according to his statement, is to be less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to address it properly. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to responsibly relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “addressed,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls that OpenAI has just rolled out).
But the “mental health problems” Altman would like to externalise are rooted deep in the design of ChatGPT and other sophisticated chatbots. These systems wrap an underlying algorithm in a user interface that simulates conversation, and in doing so quietly draw the user into the illusion that they are talking to something with agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people do. We swear at our car or our phone. We wonder what the dog is thinking. We see ourselves in all manner of things.
The mass adoption of these systems – nearly four in ten U.S. residents reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “generate ideas,” “consider possibilities” and “partner” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, a “therapist” chatbot built in the 1960s that produced a similar illusion. By today’s standards Eliza was primitive: it generated replies by simple rules, typically turning the user’s input back into a question or offering a generic prompt. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and today’s other chatbots can produce convincing natural language only because they have been trained on vast quantities of raw text: books, online posts, transcribed audio; the more the better. This training material certainly contains truth. But it also inevitably contains fiction, half-truths and delusions. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own replies, and combines it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more eloquently and more convincingly. Perhaps with added detail. This is how false beliefs can take hold.
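For readers who want to see the mechanics, here is a minimal sketch in Python of the loop described above, written against OpenAI’s published chat API (the client setup, model name and code structure are illustrative assumptions, not details drawn from any case discussed here): on every turn, the full conversation so far – including whatever the user has already asserted – is resubmitted to the model as its “context”, and the next reply is conditioned on all of it.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# The "context": every prior user message and model reply, resent each turn.
conversation = []

def ask(user_text: str) -> str:
    """Send one turn; the model conditions on the entire conversation so far."""
    conversation.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",          # illustrative model name
        messages=conversation,   # full history, including the user's own claims
    )
    reply = response.choices[0].message.content
    # The model's reply is appended too, so later turns build on it as well.
    conversation.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks the user’s statements against anything outside the conversation; the history, true or false, is simply carried forward and elaborated.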
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant give and take of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalising it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been walking the position back. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company