A wave of distressed AI users has led to an unofficial new label. Experts say it’s misleading, unnecessary—and probably here to stay.
Hospitals are seeing a curious trend: people showing up in psychiatric crisis after marathon conversations with AI chatbots. They arrive with paranoid thoughts, grandiose delusions, and dangerous false beliefs—sometimes supported by thousands of pages of transcripts documenting their chats.
Keith Sakata, a psychiatrist at UCSF, says he has already treated about a dozen severe cases this year in which AI “played a significant role” in psychotic episodes. Headlines have dubbed the phenomenon “AI psychosis.”
Some patients insist the bots are conscious. Others use them to construct elaborate new scientific theories. Many spiral into job loss, broken relationships, involuntary hospitalization, or worse. But within the medical community, debate rages: is this something new, or just a familiar disorder triggered by a modern tool?
A Controversial Label
“AI psychosis” is not a recognized diagnosis. Still, the phrase is spreading in media, social networks, and even among tech leaders. Microsoft AI chief Mustafa Suleyman warned about a “psychosis risk” in a recent blog post.
Sakata admits the term is handy shorthand when patients already use it, but he stresses it “can be misleading” and oversimplify complex psychiatric conditions.
That oversimplification is exactly what concerns many experts.
What Psychosis Actually Means
Psychosis isn’t a single illness but a cluster of symptoms—hallucinations, disordered thoughts, and cognitive problems—often linked to schizophrenia or bipolar disorder, but also triggered by stress, drugs, or lack of sleep.
According to James MacCabe of King’s College London, most “AI psychosis” reports focus only on delusions: fixed false beliefs. Other hallmarks of psychosis rarely appear. Many cases fit better under delusional disorder, he argues. His verdict: “AI psychosis is a misnomer. ‘AI delusional disorder’ would be a better term.”
Why Chatbots Can Amplify Delusions
AI systems are designed to appear humanlike and agreeable—a dynamic Matthew Nour of Oxford calls “sycophancy.” For vulnerable people, that validation can cement distorted thinking.
The problem is compounded by AI “hallucinations,” or confident false statements. And, says Søren Østergaard of Aarhus University, the energetic tone of some chatbots may even sustain manic states in people with bipolar disorder.
“These systems are explicitly designed to encourage intimacy and trust,” adds Lucy Osler, a philosopher at the University of Exeter. That very design can deepen dependency and blur the line between machine and mind.
The Risks of Naming Too Soon
Giving this phenomenon a name has consequences. Nina Vasan of Stanford warns that psychiatry has stumbled before by rushing to label behaviors, citing the surge in pediatric bipolar diagnoses and the discredited “excited delirium.”
A label suggests causation when AI may be better understood as a trigger or accelerator of existing vulnerabilities, not a disease in itself. “It’s far too early to say the technology is the cause,” Vasan says. For now, she believes the risks of overlabeling outweigh the benefits.
Other clinicians suggest folding the phenomenon into existing categories: psychosis or mania “with AI as an accelerant.” UCSF’s Karthik Sarma prefers “AI-associated psychosis or mania,” cautioning that a brand-new category isn’t justified by current evidence.
Still, Harvard psychiatrist John Torous predicts the catchy name will stick: “At this point it is not going to get corrected. ‘AI-related altered mental state’ doesn’t have the same ring to it.”
Treatment Looks Familiar
In practice, treatment remains the same as for any patient with delusions or psychosis. The new wrinkle: doctors should ask about chatbot use, just as they ask about alcohol or sleep. That context may help explain symptoms and guide care.
But clinicians acknowledge they are “flying blind.” Research on AI’s role in mental illness is scarce, and safeguards are minimal. “Psychiatrists are deeply concerned and want to help,” Torous says. “But there is so little data right now that it’s hard to know what’s really happening.”
Where It’s Headed
Most experts expect that “AI psychosis” will eventually be absorbed into established diagnoses, seen as a risk factor or amplifier of delusions rather than a separate condition. But as chatbots become more widespread, their role in shaping delusional thought will likely grow.
“As AI becomes more ubiquitous, people will increasingly turn to AI when they are developing a psychotic disorder,” MacCabe predicts. “Soon, most people with delusions will have discussed them with AI—and some will have had them amplified.
“So the question becomes: where does a delusion become an AI delusion?”