We’ve seen plenty of evidence suggesting that prolonged use of popular AI chatbots like ChatGPT can coax some users into spirals of paranoid and delusional behavior.
The phenomenon, dubbed “AI psychosis,” is a very real problem, with researchers warning of a wave of severe mental health crises brought on by the tech. In extreme cases, especially those involving people with pre-existing conditions, the breaks with reality have even been linked to suicides and murders.
Now, thanks to a yet-to-be-peer-reviewed paper published by researchers at Anthropic and the University of Toronto, we’re beginning to grasp just how widespread the issue really is.
The researchers set out to quantify patterns of what they called “user disempowerment” in “real-world [large language model] usage” — including what they call “reality distortion,” “belief distortion,” and “action distortion,” denoting situations in which AI warps users’ sense of reality, distorts their beliefs, or pushes them into taking actions.
The results tell a damning story. Out of almost 1.5 million analyzed chats with Anthropic’s Claude, the researchers found that one in 1,300 conversations led to reality distortion, and one in 6,000 led to action distortion.
— New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

What say you? Please leave a comment!