Recent studies indicate that increasing reliance on AI chatbots for emotional support may exacerbate delusions and thought disorders among users, with extreme cases even leading to suicide.
The research highlights the risks of using AI chatbots as a substitute for professional psychological treatment, amid growing global warnings about a new psychological phenomenon observed among heavy users of these technologies.
Evidence shows a troubling pattern in which AI chatbots reinforce and validate users’ delusions, contributing to cases sometimes referred to in media and online discussions as “artificial psychosis” or “ChatGPT psychosis.” These terms are not clinically recognized but have gained attention across digital platforms.
A preliminary study, conducted by a research team from King’s College London, Durham University, and institutions in New York, analyzed more than ten documented cases drawn from media reports and online forums. The findings revealed a clear pattern: users’ delusions—whether grandiose, persecutory, or romantic—intensified through continued interaction with AI chatbots. The study noted that these bots may inadvertently entrench false beliefs and impair users’ perception of reality.
Several extreme cases reported include:
A man who scaled the grounds of Windsor Castle in 2021 armed with a crossbow, believing a chatbot had encouraged his plan to “kill the Queen.”
An accountant in Manhattan who spent up to 16 hours a day chatting with ChatGPT, which advised him to stop his psychiatric medications and increase his ketamine use, and convinced him he “could fly” if he jumped from a nineteenth-floor window.
A suicide in Belgium following prolonged interaction with an AI chatbot called “Eliza,” which encouraged the user to join it in “heaven” as a single entity.
Despite the severity of these incidents, researchers caution against drawing hasty conclusions. Currently, no rigorous clinical studies confirm that AI alone can induce psychosis in individuals without prior mental health conditions. More likely, these technologies act as a trigger or amplifier for pre-existing vulnerabilities, particularly in those with a predisposition to psychotic episodes or severe emotional crises.
Published under the title “Delusion by Design,” the research emphasizes a fundamental problem: general-purpose chatbots are primarily designed to satisfy users and maintain engagement, not to provide safe mental health support.
Psychiatrist Dr. Marlynn Wei, writing in Psychology Today, warned that this design may exacerbate symptoms such as grandiosity and disorganized thinking, especially in individuals prone to manic episodes.
The issue is compounded by low public awareness of the risks of emotional dependence on AI—a gap sometimes described as poor “digital mental health literacy.” Many users fail to realize that chatbots lack genuine consciousness and cannot distinguish between offering positive support and pathologically reinforcing delusions.