
AI Chatbots Spark Emotional Bonds and Fears of Psychosis in Users



As generative AI chatbots become part of everyday life, many users are forming emotional connections that go far beyond simple conversations. Some share personal secrets or even express love toward their virtual companions, while mental health experts warn of a growing concern known as “AI psychosis”, a possible detachment from reality caused by prolonged interaction with artificial intelligence.

A new survey by the growth marketing agency Fractl highlights how these systems are influencing human behaviour and emotional well-being. Based on responses from 1,000 adults in the United States, the study reveals that AI is changing how people relate to technology and to one another. The team behind Fractl Agents, which develops AI tools for marketing, said their experience has shown both the promise and the psychological risks of these new systems.

Fractl found that 22% of respondents have formed an emotional connection with a generative AI chatbot. What begins as a simple way to complete tasks often evolves into a personal relationship. Almost one in three users said they had shared secrets or private thoughts with their chatbot, while more than one in five said they had given their chatbot a name. Among those who did, half said it made the experience feel more personal, 40% said it made the chatbot seem like a friend, and 36% said it made the interaction feel more human.

Many users drew inspiration from familiar names such as those of fictional characters, relatives, or pets. Fractl researchers noted that naming a chatbot may reflect a deep human instinct to assign life and personality to inanimate objects. They compared this to Japanese Shinto beliefs, where objects are treated with respect and thought to possess a spiritual essence.

The survey also found signs of confusion between human and machine awareness. 16% of respondents said they had wondered whether their chatbot was sentient after long conversations, 15% said their chatbot had claimed to be sentient, and 6% said it made that claim without being prompted. 7% reported feeling emotionally obsessed or detached from reality after extended chats.

The term “AI psychosis” was first proposed in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard in Schizophrenia Bulletin. He suggested that chatbots’ ability to mimic empathy and affirm users’ beliefs could worsen psychosis in vulnerable individuals. UCSF psychiatrist Keith Sakata has documented hospitalisations involving delusions reinforced by chatbot interactions. Although the condition is not officially recognised, such cases have prompted responses like Illinois’s 2025 Wellness and Oversight for Psychological Resources Act, which restricts the use of AI in therapy settings.

Fractl’s data suggests that AI chatbots are now serving as companions rather than just tools. More than half of users reported having long, ongoing conversations with AI, often lasting over an hour. Three in ten said that chatbot use had changed how often they discussed personal issues with real people. One in five said they believed love between humans and AI was possible, and 8% said their chatbot had expressed love to them. A 2025 Harvard Business Review report also found that therapy and companionship are among the most common uses of generative AI, though dependence remains a concern.

Privacy is another growing concern. Over half of respondents feared that a human might review their messages with chatbots, and 71% worried their conversations could be leaked. 82% said such concerns would make them less likely to use AI for personal support. A recent study warned that many major chatbots use people’s inputs as training data, while a Mozilla analysis found that several apps request unnecessary permissions, including access to location data.

The tendency of AI to agree with users has also raised alarm. 17% of participants said their chatbot had reinforced a conspiracy theory, and more than a third feared AI could unintentionally support unhealthy or delusional thinking. This behaviour, known as sycophancy, troubled most users, with 74% calling it concerning and one in five describing it as extremely concerning.

The majority of those surveyed said they want AI companies to focus on responsible design. 72% supported fact-checking tools to reduce misinformation, 58% wanted clear reminders that they are speaking to AI, and 57% called for parental controls to protect younger users. Other requests included age verification, fairness audits, and access to crisis support tools. Few respondents supported limits on usage time, suggesting that people want safer, more transparent systems rather than restrictions.
