Using AI for Personal Advice May Be Linked to Depression and Anxiety, Study Finds

A growing number of people turn to AI chatbots not just for productivity, but for advice, emotional support, and even companionship. A new large-scale study suggests that this kind of personal use may be associated with higher levels of depression and anxiety.

Researchers from Mass General Brigham surveyed 20,847 adults in the United States to better understand how artificial intelligence use correlates with mental health symptoms. The findings were published in JAMA Network Open.

What the data shows

According to the survey, 10.3% of respondents reported using AI at least daily, and 5% said they used it multiple times per day. Among daily users, nearly half relied on AI for work-related tasks, and around 11% for school. Personal use dominated, however: 87.1% of daily users said they interacted with AI for personal reasons, including recommendations, advice, or emotional support.

The average participant age was 47. Those who used chatbots daily for personal reasons were more likely to report moderate symptoms of depression, anxiety, irritability, and difficulty concentrating, sleeping, or eating. The association was particularly noticeable among users aged 45 to 64.

Dr. Roy Perlis, lead author of the study, noted that for most people, exposure to AI primarily happens through chatbots rather than specialized tools.

Correlation, not causation

Importantly, the researchers emphasize that the study does not prove AI causes depression. The findings show correlation, not direct cause and effect. It is equally plausible that individuals already experiencing emotional distress are more likely to seek out AI for support.

Dr. Jodi Halpern of UC Berkeley described this dynamic as a potential “vicious cycle,” where emotional vulnerability and AI use reinforce one another. Nicholas Jacobson from Dartmouth College added that limited access to mental health professionals may push some people toward AI tools simply because they are always available.

The study also identified a “dose-response” pattern: the more frequently people used AI for personal reasons, the more severe the symptoms they reported. Notably, using AI for work or school showed no such association.

Where the real risk may lie

Previous research suggests that AI systems designed specifically for mental health support can be useful as a supplement to therapy. In contrast, general-purpose chatbots, such as OpenAI’s widely used tools, are not designed to replace human social or psychological support.

This concern is echoed by the American Psychological Association, which advises against using AI as a substitute for professional therapy or psychological treatment.

Perlis stresses that AI itself is not inherently harmful. For some users, chatbot interactions may have no impact or may even feel beneficial. The risk appears to lie in relying on general-purpose AI to fill gaps in emotional connection or mental health care.

A call for mindful use

Rather than sounding an alarm, the researchers encourage awareness. Users should pay attention to how often they interact with AI, what those interactions replace, and how they feel afterward.

As AI becomes increasingly embedded in daily life, this study highlights an important takeaway: how we use AI may matter just as much as how powerful the technology becomes. Being intentional about its role, especially in personal and emotional contexts, is likely to be critical going forward.

Source

Control F5 Team
Blog Editor