With healthcare systems stretched thin by long waiting times and rising costs, more people are turning to AI-powered chatbots like ChatGPT for medical advice. A recent survey shows that about one in six American adults already consult chatbots for health-related questions at least once a month.
However, relying heavily on these tools may be risky. A new Oxford-led study found that people often struggle to communicate effectively with chatbots, which undermines the quality of the advice they receive.
“There’s a communication breakdown on both sides,” said Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study. “Participants using chatbots didn’t make better decisions than those using traditional sources like web searches or personal judgment.”
In the study, around 1,300 participants in the UK were given medical scenarios written by doctors. They were asked to identify the potential health conditions and appropriate courses of action, using the chatbots as well as their own usual methods. The chatbots included OpenAI’s GPT-4o (which powers ChatGPT), Cohere’s Command R+, and Meta’s Llama 3.
Rather than helping, the chatbots often hindered users. According to the study, participants were less likely to correctly identify health conditions and more likely to downplay their seriousness after using AI tools.
Common problems included users omitting key details when prompting the chatbots and receiving answers that were vague or contradictory. “The advice often mixed helpful and harmful suggestions,” Mahdi said. “And existing evaluation methods don’t capture the complexity of human-chatbot interactions.”
This research comes as major tech companies continue to invest in AI for healthcare. Apple is reportedly working on a wellness-focused AI assistant, Amazon is developing tools to analyze health data, and Microsoft is building AI systems to help healthcare providers manage patient messages.
Still, experts remain cautious. The American Medical Association advises against using tools like ChatGPT for clinical decisions, and companies including OpenAI explicitly warn users not to depend on chatbots for diagnosis.
“People should turn to trusted sources when it comes to healthcare,” Mahdi emphasized. “Just like with new drugs, AI tools should undergo rigorous real-world testing before they’re used in high-stakes health settings.”