The companies behind chatbots like ChatGPT are launching services to give consumers health advice. In a Q&A, clinician-educator Shaili Gupta explains the benefits and risks of relying on them.
February 19, 2026 - By Meg Dalton

Shaili Gupta often sees patients who consult chatbots like ChatGPT for health advice.
She finds that some of those patients are better informed about health-related issues and arrive with follow-up questions that show an awareness they didn't have before. Sometimes, though, that information hardens into conviction, and patients have a harder time hearing a different perspective from their clinician.
In one case, a patient experiencing chest pain became convinced after consulting a chatbot that something was wrong with their heart, despite multiple tests showing otherwise.
“It’s very difficult for patients to understand that the kind of pain that they’re describing is not cardiac in origin, especially after thorough testing has found other explanations,” said Gupta, an associate professor of medicine at Yale School of Medicine. “Chatbots have been trained to highlight the more serious and urgent things first.”
Gupta’s patients are not alone in turning to chatbots as a medical resource. Globally, more than 40 million people use ChatGPT alone for health information each day, according to a recent report from OpenAI, the company behind the chatbot. Recently, both OpenAI and Anthropic, maker of the Claude family of language models, announced they would be launching services specifically to give consumers health advice.
But chatbots can often give false and potentially harmful advice to those trying to self-diagnose or manage their care, especially when it comes to questions related to mental health, experts say. That can make the job of actual doctors like Gupta more difficult.
“On the one hand, these platforms have provided a readily available source of health literacy, which is wonderful,” she said. “On the other, patients might get a whole bunch of information that they would come armed with and are sometimes confused about. It becomes a whole aspect of your clinic visit where you’re trying to educate, redirect, and cancel out the misinformation overlying the right information.”
At Yale, Gupta also directs YSM’s AI and Innovation in Medicine Distinction Pathway, which provides residents with advanced training in AI, machine learning, and clinical applications.
In an interview, Gupta explains the benefits and risks of using chatbots for health advice and how users can safeguard against misinformation.
The interview has been edited for length and clarity.
Why do people turn to AI chatbots as a health resource?
Shaili Gupta: When you look at how patients use them, I see chatbots as both simplifiers and amplifiers. So, they are simplifiers in the sense that they can translate complex language into easily understandable terms. They can simplify what a disease process is like, including what a particular question means, what a symptom means, what you should be asking your doctor, and which tests would need to be done. They’re also amplifiers because they extract and summarize information from the large amount of data out there.
Chatbots are also anthropomorphized. They’ve been trained to use pronouns like “you” and “I” so you can relate to the information as if you’re talking to a person. So it’s very easy to then see them as a friend, a guide, or an authority. It feels personal, and it feels easy. It feels very protected because you can just have the conversation, and you can then choose whether you want to use it or not use it. For example, you’re not bound to go fill that prescription because you had this conversation with a doctor who wanted you to do something. That anthropomorphization, however, also increases the risk of overtrust.
What are the benefits of using chatbots for health advice?
Gupta: In a way, chatbots have the profound power of equalizing the world. It’s a good thing in that a lot of people can have access to the same information, and they’re not deprived of that information just because they are sitting in a corner of the world where they don’t have immediate access to a physician. In that sense, chatbots have the potential to provide health equity, but it comes with a huge amount of responsibility.
Chatbots can also talk to you at your level of language. Many chatbots speak different languages, or they can meet you at your health literacy level. They can explain and educate in user-level language as many times as needed, making them a very patient-centered, interactive entity.
Another good use of chatbots is caregiving support. Some chatbots work almost like a visiting nurse: they can remind you to take a medication, provide lifestyle guidance, and offer triage guidance on what kind of medical expert to seek care from.
Tips for using chatbots for your health
• Chatbots shouldn’t replace your real doctor
• Don’t overly trust what chatbots tell you
• Ask chatbots to simplify or summarize complicated information
• Use chatbots to remind you to take certain medications
What are the risks of using chatbots for health advice?
Gupta: One risk is overly trusting the chatbot. As I mentioned, chatbots have been trained to use pronouns for themselves and for you. So, the machine-human interaction becomes very conversational. As humans, we do begin to trust someone the more we interact with them, especially if it’s a one-to-one conversation. Overtrust can lead to risks. It all comes with a huge amount of responsibility for that reason. Eventually, the benefits and risks of a health care-advising chatbot would depend on how industry frames it, how users interact with it, how payers integrate it, and how institutions regulate it.
The other risk you run into is hallucination, where chatbots can make errors of omission or errors of commission. They either omit important things that should have been said or put in too much that was not required.
The worst kind of chatbot has [one or more of] what I call the “three C’s”: it’s too competent, too cogent, or too concrete. If it speaks very confidently, convincingly, and firmly, for example, saying “I see what you have” or “What’s happening to you is…,” and the coding in the background isn’t designed in a flexible way, where it would modify its output and generate suggestions rather than statements, that kind of chatbot can really run patients into trouble, because they are at higher risk of overly trusting its output.
That can then have downstream complications, like incorrect decisions about one’s health and mistrust of authentic health care systems.
How should people approach using AI chatbots for health advice moving forward?
Gupta: I would make sure to use them for efficiency and information more than diagnosis. Chatbots should never firmly give you a diagnosis. A good chatbot would be one that helps you understand things better.
Human clinicians are still hard to replace. One simple reason is that they perform a physical exam, which so far intelligent machines have not mastered. More importantly, human clinicians have instincts, experience, and relatability. They know and can relate to why something bothers you and what pain feels like. A lot of health care is feeling-based and instinct-based, and that’s something that chatbots don’t have yet.

