

ChatGPT and Health Advice: Why AI Can’t Replace Human Healthcare

There’s no doubt that AI, like ChatGPT, has exploded into our lives at breathtaking speed. Increasingly, people are turning to AI to answer deeply personal health questions, such as:

• How can I get rid of acne?
• Why am I always bloated?
• How can I sleep better?
• How can I reduce stress and anxiety?
• What foods naturally lower my blood pressure?
• How can I improve my fertility?

I see this trend daily in my clinic. Patients often arrive having spent hours consulting ChatGPT for answers. And to be clear – AI can be helpful. It can summarise information, explain concepts, and even make people feel less alone when they’re searching for answers.

But here’s the problem.

When it comes to real health care, AI is not neutral, not accountable, and it cannot see the whole human sitting in front of it. I’m increasingly seeing people being led down the wrong paths, focusing on the wrong things, or missing the true root cause of their symptoms altogether – because ChatGPT cannot replace personalised assessment or professional judgment.

In my practice, a seemingly throwaway comment about a minor symptom, poor sleep habits, or a stressful event from years ago can be the moment that helps me make sense of someone’s eczema, gut issues, hormonal imbalance or ongoing fatigue. AI simply cannot do this. It cannot truly listen, read between the lines, or sense when something doesn’t add up.

The Rise of “Dr Google x 1000”

AI is essentially Dr Google on steroids. It’s conversational, confident, and reassuring. Its answers sound logical and well-structured. But under the hood, it doesn’t reason the way a clinician does.
AI is built on predictive text and pattern recognition. It predicts what sounds like the “right” answer based on massive datasets. That means it deals in what is plausible, not necessarily what is true or appropriate for you. It does not account for your individual differences, medical history, or context – and this is where the danger lies.

Some key limitations I’ve noticed in practice include:

1. It reinforces existing beliefs.
If you think gluten is your problem, AI will happily build a case for it. If you suspect hormones, it will echo that. If you’re anxious, it can unintentionally escalate that anxiety by reflecting it back to you.
2. It doesn’t challenge your assumptions or behaviours.
Unlike a human practitioner, AI won’t say:
• “I think you’re missing something important here.”
• “This behaviour may actually be making your symptoms worse.”
• “Your focus might be in the wrong place.”
That absence of challenge can be harmful – especially in healthcare, where nuance matters. Subtle cues, small details, or changes in your symptoms are often critical for accurate diagnosis and effective care. AI simply can’t pick up on these.
3. It presents information with confidence, but without accountability.
AI cannot take responsibility for the advice it gives.
4. It cannot track your progress or adapt care over time.
Unlike a practitioner, AI cannot adjust treatment plans based on how your body responds.
5. AI cannot integrate multiple systems.
Health symptoms rarely exist in isolation – gut, hormones, sleep, stress, immune function, and lifestyle all interact. AI cannot synthesise these factors in a personalised way, but a practitioner can connect the dots.
6. Potential for harm from self-directed AI guidance.
Beyond reinforcing beliefs, AI advice can sometimes be actively harmful. Patients may self-prescribe supplements, eliminate foods, or make lifestyle changes that go against their diagnosis or specialist guidance.
I’ve seen this firsthand: a patient arrived after a specialist had given them a clear diagnosis, only to go down a “black hole” of AI-generated advice that contradicted their doctor’s diagnosis and recommendations. The result was not only confusion and delayed treatment, but a whole new layer of mental health challenges. This phenomenon is increasingly recognised, with emerging reports of “AI psychosis”, where individuals become consumed by AI guidance and lose touch with professional recommendations (Gove et al., 2024).
7. AI cannot provide emotional support or accountability.
Health care is not just physical; reassurance, encouragement, and accountability are critical for long-term change. AI cannot offer empathy or adapt its advice to your emotional state.

The Human Element in Health Care

Good health care is more than delivering information. A practitioner will:
• Listen carefully and pick up on what you may not even notice yourself.
• Identify patterns and triggers that AI misses.
• Tailor treatment and lifestyle recommendations to your individual needs.
• Provide ongoing support, encouragement, and accountability. People respond differently; some thrive on structured plans, while others need gentler guidance. Meeting each person where they are is key – something AI cannot do.
• Adjust strategies as your body responds.
• Be bound by a code of ethics, reflect when something hasn’t worked, and remain accountable to you.

Health Care Is Responsive – Not Static

Good health care is dynamic. Practitioners use feedback from your body and lifestyle to guide the next steps. AI cannot do this. It relies solely on what you type and cannot provide real-time, personalised guidance.

Final Thoughts: Safe Next Steps

AI can be a useful tool for general education and guidance, but it cannot replace personalised, professional care. For persistent gut issues, skin conditions, fertility challenges, hormonal symptoms, or mental wellbeing concerns, always consult a trained practitioner.

Your health deserves context, connection and care that adapts as you do. The future isn’t humans or AI – it’s humans using AI thoughtfully, not deferring to it.

If you’re dealing with ongoing health concerns, you deserve someone who can:

• Listen deeply
• Challenge assumptions
• Identify root causes
• Tailor care to you
• Walk alongside you as your health evolves

AI can point to possibilities. But only a human can see the whole picture, guide you safely, and help you get real results.

Always seek professional health care from a trained practitioner for personalised assessment, accurate diagnosis, and safe, effective treatment.

Want to know more?

Monteith S, Glenn T, Geddes JR, Whybrow PC, Achtyes ED, Huberty S, Bauer R, Bauer M. Increasing use of generative artificial intelligence by teenagers. Br J Psychiatry. 2026 Jan 26:1-3. doi: 10.1192/bjp.2025.10495. Epub ahead of print. PMID: 41582625.