AI chatbots in health care: helpful or harmful?


If you’ve been to a medical appointment recently, you may have already interacted with AI.

Some doctors now use “AI scribes” to turn your spoken words into medical notes during the appointment.

Others might ask for permission to use AI tools to help with diagnoses. You might have even asked ChatGPT about your symptoms yourself.

AI-powered chatbots are now being used in hospitals, clinics, and even on our phones. These systems, powered by large language models, are often promoted as a way to fill gaps in health care—especially in areas where there are not enough doctors.

But a new study published in the journal *npj Digital Medicine* shows these tools come with risks. While AI chatbots like ERNIE Bot, ChatGPT, and DeepSeek can be helpful, they can also lead to overtreatment and worsen health inequalities.

In this study, researchers tested how well these three AI chatbots worked in medical consultations. They compared them to human doctors using real-life scenarios. These included symptoms like chest pain and difficulty breathing.

The researchers used different patient profiles in their test cases, varying characteristics like age, gender, income, and location. For instance, both older and younger patients might present with shortness of breath, and the researchers wanted to see if chatbots recommended different care depending on the person’s background.

The good news? All three chatbots diagnosed conditions with high accuracy, and in many cases they outperformed the human doctors.

The bad news? The chatbots often recommended unnecessary tests and medications. In over 90% of cases they suggested tests that weren't needed, and in more than half they recommended inappropriate medications.

For example, a patient showing signs of asthma might be told they need antibiotics or an expensive CT scan, neither of which clinical guidelines recommend for that condition.

The chatbots also showed signs of bias. Patients who were older or wealthier were more likely to receive extra care, like more tests and prescriptions. This suggests AI tools could increase health inequality instead of reducing it.

This research highlights the need for caution. While AI can help make health care more available, especially in places with too few doctors, there must be systems in place to make sure it’s used fairly and safely.

Experts say we need clear rules and safety checks, including human oversight when important medical decisions are made. Without this, AI might do more harm than good.

AI is quickly becoming a part of everyday health care. This study shows we must plan carefully, making sure new tools benefit everyone equally. The goal should be safe, fair, and responsible use of AI—especially when it comes to something as important as our health.


Copyright © 2025 Knowridge Science Report. All rights reserved.