ChatGPT can be accurate in medical decision-making, study finds

Credit: Jonathan Kemper/Unsplash.

A study led by experts from Mass General Brigham shows that the language model ChatGPT can make medical decisions with about 72% accuracy.

The research suggests that tools like ChatGPT could soon play a supporting role in healthcare.

Almost as Good as a New Medical School Graduate

The study, published in the Journal of Medical Internet Research, tested how well ChatGPT could make medical decisions.

It found that the tool performs at the level of someone who has just finished medical school, such as an intern or resident physician.

“This shows us that language models have the potential to help doctors make decisions with impressive accuracy,” said Marc Succi, MD, who led the study.

The researchers tested ChatGPT using 36 standard patient cases, checking how well it could diagnose and treat fictional patients.

They found that the tool was most accurate at making a final diagnosis, with a 77% accuracy rate. It was less effective at generating a differential diagnosis (the list of possible conditions suggested by a patient's initial information), scoring only 60% in that area.

What This Means for Healthcare

These findings are important because they show that while ChatGPT isn’t perfect, it can still assist healthcare providers in making decisions.

“Doctors are most valuable in the early stages of patient care when they have to figure out what’s wrong,” said Succi. This is where ChatGPT struggled the most, meaning doctors’ expertise is still vital.

However, once a diagnosis is closer to being confirmed, ChatGPT's accuracy improves.

Artificial Intelligence (AI) is changing many industries, including healthcare. But until now, not much was known about how well these tools could help with medical decisions.

Succi and his team are among the first to study this in detail, and their work could be a stepping stone to more advanced healthcare tools in the future.

What Comes Next

Before ChatGPT or similar tools can be fully incorporated into medical settings, more research is needed.

The next steps include testing whether AI tools can improve patient care and outcomes, particularly in areas with fewer resources.

Adam Landman, a co-author of the study, said: “We’re looking at language models that help with clinical documentation and responding to patient messages. We need to understand their accuracy, reliability, safety, and fairness before bringing them into clinical care.”

The study is part of a broader effort by Mass General Brigham to understand how AI can be responsibly used in healthcare.

It marks an important milestone in the ongoing effort to integrate advanced technology into medicine, with the goal of improving both the delivery of care and the experience of healthcare providers.

In summary, the research shows that while AI tools like ChatGPT are not ready to replace healthcare providers, they hold significant promise as supportive aids in medical decision-making.




Copyright © 2023 Knowridge Science Report. All rights reserved.