Be cautious in using ChatGPT for medical writing, study warns


Everyone agrees that when it comes to health, it’s essential to consult a professional. Even in the age of advanced technology, this remains true.

This point is underscored by a recent study from CHU Sainte-Justine and the Montreal Children’s Hospital, whose researchers set out to assess how trustworthy an artificial intelligence model, specifically ChatGPT, is for medical writing.

Eye-Opening Findings

Researchers posed 20 medical questions to ChatGPT, expecting it to support its responses with valid references. The results were concerning.

Not only did ChatGPT produce answers with inaccuracies, but it also fabricated a significant number of references.

These weren’t minor errors either. For instance, ChatGPT recommended taking an anti-inflammatory drug via injection when it’s meant to be taken orally. It also grossly overestimated the death rates due to Shigella infections.

When the references ChatGPT provided were examined, it was discovered that 69% were completely made up.

Although these false references appeared genuine at first glance – mentioning real authors, reputable organizations, and credible journals – they were entirely fictitious.

Even among the genuine references, nearly half contained errors.

Professionals Weigh In

Dr. Jocelyn Gravel, lead author of the study, emphasized that trust is the cornerstone of scientific communication.

He voiced concern over the reliance on such AI models without thorough scrutiny of the references provided.

Dr. Esli Osmanlliu, another key voice in the study, stressed the value of accurate referencing in scientific research. Authentic references indicate thorough research and knowledge.

Fabricating references, on the other hand, can be seen as fraudulent behavior in the academic world.

Given the attractive and clear presentation of the references provided by ChatGPT, Dr. Osmanlliu warns that researchers might easily be misled by the false information.

AI’s Take on the Matter

When confronted about the accuracy of its references, ChatGPT’s responses varied. In one instance, it directed researchers to a link with unrelated publications. On another occasion, it acknowledged the potential for errors in its information.

Wrapping Up

This study offers a clear message: while AI has its merits, it’s crucial to be cautious. In fields like medical research, where accuracy is paramount, relying solely on AI tools can lead to misinformation.

As technology continues to progress, the intersection of AI and various fields will become more pronounced. However, as this study reveals, human expertise remains indispensable, especially in areas as critical as healthcare.

If you care about AI and medicine, please read studies about how ChatGPT took people by surprise, and findings that ChatGPT’s healthcare responses are nearly indistinguishable from those of human providers.

The study was published in Mayo Clinic Proceedings: Digital Health.


Copyright © 2023 Knowridge Science Report. All rights reserved.