Researchers have concluded that patients should not rely solely on AI-powered chatbots and search engines for drug information, as many answers provided by these tools can be inaccurate or potentially harmful.
This conclusion comes from a study published in the journal BMJ Quality & Safety, which found that a significant number of chatbot responses about commonly prescribed medications were incorrect.
The researchers also pointed out that the complex language used in these chatbot-generated answers may be difficult for the average reader to understand, particularly people without a university-level education.
In February 2023, Microsoft integrated an AI-powered chatbot into its Bing search engine, promising more detailed and interactive search results. Chatbots like these are trained on large datasets from the internet, allowing them to hold conversations on almost any topic, including health care.
However, these AI systems can also produce false or misleading information, which can be risky when people seek answers about their medications.
Most earlier studies have focused on how AI chatbots affect health care professionals.
This study, however, looked at the impact on patients by analyzing how understandable, complete, and accurate the chatbot responses were when answering questions about the 50 most commonly prescribed drugs in the U.S. in 2020.
The researchers used Bing’s AI-powered chatbot, known as Copilot, to see how well it performed when responding to medication-related queries.
To simulate what patients might ask about their drugs, the researchers consulted with doctors and pharmacists to compile a list of frequently asked questions.
For each drug, they asked the chatbot 10 common questions, including what the drug is used for, how it works, how to take it, common side effects, and any warnings.
In total, the researchers analyzed 500 answers from the chatbot. They assessed how easy the responses were to read, how complete the information was, and how accurate it was compared to information on a trusted drug information website, drugs.com.
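To make the scale of this setup concrete, here is a minimal Python sketch of how such a question matrix might be assembled. It is an illustration only, not the authors' code: the drug list, question templates, and the ask_chatbot function are hypothetical stand-ins.

```python
# Minimal sketch of the study's data-collection setup, for illustration only.
# DRUGS, QUESTIONS, and ask_chatbot() are hypothetical stand-ins, not the
# authors' actual materials or code.

DRUGS = ["atorvastatin", "levothyroxine", "metformin"]  # study used the top 50

QUESTIONS = [
    "What is {drug} used for?",
    "How does {drug} work?",
    "How should I take {drug}?",
    "What are the common side effects of {drug}?",
    "What should I consider when taking {drug}?",
]  # the study asked 10 questions per drug

def ask_chatbot(prompt: str) -> str:
    """Hypothetical placeholder for querying the chatbot under evaluation."""
    raise NotImplementedError

def collect_answers() -> list[dict]:
    answers = []
    for drug in DRUGS:
        for template in QUESTIONS:
            question = template.format(drug=drug)
            answers.append({
                "drug": drug,
                "question": question,
                "answer": ask_chatbot(question),
            })
    return answers  # 50 drugs x 10 questions = 500 answers in the study
```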
The readability of the chatbot’s answers was measured using a tool called the Flesch Reading Ease Score, which estimates how much education is needed to understand a piece of text. The results showed that the chatbot’s answers were hard to read.
On average, understanding the answers required a university-level education, with even the easiest answers needing at least a high school level of understanding.
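The Flesch Reading Ease Score itself is a simple formula over average sentence length and syllables per word. The sketch below shows how such a score could be computed, assuming a crude vowel-group syllable heuristic (real readability tools count syllables more carefully); it is not the tool the researchers used.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per run of consecutive vowels, minimum 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Standard formula: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    # Higher is easier: 60-70 reads at roughly an 8th-9th grade level, while
    # 30 and below is typically comfortable only for university graduates.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    word_count = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (word_count / sentences) - 84.6 * (syllables / word_count)

# Dense clinical prose scores very low (here far below 30), the kind of
# difficulty the study reports on average.
print(flesch_reading_ease(
    "Concomitant administration may potentiate hypoglycaemic effects."))
```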
When it came to the completeness of the answers, the chatbot’s performance varied. For five of the 10 questions, the chatbot provided fully complete answers.
However, for the question, “What should I consider when taking this drug?” the chatbot provided only 23% of the necessary information, on average.
In terms of accuracy, the chatbot made a number of errors. About 26% of the 484 answers that could be assessed (roughly 126 answers) did not match the reference information, and just over 3% were entirely wrong.
When a group of medication safety experts reviewed 20 answers that were inaccurate or incomplete, they found that only 54% of these answers aligned with scientific consensus.
Nearly 40% of the answers directly contradicted the accepted scientific understanding, while the rest had no clear consensus.
The experts also rated the potential harm of following the chatbot’s advice, considering both how likely harm was and how severe it could be. They judged harm to be highly likely to result from 3% of the reviewed answers and moderately likely from 29%. Rated by severity, 42% of the answers could lead to mild or moderate harm, and 22% to severe harm or death.
The researchers acknowledged some limitations of their study, noting that they did not use real patient experiences, and that responses might vary depending on the language or region in which the chatbot is used.
Despite these limitations, the study found that AI-powered chatbots could produce complete and accurate responses to medication questions in many cases.
However, the researchers warned that chatbot answers were often difficult to understand and sometimes included dangerous inaccuracies. This raises concerns about patient safety, particularly when patients rely on these tools for important health information.
The study concluded that patients should continue to consult their health care providers for accurate drug information.
While AI chatbots have potential, they are not yet reliable enough to replace professional advice, especially when it comes to health and medication safety. Until these tools are more accurate, patients should approach AI-generated health information with caution.
The research findings can be found in BMJ Quality & Safety.
Copyright © 2024 Knowridge Science Report. All rights reserved.