Chatbots in healthcare: a call for regulation


AI-powered chat tools like ChatGPT or Google’s Med-PaLM are becoming increasingly popular.

These tools are built on Large Language Models (LLMs), which can hold human-like conversations, and they are being considered for use in healthcare.

However, there are risks tied to their use, and calls for proper regulation are growing.

The Risk of Misinformation

Professor Stephen Gilbert, an expert in Medical Device Regulatory Science, warns about the potential dangers of LLMs. These chat tools can make convincing statements that are actually wrong or inappropriate.

When it comes to medical advice, there’s no way to be sure about the accuracy or quality of the information they give. This lack of certainty makes them risky for patient safety.

The Need for Regulation

Before seeing a doctor, many people search for their symptoms online. Search engines often play a big role in this process. If LLMs are integrated into search engines, people might trust their responses more.

But LLMs can sometimes give dangerous advice because they have no real understanding of medical facts.

There have already been cases where LLMs have given harmful medical advice. They have even been used without patient consent.

Nearly all medical uses of LLMs would require regulatory approval in places like the EU and the U.S. The problem is that current LLMs cannot clearly explain how they produce their outputs, so they cannot be classified as “non-devices” that would be exempt from that oversight.

The authors of a new paper suggest that there are very few situations where LLMs could be used under current regulations.

They also explore how developers could make LLM tools that could be approved as medical devices. This would require creating new regulations that make patient safety a priority.

Key Principles for AI in Healthcare

The authors stress that current LLM chatbots do not meet the basic requirements for AI in healthcare. These requirements include controlling bias, explaining their outputs, and being validated and transparent.

To be useful in healthcare, chatbots must be designed for accuracy, and their safety and effectiveness must be demonstrated and approved by regulators, concludes Professor Gilbert.

The study was published in Nature Medicine.

Copyright © 2023 Knowridge Science Report. All rights reserved.