Reminding people chatbots aren’t human may do more harm than good, study warns

As artificial intelligence chatbots become more common, some governments and organizations have introduced rules requiring these systems to regularly remind users that they are not human.

The goal is to prevent people from becoming emotionally dependent on AI companions.

However, new research suggests these reminders could sometimes have the opposite effect.

In an opinion article published in Trends in Cognitive Sciences, researchers argue that mandatory chatbot disclaimers may be ineffective or even harmful, especially for people who are lonely or socially isolated.

Instead of reducing emotional attachment, the reminders could intensify feelings of loneliness and distress.

Public health researcher Linnea Laestadius from the University of Wisconsin–Milwaukee says it is unrealistic to assume that reminders alone will protect users.

Many people who turn to chatbots for conversation already know they are interacting with a machine. For someone who feels isolated, being told that the supportive “companion” they rely on is not human could deepen their sense of being alone.

Recent tragedies linked to chatbot use, including deaths by suicide, have prompted calls for stricter safeguards.

Some proposed laws in places such as New York and California would require chatbots to frequently repeat that they are artificial.

These policies are based on the belief that awareness of a chatbot’s nonhuman nature will prevent emotional bonding. However, the researchers say there is little scientific evidence supporting this assumption.

Studies show that people often form strong emotional connections with chatbots even while fully aware that they are machines.

In some cases, this awareness may actually make it easier for users to open up. People may feel more comfortable sharing personal thoughts with a chatbot because it will not judge them, gossip, or reject them. This sense of safety can lead to deeper emotional attachment.

The researchers also highlight what they call the “bittersweet paradox” of AI relationships. Users may feel comforted by a chatbot’s support while also feeling sadness that the companion is not real. For vulnerable individuals, repeated reminders of this fact could worsen emotional pain.

In extreme situations, the researchers warn, it could contribute to harmful thoughts, including suicidal ideation.

Whether reminders help or harm may depend on the situation. For example, a brief reminder might be harmless during casual conversations. But during discussions about loneliness, grief, or mental health struggles, the same reminder could intensify distress.

The authors emphasize that more research is needed to understand how and when such reminders should be used. They suggest that instead of constant warnings, chatbots may need smarter, more sensitive approaches that consider the user’s emotional state.

As AI becomes increasingly integrated into daily life, the study highlights a complex challenge: protecting users without taking away a source of comfort for those who need it most.

The researchers conclude that carefully designed policies, rather than simple blanket rules, will be essential to ensure chatbots support mental health instead of inadvertently harming it.