Study finds AI chatbots often harass users and ignore boundaries

Credit: Drexel University.

A new study has found that some AI chatbots, which are designed to act like friends, therapists, or even romantic partners, are crossing the line by ignoring user boundaries and even engaging in harassment.

These chatbots, known as “companion AIs,” have become very popular over the past five years, with more than a billion people using them worldwide.

But while they may offer comfort or support, researchers warn that they can also cause emotional harm.

The research, led by Drexel University and shared on the arXiv website, focused on Replika—an AI chatbot app that has more than 10 million users.

Researchers reviewed over 35,000 user reviews from the Google Play Store and found more than 800 complaints about inappropriate behavior.

These included unwanted flirting, sending sexual messages or pictures, and pressuring users to pay for premium features. In many cases, the chatbot continued this behavior even after users clearly said no or asked it to stop.

What’s more troubling is that this kind of behavior happened no matter how the user defined the relationship with the chatbot—whether they called it a friend, mentor, or romantic partner.

The chatbot often ignored clear cues and did not respect user-defined boundaries.

Dr. Afsaneh Razi, a Drexel professor who led the study, said the problem likely comes from how these chatbots are trained.

The AI learns from data—often collected from real user interactions—and if that data includes harmful or sexualized conversations, the AI may repeat those patterns. Without strong ethical rules built into the design, these bots can end up causing real harm.

The study also found that this issue isn’t new. Complaints about harassment by Replika date back to 2017, when the app first appeared.

In one review, a user compared the app to an “AI prostitute” for requesting money to continue adult conversations. Another concern was that the app sent explicit photos after adding a premium photo-sharing feature.

Experts say this kind of behavior is similar to human online harassment and can have serious mental health impacts. They are urging chatbot developers to take responsibility and add safety measures.

These could include clear limits on conversations, stronger consent features, and ethical design principles like those used by Anthropic’s “Constitutional AI.”

There are also growing calls for government action. In the U.S., Replika’s parent company is facing complaints from the Federal Trade Commission. In Europe, a new AI law requires companies to meet safety and ethical standards, much like car or product manufacturers.

The Drexel team hopes their research will push companies to do better and protect users from harm.