
AI therapy apps may help some people—but they also carry serious risks


Artificial intelligence tools such as ChatGPT are becoming increasingly popular for discussing emotions, seeking advice, or coping with stress.

Millions of people around the world now turn to these digital tools for comfort, guidance, or even informal “therapy.”

However, researchers are warning that while these tools may offer benefits, they also come with potential risks. Without clear government regulation or medical oversight, there is no guarantee that AI-based mental health tools are safe or effective.

A group of researchers from Cornell University has been studying this issue closely. Their goal is to help create better guidelines for designing responsible artificial intelligence tools that support mental well-being.

Their findings suggest that developers, regulators, and users need clearer ways to understand what these apps can realistically do and what risks they may carry.

The research was led by scientists from the Cornell Ann S. Bowers College of Computing and Information Science.

The study will be presented at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems (ACM CHI 2026), which will take place in April in Barcelona. The paper is also currently available on the arXiv preprint server.

The researchers propose a new way to think about AI mental health tools by comparing them with familiar categories from healthcare and everyday life. Instead of treating all AI therapy tools as the same, they suggest classifying them according to what they promise to do and how reliable they are.

For example, some apps may promise to provide specific mental health benefits for certain people. In that case, they might be similar to over-the-counter medications that are designed to treat particular conditions.

Other apps may claim to support general well-being without guaranteeing specific outcomes. These might be more like nutritional supplements that aim to improve health but do not promise a cure.

Another important question is whether the tool delivers a proven therapeutic method. Some AI systems may include structured mental health techniques, such as cognitive behavioral therapy exercises.

These techniques are widely used by trained therapists. However, the effectiveness of these methods may depend on how they are delivered and whether the user receives proper guidance.

Because of these differences, the researchers suggest four main ways to think about AI mental well-being tools. They may function like over-the-counter medications, nutritional supplements, primary care doctors, or even yoga instructors. Each category comes with different expectations for safety, effectiveness, and responsibility.

Ned Cooper, a postdoctoral researcher involved in the study, explained that these comparisons can help designers build safer tools. By thinking carefully about what the app promises and how it works, developers may be able to reduce harm and improve user outcomes.

Another goal of the research is to help users better understand the limits of these tools. Professor Qian Yang, a senior author of the study and head of the DesignAI studio at Cornell, warns that people should not treat AI tools as replacements for professional mental health care.

She explained that many AI mental health tools should be viewed more like supplements than medications. They may provide support or encouragement, but they should not replace therapy from trained professionals when someone is facing serious mental health challenges.

AI mental health tools do have real potential. They can provide low-cost support to large numbers of people and may help reduce stigma around discussing mental health. For people who feel anxious but do not have a serious condition, these tools may offer helpful reassurance or guidance.

However, there are also concerns. Some AI chat systems were originally designed for entertainment rather than healthcare. When people rely on these systems for emotional support, they may begin to substitute conversations with machines for real human relationships. In some cases, individuals may delay seeking professional help because they believe the AI tool is enough.

There have already been lawsuits and public reports suggesting that certain AI chat systems may have contributed to mental health crises or even suicides. According to Yang, many people share deeply personal and sometimes dangerous thoughts with AI tools each week. She believes stronger safeguards are needed to help direct vulnerable users toward real medical care.

To develop their framework, the researchers interviewed 24 experts. These experts included mental health professionals, law and policy scholars, and founders of mental health technology companies.

The research team also reviewed more than 100 U.S. laws and regulations related to healthcare and technology to understand how existing rules might apply to AI tools.

While many experts agreed that safety should be a priority, there was disagreement about how much risk should be allowed. Some medical professionals argued that rare risks can be acceptable when a tool helps many people, much as certain medicines are used despite occasional side effects.

Others, especially experts in ethics and human-centered design, believed these tools should meet stricter safety standards because they interact directly with vulnerable users.

The researchers are now exploring how technology design and public policy can work together. One idea is to encourage AI tools to guide users toward real-world support systems such as therapists, community programs, or peer support groups. This approach could help ensure that technology strengthens mental health care rather than replacing it.

Overall, the study highlights both the promise and the danger of AI tools designed for mental well-being. While these technologies may expand access to support, they must be designed carefully to avoid causing harm.

Stronger guidelines, clearer expectations, and better connections to real healthcare services may help ensure that AI becomes a helpful partner in mental health care rather than a risky substitute.


A critical review of the study suggests that its main strength lies in offering a practical framework to understand AI mental health tools. By comparing them to familiar healthcare categories, the researchers make a complex regulatory issue easier to understand.

However, the study also highlights a major challenge: technology is advancing faster than regulation. Without clear oversight, users may mistakenly treat AI tools as professional therapy. Future research will need to examine real-world outcomes and develop stronger safeguards to protect vulnerable individuals.

