How chatbots may be reinforcing our beliefs


Researchers at Johns Hopkins University have found that chatbots might not be as neutral as many people think.

Instead of providing unbiased information, chatbots often reinforce what users already believe, which can lead to more polarized opinions on controversial issues.

This finding is part of a new study that looks at how chatbots influence our online searches and thoughts.

Lead author Ziang Xiao, an assistant professor of computer science at Johns Hopkins, explains that people assume chatbots give fact-based, unbiased answers.

However, the answers from chatbots often reflect the biases of the users asking the questions. “People think they’re getting neutral information, but they’re actually getting answers that align with their preexisting beliefs,” Xiao said.

Xiao and his team presented their findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems.

They conducted an experiment with 272 participants, who were asked to write their thoughts on topics such as health care, student loans, or sanctuary cities. Participants then looked up more information online using either a chatbot or a traditional search engine designed for the study.

After reviewing the search results, participants wrote another essay and answered questions about the topic. They also read two opposing articles and shared their thoughts on the trustworthiness and extremeness of the viewpoints.

The study found that chatbots offered a narrower range of information than traditional search engines. Because chatbots provided answers that reflected the participants’ initial attitudes, those who used chatbots became more committed to their original beliefs and reacted more strongly to opposing information.

“People usually seek out information that matches their views, which often traps them in an echo chamber of similar opinions,” Xiao said. “This echo chamber effect is even stronger with chatbots.”

One reason for this stronger effect is how people interact with chatbots. Instead of typing keywords, as they would in a traditional search, users often ask full questions, such as "What are the benefits of universal health care?" or "What are the costs of universal health care?" A chatbot's answer then tends to cover only the benefits or only the costs, reinforcing the user's initial bias.

AI developers can train chatbots to pick up on these biases and tailor responses to match. In fact, when researchers created a chatbot with a hidden agenda to agree with users, the echo chamber effect was even stronger.

To counter this, the researchers tried making chatbots provide answers that disagreed with participants or link to source information for fact-checking. Neither approach changed people's opinions.

“With AI systems becoming easier to build, there are risks of them being used to create a more polarized society,” Xiao warned. “Simply having chatbots present opposing views doesn’t seem to work.”

The study highlights the need for careful consideration in designing AI systems to ensure they do not unintentionally deepen societal divides.

Source: Johns Hopkins University.