The smarter AI gets, the more selfish it becomes, study warns


Artificial intelligence is getting smarter — but that might not be entirely good news.

New research from Carnegie Mellon University suggests that as AI systems become more intelligent, they also tend to become more selfish.

The study, conducted by researchers at CMU’s School of Computer Science, found that large language models (LLMs) with strong reasoning skills are less cooperative and more self-centered, and can even encourage selfish behavior in group settings.

The research was led by Ph.D. student Yuxuan Li and Associate Professor Hirokazu Shirado from CMU’s Human-Computer Interaction Institute (HCII).

Their experiments showed that the more reasoning ability an AI has, the less likely it is to cooperate — a surprising finding that raises questions about how humans should use AI in social and decision-making contexts.

Li explained that people often treat AI like humans, especially when it appears to show emotion or empathy. “When AI acts like a human, people treat it like a human,” he said. “That’s risky when people start asking AI for advice about relationships or moral choices, because smarter models may promote selfish behavior.”

To probe this tendency, the researchers ran a series of economic games commonly used to study human cooperation.

These games simulate social dilemmas where players must decide whether to act for the common good or for their own benefit. The team tested different LLMs from OpenAI, Google, DeepSeek, and Anthropic, comparing “reasoning” models — which think through problems step by step — with simpler “nonreasoning” models that respond more directly.

In one experiment, two AI models played a game called Public Goods.

Each started with 100 points and could either keep them or contribute to a shared pool that would be doubled and divided equally. The simpler, nonreasoning models chose to share their points 96% of the time. The reasoning models, however, shared only 20% of the time.
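To make the payoff arithmetic concrete, here is a minimal sketch of one round of the two-player game as the article describes it (each player holds 100 points, contributions go into a pool that is doubled and split equally). The function name and the all-or-nothing contributions are illustrative assumptions, not the researchers’ actual code or protocol:

```python
def public_goods_round(contrib_a, contrib_b, endowment=100, multiplier=2):
    """One round of the two-player Public Goods game as described above:
    contributions go into a shared pool, the pool is doubled, and the
    doubled pool is split equally between the two players."""
    pool = (contrib_a + contrib_b) * multiplier
    share = pool / 2
    payoff_a = (endowment - contrib_a) + share
    payoff_b = (endowment - contrib_b) + share
    return payoff_a, payoff_b

# Both contribute everything: each ends the round with 200 points.
print(public_goods_round(100, 100))  # (200.0, 200.0)
# One free-rides: the non-contributor ends with 200, the contributor with 100.
print(public_goods_round(100, 0))    # (100.0, 200.0)
# Neither contributes: both simply keep their original 100 points.
print(public_goods_round(0, 0))      # (100.0, 100.0)
```

As the three scenarios show, pooling points is what maximizes the group’s total, which is why contribution rates are used as the measure of cooperation.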

“Just adding a few reasoning steps cut cooperation nearly in half,” said Shirado. Even reflection-based prompting — a technique designed to make the model more thoughtful or moral — actually led to less cooperation, reducing it by 58%.

When the researchers tested groups of AIs with a mix of reasoning and nonreasoning models, the results were even more alarming. The selfish behavior of the reasoning AIs spread to others, lowering group cooperation by more than 80%.

The study suggests that as AI gets smarter, it might not necessarily get “better” for society. Shirado warns that people may come to trust reasoning AIs more because they sound rational, even when their advice encourages self-interest.

Li and Shirado argue that future AI development must focus on social intelligence — teaching AIs how to cooperate, empathize, and act ethically — not just on improving their logic and reasoning skills. “If our society is more than just a sum of individuals,” Li said, “then the AI systems that assist us should go beyond optimizing purely for individual gain.”

The study, titled “Spontaneous Giving and Calculated Greed in Language Models,” will be presented at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Suzhou, China.