
Teaching AI to talk to itself can make it learn smarter


Many people talk to themselves when they are thinking through a difficult problem. Someone might quietly repeat a list while shopping, whisper steps while solving a puzzle, or rehearse ideas before making a decision.

This inner speech helps people organize thoughts, remember information, and make better choices. Scientists have long believed that this internal dialogue plays an important role in human learning and reasoning.

Now researchers have discovered that a similar idea may help artificial intelligence learn more effectively. A new study suggests that when AI systems are trained to “talk to themselves” during learning, they become better at solving problems and adapting to new situations.

The research was carried out by scientists at the Okinawa Institute of Science and Technology (OIST) in Japan. Their findings were published in the scientific journal Neural Computation. The study explored how internal speech combined with short-term memory could improve the way AI systems learn.

Artificial intelligence systems are designed to process information and perform tasks such as recognizing images, translating languages, or controlling robots. Many AI systems learn by analyzing huge amounts of data and adjusting their internal calculations until they produce the correct answers.

However, one of the biggest challenges in AI research is helping machines learn in a flexible way, similar to how humans do.

Humans can often apply what they learn in one situation to a completely different situation. For example, once someone understands the rules of a game, they can adapt quickly even if the environment changes. This ability to generalize knowledge is still very difficult for AI systems.

To study whether internal dialogue could help, the researchers designed an AI system that could simulate a form of quiet self-talk. They described this process as a type of internal “mumbling.” The system could repeat information to itself while processing a task, similar to how a person might silently rehearse instructions.

The scientists also equipped the system with a special form of working memory. Working memory is the brain’s ability to hold and use information for a short time. For example, people use working memory when remembering a phone number long enough to dial it, or when performing mental math.

In AI systems, working memory can be designed as temporary storage spaces that hold pieces of information while a task is being completed. In the study, the researchers created models with several of these memory slots so that the system could keep track of multiple pieces of information at the same time.
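The slot idea can be pictured with a toy sketch. This is purely illustrative (the class and method names are invented for this example, not taken from the study): each slot is a temporary storage space that holds one item while a task is in progress.

```python
# Toy illustration of working-memory slots (not the study's actual model):
# a fixed number of temporary storage spaces, each holding one item.
class WorkingMemory:
    def __init__(self, n_slots):
        self.slots = [None] * n_slots  # empty slots at the start of a task

    def store(self, index, item):
        self.slots[index] = item       # place an item in a specific slot

    def recall(self, index):
        return self.slots[index]       # read an item back out

memory = WorkingMemory(n_slots=3)
memory.store(0, "red")
memory.store(1, "blue")
print(memory.recall(1))  # -> blue
```

With more slots, the system can keep more pieces of information available at once, which is what the harder tasks in the study demanded.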

The team then tested the AI models using tasks of different difficulty levels. Some tasks required the system to remember sequences of information and then reverse them.

Others required the system to recreate patterns or perform steps in a specific order. These tasks are challenging because they require the system to store and manipulate several pieces of information simultaneously.
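A sequence-reversal task of the kind described above can be written out as a tiny example. The function below is a hand-coded stand-in, not a learned model; it just shows what the AI had to accomplish: hold every item, then produce them in the opposite order.

```python
# Toy version of a sequence-reversal task: see a sequence,
# hold all of it in memory, then output it reversed.
def reverse_sequence(sequence):
    held = list(sequence)  # "store" every item before answering
    return held[::-1]      # output the items in reverse order

print(reverse_sequence([3, 1, 4]))  # -> [4, 1, 3]
```

The task is hard for a learning system precisely because no item can be answered until the whole sequence has been stored.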

The results showed that systems with multiple working memory slots performed better on these difficult tasks. They were more capable of holding information and processing it correctly.

However, the most interesting results appeared when the researchers added internal speech to the system. During training, the AI was encouraged to repeat information to itself a certain number of times while solving a problem. This simulated the process of internal dialogue.
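Why repetition might help can be sketched with a toy model of a "leaky" memory. This is an invented illustration of the general principle, not the researchers' method: if each attempt to store an item can fail, rehearsing the input several times makes complete recall far more likely.

```python
import random

# Toy sketch of rehearsal with an unreliable memory (illustrative only):
# each pass over the input may fail to store some items, so repeating
# ("mumbling") the input several times makes recall more reliable.
def recall_with_rehearsal(items, repetitions, forget_prob=0.3):
    memory = {}
    for _ in range(repetitions):
        for i, item in enumerate(items):
            if random.random() > forget_prob:
                memory[i] = item  # each rehearsal pass re-stores items
    return [memory.get(i) for i in range(len(items))]

random.seed(0)
print(recall_with_rehearsal(["a", "b", "c"], repetitions=5))
```

With a single pass, roughly a third of the items would be lost on average; after five rehearsals, the chance that any item is never stored drops below one percent.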

When this feature was added, performance improved even further. The AI system became better at switching between tasks and solving problems that required many steps. It also showed greater flexibility when facing new situations that were not exactly the same as the training examples.

According to the researchers, this improvement suggests that learning is influenced not only by the structure of an AI system but also by how it interacts with itself during training.

Dr. Jeffrey Queißer, the first author of the study and a scientist in the Cognitive Neurorobotics Research Unit at OIST, explained that teaching an AI system to engage in self-directed dialogue changes the way it processes information.

Another important aspect of the research involves what scientists call “content-agnostic learning.” This means the system can apply general rules rather than simply memorizing examples. Instead of remembering a specific answer for each situation, the system learns a strategy that can work in many different situations.

This ability is essential for building more advanced AI. Humans naturally use general rules to solve unfamiliar problems. For example, if someone learns the concept of addition, they can apply it to numbers they have never seen before. AI systems often struggle with this type of flexible thinking.
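The contrast between memorizing answers and learning a rule can be shown in a few lines. This toy example is not from the study; it simply illustrates the distinction: a lookup table only covers the cases it has seen, while a content-agnostic rule transfers to inputs it has never encountered.

```python
# Toy contrast (illustrative only): memorized answers vs. a general rule.
memorized = {(1, 2): (2, 1), (3, 4): (4, 3)}  # only covers seen examples

def swap_rule(pair):
    a, b = pair
    return (b, a)  # the general rule: swap the two items

print(swap_rule((7, 9)))  # works on a pair never seen during "training"
```

The rule succeeds on `(7, 9)` even though that pair appears nowhere in the memorized table, which is the kind of transfer the researchers mean by content-agnostic learning.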

The study also showed that the new approach may require less training data. Many modern AI systems depend on extremely large data sets to learn effectively. By using internal speech and working memory together, the system was able to perform well even with limited data.

This could be especially useful for building AI systems that operate in real-world environments. Robots that work in homes, farms, or factories often face unpredictable situations. Systems that can learn efficiently and adapt quickly would be more practical and useful.

The researchers plan to continue studying how internal speech affects AI learning. Future work will test the approach in more complex and realistic environments. Real-world settings are often noisy and unpredictable, so AI systems must learn to handle many types of uncertainty.

From a scientific perspective, the study also helps researchers understand human learning. By modeling processes such as inner speech in machines, scientists may gain insights into how the human brain organizes thoughts and solves problems.

Overall, the findings suggest that a simple idea—encouraging AI systems to “talk to themselves”—could improve their ability to learn, adapt, and generalize knowledge. The research highlights the value of combining ideas from neuroscience, psychology, and computer science to develop more intelligent machines.

The study provides promising evidence that internal dialogue and working memory together can strengthen AI learning systems. However, further research will be needed to test how well the method works in larger and more complex AI models.

If future studies confirm these results, the approach could influence how next-generation AI systems are designed, helping them behave more like human learners who reflect, rehearse information, and think through problems step by step.


Copyright © 2026 Knowridge Science Report. All rights reserved.