
In a major step toward making robots more helpful in loud and messy environments, researchers in South Korea have created a new kind of 3D microphone system.
What makes it special is that it needs only one microphone to pinpoint where sounds are coming from, even in noisy places like factories or disaster zones.
This technology could help robots better understand and interact with humans—even when it’s too dark or smoky to see clearly.
The system was developed by a research team led by Professor Sung-Hoon Ahn from the Department of Mechanical Engineering at Seoul National University.
Their new 3D “hearing” technology works much the way bats and dolphins navigate and communicate using sound alone.
The microphone, with the help of a rotating part and a specially designed structure, lets robots figure out the direction and location of sounds in three dimensions. The researchers call this system 3DAR, which stands for 3D Acoustic Ranging.
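To make that idea concrete, here is a minimal, purely illustrative sketch of how a single rotating, direction-sensitive sensor could find a sound's bearing: sweep the sensor through a circle and keep the orientation at which it hears the source loudest. The cardioid gain pattern, noise level, and scan step below are assumptions made for the demo; this is not the team's published 3DAR algorithm.

```python
# Illustrative sketch only: estimating a sound's direction by rotating a
# direction-sensitive microphone and finding the orientation that hears it
# loudest. NOT the SNU team's published 3DAR method; the cardioid gain
# pattern and noise model are assumptions made for this demo.
import numpy as np

rng = np.random.default_rng(0)

true_azimuth = np.deg2rad(132.0)   # where the simulated source actually is
source_power = 1.0                 # arbitrary units
noise_power = 0.05                 # background noise picked up at every angle

# Sweep the sensor through a full rotation in 2-degree steps.
scan_angles = np.deg2rad(np.arange(0, 360, 2))

def cardioid_gain(facing, source):
    """Directional sensitivity: loudest head-on, quietest from behind."""
    return 0.5 * (1.0 + np.cos(facing - source))

# Power measured at each orientation: directional gain times source power,
# plus a little random noise.
measured = (cardioid_gain(scan_angles, true_azimuth) ** 2 * source_power
            + noise_power * rng.random(scan_angles.size))

estimated_azimuth = scan_angles[np.argmax(measured)]
print(f"true azimuth:      {np.degrees(true_azimuth):6.1f} deg")
print(f"estimated azimuth: {np.degrees(estimated_azimuth):6.1f} deg")
```

The real system encodes direction through its rotating acoustic structure rather than a simple loudness sweep, but the intuition carries over: one sensor plus motion can stand in for an array of fixed microphones.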
Most current sound-sensing systems use many microphones and complex setups to locate sound sources.
This makes them large, expensive, and hard to deploy in real-world environments. The new single-sensor system, by contrast, is compact, affordable, and works reliably even in noisy areas, which makes it well suited to industrial settings, rescue missions, and other places where traditional cameras and communication networks don’t work well.
To build this system, the team designed a unique acoustic structure that can cancel out unwanted sounds and boost sounds coming from specific directions. This selective hearing ability is important in places like factories where machines make constant noise.
The structure works by controlling how sound waves arrive at the microphone, enhancing the desired signals while filtering out background noise.
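To see roughly why that selectivity matters, consider a toy signal-to-noise calculation; the 20 dB off-axis attenuation and the power figures are invented for illustration, not values reported by the researchers. If sound from the chosen direction passes unchanged while sound from other directions is suppressed by 20 dB, the signal-to-noise ratio improves by the same 20 dB.

```python
# Toy signal-to-noise calculation. The 20 dB off-axis attenuation is an
# assumed, illustrative figure, not a measurement from the study.
import math

speech_power = 1.0          # desired voice command, arriving from the chosen direction
machine_noise_power = 4.0   # factory noise, arriving from other directions

def snr_db(signal, noise):
    return 10.0 * math.log10(signal / noise)

off_axis_attenuation_db = 20.0
attenuated_noise = machine_noise_power / (10.0 ** (off_axis_attenuation_db / 10.0))

print(f"SNR without the structure: {snr_db(speech_power, machine_noise_power):5.1f} dB")
print(f"SNR with the structure:    {snr_db(speech_power, attenuated_noise):5.1f} dB")
```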
The researchers also added a smart communication feature to the system. Inspired by how dolphins use different sound frequencies to communicate, the system splits sound into two parts: one that people can hear and another that only robots use.
This allows humans to give robots voice commands, while robots can also “chat” with each other using sounds that won’t bother or confuse people. Keeping the two channels separate avoids interference and makes teamwork between robots smoother.
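The article does not say which frequency bands the robots use, but one common approach, assumed here purely for illustration, is to keep the machine channel above the limit of human hearing (about 20 kHz). The sketch below splits a single microphone signal into a voice band and an ultrasonic band with standard filters; the band edges and the 25 kHz “beacon” tone are assumptions, not details from the study.

```python
# Illustrative sketch of splitting one microphone signal into a human-audible
# band and a higher band reserved for robot-to-robot signalling. The exact
# bands (300 Hz-8 kHz for voice, >20 kHz for robots) are assumptions; the
# article does not state which frequencies the 3DAR system uses.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 96_000  # sample rate high enough to capture an ultrasonic channel
t = np.arange(0, 0.1, 1 / fs)

# Simulated microphone input: a "voice" tone at 1 kHz plus a robot beacon at 25 kHz.
voice = 0.8 * np.sin(2 * np.pi * 1_000 * t)
robot_beacon = 0.3 * np.sin(2 * np.pi * 25_000 * t)
mic_signal = voice + robot_beacon

# Band-splitting filters (4th-order Butterworth, applied as second-order sections).
voice_band = butter(4, [300, 8_000], btype="bandpass", fs=fs, output="sos")
robot_band = butter(4, 20_000, btype="highpass", fs=fs, output="sos")

human_channel = sosfilt(voice_band, mic_signal)   # passed on to speech processing
robot_channel = sosfilt(robot_band, mic_signal)   # what other robots listen for

print(f"voice energy in human channel:  {np.sum(human_channel**2):8.1f}")
print(f"beacon energy in robot channel: {np.sum(robot_channel**2):8.1f}")
```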
In real-world tests, a four-legged robot equipped with this microphone system could follow voice commands, track human movement, and even detect gas leaks using sound alone. The system’s simple design means it could be easily added to many types of robots in the future.
This technology is expected to be especially useful in smart factories, where robots and workers operate side by side. It could help prevent collisions, allow hands-free communication, and make unmanned monitoring more efficient by detecting unusual sounds from leaks, accidents, or failing machines.
Looking ahead, the research team hopes to combine their sound detection system with artificial intelligence, allowing robots to not just hear sounds but understand their meaning—just like humans do.
This could lead to more natural interactions between people and robots and open up new possibilities for humanoid robot development.