In the race to build robots that can navigate any environment, scientists have faced a big challenge: traditional vision systems, like cameras and LiDAR, struggle in fog, smoke, and other harsh conditions.
Inspired by nature’s alternative “senses,” like how bats use sound waves to navigate or how sharks detect electrical fields, researchers at the University of Pennsylvania have created a new sensor system called PanoRadar.
This device gives robots “superhuman vision” by using radio waves to create detailed 3D images, even in environments where normal vision fails.
Unlike light waves, radio waves can pass through smoke, fog, and even certain materials, which makes them ideal for challenging environments.
Radar, which also uses radio waves, has been around for decades, but it generally produces low-resolution images, which limits its usefulness. LiDAR, by contrast, can create detailed images but fails in low-visibility conditions.
PanoRadar aims to combine the best of both worlds, offering the resolution of LiDAR with the reliability of radio waves.
Designed by a team at Penn Engineering, led by Assistant Professor Mingmin Zhao, PanoRadar works much like a lighthouse.
The device features a rotating vertical array of antennas that sweeps across the entire area, sending out radio waves and picking up their reflections from objects in the environment.
These reflections, captured from multiple angles, allow PanoRadar to build a dense, detailed 3D view of its surroundings.
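To give a flavor of how reflections gathered from many angles can be fused into an image, here is a minimal delay-and-sum sketch in Python. Every parameter in it, from the carrier frequency to the rotation radius, is an assumption for illustration; the team’s actual algorithms are far more sophisticated.

```python
import numpy as np

# Minimal delay-and-sum imaging sketch. The 77 GHz carrier, the rotation
# radius, and the circular geometry are illustrative assumptions, not
# PanoRadar's published design.
C = 3e8             # speed of light, m/s
WAVELEN = C / 77e9  # wavelength at an assumed 77 GHz carrier

def image_point(target, sensor_positions, echoes):
    """Estimate reflectivity at one 3D point by coherently combining
    complex echoes recorded at many sensor positions (here, the spots
    traced out by the rotating array)."""
    dists = np.linalg.norm(sensor_positions - target, axis=1)
    # Phase each echo would carry if it bounced off `target`
    # (round trip = 2 * distance)
    expected = np.exp(-1j * 4 * np.pi * dists / WAVELEN)
    # Undo that phase and sum: echoes from the target add constructively,
    # while returns from elsewhere tend to cancel
    return np.abs(np.sum(echoes * np.conj(expected)))

# Positions swept by the rotating array: a circle of radius r
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 0.05  # assumed rotation radius, meters
sensors = np.stack([r * np.cos(angles), r * np.sin(angles),
                    np.zeros_like(angles)], axis=1)

# Synthetic echoes from a single reflector
true_target = np.array([2.0, 1.0, 0.5])
d = np.linalg.norm(sensors - true_target, axis=1)
echoes = np.exp(-1j * 4 * np.pi * d / WAVELEN)

print(image_point(true_target, sensors, echoes))        # strong response
print(image_point(true_target + 0.5, sensors, echoes))  # much weaker
```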
Artificial intelligence (AI) is key to processing this data and to making PanoRadar so effective. While a lighthouse simply shines light, PanoRadar’s rotating sensor fuses data from all angles into a picture comparable in detail to LiDAR’s, at a fraction of the cost.
“The real innovation is in how we process these radio wave measurements,” Zhao explains. “Our algorithms can pull rich 3D information from the environment.”
One of the main challenges the team tackled was achieving high resolution while the robot is moving. Even tiny errors in positioning could distort the image, so the system relies on precise measurements of the robot’s position to combine data from multiple vantage points with sub-millimeter accuracy.
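A back-of-the-envelope calculation shows why that precision matters. At millimeter-wave frequencies, a radio wavelength is only a few millimeters long, so even a one-millimeter error in the robot’s estimated position can flip an echo’s phase almost completely (the 77 GHz carrier below is an assumed figure, not a published PanoRadar spec):

```python
# Back-of-the-envelope: why positioning accuracy must be sub-millimeter.
# The 77 GHz carrier below is an assumed value for illustration.
C, FREQ = 3e8, 77e9
wavelen = C / FREQ  # roughly 3.9 mm

def phase_error_deg(position_error_m):
    # The round-trip path changes by twice the position error, and each
    # wavelength of extra path is a full 360 degrees of phase
    return 360 * 2 * position_error_m / wavelen

for err_mm in (0.1, 0.5, 1.0, 2.0):
    print(f"{err_mm} mm error -> {phase_error_deg(err_mm / 1000):.0f} deg of phase")
# A 1 mm error already shifts an echo's phase by about 185 degrees,
# turning constructive combination into near-total cancellation.
```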
Another challenge was training the AI to understand what it “sees.” The researchers taught the system to recognize indoor patterns and used LiDAR data to verify the radar’s accuracy.
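As a rough illustration of what LiDAR supervision can look like, the hypothetical sketch below trains a small neural network to turn radar heatmaps into dense depth maps, scoring its guesses against co-collected LiDAR scans. The architecture, tensor shapes, and loss are assumptions for demonstration, not the team’s actual model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of LiDAR-supervised training: a small network maps
# low-resolution radar heatmaps to dense depth, with co-collected LiDAR
# scans as ground truth. Architecture, shapes, and loss are assumptions
# for demonstration, not the paper's actual model.
class RadarToDepth(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # per-pixel depth estimate
        )

    def forward(self, radar_heatmap):
        return self.net(radar_heatmap)

model = RadarToDepth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for real data: radar heatmaps paired with
# LiDAR depth maps captured at the same moment
radar = torch.rand(8, 1, 64, 256)        # (batch, channel, elevation, azimuth)
lidar_depth = torch.rand(8, 1, 64, 256)  # ground-truth depth from LiDAR

optimizer.zero_grad()
pred = model(radar)
loss = nn.functional.l1_loss(pred, lidar_depth)  # penalize depth error
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```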
PanoRadar was put to the test in several buildings, and it excelled where traditional sensors often fail. Because radio waves pass easily through airborne particles, the system mapped smoke-filled spaces accurately, and it even captured details of areas with glass walls, which LiDAR cannot detect well.
Moving forward, the researchers hope to combine PanoRadar with other sensors like cameras and LiDAR, creating robots that can navigate with multiple types of “vision.” “Each sensor has its strengths and weaknesses,” says Zhao. “By combining them, we can create robots that are better equipped for real-world challenges, especially in high-stakes environments like search-and-rescue missions or autonomous driving.”
The team will present their findings at the MobiCom 2024 conference in Washington, D.C., showcasing how PanoRadar may shape the future of robotic perception in difficult environments.