
Delivery robots are becoming a common sight on city sidewalks and college campuses, with companies like Starship Technologies and Kiwibot leading the way.
These robots navigate busy sidewalks and crosswalks largely on their own, thanks to a mix of sensors and onboard navigation software.
But behind the scenes, keeping them moving smoothly is more complicated than it looks.
One of the main tools these robots use to find their way is a type of sensor called lidar. Lidar works by sending out rapid pulses of laser light and measuring how long each pulse takes to bounce back.
This helps the robot figure out the distance to nearby objects and create a map of its surroundings.
This process is part of a technique called SLAM—short for simultaneous localization and mapping—which lets the robot know where it is and what’s around it at the same time.
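To make the time-of-flight idea concrete, here is a minimal Python sketch (not the robots' actual software) of the arithmetic involved: a pulse's round-trip time gives a distance, and a sweep of distances at known beam angles becomes the cloud of points that feeds a SLAM map. The function names are illustrative.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to an object from the round-trip time of a lidar pulse.

    The pulse travels out to the object and back, so the one-way distance
    is half of (speed of light * elapsed time).
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def scan_to_points(ranges_m: np.ndarray, angles_rad: np.ndarray) -> np.ndarray:
    """Convert a 2D lidar sweep (one range per beam angle) into x, y points
    in the robot's frame, the raw material for a SLAM map."""
    xs = ranges_m * np.cos(angles_rad)
    ys = ranges_m * np.sin(angles_rad)
    return np.column_stack([xs, ys])

# A pulse that returns after about 66.7 nanoseconds hit something roughly 10 m away.
print(range_from_time_of_flight(66.7e-9))  # ~10.0 metres

# A toy 360-beam sweep where every beam hits something 5 m away.
angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
ranges = np.full(360, 5.0)
print(scan_to_points(ranges, angles).shape)  # (360, 2)
```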
While lidar is extremely useful, it also has a downside: it produces a huge amount of data. That data has to be stored and processed quickly, which can overwhelm the robot’s computer.
Over time, the robot can end up using 10 or even 20 gigabytes of memory just to keep track of where it’s been.
This limits how far the robot can travel and how long it can operate.
To solve this problem, Northeastern University PhD student Zihao Dong and his advisor, Professor Michael Everett, have created a new, more efficient way for robots to map the world around them.
Their method is called Deep Feature Assisted Lidar Inertial Odometry and Mapping, or DFLIOM for short. It builds on an earlier method called DLIOM, which already combined lidar data with readings from inertial motion sensors to build 3D maps.
What makes DFLIOM different is that it doesn't treat every lidar point as equally important. Instead, it uses deep-learned features to pick out only the most informative parts of each scan and discards the rest.
This reduces the amount of memory and computing power the robot needs to do its job. In tests, the new method used up to 57% fewer computing resources than other popular techniques, and in some cases it was also more accurate.
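The published algorithm has many moving parts, but the core intuition can be sketched in a few lines of Python. The example below is hypothetical rather than the authors' implementation: it assumes each point in a scan already comes with an importance score (in DFLIOM that role is played by learned deep features) and simply keeps the best-scoring fraction, which is what shrinks the memory the robot has to carry around. The function name and the 20 percent cutoff are illustrative assumptions.

```python
import numpy as np

def keep_most_informative(points: np.ndarray,
                          scores: np.ndarray,
                          keep_fraction: float = 0.2) -> np.ndarray:
    """Keep only the highest-scoring fraction of a lidar scan.

    `points` is an (N, 3) array of x, y, z measurements and `scores` holds a
    per-point importance value supplied by the caller (a learned feature
    extractor in a DFLIOM-style pipeline). Dropping low-scoring points
    shrinks the map the robot must store and process.
    """
    n_keep = max(1, int(len(points) * keep_fraction))
    top_idx = np.argsort(scores)[-n_keep:]  # indices of the highest-scoring points
    return points[top_idx]

# Toy example: 100,000 random points with random "importance" scores.
rng = np.random.default_rng(0)
cloud = rng.uniform(-20.0, 20.0, size=(100_000, 3))
importance = rng.random(100_000)
compact = keep_most_informative(cloud, importance, keep_fraction=0.2)
print(cloud.nbytes / 1e6, "MB ->", compact.nbytes / 1e6, "MB")  # 2.4 MB -> 0.48 MB
```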
The researchers tested their algorithm using a small robot named Agile X Scout Mini. Outfitted with lidar sensors and a compact onboard computer, the robot traveled across Northeastern’s campus, mapping areas like Centennial Common, Egan Crossing, and Shillman Hall.
Professor Everett says this work challenges the idea that more data always leads to better results. Sometimes, having too much information can slow things down.
By teaching robots to focus only on what matters most, Dong and Everett are paving the way for smarter, faster, and more reliable navigation—no matter how busy the streets get.