In a world where cars are beginning to drive themselves, a team of researchers from the University of California, Irvine, and Japan’s Keio University has found a way to trick the eyes of these autonomous vehicles.
These “eyes” are not like human eyes but are sensors called LiDAR (Light Detection and Ranging) that help the car “see” where it’s going by bouncing light off objects and measuring how long it takes for the light to return.
This technology is crucial for self-driving cars to navigate roads safely.
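To make that idea concrete, here is a minimal sketch of the time-of-flight arithmetic at the heart of LiDAR ranging. The numbers are purely illustrative and not taken from the study; a real sensor does this in hardware for enormous numbers of pulses per second.

```python
# Minimal sketch of LiDAR time-of-flight ranging. The timings below are
# illustrative examples, not measurements from the study.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to an object, given a pulse's round-trip travel time.

    The pulse travels out to the object and back, so we halve the total.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# An echo arriving ~200 nanoseconds after the pulse implies an object ~30 m away.
print(f"{distance_from_echo(200e-9):.1f} m")  # -> 30.0 m
```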
However, the researchers have discovered a significant flaw in this system. By firing carefully timed laser pulses of their own at the sensors, they were able to make the LiDAR “see” things that weren’t there, like a pedestrian or another car, and even hide objects that were actually present.
This trickery could lead to dangerous situations, such as the car stopping suddenly for no reason or failing to stop for a real obstacle, potentially causing accidents.
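The researchers’ actual attacks are considerably more sophisticated, but a toy example shows why an injected pulse is dangerous to a receiver that trusts the first echo it hears. Everything below, including the naive first-return rule and the timings, is an assumption made for illustration, not the team’s code.

```python
# Toy illustration (an assumption for this article, not the researchers'
# method): a naive sensor that trusts the earliest return will report a
# phantom object if an attacker's pulse arrives before the genuine echo.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def first_return_distance(echo_times_seconds: list[float]) -> float:
    # Naive rule: keep only the earliest pulse the receiver hears.
    return SPEED_OF_LIGHT * min(echo_times_seconds) / 2.0

true_echo = 400e-9      # genuine reflection from a car roughly 60 m ahead
spoofed_pulse = 100e-9  # attacker's pulse, timed to arrive first

print(f"{first_return_distance([true_echo]):.0f} m")                # 60 m: the real car
print(f"{first_return_distance([true_echo, spoofed_pulse]):.0f} m") # 15 m: a phantom obstacle
```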
During a presentation on February 29 at the Network and Distributed System Security Symposium, a major security conference in San Diego, Takami Sato, a Ph.D. candidate at UCI, shared the team’s findings from testing nine different LiDAR systems currently used in self-driving cars.
These cars range from robotic taxis, like those operated by Alphabet’s Waymo and General Motors’ Cruise, to consumer vehicles from big names like Volvo, Mercedes-Benz, and Huawei.
The researchers first tested older LiDAR systems and managed to deceive them into “seeing” fake objects, prompting the car to react as though a real obstacle were ahead, for example by braking hard to avoid a collision with something that wasn’t actually there.
The same trick didn’t work on newer LiDAR systems, which have built-in safeguards against such attacks: they randomize the timing of their outgoing light pulses and check each return for that pulse’s unique “fingerprint.”
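Exactly how each manufacturer implements those defenses is proprietary. A minimal sketch, assuming the per-pulse “fingerprint” can be reduced to a random token the sensor embeds when firing and checks on return, conveys the idea; real sensors encode this in timing or waveform patterns and differ by vendor.

```python
# Minimal sketch, assuming the per-pulse "fingerprint" can be modeled as a
# random token. This is a simplification for illustration, not any vendor's
# actual scheme.

import secrets

def fire_pulse() -> int:
    # Tag the outgoing pulse with an unpredictable fingerprint.
    return secrets.randbits(16)

def accept_return(expected: int, observed: int) -> bool:
    # Only echoes carrying the fingerprint of the pulse just fired count;
    # an attacker who can't predict it will almost never match.
    return observed == expected

tag = fire_pulse()
print(accept_return(tag, tag))                  # True: a genuine echo
print(accept_return(tag, secrets.randbits(16))) # False (with overwhelming odds): a spoofed pulse
```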
But the team didn’t stop there. They came up with a new method using a custom-built laser setup that could make real objects—like other cars—disappear from the LiDAR’s view. This means a self-driving car might not see a car right in front of it, leading to a possible crash.
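The team’s removal technique is beyond a short sketch, but its effect on the data is easy to picture: if the echoes within some angular window are suppressed or overridden, every point belonging to an object in that window drops out of the point cloud the car’s software receives. The scan below is entirely hypothetical.

```python
# Hypothetical illustration of the *effect* of an object-removal attack,
# not the team's technique: suppress the echoes in one angular window and
# the object there vanishes from the point cloud.

# Fake scan: (angle_degrees, distance_m). A car spans -5..+5 degrees at 20 m;
# everything else is background at 80 m.
scan = [(a, 20.0) for a in range(-5, 6)] + [(a, 80.0) for a in range(-90, 91, 10)]

def after_removal_attack(points, jam_lo, jam_hi):
    # Echoes inside the jammed window never reach the perception software.
    return [(a, d) for (a, d) in points if not (jam_lo <= a <= jam_hi)]

visible = after_removal_attack(scan, -6, 6)
print(sum(1 for _, d in visible if d == 20.0))  # 0: the car ahead has disappeared
```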
This research is a big deal because it shows that even with advancements in technology, self-driving car sensors can still be fooled in ways that could endanger passengers and other road users.
The team’s work, described as the most thorough examination of LiDAR vulnerabilities to date, resulted in 15 new findings that could help improve the design and safety of future autonomous vehicles.
The findings highlight a critical need for further development in the technology behind self-driving cars to ensure they can safely navigate the complexities of real-world driving.
As self-driving cars become more common, ensuring their sensors can’t be easily tricked will be crucial for everyone’s safety on the road.
This study opens the door for manufacturers to build even smarter and safer vehicles that are prepared for the tricks and challenges of the road ahead.