A team of researchers led by the University of Massachusetts Amherst has made significant strides in developing robotic guide dogs by first understanding the needs of visually impaired users.
Their study, which won a Best Paper Award at CHI 2024, the Conference on Human Factors in Computing Systems, emphasizes the importance of gathering insights from guide dog users and trainers to create effective robotic guide dogs.
The paper is published in the Proceedings of the CHI Conference on Human Factors in Computing Systems.
Guide dogs provide incredible support and mobility for their handlers, but only a small number of visually impaired individuals have access to them due to several barriers.
These include the scarcity of trained dogs, high training costs (around $40,000), and physical limitations like allergies that make caring for a dog difficult. Robots have the potential to fill this gap, but only if they are designed with the right features.
“We’re not the first to develop guide-dog robots,” says Donghyun Kim, assistant professor at the UMass Amherst Manning College of Information and Computer Science (CICS) and a corresponding author of the paper.
“There have been 40 years of study, but none of these robots are actually used by end users. We aimed to understand how visually impaired people use guide dogs and what technology they need before developing the robot.”
The research team conducted interviews and observations with 23 visually impaired guide dog handlers and five trainers.
They identified the current limitations of canine guide dogs and the features users want in robotic guide dogs. A key insight was the balance between robot autonomy and human control.
Initially, the researchers thought they were building a robot that would navigate autonomously like a self-driving car. However, they learned that handlers do not rely on their dogs for global navigation. Instead, handlers control the overall route while the dog manages local obstacle avoidance. This dynamic relationship is essential for the handler’s sense of safety and control.
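This division of labor can be pictured as a simple control loop. The sketch below is a minimal illustration of that idea under assumptions of my own, not the authors' system: the handler's cue fixes the heading (the global route), and the robot only nudges that heading enough to clear nearby obstacles. The cue names, angles, and thresholds are all illustrative.

```python
# Minimal sketch of shared-control navigation (illustrative only):
# the handler sets the route via directional cues; the robot handles
# local obstacle avoidance without re-planning the route.

from dataclasses import dataclass
from typing import List


@dataclass
class Obstacle:
    angle_deg: float   # bearing relative to straight ahead; positive = left
    distance_m: float  # range from the robot


# Hypothetical handler cues, mirroring how a handler directs a guide dog.
HANDLER_CUES = {"forward": 0.0, "left": 90.0, "right": -90.0}


def local_avoidance(commanded_heading_deg: float,
                    obstacles: List[Obstacle],
                    clearance_m: float = 1.0,
                    max_adjust_deg: float = 30.0) -> float:
    """Bend the handler's commanded heading just enough to clear nearby
    obstacles, without taking over the overall route."""
    adjustment = 0.0
    for obs in obstacles:
        if obs.distance_m > clearance_m:
            continue  # far enough away; the handler's command stands
        relative = obs.angle_deg - commanded_heading_deg
        away = -1.0 if relative >= 0 else 1.0  # steer to the opposite side
        # Closer obstacles produce a stronger correction.
        adjustment += away * (clearance_m - obs.distance_m) / clearance_m * max_adjust_deg
    # Clamp the correction so the robot never overrides the handler's route.
    adjustment = max(-max_adjust_deg, min(max_adjust_deg, adjustment))
    return commanded_heading_deg + adjustment


if __name__ == "__main__":
    # Handler says "forward"; the robot sees a close obstacle slightly to its left.
    cue = HANDLER_CUES["forward"]
    nearby = [Obstacle(angle_deg=10.0, distance_m=0.6)]
    print(local_avoidance(cue, nearby))  # heading bends right, away from the obstacle
```

Clamping the correction is the point of the sketch: the robot may bend the path, but the handler, not the machine, decides where the team is going.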
The paper provides practical guidelines for developing robotic guide dogs. One crucial feature is a battery life of at least two hours to accommodate typical commutes.
“About 90% of the people mentioned the battery life,” says Hochul Hwang, the paper’s first author and a doctoral candidate in Kim’s robotics lab. Current quadruped robots cannot yet run that long on a single charge, making battery life a critical area for improvement.
Other important features include additional camera orientations to detect overhead obstacles, audio sensors for hazards approaching from out-of-view areas, and the ability to recognize sidewalks so the robot can follow streets accurately. The robot should also help users board the right bus and find a seat.
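Taken together, these guidelines read like a requirements checklist. The sketch below is a loose illustration of how a developer might encode them, not anything from the paper; aside from the two-hour battery figure, the field names and example values are assumptions.

```python
# Hypothetical checklist of the study's design guidelines (illustrative only).

from dataclasses import dataclass, field
from typing import List


@dataclass
class RobotSpec:
    battery_hours: float
    camera_views: List[str] = field(default_factory=list)  # e.g. "forward", "upward"
    has_audio_sensing: bool = False
    detects_sidewalks: bool = False
    supports_transit_assist: bool = False  # boarding the right bus, finding a seat


def check_guidelines(spec: RobotSpec) -> List[str]:
    """Return the guideline violations for a candidate robot design."""
    issues = []
    if spec.battery_hours < 2.0:
        issues.append("battery life under the two-hour commute requirement")
    if "upward" not in spec.camera_views:
        issues.append("no upward-facing camera for overhead obstacles")
    if not spec.has_audio_sensing:
        issues.append("no audio sensing for hazards from out-of-view areas")
    if not spec.detects_sidewalks:
        issues.append("cannot recognize sidewalks to follow streets")
    if not spec.supports_transit_assist:
        issues.append("no support for boarding the right bus and finding a seat")
    return issues


if __name__ == "__main__":
    current_quadruped = RobotSpec(battery_hours=1.0, camera_views=["forward"])
    for issue in check_guidelines(current_quadruped):
        print("-", issue)
```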
The researchers’ efforts highlight the importance of the human element in robotics. “My Ph.D. and postdoctoral research is all about making these robots work better,” Kim says. “We tried to find an application that is practical and meaningful for humanity.”
Contributors to the paper include Ivan Lee, an expert in adaptive technologies and human-centered design; Joydeep Biswas, who specializes in AI algorithms for robot navigation; Hee Tae Jung, an expert in human factors and qualitative research; and Nicholas Giudice, a professor at the University of Maine who is blind and provided valuable insights.
Winning the Best Paper Award places this work in the top 1% of all submissions to the conference, highlighting its importance and potential impact.
The researchers hope their findings will inspire further advancements in robotic guide dogs, ultimately making them available to individuals with visual impairments sooner.