A team of computer scientists from the University of Maryland has created a new camera mechanism that enhances how robots perceive and respond to their surroundings.
This innovative camera system, called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV), is inspired by the tiny, involuntary movements of the human eye that help maintain clear and stable vision.
The details of this groundbreaking camera are outlined in a paper published in the journal Science Robotics.
Event cameras are a relatively new technology known for their ability to track moving objects more effectively than traditional cameras.
However, they often struggle with capturing sharp images when there’s a lot of motion.
This is a significant issue for technologies that rely on precise, timely images to function correctly, such as robots and self-driving cars.
Botao He, a Ph.D. student and lead author of the study, explained, “We wanted to understand how humans and animals keep their vision focused on moving objects despite constant motion.
The answer lies in microsaccades, small, quick eye movements that occur involuntarily to maintain focus.”
The researchers replicated these microsaccades by placing a rotating prism inside the AMI-EV. This prism redirects light beams captured by the lens, mimicking the natural movements of the human eye.
This mechanism allows the camera to stabilize the image of an object, just as the human eye does, preventing motion-induced blur.
To further enhance the camera’s performance, the team developed software to compensate for the prism’s movements, ensuring that the images captured are stable and clear.
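The compensation idea can be sketched in a few lines. The following is a minimal, illustrative sketch only, not the authors' implementation: it assumes the rotating prism shifts the image along a circle on the sensor plane with a known radius and rotation rate, so that the offset at any event's timestamp can simply be subtracted. All names and parameter values here are hypothetical.

```python
import math

# Hypothetical parameters (not from the paper): the prism is assumed to
# shift the image along a circle of known radius at a known rotation rate.
PRISM_RADIUS_PX = 5.0   # assumed circular-shift radius, in pixels
PRISM_HZ = 100.0        # assumed prism rotation rate, revolutions/second

def compensate_event(x, y, t):
    """Undo the prism-induced circular shift for one event (x, y, t).

    Because the prism's motion is controlled by the camera itself, its
    offset at time t is known exactly and can be subtracted, stabilizing
    the image much as the brain compensates for microsaccades.
    """
    phase = 2.0 * math.pi * PRISM_HZ * t
    dx = PRISM_RADIUS_PX * math.cos(phase)
    dy = PRISM_RADIUS_PX * math.sin(phase)
    return x - dx, y - dy

# A static scene point viewed through the moving prism traces a circle on
# the sensor; after compensation, its events collapse back to one location.
raw_events = [(100 + 5.0 * math.cos(2 * math.pi * 100 * t),
               50 + 5.0 * math.sin(2 * math.pi * 100 * t),
               t)
              for t in (0.000, 0.0025, 0.005, 0.0075)]
stable = [compensate_event(x, y, t) for x, y, t in raw_events]
```

In this toy version the compensated coordinates all coincide at the point's true location, which is the stabilizing effect the team's software achieves for real event streams.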
Yiannis Aloimonos, a co-author of the study and a professor of computer science at UMD, highlighted the significance of this invention.
“Our eyes send images to our brain, where they are analyzed to help us understand the world. For robots, cameras act as eyes and computers as brains. Better cameras mean better perception and reactions for robots.”
The AMI-EV camera not only improves robotic vision but also holds potential for various other applications. Industries that rely on accurate image capture and shape detection, such as national defense and smart wearables, could greatly benefit from this technology.
Cornelia Fermüller, a senior author of the paper, noted that event sensors like AMI-EV have unique advantages over classical cameras. They perform better in extreme lighting conditions, have low latency, and consume less power.
These features make them ideal for virtual reality applications, where a seamless experience and rapid computation of movements are crucial.
In initial tests, the AMI-EV demonstrated its ability to capture and display movement accurately in different contexts, such as detecting human pulses and identifying rapidly moving shapes. It can capture tens of thousands of frames per second, far surpassing typical commercial cameras, which capture between 30 and 1000 frames per second.
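The frame-rate gap comes from how event cameras report data: instead of full frames, they emit sparse, microsecond-timestamped per-pixel events, which can then be binned into frames at nearly arbitrary rates. A minimal, hypothetical sketch of that binning (the function and event format here are illustrative, not a real camera API):

```python
from collections import defaultdict

def events_to_frames(events, frame_rate_hz):
    """Bin timestamped events into frames of length 1/frame_rate_hz.

    Each event is (x, y, t, polarity): a pixel location, a timestamp in
    seconds, and the sign of the brightness change. Because timestamps
    have microsecond resolution, the same stream can be rendered at tens
    of thousands of frames per second or at a conventional 30 fps.
    """
    frame_period = 1.0 / frame_rate_hz
    frames = defaultdict(list)
    for x, y, t, polarity in events:
        frames[int(t / frame_period)].append((x, y, polarity))
    return frames

# Four events spanning 200 microseconds: at 10,000 fps (100 us frames)
# they spread across three frames; at 30 fps they collapse into one.
events = [(1, 2, 0.000010, +1), (3, 4, 0.000050, -1),
          (5, 6, 0.000120, +1), (7, 8, 0.000200, -1)]
fast = events_to_frames(events, 10_000)
slow = events_to_frames(events, 30)
```

The same event stream thus yields either fine-grained slow-motion frames or ordinary video, which is why event sensors can far outpace conventional cameras' 30 to 1,000 frames per second.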
This new camera system could revolutionize various fields, from creating more immersive augmented reality experiences to enhancing security monitoring and improving astronomical image capture.
It can help self-driving cars distinguish between humans and other objects on the road, contributing to safer autonomous driving systems.
“Our camera system addresses specific problems like identifying humans on the road for self-driving cars,” said Aloimonos.
“It has many applications that the general public already interacts with, like autonomous driving and smartphone cameras. We believe this technology is paving the way for more advanced and capable systems in the future.”