This brain-like device sees and remembers movements instantly

RMIT PhD scholar and study first author Thiha Aung inspects the team's neuromorphic vision device. Credit: Will Wright/RMIT University.

Engineers at RMIT University have created a tiny device that can process hand movements in real time, store visual memories, and perform brain-like tasks without needing an external computer.

This breakthrough, published in the journal Advanced Materials Technologies, is a major step toward better technology for robotics, autonomous vehicles, and human-machine interaction.

The device is based on “neuromorphic” technology, which means it mimics the way the human brain processes information.

Traditional digital technologies are powerful but consume a lot of energy and struggle to handle large amounts of complex visual data quickly.

In contrast, neuromorphic systems process information more efficiently, just like our brains do.

Professor Sumeet Walia, who led the research, explains that this brain-like processing could make visual tasks in technology much faster and less power-hungry.

At the heart of the device is a material called molybdenum disulfide (MoS2). The RMIT team discovered that by creating atomic-level defects in MoS2, the material can capture light and convert it into electrical signals, much like how our eyes send visual information to our brain.

This ability allows the device to detect changes in its environment instantly and store those changes as memories, without the need for massive amounts of data or energy.

In their experiments, the researchers showed that the device could track hand movements by registering only the edges of the moving hand rather than capturing the scene frame by frame, a process known as edge detection.

This approach dramatically reduces the amount of data needed to understand what is happening, making it incredibly efficient.

Once the movement was detected, the device stored it as a memory, ready to process new information in real time. This is different from traditional digital cameras, which capture thousands of images to detect motion, consuming much more power and processing time.
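As a rough software analogy of why this change-based approach saves so much work, the Python sketch below compares the amount of data a frame-by-frame camera handles with the handful of “events” left when only changed pixels are kept. The scene (a bright square shifting by one pixel) and the numbers are purely illustrative and are not a model of the device’s actual optical physics.

```python
import numpy as np

# Two consecutive 64x64 grayscale "frames": a bright square that shifts right by one pixel.
frame_a = np.zeros((64, 64), dtype=np.uint8)
frame_b = np.zeros((64, 64), dtype=np.uint8)
frame_a[20:40, 20:40] = 255
frame_b[20:40, 21:41] = 255

# Conventional approach: every pixel of every frame is stored and processed.
full_frame_values = frame_a.size + frame_b.size

# Change-based approach: keep only the pixels whose brightness changed, which traces
# the moving edges of the square and ignores the static background entirely.
changed = np.argwhere(frame_a != frame_b)

print(f"Frame-by-frame: {full_frame_values} pixel values")   # 8192
print(f"Change events:  {len(changed)} changed pixels")      # 40
```

Here the two full frames hold 8,192 pixel values while the change map contains just 40, the kind of reduction that makes this style of sensing attractive for real-time, low-power vision.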

The team also demonstrated that their device could replicate the behavior of a “leaky integrate-and-fire” (LIF) neuron, a key building block of the brain’s neural networks.

This ability means the device doesn’t just detect movement but can also process it in a way that resembles how our brain reacts to what we see.
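In the paper this spiking behavior emerges from the device’s optical response, but the underlying idea is a standard neuron model. The sketch below is a conventional software version of a leaky integrate-and-fire neuron, with arbitrary made-up parameter values, included only to show what “integrate, leak and fire” means: the membrane potential accumulates input, slowly leaks back toward rest, and emits a spike whenever it crosses a threshold.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron and return the spike times."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Euler step: leak toward the resting potential plus integration of the input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a spike...
            spike_times.append(t)
            v = v_reset            # ...and reset the membrane potential
    return spike_times

# A steady input drives regular spiking; once the input stops, the potential simply decays.
stimulus = np.concatenate([np.full(100, 0.08), np.zeros(50)])
print("Spike times:", simulate_lif(stimulus))
```

Roughly speaking, in a neuromorphic pixel the role of the input current is played by the light hitting the sensor, so a sustained visual change shows up as a train of spikes rather than a stream of full images.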

The researchers believe this technology could transform how robots and autonomous vehicles respond to visual information. Unlike current systems that require heavy computing to process visual data, the RMIT device could react almost instantly, potentially making self-driving cars safer and robotic assistants more responsive.

In manufacturing or care settings, for example, robots could better understand human movements and react with minimal delay.

Looking ahead, the team plans to scale up the technology from a single-pixel device to larger arrays, which would allow for even more complex visual processing. They also hope to explore materials beyond MoS2 to expand the device’s capabilities, potentially allowing it to detect things like toxic gases or pathogens in real time.

Professor Walia sees their work as complementing traditional computing rather than replacing it. While digital systems excel in many areas, neuromorphic technology like this offers significant advantages for visual tasks that require real-time operation and energy efficiency.

This breakthrough could pave the way for smarter, faster, and more interactive technology in the near future.