Smarter eyes for machines: New silicon tech speeds up computer vision

Credit: DALL·E.

Researchers at the University of Massachusetts Amherst have made a breakthrough in computer vision by developing new hardware that acts more like the human eye.

Their invention can both capture and process visual information on the same silicon chip — something that could transform how machines “see” and respond to the world around them.

Traditional computer vision systems, like those found in smartphones or self-driving cars, separate the camera (which captures images) from the processor (which analyzes them).

This back-and-forth between sensing and computing means moving large amounts of raw data, much of it unnecessary, and the resulting delays are a serious problem when machines must make split-second decisions, such as detecting a moving object on the road.
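To make that contrast concrete, here is a rough software sketch of the two pipelines. It is purely illustrative and not the team's design: every function name and frame size below is hypothetical. The point it shows is that in a conventional system each raw frame must cross the link from camera to processor before any analysis can start, whereas an in-sensor approach would ship out only the small result of that analysis.

```python
import numpy as np

# Illustrative sketch only: a toy comparison of the two pipelines.
# All names and sizes here are hypothetical, not taken from the paper.

FRAME_SHAPE = (480, 640)  # a modest VGA-sized frame


def capture_frame(rng):
    """Stand-in for the camera: returns one raw frame of pixel values."""
    return rng.random(FRAME_SHAPE)


def traditional_pipeline(frames):
    """Conventional vision stack: every raw frame is shipped to a
    separate processor before any analysis can begin."""
    bytes_moved = 0
    for prev, curr in zip(frames, frames[1:]):
        bytes_moved += curr.nbytes           # full frame crosses the sensor/processor link
        motion = np.abs(curr - prev).mean()  # analysis only happens after the transfer
    return bytes_moved


def in_sensor_pipeline(frames):
    """In-sensor processing, conceptually: the comparison happens where the
    light is detected, and only a small result leaves the chip."""
    bytes_moved = 0
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr - prev).mean()  # computed "on the sensor"
        bytes_moved += motion.nbytes         # only a few bytes travel onward
    return bytes_moved


rng = np.random.default_rng(0)
frames = [capture_frame(rng) for _ in range(10)]
print("traditional:", traditional_pipeline(frames), "bytes moved")
print("in-sensor:  ", in_sensor_pipeline(frames), "bytes moved")
```

Running the sketch, the traditional loop moves megabytes of pixels while the in-sensor loop moves only a handful of bytes per frame, which is the kind of saving that matters when decisions have to be made in milliseconds.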

Guangyu Xu, a professor at UMass Amherst, led the research team. He explains that their goal was to eliminate the lag between seeing and understanding by combining sensing and processing into a single unit.

“Our approach is similar to how human eyes work,” he says, “where sensing and early processing happen together, right in the retina.”

To achieve this, the team built two arrays of silicon-based sensors. These “in-sensor processing arrays” can detect both movement and static shapes directly, without needing to send raw data elsewhere.

One array handles dynamic visual changes, such as detecting motion, while the other focuses on the shapes and features of objects in still images.
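A loose digital analogy may help picture what each array does. The real device performs these operations with analog circuits right at the sensor, so the NumPy code below is only a sketch under that caveat, with made-up frames and a generic edge-detecting kernel standing in for the hardware's behavior.

```python
import numpy as np

# A loose digital analogy for the two arrays (the real chip does this with
# analog circuitry at the pixels; the frames and kernel here are made up).

rng = np.random.default_rng(1)
prev_frame = rng.random((32, 32))
curr_frame = rng.random((32, 32))

# "Dynamic" array analogy: respond to change between consecutive frames,
# e.g. a simple temporal difference that lights up where motion occurred.
motion_map = np.abs(curr_frame - prev_frame)

# "Static" array analogy: respond to shapes within a single frame,
# e.g. a small edge-detecting kernel swept across the image.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

h, w = curr_frame.shape
edge_map = np.zeros((h - 2, w - 2))
for i in range(h - 2):
    for j in range(w - 2):
        edge_map[i, j] = np.sum(curr_frame[i:i + 3, j:j + 3] * kernel)

print("strongest motion response:", motion_map.max())
print("strongest edge response:  ", edge_map.max())
```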

The team tested their system in challenging visual environments. When asked to recognize human movements — like walking, waving, or boxing — their analog hardware performed better than existing digital systems, reaching 90% accuracy.

When analyzing handwritten numbers, their chip also outperformed traditional systems, achieving 95% accuracy versus 90%.

What makes this development especially exciting is that it’s made entirely of silicon, the same material used in most modern electronics. Unlike earlier versions of in-sensor processing that relied on exotic nanomaterials, this all-silicon design can be easily integrated into current manufacturing processes.

This means it could soon be scaled up for use in self-driving cars, surveillance systems, medical devices, and other large-scale technologies that rely on fast, accurate visual input.

Xu says the benefits go beyond speed. In areas like bioimaging, their technology could cut down on the massive amount of data generated by cameras, making analysis more efficient while still providing the same scientific insight.

With this new approach, machines could soon be seeing and reacting to the world with more speed, accuracy, and efficiency — much like a real pair of eyes.