New camera lens can keep everything in focus, from inches away to infinity


Imagine taking a photo where every single detail—from the flower right in front of your lens to the mountains far in the distance—is crystal clear.

For over a hundred years, photographers and scientists have dreamed of such a lens.

Now, researchers at Carnegie Mellon University have developed a revolutionary “computational lens” that can bring an entire scene into focus all at once, no matter how close or far each object is.

The breakthrough, which could transform photography, microscopy, and even smartphone cameras, was created by a team led by Ph.D. student Yingsi Qin, along with professors Aswin Sankaranarayanan and Matthew O’Toole.

Their study, presented at the 2025 International Conference on Computer Vision, earned a Best Paper Honorable Mention, recognizing it as one of the most innovative ideas of the year.

Traditional camera lenses work by focusing on one flat plane—meaning only objects at a specific distance are perfectly sharp.

Anything nearer or farther becomes blurry. Photographers can narrow the aperture to make more of the scene appear in focus, but this reduces brightness and introduces its own blur through light diffraction.
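To make that trade-off concrete, here is a minimal Python sketch using the standard thin-lens depth-of-field approximation. The focal length, f-numbers, and circle-of-confusion value are illustrative assumptions, not figures from the CMU work.

```python
# Thin-lens depth-of-field estimate: stopping down (a larger f-number)
# widens the in-focus zone, at the cost of light and diffraction blur.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near, far) limits of acceptable sharpness in mm.
    coc_mm is the circle of confusion; 0.03 mm is a common
    full-frame assumption."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + (subject_mm - focal_mm))
    if subject_mm >= hyperfocal:
        return near, float("inf")
    far = hyperfocal * subject_mm / (hyperfocal - (subject_mm - focal_mm))
    return near, far

# A 50 mm lens focused at 2 m: the sharp zone grows as the aperture narrows.
for N in (1.8, 5.6, 16):
    near, far = depth_of_field(50, N, 2000)
    print(f"f/{N}: sharp from {near/1000:.2f} m to {far/1000:.2f} m")
```

Running this shows the dilemma: stopping down from f/1.8 to f/16 stretches the sharp zone around a 2-meter subject from under 20 centimeters to nearly two meters, but never to the whole scene, and only by sacrificing light.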

Qin and her colleagues asked a bold question: What if a lens didn’t have to focus on just one flat layer at all? What if it could adjust its focus across the entire scene?

Their solution combines optical engineering with computer algorithms to create what they call a computational lens. The system builds on an existing concept called a Lohmann lens, which pairs two cubic-profile optical elements that slide laterally against each other to change the combined focal power.
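The principle behind the Lohmann design can be seen with one line of algebra. In a simplified one-dimensional sketch (the real elements use a two-dimensional cubic profile), two complementary cubic phase plates shifted by plus and minus d add up to a quadratic phase, which is exactly what a lens contributes; here α is an assumed design constant.

```latex
\phi(x) = \alpha\,(x+d)^{3} - \alpha\,(x-d)^{3}
        = 6\alpha d\,x^{2} + 2\alpha d^{3}
```

The x² term is the phase of a thin lens, so the focal power grows linearly with the shift d; the leftover constant term has no optical effect. Sliding the plates therefore tunes the focus continuously, but still only one focus for the whole image at a time.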

The Carnegie Mellon team added a phase-only spatial light modulator, a special device that can bend light differently at each pixel. This allows the lens to bring different parts of the image into focus simultaneously—like giving each pixel its own adjustable focus setting.
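Conceptually, the modulator acts like a different lens at every pixel. The toy sketch below is our illustration, not the authors' calibration pipeline: it converts a depth map into a per-pixel focal power and wraps the corresponding lens phase into the 0-to-2π range a phase-only SLM can display. The wavelength, pixel pitch, and depth values are all assumed for the example.

```python
import numpy as np

# Toy model: turn a depth map into a phase pattern for a phase-only SLM.
# Each pixel gets the quadratic (lens) phase that would focus its own depth.
# Illustrative sketch only; the paper's actual procedure may differ.

wavelength = 532e-9   # assumed illumination wavelength, meters
pitch = 8e-6          # assumed SLM pixel pitch, meters
H, W = 512, 512

# Fake depth map: near objects at the top rows, far objects at the bottom.
depth = np.linspace(0.3, 10.0, H)[:, None] * np.ones((1, W))  # meters

# Thin-lens intuition: required focal power falls off as 1/depth.
power = 1.0 / depth   # diopters

# Lens phase phi = pi * r^2 * power / wavelength, wrapped to [0, 2*pi).
y = (np.arange(H) - H / 2) * pitch
x = (np.arange(W) - W / 2) * pitch
r2 = x[None, :] ** 2 + y[:, None] ** 2
phase = np.mod(np.pi * r2 * power / wavelength, 2 * np.pi)

print(phase.shape, float(phase.min()), float(phase.max()))
```

The point of the sketch is the shape of the data, not the optics: the SLM receives one phase value per pixel, so the focus correction can vary across the frame instead of being a single global setting.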

The researchers also developed two advanced autofocus systems to make the process fast and precise.

The first, contrast-detection autofocus, divides the image into small regions and automatically fine-tunes each one for maximum sharpness. The second, phase-detection autofocus, uses a dual-pixel sensor that can instantly tell whether a part of the image is in or out of focus and how to correct it.
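To illustrate the contrast-detection half of this, here is a hedged Python sketch: sweep the focus setting, score each image tile with a simple sharpness metric (variance of a discrete Laplacian), and keep, per tile, the setting that scored highest. The tile size and metric are our assumptions; the paper's region partition and update rule may differ.

```python
import numpy as np

def laplacian_variance(tile):
    """Simple sharpness score: variance of a discrete Laplacian."""
    lap = (-4 * tile[1:-1, 1:-1]
           + tile[:-2, 1:-1] + tile[2:, 1:-1]
           + tile[1:-1, :-2] + tile[1:-1, 2:])
    return lap.var()

def per_tile_best_focus(focal_sweep, tile=32):
    """focal_sweep: list of grayscale images (H, W), one per focus setting.
    Returns an index map of the sharpest setting for each tile."""
    H, W = focal_sweep[0].shape
    ty, tx = H // tile, W // tile
    best = np.zeros((ty, tx), dtype=int)
    best_score = np.full((ty, tx), -np.inf)
    for k, img in enumerate(focal_sweep):
        for i in range(ty):
            for j in range(tx):
                s = laplacian_variance(
                    img[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile])
                if s > best_score[i, j]:
                    best_score[i, j], best[i, j] = s, k
    return best

# Usage with random stand-in frames (a real system would use camera images):
sweep = [np.random.rand(128, 128) for _ in range(5)]
print(per_tile_best_focus(sweep, tile=32))
```

Phase detection removes the need for the sweep: comparing the left and right sub-images of a dual-pixel sensor yields the sign and rough magnitude of defocus in a single shot, which is what makes the fast frame rates below possible.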

Together, these techniques allow the system to capture sharp images at 21 frames per second, even for moving scenes.

This innovation could have a huge impact beyond photography. In microscopy, it could allow scientists to see every layer of a biological sample clearly without refocusing. In self-driving cars, it could improve safety by giving vehicles a sharper, real-time view of their surroundings. It could even enhance virtual and augmented reality, producing more natural depth and detail.

“Our system represents a completely new way of designing lenses,” says Sankaranarayanan. “It could fundamentally change how cameras see the world.”