How imperfect quantum chips can still build reliable supercomputers


Quantum computers are often described as the future of computing, with the potential to revolutionize fields like drug discovery, materials science, and cybersecurity.

But despite impressive progress, most quantum computers today remain too small and fragile to tackle real-world, large-scale problems.

A new study led by physicists at the University of California, Riverside, offers a promising path forward: building larger, fault-tolerant quantum systems by linking together smaller chips—even if those links aren’t perfect.

The research, published in Physical Review A, shows that scalable quantum architectures—systems made up of many smaller processors working as one—can function reliably even when the connections between chips introduce extra noise.

This discovery challenges the idea that we need flawless hardware before scaling quantum computers to useful sizes.

“Our work isn’t about designing a brand-new chip,” explained lead author Mohamed A. Shalby, a doctoral candidate in physics at UCR.

“It’s about proving that the chips we already have can be stitched together into much bigger systems, and still work. That changes how we think about scaling up quantum technology.”

In classical computing, scaling means being able to handle more data without the system slowing down or breaking.

In quantum computing, it means something more: fault tolerance, or the ability to automatically detect and correct errors that happen constantly in fragile quantum states.

One of the biggest hurdles has been connecting chips. Operations inside a single quantum chip are relatively stable, but connections between separate chips—especially when kept in different cryogenic refrigerators—are much noisier.

This noise can quickly overwhelm the system, making error correction fail.

The UCR team, however, found something remarkable. Even when the links between chips were up to ten times noisier than the chips themselves, the overall system still managed to detect and correct errors successfully.

The key, they say, is ensuring that each individual chip operates with very high fidelity. As long as the chips themselves are reliable, the links don’t have to be perfect—they just need to be “good enough.”

This insight is important because quantum computers rely on “logical qubits,” which are built from clusters of many physical qubits. The redundancy allows the system to spot and fix mistakes.
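To make the redundancy idea concrete, here is a deliberately simplified classical sketch (my own illustration, not code from the study): one logical bit is copied across five physical bits, random flips are applied, and a majority vote recovers the original value as long as fewer than half of the copies are corrupted. Real quantum error correction is subtler, since qubits cannot simply be copied, but the protective effect of redundancy is the same.

```python
# Toy classical repetition code: an illustration of redundancy, not the
# study's quantum error-correction scheme.
import random

def encode(logical_bit: int, n_copies: int = 5) -> list[int]:
    """Store one logical bit redundantly in n_copies physical bits."""
    return [logical_bit] * n_copies

def apply_noise(bits: list[int], flip_probability: float) -> list[int]:
    """Flip each physical bit independently with the given probability."""
    return [bit ^ 1 if random.random() < flip_probability else bit for bit in bits]

def decode(bits: list[int]) -> int:
    """Majority vote: the 'spot and fix mistakes' step."""
    return int(sum(bits) > len(bits) / 2)

# With a 1% physical flip rate, the logical bit is lost only when at least
# 3 of the 5 copies flip at once, roughly once in 100,000 trials.
trials = 100_000
failures = sum(decode(apply_noise(encode(0), 0.01)) for _ in range(trials))
print(f"logical failure rate: {failures / trials:.1e}")
```

In the quantum setting, the surface code described next plays the role of this majority vote, inferring errors from measurements on neighboring qubits rather than from direct copies.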

The most widely used error-correction method, called the surface code, is designed precisely for this. By simulating thousands of modular designs inspired by Google's existing quantum infrastructure, the researchers showed that surface-code-based chips could be networked together successfully, even under realistic noise levels.
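The paper does not name the software behind its simulations, but surface-code memory experiments of this kind are commonly modeled with the open-source stim and pymatching packages. The sketch below reflects that assumption about tooling and uses a uniform noise model rather than the study's noisier inter-chip links; it estimates the logical error rate of a rotated surface code and shows how it falls as the code distance grows.

```python
# Minimal surface-code memory simulation using stim (circuit sampling) and
# pymatching (decoding). Uniform noise only; the study's modular setup would
# additionally raise the error rates on operations crossing chip boundaries.
import numpy as np
import stim
import pymatching

def logical_error_rate(distance: int, physical_error: float, shots: int) -> float:
    """Estimate how often the decoder fails to protect the logical qubit."""
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=distance,
        after_clifford_depolarization=physical_error,
        before_round_data_depolarization=physical_error,
        before_measure_flip_probability=physical_error,
        after_reset_flip_probability=physical_error,
    )
    # Build a minimum-weight perfect-matching decoder from the circuit's error model.
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True)
    )
    # Sample error syndromes plus the true logical outcome, then decode.
    detection_events, observable_flips = circuit.compile_detector_sampler().sample(
        shots, separate_observables=True
    )
    predictions = matcher.decode_batch(detection_events)
    failures = np.sum(np.any(predictions != observable_flips, axis=1))
    return failures / shots

# Below the surface-code threshold, bigger codes mean fewer logical errors.
for d in (3, 5, 7):
    print(d, logical_error_rate(d, physical_error=1e-3, shots=20_000))
```

Capturing the inter-chip links would mean building the circuit by hand so that qubits along each seam see higher error probabilities than the bulk, which is the regime the UCR team explored in its modular designs.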

Until now, most progress in quantum computing has been measured in the sheer number of qubits a machine can hold.

But as Shalby pointed out, more qubits don’t mean much without reliability. “Our results show that scalable, fault-tolerant systems aren’t just a dream for the future—they’re possible right now with the technology we already have,” he said.

The findings suggest that researchers don’t need to wait for perfect hardware to build practical quantum computers.

By embracing modular systems with “good enough” links, the path toward powerful, reliable quantum supercomputers may be closer than expected.