How do you construct a perfect machine out of imperfect parts? That’s the central challenge for researchers building quantum computers. The trouble is that their elementary building blocks, called qubits, are exceedingly sensitive to disturbance from the outside world. Today’s prototype quantum computers are too error-prone to do anything useful.
In the 1990s, researchers worked out the theoretical foundations for a way to overcome these errors, called quantum error correction. The key idea was to coax a cluster of physical qubits to work together as a single high-quality “logical qubit.” The computer would then use many such logical qubits to perform calculations. They’d make that perfect machine by transmuting many faulty components into fewer reliable ones.
“That’s really the only path that we know of toward building a large-scale quantum computer,” said Michael Newman, an error-correction researcher at Google Quantum AI.
This computational alchemy has its limits. If the physical qubits are too failure-prone, error correction is counterproductive — adding more physical qubits will make the logical qubits worse, not better. But if the error rate goes below a specific threshold, the balance tips: The more physical qubits you add, the more resilient each logical qubit becomes.
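The tipping point described above can be sketched numerically. The snippet below uses the standard surface-code scaling heuristic, in which the logical error rate falls roughly as (p/p_th)^((d+1)/2) for physical error rate p, threshold p_th, and code distance d (larger d means more physical qubits per logical qubit). The threshold value and the specific error rates here are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch of the error-correction threshold.
# Assumption: a surface-code-style scaling heuristic, with an
# illustrative threshold of 1% (not the value from the paper).

P_TH = 0.01  # assumed threshold physical error rate

def logical_error_rate(p: float, d: int) -> float:
    """Heuristic logical error rate for a distance-d code,
    p_logical ~ (p / p_th) ** ((d + 1) / 2)."""
    return (p / P_TH) ** ((d + 1) / 2)

# Below threshold: adding physical qubits (raising d) suppresses errors.
below = [logical_error_rate(0.005, d) for d in (3, 5, 7)]
print(below)  # each successive value is smaller

# Above threshold: adding physical qubits makes the logical qubit worse.
above = [logical_error_rate(0.02, d) for d in (3, 5, 7)]
print(above)  # each successive value is larger
```

Running it shows the two regimes: with p = 0.005 (half the assumed threshold), the logical error rate drops as the code grows, while with p = 0.02 (double the threshold) it climbs instead.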
Now, in a paper published today in Nature, Newman and his colleagues at Google Quantum AI have finally crossed the threshold. They transformed a group of physical qubits into a single logical qubit, then showed that as they added more physical qubits to the group, the logical qubit’s error rate dropped sharply.