The long-term promise of robust quantum computing is difficult to overstate. Its potential for curing diseases, creating much-needed materials breakthroughs, and solving problems even as big as climate change isn’t techno-bluster; it’s a real possibility. This promise is why companies and governments invest billions every year to unleash the power of qubit-based computing. However, major hurdles remain between the present and that far-future dream, and enemy no. 1 is scalability.
To put it simply, qubits are incredibly sensitive to their environment and require immensely precise control. These problems only grow as you add more qubits to a system, and disturbances that degrade the accuracy of quantum calculations, known as “noise,” eventually lead to cascading errors. This is why the current era of quantum computing is broadly known as the “noisy intermediate-scale quantum” (NISQ) era. These quantum processors contain at most around 1,000 qubits and are not fault-tolerant enough to unleash the full promise of quantum computing.
Of course, scientists have a few techniques for making quantum computers less error-prone, including robust error correction and fault-tolerant computing. And recently, experts at Oxford University achieved a major breakthrough in scaling up the number of qubits in a quantum system. The idea is simple: instead of packing more qubits into a single processor, what if smaller quantum processors could be linked into a distributed network? This could, in theory, let scientists scale up the number of qubits while keeping noise low.