You've heard plenty of people by now—including us—banging on about quantum computers, and how they’re the future of high-performance computing. Quantum computing, we're meant to understand, is set to change the world. But despite its promise, it's neither widely available nor particularly useful yet. Here's why not.
First things first, a quick reminder of exactly what a quantum computer is. Essentially, it's a computer that makes use of quantum physics. Unlike a normal digital system, which relies on data encoded into binary digits (bits) that can only ever take the form of 0 or 1, quantum computation uses quantum properties to represent data and perform operations.
A quantum computer, then, uses not bits but qubits (quantum bits). Thanks to a property called superposition, each qubit can represent a 0, a 1, or, crucially, anything in between.
Imagine a table covered in coins. In a classical computer, each one is showing either heads or tails; in a quantum computer, a coin could be showing 25 percent heads and 75 percent tails, or any other mix of the two possible states. Once measured, of course (and we have Schrödinger to thank for this), it settles into one of the two states, heads or tails.
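If you like seeing ideas in code, here's a back-of-the-envelope Python sketch of that coin: a single qubit modelled as a pair of amplitudes, with a simulated measurement that collapses it to heads or tails. The names and numbers are ours, purely for illustration; this is the analogy in code, not how a real quantum computer is programmed.

```python
import random
from math import sqrt

# A single qubit, modelled (very loosely) as a pair of amplitudes.
# Squaring each amplitude gives the probability of seeing that face,
# so the squares must sum to 1 -- here, the 25/75 coin from above.
alpha = sqrt(0.25)  # amplitude for "heads" (0)
beta = sqrt(0.75)   # amplitude for "tails" (1)

def measure():
    """Collapse the superposition: return 0 or 1 with the right odds."""
    return 0 if random.random() < alpha ** 2 else 1

# Measure many freshly prepared qubits: roughly 25% come up heads.
results = [measure() for _ in range(10_000)]
print("fraction showing heads:", results.count(0) / len(results))
```

Run it and the printed fraction hovers around 0.25: before measurement the coin is genuinely in both states at once, but every individual look at it yields a plain heads or tails.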
Because each qubit can assume such a wide range of values, a modest number of them can hold an insane quantity of information; this is what lends quantum computers their theoretical grunt. Just 100 qubits can represent 1,267,650,600,228,229,401,496,703,205,376 different numbers (that's 2 to the power of 100), far more than all the classical storage ever built could hold. In other words, 100 qubits can simultaneously represent all possible 100-bit numbers in their huge quantum state, whereas a classical 100-bit register can represent just one at a time.
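If you fancy checking that headline figure, it's nothing more exotic than 2 multiplied by itself 100 times, which a single line of Python will confirm:

```python
# The number of distinct values 100 bits can take: 2^100.
print(2 ** 100)  # 1267650600228229401496703205376
```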
It's that vast ability to assume many states at once that, in theory, promises untold power: a quantum computer could be many times faster than any classical one. In practice, it's rather more difficult.