Today, Google announced a demonstration of quantum error correction on the next generation of its Sycamore quantum processor. The new iteration isn't dramatic: it has the same number of qubits, just with better performance. And achieving quantum error correction isn't really the news, either; the company first got it working a couple of years ago.
Instead, the signs of progress are a bit more subtle. On earlier generations of processors, qubits were error-prone enough that adding more of them to an error-correction scheme introduced more new errors than the scheme could correct. In the new iteration, adding more qubits actually drives the error rate down.
We can fix that
The functional unit of a quantum processor is a qubit, which is anything—an atom, an electron, a hunk of superconducting electronics—that can be used to store and manipulate a quantum state. The more qubits you have, the more capable the machine is. By the time you have access to several hundred, it's thought that you can perform calculations that would be difficult or impossible to do on traditional computer hardware.
That is, assuming all the qubits behave correctly. Which, in general, they don’t. As a result, throwing more qubits at a problem makes it more likely you’ll encounter an error before a calculation can complete. So, we now have quantum computers with more than 400 qubits, but trying to do any calculation that required all 400 would fail.
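The arithmetic behind that failure is unforgiving. As a rough sketch, if each qubit independently errs with some small probability per run (the 1 percent figure below is purely illustrative, not a measured Sycamore number), the chance that a 400-qubit calculation finishes cleanly collapses fast:

```python
def p_any_error(n_qubits: int, p: float) -> float:
    """Chance that at least one of n qubits suffers an error,
    assuming each fails independently with probability p."""
    return 1 - (1 - p) ** n_qubits

# Even a modest 1% per-qubit error rate dooms a 400-qubit run:
print(f"{p_any_error(400, 0.01):.3f}")  # ~0.982
```

So under this toy model, a machine with a 1 percent per-qubit error rate fails roughly 98 percent of the time when all 400 qubits are needed.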
Creating an error-corrected logical qubit is generally accepted as the solution to this problem. Doing so involves distributing a quantum state across a set of connected hardware qubits. (In terms of computational logic, all these hardware qubits can be addressed as a single unit, hence "logical qubit.") Error correction is enabled by additional qubits that neighbor each member of the logical qubit. Measuring these neighbors makes it possible to infer the state of each qubit that's part of the logical qubit.
Now, if one of the hardware qubits that’s part of the logical qubit has an error, the fact that it’s only holding a fraction of the information of the logical qubit means that the quantum state isn’t wrecked. And measuring its neighbors will reveal the error and allow a bit of quantum manipulation to fix it.
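The classical cousin of this scheme is the three-bit repetition code, and it captures the detect-and-fix logic described above. The sketch below is a toy classical analogue only; the function names are illustrative, and Google's actual hardware uses a far more elaborate quantum code:

```python
from typing import List, Tuple

def encode(bit: int) -> List[int]:
    """Spread one logical bit across three physical bits."""
    return [bit, bit, bit]

def syndrome(bits: List[int]) -> Tuple[int, int]:
    """Parity checks on neighboring pairs. A nonzero parity flags an
    error without revealing the encoded value itself."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits: List[int]) -> List[int]:
    """Use the syndrome to locate and flip a single faulty bit."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits = bits.copy()
        bits[flip] ^= 1
    return bits

encoded = encode(1)        # [1, 1, 1]
encoded[2] ^= 1            # one physical bit errs -> [1, 1, 0]
print(syndrome(encoded))   # (0, 1): error located on bit 2
print(correct(encoded))    # [1, 1, 1]: logical value recovered
```

The key property, which carries over to the quantum case, is that the parity checks pinpoint the faulty bit without ever reading out the logical value being protected.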
The more hardware qubits you dedicate to a logical qubit, the more robust it should be. There are just two problems right now. One is that we don't have hardware qubits to spare; running a robust error-correction scheme on even the highest-qubit-count processors would leave fewer than 10 logical qubits available for a calculation. The second issue is that the error rates of the hardware qubits are too high for any of this to work. Adding existing qubits to a logical qubit doesn't make it more robust; it makes it more likely that so many errors will strike at once that they can't be corrected.
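That "more qubits should mean more robustness" claim can be illustrated with the simplest possible scheme: majority voting over d copies of a bit. This is a hedged toy model, with a hypothetical 1 percent per-copy error rate; in real quantum hardware the break-even threshold is far lower than this model suggests, because the correction machinery is itself noisy:

```python
from math import comb

def logical_error(d: int, p: float) -> float:
    """Probability that a majority of d copies err, i.e. that the
    majority vote returns the wrong answer (a logical failure)."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(d // 2 + 1, d + 1))

for d in (3, 5, 7):
    print(d, f"{logical_error(d, 0.01):.2e}")
# Below threshold, each step up in code size suppresses the
# logical error rate by orders of magnitude.
```

Run the same loop with a high per-copy error rate and the suppression stalls, which is the second problem in a nutshell: when the underlying qubits err too often, piling on more of them stops paying off.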