Other than distance, what other metrics can be used to compare quantum error correcting codes?

Quantum Computing Asked on December 1, 2020

Using classical error correction (or channel coding) as a reference, I'd like to be able to compare QECCs from different constructions. Distance is a reasonable measure, and you can argue that an $[[n_1,k_1,d_1]]$ code is better than an $[[n_2,k_2,d_2]]$ code if, for example, $k_1/n_1 = k_2/n_2$ and $d_1 > d_2$ (same rate, larger distance); or $d_1 = d_2$ and $k_1/n_1 > k_2/n_2$ (same distance, higher rate); or $k_1/n_1 = k_2/n_2$, $d_1 = d_2$ and $n_1 < n_2$ (same rate and distance, fewer physical qubits). However, just as in the classical case, I'm sure that distance doesn't tell the whole story. The "channel" (or error model) has to enter the picture, as does the decoding algorithm. Distance can also be difficult to calculate for large $n$. In classical ECC, a plot of BER vs. SNR over an AWGN channel can quickly tell you whether one code/decoder combination outperforms another. What would be possible equivalents for QECC? (To simplify things, you can ignore decoder complexity as a parameter; you can also restrict QECCs to stabilizer codes.)
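To make the first comparison concrete, here is a small Python sketch of the partial order I have in mind (the function name and the exact tie-breaking rules are my own, not standard):

```python
# Compare two codes given as (n, k, d) tuples: c1 "dominates" c2 if it
# is at least as good on rate and distance and strictly better on rate,
# distance, or physical-qubit count.
from fractions import Fraction

def dominates(c1, c2):
    (n1, k1, d1), (n2, k2, d2) = c1, c2
    r1, r2 = Fraction(k1, n1), Fraction(k2, n2)  # exact rates k/n
    at_least_as_good = r1 >= r2 and d1 >= d2
    strictly_better = r1 > r2 or d1 > d2 or (r1 == r2 and d1 == d2 and n1 < n2)
    return at_least_as_good and strictly_better

print(dominates((5, 1, 3), (9, 1, 3)))  # True: same d, higher rate, fewer qubits
```

But this ordering says nothing about performance on an actual channel, which is what I am after.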

2 Answers

The biggest number used when comparing families of QECCs is the threshold: the physical error rate (for depolarizing noise, XZ noise, or whatever model the paper uses) below which increasing the size (and distance) of the code actually improves performance.

Look at Figure 4 in this paper. In the top plot, you can see that above a certain error rate the extra machinery needed to correct errors is not worth it, and smaller codes perform better. Below this threshold, however, increasing your distance is worthwhile.
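To see why the curves cross, here is a toy sketch (not data from the paper; the constants are made up) using the common empirical ansatz $p_L \approx A\,(p/p_{\text{th}})^{(d+1)/2}$ for the logical error rate of a distance-$d$ code:

```python
# Toy threshold plot: logical error rate vs. physical error rate for
# several code distances, under the ansatz p_L = A * (p/p_th)^((d+1)/2).
# A and p_th below are hypothetical fit constants, not measured values.
import numpy as np
import matplotlib.pyplot as plt

A, p_th = 0.1, 0.01
p = np.logspace(-3, -1.5, 100)          # physical error rate

for d in (3, 5, 7):
    plt.loglog(p, A * (p / p_th) ** ((d + 1) / 2), label=f"d = {d}")

plt.axvline(p_th, ls="--", color="gray", label="threshold")
plt.xlabel("physical error rate $p$")
plt.ylabel("logical error rate $p_L$")
plt.legend()
plt.show()
```

Below $p_{\text{th}}$ the larger-distance curves lie lower; above it the ordering reverses, which is exactly the crossing you see in such plots.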

Code families (meaning sets of codes that share the same structure, such as surface codes or triangular color codes) are often compared by their thresholds. Some, like the Bacon-Shor codes, don't have one at all, and instead have a sweet-spot distance for any given error rate.

Another metric, as mentioned in @JSdJ's comment, is the size (weight) of the stabilizers. This corresponds to the connectivity required in the quantum hardware to run the code effectively. Codes that require a lot of connectivity often suffer from error propagation: a single faulty component can quickly spread its error.

Lastly, when it comes to error models, there are many different things to consider. In this paper, we discussed a few ion-trap error models and how their structure interacts with different codes. This paper discusses how important it will be to optimize fault-tolerant protocols for specific error models, and shows how sensitive a code's performance can be to those models.

Correct answer by Dripto Debroy on December 1, 2020

If you focus solely on stabilizer/additive codes, I believe that the weight of the Paulis in the stabilizer is highly important. The weight of an $n$-qubit Pauli is the number of non-trivial tensor factors in it.

The weight of the correctable errors corresponds closely to the distance of a code, but the weight of the elements of the stabilizer is also important for the error correction process. In standard error syndrome measurement, a stabilizer generator is measured using an ancilla that is entangled with the data through a 'controlled-generator' gate. The higher the weight of the generator, the harder this gate is to implement. In current quantum computing architectures the connectivity between qubits is severely limited (and I think this will always be the case), so measuring a weight-$10$ generator is easier said than done.
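As a concrete sketch (written with Qiskit purely for illustration; any circuit framework would do), here is standard syndrome extraction for a hypothetical weight-$4$ generator $X \otimes Z \otimes Z \otimes X$ on four data qubits:

```python
# Measure the stabilizer X Z Z X with a single ancilla: prepare the
# ancilla in |+>, apply the generator controlled on the ancilla, then
# measure the ancilla in the X basis.
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 1)   # qubits 0-3: data, qubit 4: ancilla
qc.h(4)                     # ancilla in |+>
qc.cx(4, 0)                 # controlled-X onto data qubit 0
qc.cz(4, 1)                 # controlled-Z onto data qubit 1
qc.cz(4, 2)                 # controlled-Z onto data qubit 2
qc.cx(4, 3)                 # controlled-X onto data qubit 3
qc.h(4)
qc.measure(4, 0)            # outcome 1 flags a violated stabilizer
```

The ancilla needs a two-qubit gate with every qubit in the generator's support, so the weight of the generator directly sets the connectivity you have to engineer.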

For instance, the $5$-qubit 'perfect' code is by no means perfect in this respect, because its generators have weight $4$. If you use just $1$ ancilla, that ancilla needs to be connected to all the data qubits.
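You can verify this directly from the standard cyclic generators of the $[[5,1,3]]$ code:

```python
# Check that every generator of the 5-qubit code has weight 4 and that
# together the generators touch all five data qubits.
GENERATORS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

weights = [sum(p != "I" for p in g) for g in GENERATORS]
support = {i for g in GENERATORS for i, p in enumerate(g) if p != "I"}

print(weights)          # [4, 4, 4, 4]
print(sorted(support))  # [0, 1, 2, 3, 4] -- the ancilla must reach all five
```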

You can check out the concept of LDPC (low-density parity-check) codes, a construction that addresses exactly this problem by keeping the stabilizer weights low.

This is by no means the only, or necessarily the most important, measure for a code, but I think it is easy to forget about during theoretical analysis.

Answered by JSdJ on December 1, 2020
