How to limit the error probability in large scale quantum computers

Quantum Computing Asked by midor on April 29, 2021

I am quite stumped by the fact that the roadmaps published by IBM, Google, IonQ, etc. seem to imply linear or even exponential growth in the size of their quantum computers.

Naively, I would assume that bigger systems are much harder to engineer, especially if one wants to be able to entangle all of their bits, because only the undisturbed outcome is correct. Given a fidelity where $P[0_i] = x$ means no error occurs on bit $i$ with probability $x$, it would seem to me that an entangled system of size $n$ would have probability $P[0_1, \ldots, 0_n] = x^n$ of remaining error-free. This success probability decays exponentially whenever $x < 1.0$, which is of course true for all practical systems.
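For instance, a quick back-of-the-envelope sketch (the per-qubit fidelity below is just an assumed illustrative value):

```python
# Sketch: probability that none of n qubits suffers an error, assuming
# independent per-qubit "no error" probability x (illustrative value).
x = 0.999
for n in (10, 100, 1000, 10000):
    p_ok = x ** n  # P[0_1, ..., 0_n] = x^n
    print(f"n = {n:5d}: error-free probability = {p_ok:.4f}")
```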

ECCs mitigate this to some extent, but only by a polynomial factor, and they require even more bits. It looks like building large systems of the same fidelity becomes exponentially harder, and just scaling an existing design will result in a system with exponentially lower fidelity. Most interesting algorithmic building blocks, like the QFT, require entanglement between all of the involved bits. More bits without the ability to produce larger entangled states seem to be of limited advantage, because that would be practically equivalent to using two separate computers or executing in sequence.

Optimism about improvements in basic gate fidelity and error correction aside, is there anything I am missing about errors in entangled states that gives rise to these predictions? To state this more precisely, is there any theoretical effect or algorithm that can be exploited to limit the error probability in entangled systems to be polynomial or at least subexponential in the size of the system?

One Answer

It is true that fidelity decays exponentially in the course of quantum computation. This is indeed a major limitation of NISQ computers that imposes a stringent "depth budget". In order to overcome the decay, we need gates with fidelity so close to one that the decay is negligible over the course of quantum algorithms we intend to run. As you anticipated in your question, this is achieved using quantum error correction.
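As a rough illustration of this depth budget, here is a sketch with assumed numbers; the gate fidelity and the cutoff below are not taken from any specific device:

```python
import math

# Sketch: crude "depth budget" estimate, assuming independent gate errors.
# Both numbers below are illustrative assumptions, not measured values.
gate_fidelity = 0.999   # assumed per-gate fidelity
target = 0.5            # assumed minimum acceptable overall circuit fidelity
max_gates = math.floor(math.log(target) / math.log(gate_fidelity))
print(max_gates)        # roughly 692 gates before fidelity drops below 0.5
```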

However, quantum error correction gives rise to exponential suppression of errors, not polynomial. Moreover, the asymptotic resource cost of achieving this is only polylogarithmic in the relevant parameters, such as the size of the computation and the inverse of the target failure probability. This enables the creation of logical gates with arbitrarily high fidelity and thus allows us to overcome the exponential decay of fidelity. In practice, a challenge in building a quantum computer is that while the asymptotic overhead of quantum error correction is modest, the constant factors are fairly high.

Quantitatively, for a given quantum error correcting code there exist constants $c_1$ and $c_2$ and a threshold error rate $\epsilon_T$ such that the logical error rate $\epsilon_L$ is

$$ \epsilon_L \approx c_1 \left(c_2\frac{\epsilon_P}{\epsilon_T}\right)^{\left\lfloor\frac{d+1}{2}\right\rfloor}\tag{1} $$

where $\epsilon_P$ is the physical error rate and $d$ is the code distance (cf. equation $(2)$ on p.11 in this paper). Importantly, the resources required to build and operate a quantum computer using a quantum error correcting code of distance $d$ are polynomial in $d$. For example, in the case of the Surface Code the number of physical qubits required to encode one logical qubit scales as $O(d^2)$. Thus, theoretically, if the physical error rate $\epsilon_P$ can be brought below the threshold, then arbitrarily high logical fidelity should be achievable.
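To see the exponential suppression numerically, here is a small sketch of equation $(1)$; the constants $c_1$, $c_2$ and the error rates below are placeholder values chosen only to illustrate the scaling:

```python
# Sketch of equation (1): logical error rate vs. code distance d.
# All constants and rates are illustrative placeholders, not measured values.
c1, c2 = 0.1, 1.0       # code-dependent constants (assumed)
eps_T = 1e-2            # threshold error rate (assumed)
eps_P = 1e-3            # physical error rate, below threshold (assumed)

for d in (3, 5, 7, 9, 11):
    eps_L = c1 * (c2 * eps_P / eps_T) ** ((d + 1) // 2)
    n_phys = d ** 2     # Surface Code: O(d^2) physical qubits per logical qubit
    print(f"d = {d:2d}: eps_L ~ {eps_L:.1e}, ~{n_phys} physical qubits per logical qubit")
```

Each increase of the distance by two multiplies the logical error rate by another factor of $c_2\epsilon_P/\epsilon_T$, while the qubit count grows only quadratically in $d$.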

For a concrete example, suppose we have a computer with $\epsilon_P < \epsilon_T$, we wish to run an algorithm that consists of $k$ gates on $q$ logical qubits, and we are willing to accept an overall failure probability no greater than $p_f > 0$. In order to meet these requirements, we need $\epsilon_L$ such that

$$ kq\epsilon_L \approx 1 - (1 - \epsilon_L)^{kq} < p_f. $$

Substituting from $(1)$ and carrying out the calculations, we find an upper bound on the necessary code distance

$$ \log kqc_1 + \left\lfloor\frac{d+1}{2}\right\rfloor \log \left(c_2\frac{\epsilon_P}{\epsilon_T}\right) < \log p_f $$

$$ d < \frac{2\log \frac{kqc_1}{p_f}}{\log \frac{\epsilon_T}{c_2\epsilon_P}} = O\left(\log \frac{kq}{p_f}\right). $$

We see that the asymptotic overhead of quantum error correction is very modest. For example, using the Surface Code the number of physical qubits needed to run an algorithm with the above parameters is $O\left(q\log^2 \frac{kq}{p_f}\right)$.
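Putting the pieces together, here is a sketch that estimates the code distance and physical-qubit count for an algorithm of $k$ gates on $q$ logical qubits; all constants and rates below are illustrative assumptions:

```python
import math

# Sketch: estimate the code distance and physical-qubit count needed to run
# an algorithm with k gates on q logical qubits at failure probability p_f.
# All constants and rates below are illustrative assumptions.
c1, c2 = 0.1, 1.0
eps_T, eps_P = 1e-2, 1e-3
k, q = 10**9, 100        # assumed gate count and number of logical qubits
p_f = 0.01               # assumed acceptable overall failure probability

# Upper bound on the necessary distance from the inequality above.
d = math.ceil(2 * math.log(k * q * c1 / p_f) / math.log(eps_T / (c2 * eps_P)))
d += 1 - d % 2           # round up to an odd distance, as is conventional
n_phys = q * d ** 2      # Surface Code overhead of roughly q * d^2 physical qubits
print(f"code distance d = {d}, physical qubits ~ {n_phys}")
```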

For an informative discussion of exponential suppression of errors in concatenated codes see section 10.6.1 in Nielsen & Chuang, especially the text around equation $(10.113)$ on page 481. For a recent experimental demonstration of exponential suppression of errors, see this paper (disclosure: I'm a co-author).

Correct answer by Adam Zalcman on April 29, 2021
