Asked by BlackHat18 on December 23, 2020
I am trying to understand the integration on page 4 of this paper. Consider a Haar random circuit $C$ and a fixed basis state $z$. Each output probability of a Haar random circuit (given by $|\langle z | C |0^{n} \rangle|^{2}$ for each fixed $z$) is distributed according to the Porter-Thomas distribution, given by
\begin{equation}
\text{PorterThomas}(x) = 2^{n} e^{-2^{n} x}.
\end{equation}
The paper claims that
\begin{equation}
\mathbb{E}[|\langle z | C |0^{n} \rangle|^{2}] = \int_{0}^{\infty} \frac{x}{2^{n}} \, x e^{-x} \,dx = \frac{2}{2^{n}}.
\end{equation}
However, I do not understand the integration at all. Shouldn’t the integration instead be
\begin{equation}
\mathbb{E}[|\langle z | C |0^{n} \rangle|^{2}] = \int_{0}^{\infty} x \, 2^{n} e^{-2^{n} x} \,dx = \frac{1}{2^{n}},
\end{equation}
where I am just using the definition of the expected value and plugging in the Porter-Thomas pdf. This gives a very different answer from the paper's.
Where are all the extra terms coming from and why is the answer $\frac{2}{2^{n}}$?
The issue that easily leads to confusion is the dual role played by the output bitstring probability. It enters the computation of the average in two ways. On one hand, it determines how often one sees different bitstrings. On the other hand, it determines the contribution each bitstring makes towards the average. In mathematical terms, the output bitstring probability affects both the probability measure of the random variable and its value.
To see this, consider the following example procedure that yields $\mathbb{E}[|\langle z | C |0^{n} \rangle|^{2}]$:

1. Run the circuit $C$ and measure in the computational basis to obtain a bitstring $z$.
2. Look up the output probability $|\langle z | C |0^{n} \rangle|^{2}$ of the bitstring obtained and add it to a running total.
3. Repeat many times and divide the total by the number of repetitions.
In step 1, the output bitstring probability affects the bitstrings you see - you see the more likely bitstrings more often. In step 2, it affects the value you add up in the computation of the average - the more likely bitstrings contribute more towards the average.
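To see the two roles numerically, here is a minimal Monte Carlo sketch (assuming NumPy; drawing exponentials and renormalizing is only a stand-in for a genuinely Haar random circuit, and the variable names are illustrative). It contrasts the average output probability over uniformly chosen bitstrings with the average over bitstrings sampled from the circuit's own output distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12          # number of qubits
N = 2 ** n      # number of bitstrings

# Stand-in for the ideal output probabilities of a Haar random circuit:
# exponential variables renormalized to sum to 1 are (approximately)
# Porter-Thomas distributed.
p = rng.exponential(scale=1.0 / N, size=N)
p /= p.sum()

# Average over bitstrings chosen uniformly at random: close to 1/N.
uniform_avg = p.mean()

# Average over bitstrings sampled from the output distribution itself,
# i.e. bitstring z is seen with probability p[z]: close to 2/N.
samples = rng.choice(N, size=200_000, p=p)
sampled_avg = p[samples].mean()

print(f"1/N = {1 / N:.3e}, uniform average = {uniform_avg:.3e}")
print(f"2/N = {2 / N:.3e}, sampled average = {sampled_avg:.3e}")
```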
We can make this reasoning more rigorous (following section IV C of the QS paper supplement). The fact that the distribution of output bitstring probabilities is Porter-Thomas means that the fraction of output bitstrings with probability in $[p, p+dp]$ is:
$$ \Pr(p) \, dp \approx 2^n e^{-2^n p} \, dp. $$
Since there are $2^n$ possible output bitstrings, the number of bitstrings with probability in $[p, p+dp]$ is
$$ N(p) \, dp \approx 4^n e^{-2^n p} \, dp. $$
Therefore, the probability that in step 1 above we see a bitstring whose probability lies in $[p, p+dp]$ is
$$ f(p) \, dp \approx p \, 4^n e^{-2^n p} \, dp. $$
Note that $f(p)$ is the probability density function for the output bitstring probability. Therefore, the average output bitstring probability is
$$ \mathbb{E}[|\langle z | C |0^{n} \rangle|^{2}] = \int_0^1 p \, f(p) \, dp \approx \int_0^1 p^2 \, 4^n e^{-2^n p} \, dp \approx \frac{2}{2^n} $$
as expected.
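If the last step is not obvious, one way to evaluate it is to extend the upper limit of integration to infinity (the error is exponentially small in $2^n$) and use the standard integral $\int_0^\infty p^2 e^{-ap} \, dp = 2/a^3$:

$$ \int_0^1 p^2 \, 4^n e^{-2^n p} \, dp \approx 4^n \int_0^\infty p^2 e^{-2^n p} \, dp = 4^n \cdot \frac{2}{(2^n)^3} = \frac{2}{2^n}. $$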
You may object that $f(p)$ defined above is not correctly normalized. This is due to the fact that the exponential formula is an approximate form of the Porter-Thomas distribution, which is in fact a Beta distribution:
$$ (2^n - 1) (1 - p)^{2^n - 2} \approx 2^n e^{-2^n p}. $$
In practice, this approximation is very good for $n$ above a dozen or so.
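As a quick side check (not part of the paper's argument), repeating the computation with the exact Beta form gives a density that is correctly normalized and an expectation that agrees with the approximate answer:

$$ f(p) = p \, 2^n (2^n - 1) (1 - p)^{2^n - 2}, \qquad \int_0^1 f(p) \, dp = 1, \qquad \int_0^1 p \, f(p) \, dp = \frac{2}{2^n + 1} \approx \frac{2}{2^n}. $$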
For completeness, note that if you were running the quantum circuits on a noisy quantum computer, the distribution in step 1 would be different, and the resulting average would be a number between $1/2^n$ and $2/2^n$ depending on the fidelity achieved in the experiment. This is the key idea behind linear cross-entropy benchmarking.
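Here is a minimal sketch of that last point (again assuming NumPy, plus the common global depolarizing model in which the noisy device samples from $F \, p_{\text{ideal}} + (1 - F)/2^n$; the estimator $2^n \langle p \rangle - 1$ is the usual linear XEB convention, and the sketch is an illustration rather than the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 12
N = 2 ** n
fidelity = 0.3  # assumed circuit fidelity F

# Stand-in for the ideal (Porter-Thomas-like) output distribution.
p_ideal = rng.exponential(scale=1.0 / N, size=N)
p_ideal /= p_ideal.sum()

# Depolarizing-style noise model: sample from F * p_ideal + (1 - F) * uniform.
p_noisy = fidelity * p_ideal + (1 - fidelity) / N

samples = rng.choice(N, size=200_000, p=p_noisy)
avg = p_ideal[samples].mean()  # lies between 1/N and 2/N

xeb_estimate = N * avg - 1     # linear XEB estimate of the fidelity
print(f"average ideal probability = {avg:.3e} (1/N = {1 / N:.3e}, 2/N = {2 / N:.3e})")
print(f"linear XEB fidelity estimate = {xeb_estimate:.3f} (true F = {fidelity})")
```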
Correct answer by Adam Zalcman on December 23, 2020