Asked by Nepomuk Hirsch on April 24, 2021
In the paper Quantum Supremacy through the Quantum Approximate Optimization Algorithm the authors claim (last sentence of page 15):
"If […] the QAOA outperforms all known classical algorithms then it will achieve Quantum Supremacy in an algorithmic sense."
To support this, they prove that no classical (stochastic) algorithm can produce, in polynomial time, an output distribution close to that of the QAOA (up to a small multiplicative error); in other words, sampling from the QAOA output distribution cannot be done efficiently on a classical computer.
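For concreteness, here is a minimal brute-force sketch (my own illustration, not the paper's construction) of what "the QAOA output distribution" means: a depth-1 QAOA circuit for MaxCut on an assumed toy triangle graph, with arbitrary angles gamma and beta, simulated exactly with NumPy. The hardness result concerns classically sampling from exactly this kind of distribution at scale.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]  # assumed toy instance: a triangle graph
n = 3
gamma, beta = 0.8, 0.4            # arbitrary illustrative QAOA angles

def cost(z):
    """Number of cut edges for the bit string encoded in integer z."""
    return sum(((z >> i) & 1) != ((z >> j) & 1) for i, j in edges)

dim = 2 ** n
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # uniform |+>^n state

# Phase separator e^{-i*gamma*C}: diagonal in the computational basis.
state *= np.exp(-1j * gamma * np.array([cost(z) for z in range(dim)]))

# Mixer: e^{-i*beta*X} applied to every qubit.
rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
               [-1j * np.sin(beta), np.cos(beta)]])
for q in range(n):
    psi = state.reshape([2] * n)
    psi = np.tensordot(rx, psi, axes=([1], [q]))  # act on qubit axis q
    state = np.moveaxis(psi, 0, q).reshape(dim)

# The "QAOA output distribution": measurement probabilities over bit
# strings, which a classical sampler would have to approximate.
probs = np.abs(state) ** 2
for z in range(dim):
    print(format(z, f"0{n}b"), round(probs[z], 3))
```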
Can someone explain how this point supports their supremacy claim?
After all, their proof leaves open the possibility that some (perhaps not yet known) classical algorithm produces a completely different output distribution that nevertheless assigns an even higher probability to the correct answer.
Of course, this also depends on the definition of quantum supremacy, but for me quantum supremacy would mean that no classical algorithm can exist at all that solves the problem better than a quantum algorithm. So is this simply a different understanding of the term, in the sense that supremacy is reached once all known classical algorithms are beaten?
"but for me quantum supremacy would mean that no classical algorithm can exist at all that solves the problem in a better way than a quantum algorithm."
If that were the case, then "quantum supremacy" would hardly exist at all.
Even Shor's algorithm, which factors numbers in polynomial time, would not be considered "superior" to classical computation, because it is still unknown whether or not classical computers can factor numbers in polynomial time.
But the word "quantum supremacy" should be avoided for exactly the confusion you're undergoing here. It's better to be specific about what you mean. Instead of using a buzz word like "quantum supremacy", you can just say "no known classical algorithm can do this calculation as efficiently". Or in the case of the Deutsch problem, where a perfect quantum computer needs to evaluate the blackbox function only once, and a classical computer would need to evalue it twice, then you can say "the quantum computer can solve the problem with a small number of evaluations of the blackbox which is impossible to match by a classical computer."
Correct answer by user1271772 on April 24, 2021
Demanding that quantum computational supremacy be declared only once we can prove that no classical algorithm can match the quantum algorithm's asymptotic speed is presently out of reach: not so much for the scientists and engineers, but for the complexity theorists.
For example, because $\mathsf{BQP}$ is contained in $\mathsf{PP}$ (provable with a Feynman-style path-sum argument), and because we do not yet know how to unconditionally separate, e.g., $\mathsf{P}$ from $\mathsf{PP}$, almost all proposals for proving quantum computational supremacy in the asymptotic sense must rest on assumptions about the limitations of classical algorithms. When speaking informally, these hypotheses are often left unstated. Indeed, the Farhi and Harrow paper states in the abstract that the results hold "based on reasonable complexity theoretic assumptions".
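For reference, these are the standard unconditional containments that make such a proof so hard (textbook complexity facts, not specific to this paper):

$$\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP} \subseteq \mathsf{PSPACE}.$$

In particular, an unconditional proof that $\mathsf{BQP} \neq \mathsf{P}$ would immediately imply $\mathsf{P} \neq \mathsf{PSPACE}$, a famous open problem; this is why supremacy arguments lean on plausible but unproven assumptions instead.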
I would argue that running Shor's algorithm to factor, say, RSA-2048 would be evidence of quantum computational supremacy, with the hidden assumption that there is no polynomial-time classical factoring algorithm.
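As a concrete illustration (my sketch, not a quantum implementation): the only quantum step in Shor's algorithm is finding the multiplicative order $r$ of a random $a$ modulo $N$; the rest is classical post-processing. Below, the order is found by exponential-time brute force, standing in for the quantum subroutine, so it only works for toy numbers, but the factor-extraction logic is the real one.

```python
from math import gcd
from random import randrange

def factor_via_order(N):
    """Factor N using the order-finding reduction behind Shor's algorithm.
    The order-finding step is brute-forced here (exponential time),
    standing in for the quantum subroutine."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g, N // g          # lucky draw: a shares a factor with N
        r, x = 1, a % N               # brute-force the order of a mod N
        while x != 1:
            x = x * a % N
            r += 1
        if r % 2 == 0:
            y = pow(a, r // 2, N)     # y^2 = 1 mod N, so y is a square root of 1...
            if y != N - 1:            # ...and if y != -1 mod N it is nontrivial
                p = gcd(y - 1, N)
                if 1 < p < N:
                    return p, N // p
        # otherwise retry with a fresh random a

print(factor_via_order(15))  # (3, 5) or (5, 3)
print(factor_via_order(21))  # (3, 7) or (7, 3)
```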
But the reason Shor's algorithm is not seen as an ideal near-term candidate for demonstrating quantum computational supremacy is not that we don't know whether a fast classical algorithm exists; rather, the overhead required for factoring is too large for now, absent fully scalable, fault-tolerant quantum computers.
Indeed, one of Preskill's motivations in starting the program of demonstrating quantum computational supremacy was to provide a goal achievable with near-term, noisy, intermediate-scale quantum (NISQ) devices. The QAOA approach in the Farhi-Harrow paper may be achievable on such a NISQ device; it nonetheless has to rely on reasonable complexity-theoretic assumptions.
Answered by Mark S on April 24, 2021