
What is the computational complexity of quantum annealing?

Quantum Computing Asked by Dr. Prasanna Date on March 4, 2021

Quantum annealing can be thought of as a black box solver that can find approximate solutions to hard optimization problems. For example, D-Wave quantum annealers can approximately solve quadratic unconstrained binary optimization (QUBO) problems, which are known to be NP-hard.
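For concreteness, a QUBO instance is defined by a matrix $Q$, and the goal is to find a binary vector $x$ that minimizes $x^{T} Q x$. The following is a minimal sketch of brute-force enumeration in plain NumPy (the random $Q$ is purely illustrative and not D-Wave-specific); it makes the exponential classical cost of an exact search explicit.

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.

    Enumerates all 2^N assignments, so the classical cost of an
    exact search grows exponentially with the problem size N.
    """
    N = Q.shape[0]
    best_x, best_energy = None, np.inf
    for bits in itertools.product([0, 1], repeat=N):
        x = np.array(bits)
        energy = x @ Q @ x  # QUBO objective: x^T Q x
        if energy < best_energy:
            best_x, best_energy = x, energy
    return best_x, best_energy

# Illustrative random instance (symmetric Q with entries in [-1, 1]).
rng = np.random.default_rng(0)
N = 10
Q = rng.uniform(-1, 1, size=(N, N))
Q = (Q + Q.T) / 2

x_opt, e_opt = solve_qubo_brute_force(Q)
print("optimal x:", x_opt, "energy:", e_opt)
```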

In this context, what is the computational complexity of quantum annealing? More specifically:

  1. If the size of my problem is $N$, how long will I need to run the quantum annealing process (as a function of $N$) to obtain the exact solution?

  2. Under an approximation algorithm / randomized algorithm setting, how many repetitions of quantum annealing will I have to perform to obtain an approximate solution that is within $\epsilon$ error of the exact solution (for example, $\epsilon = 0.1$ for a $10\%$ error)?

In my experience running problems on D-Wave hardware, I have noticed that a constant annealing time combined with a constant number of repetitions can find exact solutions for smaller problems. However, quantum annealing is not as accurate on larger problems.
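As an illustration of the kind of run described above, here is a hedged sketch using D-Wave's Ocean SDK with a fixed annealing time and a fixed number of reads. The toy QUBO and the parameter values are placeholders rather than recommendations, and running it requires access to a D-Wave solver with credentials configured.

```python
# Hedged sketch: submitting a QUBO to a D-Wave annealer with a fixed
# annealing time and a fixed number of reads (requires the
# dwave-ocean-sdk and D-Wave access; all values are illustrative).
from dwave.system import DWaveSampler, EmbeddingComposite

# QUBO as a dict {(i, j): coefficient}; a toy 3-variable instance.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample_qubo(
    Q,
    num_reads=100,        # constant number of repetitions
    annealing_time=20.0,  # constant anneal time, in microseconds
)

print(sampleset.first.sample, sampleset.first.energy)
```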

2 Answers

  1. For a problem of size $N$, the computational complexity of quantum annealing is $\mathcal{O}(e^{\sqrt{N}})$. This is better than simulated annealing, which has a complexity of $\mathcal{O}(e^{N})$. Both complexities are mentioned in this paper.

  2. More analysis needs to be done to answer this question quantitatively. Qualitatively, however, we can expect to need an exponential number of repetitions ($\mathcal{O}(e^{\sqrt{N}})$) to get within $\epsilon$-error; see the short numeric comparison below.
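To get a feel for how much the quoted $\mathcal{O}(e^{\sqrt{N}})$ scaling helps over $\mathcal{O}(e^{N})$, the small sketch below simply evaluates the two exponentials for a few problem sizes; constant factors and lower-order terms are ignored.

```python
# Minimal sketch comparing the two scaling laws quoted above
# (constant factors and lower-order terms are ignored).
import math

for N in [10, 25, 50, 100]:
    quantum = math.exp(math.sqrt(N))   # ~ e^sqrt(N)
    classical = math.exp(N)            # ~ e^N
    print(f"N={N:4d}  e^sqrt(N)={quantum:.3e}  e^N={classical:.3e}")
```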

Correct answer by Dr. Prasanna Date on March 4, 2021

Currently, it is not precisely known whether quantum annealers bring any significant speed-up. Consider a task with exponential complexity on a classical computer. If you run it on a quantum annealer, it will probably run faster. However, the reason is not reduced complexity (it is still exponential) but smaller constants in the function describing the task's running time.

Concerning the difficulties with reaching an optimum for large problems: this can be caused by noise, because increasing the task size increases the number of qubits involved, and as a result the level of noise also increases. Moreover, larger problems probably involve more entanglement. Since entangled systems are fragile, this again probably causes difficulties with reaching the optimum.

Answered by Martin Vesely on March 4, 2021
