Quantum Computing

Asked by KamWoh Ng on January 27, 2021
I am very new to quantum computing and am currently studying it on my own through various resources (Qiskit's YouTube videos, the Qiskit website, books).
As my mindset is still "locked" into classical computation, where inputs and outputs are certain and I can confirm my output given a certain input, there is some confusion preventing me from understanding quantum computing properly.
Many resources describe what state vectors, superposition, measurement, etc. are, and I understand what they mean (or maybe not fully). A state vector gives the probabilities of being 0 or 1. Superposition is a linear combination of two basis states ($|0\rangle$, $|1\rangle$ or $|+\rangle$, $|-\rangle$ or any orthogonal basis). Measurement gets the outcome of a qubit according to the qubit's probabilities.
However, what doesn't satisfy me or give me an "aha" moment is this: since quantum computing always gives an uncertain outcome after measurement (e.g., in this video, at about 2:40, the pong game shows that when a Hadamard gate is applied, the probabilities of plane 000 and plane 001 become half-half, causing a loss because the ball didn't hit the plane), why do we still want to use the uncertain outcome?
I understand (or maybe partially understand) that quantum computing is very fast, that we can compute many things at once (thanks to superposition or something like it), and that we can run an experiment many times to get the result. However, I don't think this resolves my confusion.
So my question is: why do we still need quantum computing if it always gives uncertain measurement outcomes?
Assuming you are measuring in the computational basis (Z basis), $\{|0\rangle, |1\rangle\}$, there is no randomness upon measurement in the following quantum circuit (you will always get back the state $|1\rangle$):
Thus, measurement need not be random. However, if you try the following circuit, you will have a 50% chance of seeing $|0\rangle$ and a 50% chance of seeing $|1\rangle$:
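Both circuits can be sketched with a minimal single-qubit statevector simulation in plain Python (a toy stand-in for what a Qiskit simulator would report; the helper names here are my own):

```python
import math

def apply(gate, state):
    """Apply a 2x2 gate (list of rows) to a statevector [a, b]."""
    a, b = state
    return [gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b]

X = [[0, 1], [1, 0]]                       # NOT gate
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

ket0 = [1, 0]                              # |0>

# Circuit 1: X|0> = |1>, so measuring in the Z basis always yields 1.
state1 = apply(X, ket0)
probs1 = [abs(amp) ** 2 for amp in state1]   # [0.0, 1.0] — deterministic

# Circuit 2: H|0> = |+>, so measuring yields 0 or 1 with probability 1/2 each.
state2 = apply(H, ket0)
probs2 = [abs(amp) ** 2 for amp in state2]   # [0.5, 0.5] — uniformly random
```

The Born-rule probabilities (squared amplitude magnitudes) make the difference explicit: the X circuit has all its weight on one outcome, while the H circuit splits it evenly.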
So your answer is not so deterministic here... When designing a quantum algorithm, we want to somehow create interference within the system so that, upon measurement, we have something like the first circuit, where the result is deterministic.
A state vector is a vector describing the state of the system. In quantum computing, your system is a qubit, hence a two-level quantum system. Thus, it can be described by a complex Euclidean space $\mathbb{C}^2$, and the state of a qubit, $|\psi\rangle$, can be written as $$ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle \hspace{1cm} \alpha, \beta \in \mathbb{C}, \hspace{0.5cm} |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \hspace{0.25cm} |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \hspace{1cm} |\alpha|^2 + |\beta|^2 = 1$$ And if you have an $n$-qubit state, it can be written as a normalized vector in $(\mathbb{C}^{2})^{\otimes n} \cong \mathbb{C}^{2^n}$.
So suppose you have the state $|\psi\rangle = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}$; then you can see that $|\psi\rangle$ can be written as a superposition (a linear combination) of the states $|0\rangle$ and $|1\rangle$ as $$ |\psi\rangle = \dfrac{1}{\sqrt{2}}|0\rangle + \dfrac{1}{\sqrt{2}}|1\rangle $$ Now, it is a postulate of quantum mechanics that any device that measures a two-state quantum system (a qubit) must have two preferred states $\{|e_1\rangle, |e_2\rangle\}$ that form an orthonormal basis for the associated vector space (here, $\mathbb{C}^2$). A measurement on the state $|\psi\rangle$ transforms $|\psi\rangle$ into one of these basis vectors, $|e_1\rangle$ or $|e_2\rangle$. The probability that the state $|\psi\rangle$ is measured as $|e_1\rangle$ or $|e_2\rangle$ is the square of the magnitude of the amplitude of the component of the state in the direction of that basis vector.
So if we pick $|e_1\rangle = |0\rangle$ and $|e_2\rangle = |1\rangle$, then upon measuring the state $|\psi\rangle = \dfrac{1}{\sqrt{2}}|0\rangle + \dfrac{1}{\sqrt{2}}|1\rangle$, you will have a 50% chance of observing the state $|0\rangle$ and a 50% chance of observing the state $|1\rangle$. In this case, a single measurement doesn't tell you anything... you need many, many measurements to build up a statistical distribution.
But the uncertainty described above arises only because we picked $|e_1\rangle$ and $|e_2\rangle$ the way we did. If we had picked $|e_1\rangle = |+\rangle = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}$ and $|e_2\rangle = |-\rangle = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}$, then upon measuring we would observe the state $|+\rangle$ with 100% probability (assuming no noise). There is no randomness here. This is because $|\psi\rangle$ is NOT in a superposition in the $\{|+\rangle, |-\rangle\}$ basis.
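The basis-dependence can be checked numerically: projecting the same state $|\psi\rangle$ onto the two bases gives 50/50 in one and 100/0 in the other. A minimal sketch (real amplitudes, so no complex conjugation is needed):

```python
import math

s = 1 / math.sqrt(2)
psi = [s, s]                       # |psi> = (|0> + |1>)/sqrt(2) = |+>

# Measurement probabilities in the Z basis {|0>, |1>}: uniformly random.
p_z = [abs(a) ** 2 for a in psi]   # [0.5, 0.5]

# Measurement probabilities in the X basis {|+>, |->}: deterministic.
plus = [s, s]
minus = [s, -s]
p_plus = abs(plus[0] * psi[0] + plus[1] * psi[1]) ** 2    # |<+|psi>|^2 = 1.0
p_minus = abs(minus[0] * psi[0] + minus[1] * psi[1]) ** 2  # |<-|psi>|^2 = 0.0
```

The same state yields either completely random or completely deterministic outcomes, depending only on which orthonormal basis the measurement device prefers.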
Therefore, the notion of superposition is basis-dependent. All states are superpositions with respect to some bases but not with respect to others. That is, a state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ is a superposition with respect to the computational basis $\{|0\rangle, |1\rangle\}$ but not with respect to the basis $\{\alpha|0\rangle + \beta|1\rangle, \beta^*|0\rangle - \alpha^*|1\rangle\}$.
Since measuring a superposition state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ is probabilistic, it is tempting to say that the state $|\psi\rangle$ is a probabilistic mixture of $|0\rangle$ and $|1\rangle$ and we just don't know which, when in fact $|\psi\rangle$ is a definite state. That is, if we measure $|\psi\rangle$ in certain bases, we will get a deterministic result.
Thus, when designing a quantum algorithm, we want the final state, which contains the answer we are looking for, to be a single eigenstate and not a superposition with respect to the computational basis (Z basis). For instance, the following circuit creates a state that is NOT in superposition in the computational basis...
So if you run the above circuit on a quantum computer, you only need to measure it ONCE to obtain your result... This circuit is constructed to solve a "secret bitstring" problem. In fact, the state upon measurement is $|111\rangle$, which tells you that the secret bitstring is 11.
... This is known as the Bernstein–Vazirani algorithm. I would recommend you read up on it, as it will help you understand where the advantage of quantum computation comes from.
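The effect of that circuit can be sketched classically using the phase-oracle view of Bernstein–Vazirani: after the first layer of Hadamards, the oracle flips the phase of each $|x\rangle$ by $(-1)^{s \cdot x}$, and the second layer of Hadamards maps the result exactly onto $|s\rangle$. A toy simulation for the 2-bit secret $s = 11$ from the example (this simulates the interference pattern, not a real quantum execution):

```python
import math

n = 2
s = 0b11            # the secret bitstring "11"
dim = 2 ** n

def dot(a, b):
    """Bitwise inner product a.b mod 2."""
    return bin(a & b).count("1") % 2

# Step 1: H^n on |0...0> gives the uniform superposition.
amp = [1 / math.sqrt(dim)] * dim

# Step 2: the phase oracle multiplies the amplitude of |x> by (-1)^(s.x).
amp = [a * (-1) ** dot(s, x) for x, a in enumerate(amp)]

# Step 3: H^n again; the paths interfere so that all weight lands on |s>.
out = [sum(amp[x] * (-1) ** dot(x, y) for x in range(dim)) / math.sqrt(dim)
       for y in range(dim)]

probs = [abs(a) ** 2 for a in out]
# probs[s] == 1.0: one measurement reveals the secret with certainty.
```

This is exactly the interference design mentioned above: wrong answers cancel, and the correct bitstring is the only outcome with nonzero amplitude.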
Answered by KAJ226 on January 27, 2021
The result is random if the resulting state is a superposition of basis states, but this is not always the case. Indeed, the idea of quantum computing is to exploit the phenomenon of interference, which is at the core of quantum mechanics. In a good quantum algorithm, the computational paths that lead to a wrong answer cancel out, while the paths leading to a correct answer reinforce each other.
A nice reference is:
https://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf
Answered by Michele Amoretti on January 27, 2021
Deterministic case
As the other answers point out, quantum circuits may in fact have deterministic or nearly deterministic output distributions. In those cases, the probability of obtaining some value of interest $v$ is very high and after a few executions of the algorithm we can be very confident what $v$ is.
Non-deterministic case
More generally however, non-deterministic computation (quantum or otherwise) can be inherently useful if an output probability is equal to a quantity of interest $v$. Let $p_0$ denote the probability that a non-deterministic computer outputs $0$ and $p_1$ that it outputs $1$. Suppose that we have found a non-deterministic algorithm with the property that $p_0 = v$. If we execute the algorithm $n$ times counting the number $k$ of cases where the computer returned $0$, then we can estimate $v$ as
$$ v = p_0 \approx \frac{k}{n}\tag{1} $$
and we can improve the quality of the estimate by increasing $n$. In fact, there are many classical probabilistic algorithms, such as Monte Carlo, that work this way.
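A toy sketch of estimate (1) with a made-up value $v = 0.3$, using a classical pseudorandom coin in place of the non-deterministic computer:

```python
import random

random.seed(0)     # reproducible runs

v = 0.3            # hypothetical quantity of interest, encoded as p_0 = v
n = 100_000        # number of executions of the "algorithm"

# Count the runs where the computer output 0 (each run succeeds w.p. v).
k = sum(1 for _ in range(n) if random.random() < v)

estimate = k / n   # approximates v; the error shrinks like 1/sqrt(n)
```

Increasing `n` tightens the estimate, at the usual Monte Carlo rate of $O(1/\sqrt{n})$.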
Example
A concrete example of a useful quantum algorithm with non-deterministic output is the HHL algorithm for solving linear systems. Suppose we would like to compute the first component $x_0$ of the solution to a very large linear system $Ax=b$. Combining the HHL algorithm with the Hadamard test as described in this answer, we can create a quantum circuit for which
$$ \mathrm{Re}\,x_0 = p_0 - p_1 \approx \frac{2k-n}{n} $$
where $k$ is the number of times the computer output $0$ among $n$ executions.
Answered by Adam Zalcman on January 27, 2021
Summary: Quantum computing is believed to give uncertain measurements fast.
If you delve a little bit into computational theory, you'll find that there are problems that cannot be solved exactly and efficiently with traditional computational tools. However, for some of these problems, if we have a solution we can check if it is correct efficiently with a classical computer. For some of these problems we already use probabilistic or approximation approaches to get "close-enough" solutions.
As some of the other answers mention, there are some affordances in the way we construct quantum circuits that kind of gloss over the noise inherent in the individual qubits. But, with all due respect, the real promise of QC is that we can get an answer quickly that would be hard to find by classical means, and then check its correctness using a classical computer.
For example, Shor's algorithm can be naively thought of as a random number generator that is likely to output factors of large numbers. The best known classical factoring algorithm, the general number field sieve, runs in sub-exponential time, roughly $O\!\left(\exp\left(\left(\sqrt[3]{64/9} + o(1)\right)(\ln N)^{1/3}(\ln\ln N)^{2/3}\right)\right)$ (see https://en.wikipedia.org/wiki/Integer_factorization), but Shor's algorithm can give an answer in $O(b^3)$ for a $b$-bit number. Of course, we have to check whether the answer is correct, but that's easy: it's just multiplication.
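The cheap classical check is a single division per candidate. A sketch, with hand-picked numbers standing in for what a (hypothetical) Shor run might propose:

```python
def is_factor(p, N):
    """Verify a candidate factor classically: one comparison and one division."""
    return 1 < p < N and N % p == 0

N = 21
# Candidates as if proposed by a quantum factoring run (hand-picked here).
assert is_factor(3, N) and is_factor(7, N)   # accepted: 3 * 7 == 21
assert not is_factor(4, N)                   # rejected: 4 does not divide 21
```

So even if the quantum output were wrong most of the time, we could keep re-running until a candidate passes this trivial test.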
Answered by Will on January 27, 2021