
Quantum circuits are used to explain algorithms; why aren't classical circuits?

Quantum Computing Asked on March 19, 2021

When explaining a quantum algorithm, many revert to 'circuit-speak' by drawing a diagram of how qubits flow through transformations and measurements. However, rarely, if ever, would someone explaining a classical math algorithm revert to its representation as a binary circuit. I understand that this may be because such transforms don't exist in the binary world, but:

Doesn’t this unnecessary focus on the computational details relating to computing-computing, rather than the mathematical/statistical/optimization problem that the circuitry merely only underlies, detract from the main problem/application at hand? Is the classical mindset just that intuitive and aligned to general human thought, that quantum circuits, on the other hand, will remain to be a standard explanation strategy?

4 Answers

You might find this analogy helpful: the development of quantum algorithms is still in the Booth's multiplication algorithm stage; we haven't quite reached dynamic programming or backtracking. You'll find that most textbooks explain Booth's algorithm using the following circuit.

[Figure: multiplier circuit]

That is, in fact, how the multiplication logic is implemented in most modern processors (with some minor modifications depending on the version). However, this kind of representation quickly becomes tedious when you move on to algorithmic techniques like looping and recursion, which may involve multiple multiplication and division steps, among others. It would be crazy for textbooks to explain more advanced algorithms using hardware-level implementations like this, not to mention that the basic hardware circuitry varies with the processor. If you've ever done assembly language programming, this should resonate.
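For contrast, here is roughly what the same procedure looks like one level up, as software rather than as a circuit. The following is a minimal Python sketch of Booth's algorithm; the function name, argument order, and fixed bit width are illustrative choices of mine, not from any particular textbook.

def booth_multiply(m, r, bits):
    # Booth's algorithm on `bits`-bit two's-complement inputs.
    # The whole state is one integer laid out as [A | Q | Q_-1]:
    # A is the accumulator (high `bits` bits), Q holds the multiplier,
    # and Q_-1 is the single extra low bit Booth's method requires.
    mask = (1 << bits) - 1
    total_bits = 2 * bits + 1
    total_mask = (1 << total_bits) - 1
    m &= mask                      # encode the multiplicand
    acc = (r & mask) << 1          # A = 0, Q = r, Q_-1 = 0

    for _ in range(bits):
        q0, q_minus1 = (acc >> 1) & 1, acc & 1
        if (q0, q_minus1) == (1, 0):     # A = A - M
            acc = (acc - (m << (bits + 1))) & total_mask
        elif (q0, q_minus1) == (0, 1):   # A = A + M
            acc = (acc + (m << (bits + 1))) & total_mask
        sign = acc >> (total_bits - 1)   # arithmetic right shift of [A|Q|Q_-1]
        acc = (acc >> 1) | (sign << (total_bits - 1))

    product = (acc >> 1) & ((1 << (2 * bits)) - 1)   # result sits in [A|Q]
    if product >> (2 * bits - 1):                    # reinterpret as signed
        product -= 1 << (2 * bits)
    return product

print(booth_multiply(3, -5, bits=8))   # -15

Notice that even this version already hides the clocking and register-transfer detail that the circuit diagram above makes explicit; a CLRS-style pseudo-code description would hide the bit manipulation too.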

Classical algorithm textbooks like CLRS avoid this problem by framing the algorithms without any particular processor in mind. Basic algorithmic procedures like addition, multiplication, and looping are all treated as black boxes. If you're interested in seeing the processor-specific implementation of a CLRS algorithm, you could certainly write it up in some high-level language like C and then convert it to assembly. Fortunately, compilers do that tedious conversion on our behalf!

Now the interesting part is that the basic building blocks of quantum algorithms are not addition or multiplication as such, but rather operations like Fourier transform and amplitude amplification. Quantum algorithms are largely framed in terms of these basic transformations that are very easy to visualize using quantum circuits (at least, if we're using the gate model). It's really much more about convenience and much less about intuition.
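To make that concrete, the Fourier transform building block can be stated without drawing a single gate. Below is a minimal numpy sketch (qft_matrix is my own helper, not a standard library function) that builds the QFT unitary directly as a matrix:

import numpy as np

def qft_matrix(n_qubits):
    # The QFT on n qubits is the unitary F[j, k] = omega**(j*k) / sqrt(N)
    # with N = 2**n and omega = exp(2*pi*i / N).
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F @ F.conj().T, np.eye(8)))   # True: F is unitary

A QFT circuit diagram is, in effect, a factorization of this matrix into Hadamards and controlled phase rotations: the diagram and the matrix carry the same information at different levels of abstraction.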

Rest assured that if a textbook ever states a generalized quantum equivalent of Dijkstra's algorithm, it won't show you all the gates required to implement it, but will instead frame the algorithm in terms of elementary quantum operations whose hardware implementations vary widely depending on the quantum processor you're using. The bottom line is that we're still in the assembly language stage of quantum computing.

Correct answer by Sanchayan Dutta on March 19, 2021

Quantum computing technology is still in its infancy, so implementation details are generally important when considering quantum algorithms. The number of gates, the number of operations, and the types of gates (e.g. Clifford vs. non-Clifford) are often necessary information for evaluating the feasibility and value of a quantum algorithm.

In many cases quantum algorithms are still being optimized, and there are often competing approaches with different trade-offs being considered and iterated. As a result, even publications describing very complex algorithms often include circuit diagrams implementing novel functions to improve efficiency (e.g. Fig. 1: controlled SELECT).

The quantum circuit model is also one of the more intuitive ways to depict quantum computations. Quantum circuits are a restricted form of tensor networks (see e.g. here), which are often used more broadly in both physics and classical computing (particularly in machine learning).
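As a small illustration of that correspondence, here is a hedged numpy sketch (the index conventions are my own) that treats each gate as a tensor and runs a two-qubit circuit as a single einsum contraction, preparing a Bell state from |00>:

import numpy as np

zero = np.array([1.0, 0.0])                 # |0> as a rank-1 tensor
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],              # reshaped below to a rank-4
                 [0, 1, 0, 0],              # tensor with index order
                 [0, 0, 0, 1],              # (out_ctrl, out_tgt,
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)  # in_ctrl, in_tgt)

# Contract the network: |psi> = CNOT (H tensor I) |0>|0>
psi = np.einsum("abcd,ce,e,d->ab", CNOT, H, zero, zero).reshape(4)
print(psi)   # [0.7071, 0, 0, 0.7071]: the Bell state (|00> + |11>)/sqrt(2)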

Microsoft seems to be one of the leaders in developing the kind of abstraction of quantum computation you're referring to, embodied in Q#. However, effective abstraction is not always straightforward or necessarily more intuitive (see e.g. here).

Answered by Jonathan Trousdale on March 19, 2021

In classical computing, both circuit diagrams and pseudo-code are used to explain algorithms, and the choice between them depends on the context. If the goal is to explain a highly optimized implementation of an algorithm on an FPGA, a circuit diagram is probably more suitable; for example, see this paper on an AES implementation on FPGA. Pedagogical explanations of AES, by contrast, use pseudo-code.

Similarly, in quantum computing, if one wants to explain a highly optimized implementation of a modular adder, one resorts to showing circuit diagrams. Papers focused on higher-level quantum algorithms frequently contain no quantum circuit diagrams and use pseudo-code instead. A good example of such a paper is Quantum algorithm for linear systems of equations. If you look through the papers referenced at the Quantum Algorithm Zoo, you will find many that have no circuit diagrams in them.

It seems that many people get the impression that 'circuit-speak' is ubiquitous because quantum computing is taught from the ground up: quantum circuits are one of the first concepts many people encounter when learning quantum computing.

Answered by Kliuchnikov Vadym on March 19, 2021

There are no classical registers in quantum computing

In classical computers you can have a well-defined "current state at a given time" (stored notably in CPU registers and DRAM in modern systems), and this state changes over time (on each CPU clock cycle) in a controlled way.

Therefore, it is easier to map a sequential description of an algorithm back to real classical hardware. For example, a classical algorithm might be described sequentially as:

a = b + c
d = 2 * a

and in a classical computer this might actually be implemented in two separate steps:

  • a CPU clock happens
  • one ADD instruction that stores the intermediate result to a register that represents a
  • a CPU clock happens
  • one MUL instruction that stores the final result to a register that represents d
  • a CPU clock happens
  • ...

In quantum computing, however, you cannot save the "intermediate state of a computation" and operate on it in a later step: you set up the inputs and the circuit, and information flows in a single indivisible step to the sensor device at the end of the circuit, which makes a probabilistic reading.

Therefore, unless we are treating quantum circuits as black boxes between classical registers, sequential algorithm descriptions don't make much sense.

It is this fact that makes quantum computers much harder to program.

So a more useful description of quantum computing looks like the combinational logic blocks (i.e. blocks with no registers and therefore no state) in hardware description languages such as Verilog and VHDL, which are just textual descriptions of a graph of circuits.

For example, in a Verilog combinational block, when you say:

a = b + c

it doesn't mean "on the next clock cycle of the algorithm, the register a will hold b + c", as it would in, say, C or Python.

It rather means:

  • a is a wire
  • b is a wire
  • c is a wire
  • + is an adding circuit with b and c as inputs and a as its output

Therefore, as soon as b or c changes, a also "immediately" changes. "Immediately" is in quotes because, in practice, electrons do take some time to move, so we can't make the clock period smaller than this propagation time.
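A rough software analogy of that combinational behavior (a toy Python sketch; the Wire class is my own invention, not part of any HDL toolchain): the value of a is recomputed from its drivers on every read, with no register in between.

class Wire:
    # A toy combinational wire: its value is derived from its driver
    # expression every time it is read, so it tracks its inputs with
    # no clock and no stored state.
    def __init__(self, driver):
        self.driver = driver          # zero-argument function

    @property
    def value(self):
        return self.driver()

inputs = {"b": 2, "c": 3}
a = Wire(lambda: inputs["b"] + inputs["c"])
print(a.value)    # 5
inputs["b"] = 10  # as soon as b changes...
print(a.value)    # ...a already reads 13; no clock edge was needed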

A "propagation time" analogue is also present in quantum computers, where each experiment takes some time to finish, and the faster that time, the faster you can rerun the experiment to reduce uncertitude of the result.

Of course, for any maximum input size, you could make one huge combinational circuit that implements the algorithm. But in classical computing we don't do that, because silicon is expensive to design and produce, so it is much more economical to design a circuit that solves a wide range of problems than a huge specialized circuit, even if each individual problem is solved a bit more slowly.

In quantum computers, you don't have that choice. Unless you can use a divide-and-conquer-style algorithm to generate smaller subproblems (which generally implies a problem in P, which might not be so interesting for a quantum computer), you simply need a minimum number of qubits and gates for each given algorithm.

Answered by Ciro Santilli TRUMP BAN IS BAD on March 19, 2021
