Why do the qasm_simulator runtimes vary significantly for different IBMQ CouplingMap/NoiseModels?

Quantum Computing Asked on May 2, 2021

I have been running VQE experiments locally using the qasm_simulator with actual IBMQBackend NoiseModels, as shown in the Qiskit Textbook. I have noticed that the simulations for certain backends (e.g. ibmq_vigo – 5 qubits, ibmq_rome – 5 qubits) run significantly faster than for others, which seem to run indefinitely and never finish executing (e.g. ibmq_manhattan – 65 qubits, ibmq_montreal – 27 qubits).

Since I am running the same VQE experiments on all 4 NoiseModels, I do not understand why this extreme variation in simulation runtime exists.

How is this explained? And if it is not immediately resolvable: in the case that the desired VQE variational form exceeds 5 qubits (i.e. the number of qubits on ibmq_vigo), e.g. VQE on $\text{BeH}_2$, would using the NoiseModel of ibmq_vigo in tandem with the coupling map of, say, ibmq_manhattan result in faster execution?

One Answer

It could be that the qubits involved are not neighbors on the device, so the circuit requires a lot of SWAPs through intermediate qubits to execute certain operations. This could mean that you are trying to simulate a very large problem. I would suggest that you transpile the circuit onto each device's qubit layout and see what the resulting circuit looks like, for example as sketched below.
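
For example, here is a minimal sketch (the GHZ-style test circuit is just a placeholder; substitute your actual VQE ansatz) that compares the transpiled circuit across two of the devices you mention:

from qiskit import IBMQ, QuantumCircuit, transpile

# Placeholder 5-qubit test circuit; use your VQE ansatz instead.
qc = QuantumCircuit(5)
qc.h(0)
for i in range(4):
    qc.cx(i, i + 1)
qc.measure_all()

IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')

# The same circuit can transpile to very different depths and gate
# counts depending on the device's coupling map.
for name in ["ibmq_vigo", "ibmq_montreal"]:
    device = provider.get_backend(name)
    tqc = transpile(qc, backend=device, optimization_level=1)
    print(name, "depth:", tqc.depth(), "ops:", tqc.count_ops())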


On a side note, looking at the code from the link you provided, I see this:

# Imports implied by the snippet (Qiskit / qiskit-ignis as of early 2021;
# QuantumInstance lived in qiskit.aqua in older versions):
from qiskit import Aer, IBMQ
from qiskit.providers.aer.noise import NoiseModel
from qiskit.ignis.mitigation.measurement import CompleteMeasFitter
from qiskit.utils import QuantumInstance

IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = Aer.get_backend("qasm_simulator")        # local noisy simulator
device = provider.get_backend("ibmq_vigo")         # real device to mimic
coupling_map = device.configuration().coupling_map
noise_model = NoiseModel.from_backend(device.properties())
quantum_instance = QuantumInstance(backend=backend,
                                   shots=1000,
                                   noise_model=noise_model,
                                   coupling_map=coupling_map,
                                   measurement_error_mitigation_cls=CompleteMeasFitter,
                                   cals_matrix_refresh_period=30)

I just want to point out that performing measurement error mitigation with CompleteMeasFitter can be costly and can slow down your simulation if you are using a lot of qubits. This is because Qiskit is calibrating a $2^n \times 2^n$ matrix ($n$ is the number of qubits you are using) to mitigate the measurement error.

For the case of Manhattan, assuming that you are using all of the qubits, this is a $2^{65} \times 2^{65}$ matrix! It would take years (maybe?).
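
Just to make the scaling concrete (a minimal sketch; the qubit counts are arbitrary examples), you can count the calibration circuits that a complete calibration requires using complete_meas_cal from qiskit-ignis:

from qiskit.ignis.mitigation.measurement import complete_meas_cal

# A complete calibration prepares one circuit per basis state: 2^n circuits.
for n in [3, 5, 10]:
    cal_circuits, state_labels = complete_meas_cal(qubit_list=list(range(n)))
    print(n, "qubits ->", len(cal_circuits), "calibration circuits")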

As for why we need to create a $2^n \times 2^n$ matrix: it is because you are preparing $2^n$ basis states and measuring each of them. Say you have $3$ qubits and you want to understand how much error is happening in your measurement; what you do is prepare the $2^3$ computational basis states:

$$|000\rangle, |001\rangle, \cdots, |111\rangle$$

and make measurements on them. Measuring the first basis state will give you some probability distribution like:

$$\begin{pmatrix} 0.80 \\ 0.02 \\ 0.04 \\ 0.01 \\ 0.03 \\ 0.05 \\ 0.04 \\ 0.01 \end{pmatrix}$$

You do this for the other 7 basis states, and hence you form a $2^3 \times 2^3$ matrix, say $M$. Then, by applying $M^{-1}$ to the probability vector you generate by measuring some state $|\psi\rangle$, you can recover (to some extent) the result without the readout error.
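
To make that concrete, here is a small illustrative sketch in NumPy (the 2-qubit numbers are made up, not from any real device): each measured distribution becomes a column of $M$, and mitigation amounts to, roughly, applying $M^{-1}$:

import numpy as np

# Hypothetical 2-qubit calibration: column j is the measured distribution
# when preparing basis state j (|00>, |01>, |10>, |11>).
M = np.array([[0.90, 0.05, 0.06, 0.01],
              [0.04, 0.88, 0.01, 0.05],
              [0.05, 0.01, 0.89, 0.04],
              [0.01, 0.06, 0.04, 0.90]])

# Noisy measurement outcome of some state |psi> (made-up numbers).
p_noisy = np.array([0.52, 0.08, 0.07, 0.33])

# Mitigated distribution. In practice Qiskit also offers a constrained
# least-squares fit rather than a bare inverse, since M^{-1} p can
# contain small negative entries, but the idea is the same.
p_mitigated = np.linalg.inv(M) @ p_noisy
print(p_mitigated)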

As you can see, this is a very expensive process. And that is why your simulation is taking forever.

Answered by KAJ226 on May 2, 2021
