Quantum Computing Asked on May 18, 2021
I've been running VQE experiments locally using both the statevector_simulator and the qasm_simulator, and I've noticed that the runtimes for the qasm_simulator are significantly longer than those of the statevector_simulator. According to this answer, the statevector_simulator produces ideal/deterministic results, whereas the qasm_simulator produces non-ideal/probabilistic results. However, I don't see why the simulation times should differ so significantly (more than $100\times$) when shots=1 for the qasm_simulator, even if a NoiseModel or measurement_error_mitigation_cls=CompleteMeasFitter is used.
Is this in fact expected, and how is it explained?
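For reference, a minimal sketch of the kind of comparison I mean (assuming Qiskit ~0.25 with Aer and qiskit.algorithms available; the operator and ansatz here are toy placeholders, not the ones from my actual experiments):

    import time
    from qiskit import Aer
    from qiskit.utils import QuantumInstance
    from qiskit.algorithms import VQE
    from qiskit.algorithms.optimizers import COBYLA
    from qiskit.circuit.library import TwoLocal
    from qiskit.opflow import I, Z

    hamiltonian = (Z ^ Z) + (I ^ Z)           # toy 2-qubit operator
    ansatz = TwoLocal(2, 'ry', 'cz', reps=1)  # toy ansatz
    optimizer = COBYLA(maxiter=50)

    # Run the same VQE on both backends and compare wall-clock times
    for name in ['statevector_simulator', 'qasm_simulator']:
        qi = QuantumInstance(Aer.get_backend(name), shots=1)
        vqe = VQE(ansatz=ansatz, optimizer=optimizer, quantum_instance=qi)
        t0 = time.time()
        result = vqe.compute_minimum_eigenvalue(hamiltonian)
        print(f'{name}: E = {result.eigenvalue.real:.4f}, '
              f'time = {time.time() - t0:.1f}s')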
Note that if you use the qasm_simulator with shots=1, the result doesn't tell you anything: a single shot returns one sampled bitstring, not an estimate of an expectation value. (Unless IBM automatically switches your simulator to the statevector_simulator when you specify shots=1 on the qasm_simulator.)
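To see how little information a single shot carries, here is a small illustration (a sketch assuming Qiskit with Aer installed; the circuit is just a Bell state, unrelated to any particular VQE ansatz):

    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure([0, 1], [0, 1])

    backend = Aer.get_backend('qasm_simulator')
    counts = execute(qc, backend, shots=1).result().get_counts()
    print(counts)  # one sampled bitstring, e.g. {'00': 1} or {'11': 1};
                   # no expectation value can be estimated from this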
Now, in terms of adding a noise model and doing measurement error mitigation: to do this, you need to generate the $2^n$ basis states and measure each of them to build a calibration matrix. This is an expensive procedure, since you have many more quantum circuits to execute, and each circuit requires a certain number of shots, up to 8192. On the statevector_simulator you don't need to do any of this, as there is no noise.
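For instance, with the Ignis-style complete measurement calibration (a sketch assuming qiskit-ignis is installed; the register size n here is arbitrary), the number of extra calibration circuits grows as $2^n$:

    from qiskit import QuantumRegister
    from qiskit.ignis.mitigation.measurement import complete_meas_cal

    n = 3
    qr = QuantumRegister(n)
    meas_calibs, state_labels = complete_meas_cal(qr=qr, circlabel='mcal')
    print(len(meas_calibs))  # 2**n = 8 calibration circuits, each executed
                             # with its own batch of shots (up to 8192)

All of these circuits are executed on top of the circuits your VQE already needs, which is where much of the extra runtime comes from.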
Answered by KAJ226 on May 18, 2021