
Quantify the probability of guessing the Hamiltonian?

Quantum Computing Asked on December 28, 2020

Background

Let's say I know my experimentalist friend has been measuring the eigenvalues of a physical system. I can see the $M$ measurements are noted on a sheet of paper, and I assume the dimension of the Hamiltonian to be $K$. I also note that the eigenvalues have dimensions of energy. I see non-unique eigenvalues and (randomly) guess that an eigen-operator $\hat O$ was measured in between the measurements, i.e., a measurement of the operator $\hat O$ was done after every eigenenergy measurement. I would like to guess the Hamiltonian of the system.

Question

Now, my question is: how does one quantify the probability that a reasonable guess of the Hamiltonian (any strategy is allowed, including the one given below), of a particular dimension $K$ and given $M$ measurements, is correct? I would prefer that degenerate Hamiltonians be included in the calculation (if possible).

Why it’s a difficult problem

This is only meant to show the probabilistic nature of the problem. We will think of this primarily in terms of eigen-energies; any other formulation will require another layer of probability. Now:

Let us write a variable Hamiltonian $H_j$, where $j$ is a counting index adopted to avoid redundancy. From the eigenvalue equation for energy:

$$ H_j |\lambda_i \rangle = \tilde\lambda_i |\lambda_i \rangle$$

From the spectral theorem I can reconstruct a particular Hamiltonian:

$$ H_j = \sum_{i} \tilde\lambda_i |\lambda_i \rangle \langle \lambda_i| $$
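
As a concrete illustration of this spectral reconstruction, here is a minimal NumPy sketch; the dimension $K = 3$, the eigenvalues, and the random eigenbasis are placeholders chosen for the example, not data from the question.

```python
import numpy as np

# Placeholder spectrum and orthonormal eigenbasis for K = 3 (arbitrary choices).
eigvals = np.array([1.0, 2.0, 3.0])                # tilde-lambda_i
eigvecs = np.linalg.qr(np.random.randn(3, 3))[0]   # columns are |lambda_i>

# Spectral theorem: H_j = sum_i tilde-lambda_i |lambda_i><lambda_i|
H = sum(lam * np.outer(v, v) for lam, v in zip(eigvals, eigvecs.T))

# Check the eigenvalue equation H_j |lambda_i> = tilde-lambda_i |lambda_i>.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(H @ v, lam * v)
```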

Now, if we include degeneracies in the argument, after measuring $\tilde\lambda_\alpha$ one can conclude:

$$ \frac{\partial}{\partial \tilde\lambda_\alpha} H_j = \sum_{\kappa} |\lambda_\kappa \rangle \langle \lambda_\kappa| $$
How does one conclude this? When one measures a particular eigenvalue and assumes a degenerate $H_j$:

$$ \tilde\lambda_\alpha \to \sum_{\kappa}^{M} |\lambda_\kappa \rangle$$

Note: all degeneracies obey:

$$H_j |\lambda_\kappa \rangle = \tilde\lambda_\alpha |\lambda_\kappa \rangle $$
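
As a sanity check on the degenerate case, here is a small NumPy sketch (the dimension, eigenvalues, and basis are arbitrary assumptions for illustration) showing that the derivative of $H_j$ with respect to a doubly degenerate $\tilde\lambda_\alpha$ is the projector $\sum_\kappa |\lambda_\kappa \rangle \langle \lambda_\kappa|$ onto the degenerate subspace, and that every ket in that subspace satisfies the relation above.

```python
import numpy as np

# Arbitrary 3-dimensional example: an orthonormal basis in which v1 and v2
# share the doubly degenerate eigenvalue tilde-lambda_alpha.
basis = np.linalg.qr(np.random.randn(3, 3))[0]
v1, v2, v3 = basis.T

def build_H(lam_alpha, lam_other=5.0):
    """Spectral construction with tilde-lambda_alpha doubly degenerate."""
    return (lam_alpha * (np.outer(v1, v1) + np.outer(v2, v2))
            + lam_other * np.outer(v3, v3))

# d(H_j)/d(tilde-lambda_alpha) via finite differences ...
eps = 1e-6
dH = (build_H(2.0 + eps) - build_H(2.0)) / eps

# ... equals the projector onto the degenerate eigenspace.
projector = np.outer(v1, v1) + np.outer(v2, v2)
assert np.allclose(dH, projector)

# Every ket in the degenerate subspace obeys H_j |k> = tilde-lambda_alpha |k>.
H = build_H(2.0)
assert np.allclose(H @ v1, 2.0 * v1) and np.allclose(H @ v2, 2.0 * v2)
```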


Non-uniqueness: given the same eigenvalue $\tilde\lambda_\alpha$, can one distinguish $H_j$ from $H_\delta = H_j - |\tilde\lambda_\alpha \rangle \langle \tilde\lambda_\alpha |$?


One Answer

This is my attempt. Let's say the list looks like:

$\lambda_1$, $\lambda_2$, $\lambda_1$, $\lambda_3$, $\dots$, $\lambda_n$

where the $\lambda_i$ are numbers. The variables (unknowns) are the Hamiltonian and the energy eigenvectors.

We start with the following tricks:

  1. We assume $M$ (the number of measurements) is a large number. This enables us to say that the observed distribution $\{ M \}$ is the most probable distribution*.
  2. By *most probable* we mean that the empirical expectation value is the most probable expectation value to be measured. For example, the expectation is: $$ \langle H \rangle = \frac{m_1 \lambda_1 + m_2 \lambda_2 + \dots + m_n \lambda_n}{M}$$
  3. Here $m_k$ counts the frequency of a particular eigenvalue $\lambda_k$, i.e. the number of times $\lambda_k$ appears in the list.
  4. Obviously intermediary measurements are being done, otherwise the same energy eigenvalue would repeat throughout the list. We will assume the observable used (to change the state) was $\hat O$, with kets $|o_i \rangle$. Then the probability of the eigenvalue $\lambda_k$ arriving through that particular state (with no degeneracy) $| \lambda_k \rangle$ is $$ P_i( \lambda_k ) = | \langle o_i | \lambda_k \rangle |^2 = \frac{m_k}{M}$$ Do note: the eigenvector is essentially not known.
  5. We use the following trick (from statistical mechanics), the multinomial theorem: $$ (a_1 + a_2 + a_3 + \dots + a_n )^M = \sum \frac{M!}{b_1! b_2! \dots b_n!} {a_1}^{b_1} {a_2}^{b_2} \dots {a_n}^{b_n}$$ where the sum runs over all non-negative integers $b_1, \dots, b_n$ with $b_1 + b_2 + \dots + b_n = M$.
  6. Now, choosing $b_j \to m_j$ and $a_j \to P_i(\lambda_j)$, we should get the coefficient of the probability of obtaining our particular distribution, which is: $$ P_i(\{M\}) = \frac{M!}{m_1! m_2! \dots m_n!} $$
  7. Using point ($1$), the distribution obtained is the most probable distribution. Hence, by varying the eigenkets, we claim the correct eigenkets are those which maximize the expression below: $$ P_i(\{ M \}) = \max P_i(\{ M' \}) \implies \frac{M!}{m_1! m_2! \dots m_n!} = \max \frac{M!}{\prod_k \left( M | \langle o_i | \lambda_k \rangle |^2 \right)!} $$ Taking the log and maximising will work as well.
  8. It should be possible to talk about a mix of eigenkets such as $| o_k \rangle$, $| o_i \rangle$, etc. (using density matrices). It should also be possible to consider variations of degenerate eigenvalues (in a similar style to that shown in the question). Again, one would have to consider which outcome maximizes the probability.
  9. Once one has the eigenkets, one can use the spectral theorem to write the Hamiltonian (a minimal numerical sketch of points 2-9 follows below).
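
Below is a minimal sketch of how points 2-9 could be turned into a calculation. The recorded list, the ket $|o_i \rangle$, and the crude random search over candidate eigenbases are illustrative assumptions, not part of the prescription above; the quantity maximised here is the full multinomial probability (the coefficient of point 6 times $\prod_k P_i(\lambda_k)^{m_k}$), i.e. the standard likelihood, rather than the coefficient alone.

```python
import numpy as np
from collections import Counter
from math import lgamma

# Hypothetical measurement record (stands in for the sheet of paper).
record = [1.0, 2.0, 1.0, 3.0, 2.0, 1.0, 2.0, 2.0]
M = len(record)

# Point 3: frequency m_k of each distinct eigenvalue lambda_k.
counts = Counter(record)
lams = sorted(counts)
m = np.array([counts[lam] for lam in lams])

# Point 2: empirical expectation <H> = (m_1 lambda_1 + ... + m_n lambda_n) / M.
expectation = float(np.dot(m, lams)) / M

# Points 5-7: log of the multinomial probability of the observed frequencies,
# given overlaps p_k = |<o_i|lambda_k>|^2 (logs avoid huge factorials).
def log_multinomial(p, m):
    total = m.sum()
    return (lgamma(total + 1) - sum(lgamma(mk + 1) for mk in m)
            + float(np.dot(m, np.log(p))))

# Point 7: vary the eigenkets and keep the basis that maximises the probability.
# Here the "variation" is a crude random search over orthonormal candidate bases.
o_i = np.array([1.0, 0.0, 0.0])        # assumed ket of the intermediate observable
best_score, best_basis = -np.inf, None
for _ in range(2000):
    basis = np.linalg.qr(np.random.randn(3, 3))[0]   # candidate |lambda_k> columns
    p = (basis.T @ o_i) ** 2 + 1e-12                 # |<o_i|lambda_k>|^2
    score = log_multinomial(p, m)
    if score > best_score:
        best_score, best_basis = score, basis

# Point 9: spectral theorem with the winning eigenkets gives the guessed Hamiltonian.
H_guess = sum(lam * np.outer(v, v) for lam, v in zip(lams, best_basis.T))
print("empirical <H> =", expectation)
print("guessed H =\n", H_guess)
```

The random search is only for illustration: since this likelihood is maximised at $P_i(\lambda_k) = m_k / M$, one could equally pick any basis whose overlaps with $|o_i \rangle$ reproduce those observed frequencies.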

Answered by More Anonymous on December 28, 2020
