
Pauli equation: Hermitian adjoint when deriving the probability density

Asked by Norton on March 25, 2021

When trying to derive the probability density from the Pauli equation, I face a problem.
Starting from the Pauli equation
$$ i\hbar \frac{\partial \Psi}{\partial t} = \hat H_0 \Psi + \mu_B\, \hat\sigma \cdot \mathbf{B}\, \Psi, $$
I need to take its Hermitian adjoint:
$$ -i\hbar \frac{\partial \Psi^+}{\partial t} = \hat H_0^* \Psi^+ + \mu_B \left( \hat\sigma \cdot \mathbf{B}\, \Psi \right)^+. $$
So I am trying to work out the product in brackets, using the property of Hermitian conjugation $(AB)^+ = B^+ A^+$:
$$
\left( \hat\sigma \cdot \mathbf{B}\, \Psi \right)^+ \equiv
\bigg( \left( \hat\sigma \cdot \mathbf{B} \right) \Psi \bigg)^+ =
\Psi^+ \left( \hat\sigma \cdot \mathbf{B} \right)^+ =
\Psi^+ \mathbf{B}^+ \hat\sigma^+ = \Psi^+ \mathbf{B}^T \hat\sigma.
$$

Here I have used the facts that the magnetic field is real $(\mathbf{B}^+ = \mathbf{B}^T)$ and that the Pauli matrices are Hermitian $(\hat\sigma^+ = \hat\sigma)$.

However, the book (Greiner, Quantum Mechanics: An Introduction) gives a different answer:
$$ \left( \hat\sigma \cdot \mathbf{B}\, \Psi \right)^+ =
\Psi^+ \hat\sigma \cdot \mathbf{B}. $$

Where is the mistake? Thanks in advance.

P.S. I understand that $\hat\sigma$ is an operator, and so it must act on some function, but… I still don't see my mistake.

2 Answers

Your textbook is right. In fact, the last step of your third equation is not even a mistake: you omitted the dot, so your answer, as it stands, is meaningless.

What you need to understand is the Hermitian 2×2 matrix
$$ \left( \hat\sigma \cdot \mathbf{B} \right) = \left( \hat\sigma \cdot \mathbf{B} \right)^\dagger; $$
it is the sum of the three Pauli matrices with real coefficients, namely the components of the real magnetic field.

It is therefore a perfectly good Hermitian piece of the Hamiltonian, and its Hermitian conjugate is just itself. You may think of the vector of the three Pauli matrices and the magnetic field vector, but the two are dotted, so you have a scalar, which is insensitive to transposition. The only Hermitian conjugation involved is that of each and every Pauli matrix, so the book's answer is trivially right. Try an explicit example.

Correct answer by Cosmas Zachos on March 25, 2021
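
For concreteness, the explicit example suggested above works out as follows with the standard Pauli matrices and real $B_x$, $B_y$, $B_z$:
$$
\hat\sigma \cdot \mathbf{B} = B_x \hat\sigma_x + B_y \hat\sigma_y + B_z \hat\sigma_z =
\begin{pmatrix} B_z & B_x - iB_y \\ B_x + iB_y & -B_z \end{pmatrix},
$$
and taking the conjugate transpose of this single 2×2 matrix manifestly returns the same matrix.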

There is no difference between your result $$\Psi^+ \mathbf{B}^T \hat{\sigma}$$ and Greiner's result $$\Psi^+ \hat{\sigma} \cdot \mathbf{B}.$$ Both evaluate to $$\Psi^+ \left( \hat\sigma_x B_x + \hat\sigma_y B_y + \hat\sigma_z B_z \right).$$ Remember that the $B_j$ are just real numbers ($1\times 1$ matrices), so it is pointless to distinguish between $\mathbf{B}$, $\mathbf{B}^+$ and $\mathbf{B}^T$ (see also the quick numerical check below).

Answered by Thomas Fritsch on March 25, 2021
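
Both observations are easy to verify numerically. Here is a minimal sketch in Python/NumPy; the field components and the spinor are arbitrary values chosen only for illustration:

import numpy as np

# Standard Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Arbitrary real magnetic field and arbitrary spinor (illustrative values only)
B = np.array([0.3, -1.2, 0.7])
Psi = np.array([1.0 + 2.0j, -0.5j])

# The dotted combination sigma . B is a single Hermitian 2x2 matrix
sigma_dot_B = B[0] * sx + B[1] * sy + B[2] * sz
assert np.allclose(sigma_dot_B.conj().T, sigma_dot_B)

# Greiner's identity: ( (sigma . B) Psi )^+ = Psi^+ (sigma . B)
lhs = (sigma_dot_B @ Psi).conj()   # adjoint of the column vector (sigma . B) Psi
rhs = Psi.conj() @ sigma_dot_B     # row vector Psi^+ times sigma . B
assert np.allclose(lhs, rhs)

print("sigma . B is Hermitian and the adjoint identity holds")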
