
Why is the proof that the product of a linear operator with its adjoint is Hermitian always given in terms of the inner product?

Asked by diatomicDisaster on December 30, 2020

The proof that $LL^\dagger$ is Hermitian, where $L$ is a linear (but not necessarily Hermitian) operator, is usually given in terms of the inner product, i.e.:

\begin{align}
\langle \phi | LL^\dagger | \psi \rangle &= \langle \psi | (LL^\dagger)^\dagger | \phi \rangle^* \\
&= \langle \psi | (L^\dagger)^\dagger L^\dagger | \phi \rangle^* \\
&= \langle \psi | L L^\dagger | \phi \rangle^*.
\end{align}

Why is it not sufficient to write, simply:

\begin{align}
(LL^\dagger)^\dagger = (L^\dagger)^\dagger L^\dagger = LL^\dagger~?
\end{align}
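As a finite-dimensional sanity check of the claim itself, here is a minimal NumPy sketch; the random matrix standing in for $L$ and the conjugate transpose standing in for $\dagger$ are illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex matrix standing in for the (not necessarily Hermitian) operator L.
L = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# In finite dimensions the adjoint is the conjugate transpose.
LLdag = L @ L.conj().T

# (L L^dagger)^dagger equals L L^dagger, i.e. the product is Hermitian.
print(np.allclose(LLdag.conj().T, LLdag))  # True
```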

2 Answers

I think both ways are fine. But you have to keep in mind that by writing $$ (LL^\dagger)^\dagger = (L^\dagger)^\dagger L^\dagger = LL^\dagger $$ you implicitly mean what you wrote in your first version. The definition of the adjoint operator, and with it the definitions of self-adjoint and Hermitian, are based on a scalar product (in our case the $L^2$ scalar product). Since we are talking about operators, it is a bit more subtle than with matrices: these operators are maps between Hilbert spaces, and we have to keep that in mind.
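For reference, the $L^2$ scalar product meant here is (written for wavefunctions of a single coordinate) $$ \langle \phi, \psi \rangle = \int \phi(x)^* \, \psi(x) \, \mathrm{d}x, $$ and the adjoint $L^\dagger$ is defined relative to exactly this pairing.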

For example, sometimes people make a distinction between self-adjoint and Hermitian that lies not in the formula $A = A^\dagger$ but in the domains of the operators. In that case it makes sense to use both versions.

An operator $A$ is called Hermitian when the domain of $A$ is part of the domain of $A^\dagger$, so $$ \langle \chi, A\psi \rangle = \langle A\chi, \psi \rangle \quad \forall\, \psi, \chi \in \mathcal{D}(A), \quad \mathcal{D}(A) \subseteq \mathcal{D}(A^\dagger). $$ Since $\psi$ and $\chi$ are only from the domain of $A$, it makes sense to specify them here. And an operator $A$ is called self-adjoint when the domain of $A^\dagger$ is the same as the domain of $A$, so $$ A^\dagger = A, \quad \mathcal{D}(A) = \mathcal{D}(A^\dagger). $$ Since there is no difference between the two domains, it is not necessary to explicitly write the vectors down.
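A standard illustration of this distinction, in the half-line setting: take $A = -i\,\frac{d}{dx}$ on $L^2([0,\infty))$ with domain $\mathcal{D}(A) = \{\psi : \psi, \psi' \in L^2,\ \psi(0) = 0\}$. Integration by parts gives $$ \langle \chi, A\psi \rangle - \langle A\chi, \psi \rangle = -i\,\big[\chi^* \psi\big]_0^\infty = 0 \quad \forall\, \psi, \chi \in \mathcal{D}(A), $$ so $A$ is Hermitian; but $A^\dagger$ acts on functions with no boundary condition at $x = 0$, so $\mathcal{D}(A) \subsetneq \mathcal{D}(A^\dagger)$ and $A$ is not self-adjoint.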

Unfortunately, I only have a German physics book at hand that makes this distinction (Gernot Münster, Quantentheorie). The German Wikipedia page on self-adjoint operators also mentions it and cites Walter Rudin's functional analysis book, but I don't have that one, so I can't check it. I will look for another functional analysis book later and mention it here if I find anything.

Answered by daveh on December 30, 2020

When you write $$ (L^\dagger L)^\dagger = L^\dagger (L^\dagger)^\dagger = L^\dagger L $$ you are implicitly using two facts: that $(AB)^\dagger = B^\dagger A^\dagger$ and that $(A^\dagger)^\dagger = A$.

If you already accept those facts then the second version is fine. But if you want to prove those two facts you will need to consider the inner product.

You must consider the inner product because the adjoint is only defined in relation to a particular inner product. The adjoint of an operator $A$ is defined as the operator $A^\dagger$ (if it exists) such that $$ \langle A\chi, \psi \rangle = \langle \chi, A^\dagger \psi \rangle $$ for all $\chi, \psi$. The inner product is integral to the definition, and any proof of general properties of the adjoint must use this definition.
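In fact, both facts used above follow in one line each from this definition together with the conjugate symmetry $\langle \phi, \psi \rangle = \langle \psi, \phi \rangle^*$ (ignoring domain subtleties): $$ \langle \chi, (AB)^\dagger \psi \rangle = \langle (AB)\chi, \psi \rangle = \langle B\chi, A^\dagger \psi \rangle = \langle \chi, B^\dagger A^\dagger \psi \rangle \;\Rightarrow\; (AB)^\dagger = B^\dagger A^\dagger, $$ and $$ \langle \chi, (A^\dagger)^\dagger \psi \rangle = \langle A^\dagger \chi, \psi \rangle = \langle \psi, A^\dagger \chi \rangle^* = \langle A\psi, \chi \rangle^* = \langle \chi, A\psi \rangle \;\Rightarrow\; (A^\dagger)^\dagger = A. $$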

One way of seeing that the adjoint really does depend on the choice of inner product is to note that the same operator may be self-adjoint with respect to one inner product but not with respect to a different one!

For example, consider two inner products on $\mathbb{R}^2$. The first is $\langle y, x \rangle_1 = y^T x$ and the second is $\langle y, x \rangle_2 = y^T \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} x$. Then consider the operator represented by the matrix $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. You can check that $\langle y, Ax \rangle_1 = \langle Ay, x \rangle_1$ but $\langle y, Ax \rangle_2 \ne \langle Ay, x \rangle_2$.
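A quick numerical check of this (a minimal sketch; the helper names `ip1`, `ip2` and the test vectors are just for illustration):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # the swap operator from above
M = np.array([[1.0, 0.0],
              [0.0, 2.0]])   # weight matrix defining the second inner product

def ip1(y, x):
    return y @ x             # <y, x>_1 = y^T x

def ip2(y, x):
    return y @ M @ x         # <y, x>_2 = y^T M x

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(np.isclose(ip1(y, A @ x), ip1(A @ y, x)))   # True:  A is self-adjoint w.r.t. <.,.>_1
print(np.isclose(ip2(y, A @ x), ip2(A @ y, x)))   # False: A is not self-adjoint w.r.t. <.,.>_2

# With respect to <.,.>_2 the adjoint of A is instead M^{-1} A^T M:
A_adj2 = np.linalg.inv(M) @ A.T @ M
print(np.isclose(ip2(y, A @ x), ip2(A_adj2 @ y, x)))  # True
```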

Answered by Luke Pritchett on December 30, 2020
