
Hilbert space decomposition into irreps

Physics Asked by Timsey on April 13, 2021

I’m currently following a course in representation theory for physicists, and I’m rather confused about irreps, and how they relate to states in Hilbert spaces.

First what I think I know:

If a representation $D: G \rightarrow L(V)$ of a group $G$ is reducible, then there exists a proper subspace of $V$ such that for all $g$ in $G$ the action of $D(g)$ on any vector in the subspace stays in that subspace: the subspace is invariant under the transformations induced by $D$.

Irreducible means not reducible: my interpretation is that an irrep is a representation restricted to its invariant subspace. In other words, an irrep $R$ acts only on the subspace that it leaves invariant. Is this a correct view?

Now, my confusion is the following:

Say we have a system invariant under the symmetries of a group $G$. If this group is finite then any rep $D$ of $G$ can be fully decomposed into irreps $R_i$. We could write any $D(g)$ as the following block diagonal matrix:

$$D(g) = \left( \begin{array}{cccc}
R_1(g) & & & \\
& R_2(g) & & \\
& & \ddots & \\
& & & R_n(g)
\end{array} \right)$$

I suppose the basis of this matrix is formed by vectors in the respective subspaces that are left invariant by $R_i(g)$ for all $g \in G$, but here is where I'm not clear on the meaning of it all. How does such a matrix transform states in the Hilbert space, when the Hilbert space is infinite-dimensional and this rep $D$ isn't?

I've found a book that gives an example of parity symmetry, using $Z_2 = \{ e, p \}$.

The Hilbert space of any parity invariant system can be decomposed into states that behave like irreducible representations.

So we can choose a basis of Hilbert space consisting of such states, which I suppose would form the basis of the matrix $D(g)$ above? Then the Hilbert space is the direct sum of all these invariant subspaces? In the case of parity there exist two irreps: the trivial one (symmetric) and the one that maps $p$ to $-1$ (anti-symmetric). I suppose this is also a choice of basis, but in this basis $D(g)$ is $2$-dimensional, so I don't understand how this could possibly work on the entire Hilbert space.

I apologize for the barrage of questions, but I honestly can’t see the forest for the trees anymore.

One Answer

Your understanding of reducible and irreducible representations is a little bit muddled. Let me try to clarify this a bit:

  • A reducible representation $D:G\to \text{GL}(V)$ is one that has a nontrivial invariant subspace $W$. That is, there exists a nonzero proper subspace $W<V$ such that for all $g\in G$ and all $w\in W$, the action $D(g)w\in W$ remains in the subspace.

  • By contrast, an irreducible representation is one where no such subspace exists. That is, for any nonzero proper subspace $W$, there exist a $g\in G$ and a $w\in W$ such that $D(g)w \notin W$.

After that, the main source of your confusion, I think, is the fact that the invariant subspaces do not need to be finite-dimensional. This is why formulations of the type $$D(g) = \left( \begin{array}{cccc} R_1(g) & & & \\ & R_2(g) & & \\ & & \ddots & \\ & & & R_n(g) \end{array} \right)$$ can be rather misleading. It is indeed possible to construct finite direct sums of vectors and of operators which are infinite-dimensional, and to represent them graphically using matrices; it's a little bit involved but I think it will help clarify the issue.

Consider, then, a vector space $V$ which is the direct sum of its subspaces $W_1,\ldots,W_n\leq V$. By definition, this means that for every $v\in V$ there exist unique vectors $w_j\in W_j$ such that $v=\sum_j w_j$. It is possible, in this case, to represent $v$ using the notation $$v = \left( \begin{array}{c} w_1 \\ \vdots \\ w_n \end{array} \right).$$ However, it is important to note that the $w_j$ are not numbers; instead, they are vectors in as-yet-unspecified vector spaces $W_j$. Moreover, these could indeed be infinite-dimensional. (Indeed, if $V$ is infinite-dimensional then at least one of the $W_j$ needs to be.)
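As a minimal numerical sketch of this "column of vectors" notation (with hypothetical, finite dimensions chosen purely for illustration): take $V=\mathbb R^5$ with $W_1$ the span of the first two coordinates and $W_2$ the span of the last three. The unique components $w_1,w_2$ play the role of the entries in the column.

```python
import numpy as np

# Sketch: V = R^5 as the direct sum of W1 (first 2 coordinates) and
# W2 (last 3 coordinates). The "column of vectors" stores the unique
# components w_1, w_2 rather than individual numbers.
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w1, w2 = v[:2], v[2:]   # unique decomposition of v into its W1 and W2 parts

# Stacking the components recovers v exactly, i.e. v = w1 (+) w2.
assert np.allclose(np.concatenate([w1, w2]), v)
```

The same bookkeeping works unchanged if the components are themselves large (or, in the abstract setting, infinite-dimensional) objects.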

Linear transformations $T:V\to V$ can be treated similarly. For any $w_j\in W_j$, $T(w_j)$ is in $V$, which means that it can be decomposed as $T(w_j)=w'_1+\ldots+w'_n$, with each $w'_j\in W_j$. These new vectors are unique for each $w_j$, which means that they are functions of it, and it's easy to show that the dependence is linear. This allows us to get new sub-functions $T_{kj}:W_j\to W_k$, which have the property that for every $w_j\in W_j$ $$ T(w_j)=\sum_k T_{kj}(w_j). $$ This then extends, by linearity, to the action of $T$ on a general vector $v=\sum_j w_j\in V$, which is then written $$ T(v)=\sum_{k,j} T_{kj}(w_j). $$ With this machinery in place, you can represent $T$ as a matrix, $$T = \left( \begin{array}{cccc} T_{11} & T_{12} & \cdots & T_{1n} \\ T_{21} & T_{22} & \cdots & T_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ T_{n1} & T_{n2} & \cdots & T_{nn} \end{array} \right).$$ The advantage of this notation is that the matrix-vector product works perfectly: $$T\, v = \left( \begin{array}{cccc} T_{11} & T_{12} & \cdots & T_{1n} \\ T_{21} & T_{22} & \cdots & T_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ T_{n1} & T_{n2} & \cdots & T_{nn} \end{array} \right)\left( \begin{array}{c} w_1 \\ \vdots \\ w_n \end{array} \right).$$
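A quick numerical check of this block structure (with hypothetical block sizes, $\dim W_1=2$ and $\dim W_2=3$, chosen only for illustration): storing $T$ as a grid of sub-maps $T_{kj}:W_j\to W_k$ and applying $T(v)=\sum_{k,j}T_{kj}(w_j)$ reproduces the ordinary matrix-vector product.

```python
import numpy as np

# Sketch: V = W1 (+) W2 with dim W1 = 2, dim W2 = 3. T is stored as a
# 2x2 grid of sub-maps T_kj : W_j -> W_k (here random matrices).
rng = np.random.default_rng(0)
T11, T12 = rng.normal(size=(2, 2)), rng.normal(size=(2, 3))
T21, T22 = rng.normal(size=(3, 2)), rng.normal(size=(3, 3))

w1, w2 = rng.normal(size=2), rng.normal(size=3)

# Block-wise action: the W1 and W2 components of T(v) = sum_kj T_kj(w_j).
out1 = T11 @ w1 + T12 @ w2
out2 = T21 @ w1 + T22 @ w2

# Assembling the full matrix from the blocks gives the same answer.
T = np.block([[T11, T12], [T21, T22]])
v = np.concatenate([w1, w2])
assert np.allclose(T @ v, np.concatenate([out1, out2]))
```

Nothing in the computation depends on the blocks being small; each $T_{kj}$ could just as well be an operator between infinite-dimensional spaces.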

So why have I gone to such lengths to define matrices? The important thing is that the submatrices need not be finite-dimensional.

To bring things down to something more concrete, consider the specific case of parity on $L_2(\mathbb R)$. Here $L_2$ (dropping the $\mathbb R$) splits into an even and an odd component, $$L_2=L_2^+\oplus L_2^-,$$ which is just the statement that every function $f$ can be seen as the sum of its even and odd parts $f_+$ and $f_-$, or in column-vector notation $$f=\begin{pmatrix}f_+\\f_-\end{pmatrix}.$$

Similarly, the parity operator splits into two constant parts, the identity $\mathbb I:L_2^+\to L_2^+$ on even functions, and minus the identity on odd functions, $-\mathbb I:L_2^-\to L_2^-$. In matrix notation, $$ P=\begin{pmatrix}\mathbb I&0\\0&-\mathbb I\end{pmatrix}, $$ and $$ Pf=\begin{pmatrix}\mathbb I&0\\0&-\mathbb I\end{pmatrix} \begin{pmatrix}f_+\\f_-\end{pmatrix} =\begin{pmatrix}f_+\\-f_-\end{pmatrix}. $$ As before, the individual subrepresentations $R_j(g)=\pm\mathbb I$ are infinite-dimensional operators, and the fact that $D(g)$ is written as a matrix with finitely many rows and columns does not imply that it is finite-dimensional. This aspect of the discussion can get dropped from textbooks (and is never very prominent to begin with), so it's perfectly understandable to be confused about it.
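This even/odd splitting can be seen numerically on a discretized stand-in for $L_2(\mathbb R)$ (a symmetric sample grid and an arbitrarily chosen test function, both purely illustrative): parity reverses the grid, and it acts as $+1$ on the even part $f_+(x)=\tfrac12(f(x)+f(-x))$ and $-1$ on the odd part $f_-(x)=\tfrac12(f(x)-f(-x))$.

```python
import numpy as np

# Sketch: approximate functions on R by samples on a grid symmetric
# about 0, so that reversing the array implements f(x) -> f(-x).
x = np.linspace(-5, 5, 101)
f = np.exp(-x**2) + 0.3 * x          # even Gaussian plus an odd linear part

f_plus = 0.5 * (f + f[::-1])         # even part  f_+(x) = (f(x) + f(-x)) / 2
f_minus = 0.5 * (f - f[::-1])        # odd part   f_-(x) = (f(x) - f(-x)) / 2
assert np.allclose(f_plus + f_minus, f)   # f = f_+ + f_-, uniquely

Pf = f[::-1]                         # parity P: reverse the grid
# P acts block-diagonally: +I on the even part, -I on the odd part.
assert np.allclose(Pf, f_plus - f_minus)
```

Each of the two blocks here is the (discretized) identity on a function space, not a single number, which is exactly the point about the submatrices being infinite-dimensional.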

I hope this helps clarify the issue but let me know if it doesn't.

Correct answer by Emilio Pisanty on April 13, 2021
