
Dimension of Dirac $\gamma$ matrices

Asked by Editortoise-Composerpent on February 13, 2021

While studying the Dirac equation, I came across this enigmatic passage on p. 551 in From Classical to Quantum Mechanics by G. Esposito, G. Marmo, G. Sudarshan regarding the $\gamma$ matrices:

$$\tag{16.1.2} (\gamma^0)^2 = I, \qquad (\gamma^j)^2 = -I \quad (j=1,2,3) $$
$$\tag{16.1.3} \gamma^0\gamma^j + \gamma^j \gamma^0 = 0 $$
$$\tag{16.1.4} \gamma^j \gamma^k + \gamma^k \gamma^j = 0, \qquad j\neq k$$
In looking for solutions of these equations in terms of matrices, one finds that they must have as order a multiple of 4, and that there exists a solution of order 4.

Obviously the word order here means dimension. In my QM classes the lecturer referenced chapter 5 of Advanced Quantum Mechanics by F. Schwabl, especially as regards the dimension of the Dirac $\gamma$ matrices. However, it is only stated there that, since the number of positive and negative eigenvalues of $\alpha^k$ and $\beta$ must be equal, $n$ is even. Moreover, $n=2$ is not sufficient, so $n=4$ is the smallest possible dimension in which it is possible to realize the desired algebraic structure.
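
For reference, the relations (16.1.2)-(16.1.4) are indeed satisfied in dimension 4. Here is a quick numerical check of this (my own addition, not from either book), assuming the standard Dirac representation $\gamma^0 = \mathrm{diag}(I,-I)$, $\gamma^j = \begin{pmatrix} 0 & \sigma^j \\ -\sigma^j & 0 \end{pmatrix}$:

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# standard Dirac representation: gamma^0 = diag(I, -I), gamma^j = [[0, s_j], [-s_j, 0]]
gammas = [np.block([[I2, Z2], [Z2, -I2]])] + \
         [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

I4 = np.eye(4)
assert np.allclose(gammas[0] @ gammas[0], I4)            # (16.1.2): (gamma^0)^2 = I
for j in (1, 2, 3):
    assert np.allclose(gammas[j] @ gammas[j], -I4)       # (16.1.2): (gamma^j)^2 = -I
for mu in range(4):
    for nu in range(mu + 1, 4):                          # (16.1.3), (16.1.4): anticommutation
        assert np.allclose(gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu], 0)
print("Relations (16.1.2)-(16.1.4) hold for the standard 4x4 representation.")
```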

While I understand that the smallest dimension is 4, I fail to find any argument that rules out the possibility that $n=6$ could also be a solution. I also checked this Phys.SE post, but I didn’t find it helpful at all.

Can anyone help me?

4 Answers

Let us generalize from four space-time dimensions to a $d$-dimensional Clifford algebra $C$. Define

$$ p~:=~\left[\frac{d}{2}\right], \tag{1}$$

where $[\cdot]$ denotes the integer part. OP's question then becomes

Why must the dimension $n$ of a finite dimensional representation $V$ be a multiple of $2^p$?

Proof:

  1. If $C\subseteq {\rm End}(V)$ and $V$ are both real, we may complexify, so we may from now on assume that they are both complex. Then the signature of $C$ is irrelevant, and hence we might as well assume positive signature. In other words, we assume that we are given $n\times n$ matrices $\gamma_{1}, \ldots, \gamma_{d}$, that satisfy $$ \{\gamma_{\mu}, \gamma_{\nu}\}_+~=~2\delta_{\mu\nu}{\bf 1}, \qquad \mu,\nu~\in~\{1,\ldots, d\}.\tag{2} $$

  2. We may define $$ \gamma_{\mu\nu}~:=~ \frac{1}{2}[\gamma_{\mu}, \gamma_{\nu}]_- ~=~-\gamma_{\nu\mu}, \qquad \mu,\nu~\in~\{1,\ldots, d\}. \tag{3}$$ In particular, define $p$ elements $$ H_1, \ldots, H_p,\tag{4} $$ as $$ H_r ~:=~i\gamma_{r,p+r}, \qquad r~\in~\{1,\ldots, p\}.\tag{5} $$

  3. Note that the elements $H_1,\ldots, H_p$, (and $\gamma_d$ if $d$ is odd), are a set of mutually commuting involutions $$ [H_r,H_s]_- ~=~0, \qquad r,s~\in~\{1,\ldots, p\},\tag{6} $$ $$ H_r^2 ~=~{\bf 1}, \qquad r~\in~\{1,\ldots, p\}.\tag{7} $$

  4. Therefore, by Lie's theorem, $H_1,\ldots, H_p$ (and $\gamma_d$ if $d$ is odd) must have a common eigenvector $v$.

  5. Since $H_1,\ldots, H_p$ are involutions, their eigenvalues are $\pm 1$. In other words, $$H_1 v~=~(-1)^{j_1} v, \quad \ldots, \quad H_p v~=~(-1)^{j_p} v,\tag{8} $$ where $$ j_1,\ldots, j_p~\in ~\{0,1\} \tag{9}$$ are either zero or one.

  6. Next apply the first $p$ gamma matrices $$ \gamma_{1}, \gamma_{2}, \ldots, \gamma_{p}, \tag{10} $$ to the common eigenvector $v$, forming $$ v_{(k_1,\ldots, k_p)}~:=~ \gamma_{1}^{k_1}\gamma_{2}^{k_2}\cdots\gamma_{p}^{k_p} v, \tag{11} $$ where the indices $$ k_1,\ldots, k_p~\in ~\{0,1\} \tag{12} $$ are either zero or one.

  7. Next note that $$ [H_r,\gamma_s]_-~=~0 \quad \text{if}\quad r~\neq~ s \mod p, \tag{13} $$ and $$ \{H_r,\gamma_r\}_+~=~0. \tag{14} $$ It is straightforward to check that the $2^p$ vectors $v_{(k_1,\ldots, k_p)}$ are also common eigenvectors for $H_1,\ldots, H_p$. In detail, $$ H_r v_{(k_1,\ldots, k_p)}~=~(-1)^{k_r+j_r}v_{(k_1,\ldots, k_p)}.\tag{15}$$

  8. Note that each eigenvector $v_{(k_1,\ldots, k_p)}$ has a unique pattern of eigenvalues for the tuple $(H_1,\ldots, H_p)$, so the $2^p$ vectors $v_{(k_1,\ldots, k_p)}$ must be linearly independent.

  9. Since $$ \gamma_{p+r}~=~ i H_r \gamma_r, \qquad r~\in~\{1,\ldots, p\}, \tag{16} $$ we see that $$ W~:=~{\rm span}_{\mathbb{C}} \left\{ v_{(k_1,\ldots, k_p)} \mid k_1,\ldots, k_p~\in ~\{0,1\} \right\} \tag{17} $$ is an invariant subspace $W\subseteq V$ for $C$.

  10. This shows that any irreducible complex representation of a complex $d$-dimensional Clifford algebra is $2^p$-dimensional.

  11. Finally, we believe (but did not check) that a finite-dimensional representation $V$ of a complex Clifford algebra is always completely reducible, i.e. a finite sum of irreducible representations, and hence the dimension $n$ of $V$ must be a multiple of $2^p$. $\Box$
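
For concreteness, here is a small numerical illustration of steps 2-8 (my own sketch, not part of the proof) for $d=4$, $p=2$, using one explicit Euclidean representation built from Pauli matrices; the particular representation is an assumption chosen only for illustration.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# gamma_1..gamma_4 with {gamma_mu, gamma_nu} = 2 delta_{mu nu} 1, eq. (2)
g = [np.kron(sx, I2), np.kron(sy, I2), np.kron(sz, sx), np.kron(sz, sy)]

p = 2
# H_r = i gamma_{r, p+r} = i gamma_r gamma_{p+r}, eqs. (3)-(5)
H = [1j * g[r] @ g[p + r] for r in range(p)]

for r in range(p):
    assert np.allclose(H[r] @ H[r], np.eye(4))            # involutions, eq. (7)
    for s in range(p):
        assert np.allclose(H[r] @ H[s] - H[s] @ H[r], 0)  # mutually commuting, eq. (6)

# H_1 and H_2 are commuting Hermitian involutions; the eigenvalues of
# H_1 + 2 H_2 (namely +-1 +- 2) are all distinct, so any of its
# eigenvectors is a common eigenvector v (step 4).
_, vecs = np.linalg.eigh(H[0] + 2 * H[1])
v = vecs[:, 0]

# v_(k1,k2) = gamma_1^{k1} gamma_2^{k2} v, eq. (11)
vs = [np.linalg.matrix_power(g[0], k1) @ np.linalg.matrix_power(g[1], k2) @ v
      for k1 in (0, 1) for k2 in (0, 1)]

# the 2^p = 4 vectors are linearly independent (step 8), so dim V >= 2^p
assert np.linalg.matrix_rank(np.column_stack(vs)) == 2 ** p
print("Found", 2 ** p, "linearly independent vectors, as claimed.")
```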

Correct answer by Qmechanic on February 13, 2021

Intuitive explanation

Preliminary: A vector has as many components as there are elements in a basis of the vector space.

A basis of the Clifford algebra is given by all (independent) products of the generators (in the case of the Dirac equation these are the $\gamma$'s).

The counting

There are as many $\gamma$'s as the dimension of the spacetime, and according to the definition the algebra includes a unit, $$\bigl\{\gamma^a,\gamma^b\bigr\} = 2 \eta^{ab}\mathbf{1}.$$

For each extra generator the new basis consists of the previous basis elements plus the product of each of those with the extra generator; that is, the new basis has twice as many elements. Therefore, $$\dim(\mathcal{C}\ell(n)) = 2^{n}.$$
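
As a quick sanity check of this counting (my own addition, not part of the answer), one can take an explicit set of generators for $n=4$ and verify that the $2^4 = 16$ products of distinct generators are linearly independent; the Euclidean-signature generators below are an assumption chosen for convenience, since the counting does not depend on the signature.

```python
import itertools
from functools import reduce
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# one concrete set of generators for n = 4 (Euclidean signature for simplicity)
g = [np.kron(sx, I2), np.kron(sy, I2), np.kron(sz, sx), np.kron(sz, sy)]

# all products of distinct generators, one per subset of {1,...,4}
basis = []
for subset in itertools.chain.from_iterable(
        itertools.combinations(range(4), k) for k in range(5)):
    prod = reduce(lambda a, b: a @ b, (g[i] for i in subset), np.eye(4, dtype=complex))
    basis.append(prod.flatten())

# 2^4 = 16 linearly independent elements: dim Cl(4) = 2^4
assert len(basis) == 2 ** 4
assert np.linalg.matrix_rank(np.array(basis)) == 2 ** 4
print("dim Cl(4) =", len(basis))
```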

In order to represent this algebra one needs "matrices" of size $2^{n/2}\times 2^{n/2}$, which is not bad for even-dimensional spacetimes.

That said, the subtlety (which I don't intend to demonstrate) comes with odd-dimensional spacetimes... however, intuitively again, this algebra can be represented by two copies of the algebra in co-dimension one, i.e. one dimension less. This is why the minimal dimensionality for the representation of the $\gamma$'s is $$\dim(\gamma) = 2^{\lfloor n/2\rfloor}\times 2^{\lfloor n/2 \rfloor}.$$
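
A sketch of this construction (my own, using Euclidean signature for simplicity rather than the mostly-minus metric above): $2^{n/2}\times 2^{n/2}$ gamma matrices in even dimension $n$ are built recursively from Pauli matrices, and the extra gamma needed for dimension $n+1$ is a phase times the product of all $n$ of them, acting in the same dimension. The function names below are my own.

```python
from functools import reduce
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def euclidean_gammas(n):
    """n mutually anticommuting matrices squaring to +1, of size 2^(n/2)."""
    assert n % 2 == 0
    gammas = [sx, sy]                      # n = 2
    while len(gammas) < n:                 # step n -> n + 2 doubles the size
        I = np.eye(gammas[0].shape[0], dtype=complex)
        gammas = [np.kron(g, sz) for g in gammas] + [np.kron(I, sx), np.kron(I, sy)]
    return gammas

def check_clifford(gammas):
    """Verify {g_a, g_b} = 2 delta_ab 1 (Euclidean signature)."""
    d = gammas[0].shape[0]
    for a, ga in enumerate(gammas):
        for b, gb in enumerate(gammas):
            target = 2 * np.eye(d) if a == b else 0
            assert np.allclose(ga @ gb + gb @ ga, target)

for n in (2, 4, 6):
    g = euclidean_gammas(n)
    check_clifford(g)
    # odd dimension n+1: same-size matrices plus i^(n/2) * g_1 g_2 ... g_n
    extra = (1j) ** (n // 2) * reduce(lambda a, b: a @ b, g)
    check_clifford(g + [extra])
    print(f"n = {n}: {len(g)} gammas of size {g[0].shape[0]} = 2^{n//2}; "
          f"n = {n + 1} works in the same dimension.")
```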


If you wonder whether one can find a bigger representation of the $\gamma$'s, the answer is YES, but you will end up with either a non-fundamental or a trivial extension.

Answered by Dox on February 13, 2021

A rigorous proof of the dimensionality of the $\gamma$ matrices comes from group representation theory. It is about finding the irreducible representations of the Clifford algebra. A recent book on group theory by Ashok Das discusses this in great depth; an entire chapter is dedicated to finding the representations of the Clifford algebra in both even and odd dimensions. See page 162 for the proof.

A nice and neat proof was given by Peter West in

http://arxiv.org/abs/hep-th/9811101.


Answered by user114189 on February 13, 2021

That's a good question. To answer it, let's start with the Clifford algebra generated by the $\gamma$ matrices, \begin{equation} \gamma_{\mu}\gamma_{\nu}+ \gamma_{\nu}\gamma_{\mu}=2\eta_{\mu\nu} \end{equation} with $\mu,\nu=1,2,\cdots,N$ and the metric signature $\eta_{\mu\nu}=\text{diag}(+,-,-,-,\cdots,-)$. Using $I$ and $\gamma_{\mu}$ we can construct a set of matrices as follows: \begin{equation} I, \quad \gamma_{\mu}, \quad \gamma_{\mu}\gamma_{\nu}\quad(\mu<\nu), \quad \gamma_{\mu}\gamma_{\nu}\gamma_{\lambda}\quad(\mu<\nu<\lambda), \quad \cdots, \quad \gamma_{1}\gamma_{2}\cdots\gamma_{N}. \end{equation}

There are \begin{equation} \sum_{p=0}^{N}\binom{N}{p} = 2^{N} \end{equation} such matrices. Let's call them $\Gamma_{A}$, where $A$ runs from $0$ to $2^{N}-1$. Now suppose the $\gamma_{\mu}$ are $d\times d$-dimensional irreducible matrices. Our goal is to find a relation between $d$ and $N$. To this end let's define the matrix \begin{equation} S = \sum_{A=0}^{2^N-1}(\Gamma_{A})^{-1}Y\Gamma_{A}, \end{equation} where $Y$ is some arbitrary $d\times d$ matrix. It follows that \begin{equation} (\Gamma_{B})^{-1}S\Gamma_{B} = \sum_{A=0}^{2^N-1}(\Gamma_{A}\Gamma_{B})^{-1}Y\Gamma_{A}\Gamma_{B} =\sum_{C=0}^{2^N-1}(\Gamma_{C})^{-1}Y\Gamma_{C}=S, \end{equation} where we have used $\Gamma_{A}\Gamma_{B}=\epsilon_{AB}\Gamma_{C}$, with $\epsilon_{AB}^{2}=1$ (the product of any two basis elements is again a basis element, up to a sign).

Hence \begin{equation}S\Gamma_{A}=\Gamma_{A}S.\end{equation} Since $S$ commutes with all the matrices in the set, by Schur's lemma we conclude that $S$ must be proportional to the identity matrix, so that we can write \begin{equation} S = \sum_{A=0}^{2^N-1}(\Gamma_{A})^{-1}Y\Gamma_{A} = \lambda I. \end{equation}

Taking the trace we get \begin{equation} \text{Tr}\, S = \sum_{A=0}^{2^N-1} \text{Tr}\, Y = 2^{N}\,\text{Tr}\, Y = \lambda d \quad\Rightarrow\quad \lambda = \frac{2^{N}}{d}\text{Tr}\, Y, \end{equation} or \begin{equation} \sum_{A=0}^{2^N-1}(\Gamma_{A})^{-1}Y\Gamma_{A} = \frac{2^{N}}{d}(\text{Tr}\, Y)\, I. \end{equation}

Taking the $(j, m)$ matrix element of both sides of the last equation yields \begin{equation} \sum_{A=0}^{2^N-1}((\Gamma_{A})^{-1})_{jk}(\Gamma_{A})_{lm} = \frac{2^{N}}{d}\delta_{jm} \delta_{kl}, \end{equation} where $j, k, l, m = 1, 2,\cdots, d$ and we have used the fact that $Y$ is an arbitrary $d \times d$ matrix. If we set $j = k$, $l = m$ and sum over these two indices, that gives \begin{equation} \sum_{A=0}^{2^N-1} \text{Tr}[(\Gamma_{A})^{-1}]\, \text{Tr}[\Gamma_{A}] = 2^{N}.\end{equation} There are two cases to consider, namely, $N$ even and $N$ odd. For $N = 2M$ (even), $\text{Tr}\, \Gamma_{A} = 0$ except for $\Gamma_{0} = I$, for which $\text{Tr}\, \Gamma_{0} = d$. This gives \begin{equation} d^2 = 2^N\qquad \text{or} \quad \boxed{d = 2^{N/2}}. \end{equation} This is the main result. For four-dimensional Minkowski spacetime, $N=4$, and consequently the dimension of the irreducible representation is $d = 2^{4/2} = 4$.
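
As a numerical spot-check of this chain of identities (my own sketch, not part of the answer), one can take $N=4$ with the standard Dirac representation, build all $2^N = 16$ matrices $\Gamma_A$, and verify that only the identity has a nonzero trace, so the trace sum indeed equals $d^2 = 2^N$.

```python
import itertools
from functools import reduce
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
# standard Dirac representation of the four gamma matrices
gammas = [np.block([[I2, Z2], [Z2, -I2]])] + \
         [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

N, d = 4, 4
# all 2^N products of distinct gammas (the identity corresponds to the empty product)
Gammas = [reduce(lambda a, b: a @ b, (gammas[i] for i in subset), np.eye(d, dtype=complex))
          for k in range(N + 1) for subset in itertools.combinations(range(N), k)]
assert len(Gammas) == 2 ** N

# every Gamma_A except the identity is traceless
traces = [np.trace(G) for G in Gammas]
assert np.isclose(traces[0], d) and np.allclose(traces[1:], 0)

# sum_A Tr[(Gamma_A)^{-1}] Tr[Gamma_A] = 2^N  =>  d^2 = 2^N
total = sum(np.trace(np.linalg.inv(G)) * np.trace(G) for G in Gammas)
assert np.isclose(total, 2 ** N)
print("d^2 =", round(total.real), "= 2^N, so d =", 2 ** (N // 2))
```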

Answered by sam on February 13, 2021
