What does it mean to contract the indices of a Lorentz matrix?

Asked by Hexiang Chang on April 2, 2021

The metric tensor in SR obeys the transformation law (I am using Schutz’s bar notation for different frame indices):

$$\eta_{\bar{\alpha}\bar{\beta}}=\Lambda^{\mu}_{~\bar{\alpha}}\,\Lambda^{\nu}_{~\bar{\beta}}\,\eta_{\mu\nu}$$

If I’m not wrong, the second Lorentz matrix can be contracted with the metric like so:

$$\eta_{\bar{\alpha}\bar{\beta}}=\Lambda^{\mu}_{~\bar{\alpha}}\,\Lambda_{\mu\bar{\beta}}$$

This seems wrong, since the Lorentz matrix elements are usually written with one upper and one lower index, and I don’t understand how $\Lambda_{\mu\bar{\beta}}$ could be written in a matrix representation.

From various sources, I am also told that the above equations are equivalent to matrix multiplication with the transpose of $\Lambda$:

$$\eta=\Lambda\cdot\eta\cdot\Lambda^{T}$$

  • Does contracting the Lorentz matrix transpose it?

  • What are the differences between $\Lambda^{\mu}_{~\bar{\nu}}$, $\Lambda^{\bar{\mu}}_{~\nu}$, $\Lambda_{\mu\bar{\nu}}$ and $\Lambda^{\mu\bar{\nu}}$? Which of them are inverses and which are transposes?

3 Answers

Note that $\Lambda$ is not itself a tensor. It is a matrix of transformation coefficients.

A tensor is well-defined in the absence of any specified frame, and its components can be defined in one frame or another. The Lorentz transformation, by contrast, is used to calculate how tensor components change from one frame to another.

Contracting one up- and one down-index on a second-rank tensor gives a zero-rank tensor, which is to say an invariant scalar. Contracting one up- and one down-index on a non-tensor such as $\Lambda^a_{\;b}$ will give some sort of mathematical result, but it is not guaranteed to be a quantity of any particular interest. Similarly, the combination of the metric with $\Lambda^a_{\;b}$ can be used to get a quantity with all indices down, but it is not a tensor, and it is best not to play these index gymnastics with non-tensors. Just let $\Lambda^a_{\;b}$ be what it is.

For a second-rank tensor, lowering the first index corresponds to pre-multiplying by the metric in matrix language; lowering the second index corresponds to post-multiplying by the metric. The transpose operation corresponds to reversing the order of the indices (N.B. without moving them up or down). For example $$ \Lambda^a_{\;\mu}\,\Lambda^b_{\;\nu}\,F^{\mu\nu} = \Lambda^a_{\;\mu}\,F^{\mu\nu}\,\Lambda^b_{\;\nu} = \Lambda^a_{\;\mu}\,F^{\mu\nu}\,(\Lambda^{\!\rm T})_{\nu}^{\;\;b} $$ and in matrix notation this combination can be written $$ \Lambda F \Lambda^{\!\rm T} $$
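To make the matrix correspondence concrete, here is a minimal numerical sketch (assuming numpy, an illustrative $x$-boost, and an arbitrary antisymmetric $F^{\mu\nu}$; the names and values are placeholders, not part of the answer above):

```python
import numpy as np

beta = 0.6                          # illustrative boost velocity (units of c)
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Lorentz boost along x: Lambda^a_mu, row = upper index, column = lower index
L = np.array([
    [ gamma,      -gamma*beta, 0.0, 0.0],
    [-gamma*beta,  gamma,      0.0, 0.0],
    [ 0.0,         0.0,        1.0, 0.0],
    [ 0.0,         0.0,        0.0, 1.0],
])

# An arbitrary antisymmetric F^{mu nu} (field-strength-like placeholder)
F = np.zeros((4, 4))
F[0, 1], F[1, 0] = 1.0, -1.0
F[2, 3], F[3, 2] = 2.0, -2.0

# Index expression: Lambda^a_mu Lambda^b_nu F^{mu nu}
F_index = np.einsum('am,bn,mn->ab', L, L, F)

# Matrix expression: pre-multiply by Lambda, post-multiply by its transpose
F_matrix = L @ F @ L.T

print(np.allclose(F_index, F_matrix))   # True
```

The transpose is exactly the index reversal described above: the second $\Lambda$ is summed over its lower (column) index, so placing it to the right of $F$ requires a transpose.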

Correct answer by Andrew Steane on April 2, 2021

This took me quite a while to get at the beginning of my relativity class, but this may help. A general matrix multiplication, as we all know it, looks like this: $$([A][B])_{rc} = \sum_i a_{ri}\,b_{ic}$$ The relativity version with the Einstein summation convention makes this matrix multiplication look like this: $$([A][B])_{\nu}^{\;\;\mu} = a_{\nu\sigma}\,b^{\sigma\mu}$$ As you can see, this is the same as the previous statement with the summation symbol dropped, the first index representing the row and the second representing the column.

There are people who can explain in depth how a (2,0) (i.e. contravariant) tensor differs from a (0,2) (i.e. covariant) tensor; I just keep the rule in my head that you can only contract a covariant index with a contravariant one, and therefore apply a metric of the form $g_{\mu\nu}$ ($g^{\mu\nu}$) whenever it is necessary to lower (raise) an index. To get to matrix form, it is necessary to convert the tensor back by contracting it with a metric. I will use the Minkowski metric $\eta$ here for the demonstration.

This brings us back to your question: your equation can be written as $$\eta_{\bar{\alpha}\bar{\beta}}=\Lambda^{\mu}_{\;\bar{\alpha}}\,\eta_{\mu\nu}\,\Lambda^{\nu}_{\;\bar{\beta}}=(\Lambda^{T})_{\bar{\alpha}}^{\;\;\mu}\,\eta_{\mu\nu}\,\Lambda^{\nu}_{\;\bar{\beta}} \quad\Longleftrightarrow\quad [\eta]=[\Lambda]^{T}\cdot[\eta]\cdot[\Lambda]$$ where the transpose appears because the first $\Lambda$ is summed over its row index. Since $\Lambda^{-1}$ is itself a Lorentz transformation, this is equivalent to the form $[\eta]=[\Lambda]\cdot[\eta]\cdot[\Lambda]^{T}$ quoted in the question. Along the way I've used the fact that $$\eta^{\mu\sigma}\eta_{\beta\sigma} = \delta^{\mu}_{\beta}$$ and just basic raising and lowering of indices via: $$\Lambda^{\mu}_{\;\alpha} = \eta^{\mu\sigma}\Lambda_{\sigma\alpha} \qquad\text{and}\qquad \Lambda_{\mu}^{\;\alpha} = \eta_{\mu\sigma}\Lambda^{\sigma\alpha}$$

I could've made an index mistake somewhere, but I hope this gives some kind of understanding and that it helps.
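As a sanity check, here is a minimal numerical sketch of the derivation above (assuming numpy and an illustrative boost; the names are placeholders): lowering the first index of $\Lambda$ is the matrix product $\eta\Lambda$, and the contraction from the question then reproduces $\eta$.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, (-,+,+,+) assumed

beta = 0.8                                # illustrative boost velocity
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([
    [ gamma,      -gamma*beta, 0.0, 0.0],
    [-gamma*beta,  gamma,      0.0, 0.0],
    [ 0.0,         0.0,        1.0, 0.0],
    [ 0.0,         0.0,        0.0, 1.0],
])

# Lower the first index: Lambda_{mu beta} = eta_{mu nu} Lambda^nu_beta
L_lower = np.einsum('mn,nb->mb', eta, L)      # same as eta @ L

# The contraction from the question: Lambda^mu_alpha Lambda_{mu beta} = eta_{alpha beta}
lhs = np.einsum('ma,mb->ab', L, L_lower)
print(np.allclose(lhs, eta))                  # True

# Matrix form: the first Lambda is summed over its row index, hence the transpose
print(np.allclose(L.T @ eta @ L, eta))        # True
```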

Answered by IronicalCoffee on April 2, 2021

First, a brief introduction to tensors. An $(r,s)$ tensor $t$ on a $K$-vector space $V$ is just a multilinear map from $s$ copies of $V$ and $r$ copies of the dual vector space $V^*$ to the underlying field $K$: $$t:\underbrace{V^*\times\dots\times V^*}_{r\text{ times}}\times\underbrace{V\times\dots\times V}_{s\text{ times}}\to K$$ The dual vector space is just the set of all linear maps from $V$ to $K$. The set of all $(r,s)$ tensors on $V$ is commonly denoted $T^r_s(V)$.

Ok, how do we get from these vectors and linear maps to the numbers $g_{\mu\nu}$ that we're working with?

Well, we do what we always intuitively do, namely choose a basis. Say we choose some basis $\{e_i\}\subset V$ for our vector space; we can now express every vector $v\in V$ as a linear combination of these basis vectors: $v=v^ie_i$. The $v^i\in K$ are just numbers, and the position of the index so far is just convention. We can now do the same for the dual vector space, with a particularly clever choice of basis $\{e^i\}\subset V^*$ such that $e^{i}(e_j)=\delta^i_j$. Again, the position of the index is just convention, in order not to mix up vectors and dual vectors, as these are fundamentally different objects!

With this index convention, we would now write the components of a vector as $v^i$ and of a dual vector as $v_i$.

For tensors, it is very similar: $$ t(w_k^{(1)}e^k,\dots,w_l^{(r)}e^l,\,v^i_{(1)}e_i,\dots,v^j_{(s)}e_j)=w_k^{(1)}\dots w_l^{(r)}\,v^i_{(1)}\dots v^j_{(s)}\;t(e^k,\dots,e^l,e_i,\dots,e_j) $$ The $t(e^k,\dots,e^l,e_i,\dots,e_j)\equiv t^{k\dots l}_{i\dots j}$ are again just numbers, independent of which vectors and covectors the tensor acts on (they depend only on the basis we chose). Similarly to vectors and covectors, we can also express the tensor in terms of its components: $$ t = t^{k\dots l}_{i\dots j}\,e_k\otimes\dots\otimes e_l\otimes e^i\otimes\dots\otimes e^j $$
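A minimal sketch of this "components = tensor evaluated on basis vectors" idea, using an assumed toy $(0,2)$ tensor on $\mathbb{R}^2$ (numpy only for convenience; everything here is illustrative):

```python
import numpy as np

def t(u, v):
    # a (0,2) tensor: a bilinear map V x V -> R (toy example)
    return -u[0]*v[0] + u[1]*v[1]

e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]     # basis {e_i} of V = R^2

# components t_{ij} = t(e_i, e_j): just numbers, fixed by the basis choice
t_ij = np.array([[t(e[i], e[j]) for j in range(2)] for i in range(2)])
print(t_ij)                       # [[-1.  0.] [ 0.  1.]]

# multilinearity: t(u, v) = u^i v^j t_{ij} for arbitrary u, v
u, v = np.array([2.0, 3.0]), np.array([-1.0, 4.0])
print(np.isclose(t(u, v), np.einsum('i,j,ij->', u, v, t_ij)))   # True
```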

Ok, now what happens when we choose a different basis $\{b_i\}\subset V$ and a corresponding dual basis $\{b^i\}\subset V^*$? Since our new basis vectors (and new basis covectors) are still elements of the same underlying (dual) vector space, we can express them as linear combinations: $$b_i = A_i^{\;j}e_j \qquad\text{and}\qquad b^i=B^i_{\;j}e^j$$ The duality condition $b^i(b_j)=\delta^i_j$ ties the two together: $B^i_{\;k}A_j^{\;k}=\delta^i_j$, i.e. $B=(A^{-1})^T$ as matrices. Similar to above, we can view the numbers $A_i^{\;j}$ and $B^i_{\;j}$ as components of tensors: $$ A=A_i^{\;j}\,e_j\otimes b^i \qquad\text{and}\qquad B=B^i_{\;j}\,b_i\otimes e^j $$ Note: in physics we typically don't view these transformations as tensors; however, I think this is quite useful for our discussion here. For more information see, e.g., (Is Lorentz transform a tensor?)

Now, lastly, what does contracting two tensors actually mean? Tensor contractions are really just defined for an individual tensor: they are linear maps $C^k_l:T^r_s(V)\to T^{r-1}_{s-1}(V)$ defined by $$ T^{\nu_1\dots\nu_r}_{\mu_1\dots\mu_s}\,e_{\nu_1}\otimes\dots\otimes e_{\nu_r}\otimes e^{\mu_1}\otimes\dots\otimes e^{\mu_s} \;\mapsto\; T^{\nu_1\dots\nu_r}_{\mu_1\dots\mu_s}\,e^{\mu_l}(e_{\nu_k})\; e_{\nu_1}\otimes\dots\otimes e_{\nu_{k-1}}\otimes e_{\nu_{k+1}}\otimes\dots\otimes e_{\nu_r}\otimes e^{\mu_1}\otimes\dots\otimes e^{\mu_{l-1}}\otimes e^{\mu_{l+1}}\otimes\dots\otimes e^{\mu_s} $$ i.e. the $k$-th contravariant slot is paired with the $l$-th covariant slot via $e^{\mu_l}(e_{\nu_k})=\delta^{\mu_l}_{\nu_k}$. But that's no problem, since we can make one tensor out of two using the tensor product. In your particular case we have the "tensor" $\Lambda=\Lambda^{\nu}_{\;\mu}\,e_{\nu}\otimes b^{\mu}$ and the tensor $\eta=\eta_{\mu\nu}\,e^{\mu}\otimes e^{\nu}$: $$ \eta\otimes\Lambda = \eta_{\mu\nu}\Lambda^{\alpha}_{\;\beta}\,e^{\mu}\otimes e^{\nu}\otimes e_{\alpha}\otimes b^{\beta} $$ and $$ C^{\alpha}_{\nu}(\eta\otimes\Lambda)=\eta_{\mu\nu}\Lambda^{\alpha}_{\;\beta}\,e^{\nu}(e_{\alpha})\,e^{\mu}\otimes b^{\beta} = \eta_{\mu\nu}\Lambda^{\nu}_{\;\beta}\,e^{\mu}\otimes b^{\beta}\equiv \Lambda_{\mu\beta}\,e^{\mu}\otimes b^{\beta} $$ In GR (and SR) space-time is a Lorentzian manifold $L$, the vector space of interest is the tangent space $T_pL$ at some point $p\in L$, and the tensors you're considering have special meaning; however (as always), the maths doesn't care too much about the physics.
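To see that contraction in numbers, here is a minimal sketch (assuming numpy and an illustrative boost; not part of the answer above): build the rank-4 array for $\eta\otimes\Lambda$ and sum over the paired slots, which reproduces the lowered-index matrix $\Lambda_{\mu\beta}$.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, (-,+,+,+) assumed

beta = 0.5                                # illustrative boost velocity
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([
    [ gamma,      -gamma*beta, 0.0, 0.0],
    [-gamma*beta,  gamma,      0.0, 0.0],
    [ 0.0,         0.0,        1.0, 0.0],
    [ 0.0,         0.0,        0.0, 1.0],
])

# tensor product eta ⊗ Lambda, with slots ordered (mu, nu, alpha, beta)
T = np.einsum('mn,ab->mnab', eta, Lam)

# contraction C^alpha_nu: pair slot nu with slot alpha and sum (e^nu(e_alpha) = delta)
C = np.einsum('mnnb->mb', T)

# the result is Lambda with its first index lowered: Lambda_{mu beta} = (eta @ Lam)_{mu beta}
print(np.allclose(C, eta @ Lam))          # True
```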

Answered by KilianM on April 2, 2021
