Mathematica Asked by Gabriel Aller on December 3, 2020
The model is a simple eigenvalue problem: a matrix that depends on the parameters kx, ky, and t, defined by:
gamma = {KroneckerProduct[PauliMatrix[1], IdentityMatrix[2]],
KroneckerProduct[PauliMatrix[3], IdentityMatrix[2]],
KroneckerProduct[PauliMatrix[2], PauliMatrix[1]],
KroneckerProduct[PauliMatrix[2], PauliMatrix[2]],
KroneckerProduct[PauliMatrix[2], PauliMatrix[3]]};
comutar[a_, b_] := a.b - b.a
GAMMA[i_, j_] := comutar[gamma[[i]], gamma[[j]]]/(2 I)
di[kx_, ky_, lv_] := {1 + 2 Cos[kx/2] Cos[Sqrt[3] ky/2],
   lv,
   lr (1 - Cos[kx/2] Cos[Sqrt[3] ky/2]),
   -Sqrt[3] lr Sin[kx/2] Sin[Sqrt[3] ky/2]}
dij[kx_, ky_] := {-2 Cos[kx/2] Sin[Sqrt[3] ky/2],
   lso (2 Sin[2 (kx/2)] - 4 Sin[kx/2] Cos[Sqrt[3] ky/2]),
   -lr Cos[kx/2] Sin[Sqrt[3] ky/2],
   Sqrt[3] lr Sin[kx/2] Cos[Sqrt[3] ky/2]}
H[{kx_, ky_}, lv_] :=
Sum[di[kx, ky, lv][[i]]*gamma[[i]], {i, 1, 4}] +
GAMMA[1, 2]*dij[kx, ky][[1]] + GAMMA[1, 5]*dij[kx, ky][[2]] +
GAMMA[2, 3]*dij[kx, ky][[3]] + GAMMA[2, 4]*dij[kx, ky][[4]]
TRIM = {{0, 0}, {Pi, Pi/Sqrt[3]}, {-Pi, Pi/Sqrt[3]}, {0, 2 Pi/Sqrt[3]}}
lso = 0.06;
lr = 0.03;
H[{kx, ky}, t] // Simplify // MatrixForm
After this initialization, the Hamiltonian of the problem is given by the function H[{kx, ky}, t]. The next step is to calculate the eigenvectors along the line that connects the point {0, 0} to {Pi, Pi/Sqrt[3]}, for t = 0.
The problem is that the eigenvectors seem to have a discontinuity at {0, 0}, even though the eigenvalues are continuous. This can be seen by using Manipulate:
Manipulate[Eigenvectors[H[kx*TRIM[[2]], 0]], {kx, 0, 1}]
Look how the coordinates of the eigenvectors jump from something like {-0.7071...} at k = 0 to something completely different at any other value of k.
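(For comparison, here is a quick sanity check, using the definitions above, that the eigenvalues themselves do vary continuously along the same line; the name bands is just an illustrative helper:)
(* sorted eigenvalues along the line from {0, 0} to {Pi, Pi/Sqrt[3]}, for t = 0 *)
bands = Table[Sort[Re[Eigenvalues[H[kx TRIM[[2]], 0]]]], {kx, 0, 1, 0.01}];
ListLinePlot[Transpose[bands], DataRange -> {0, 1},
 AxesLabel -> {"kx (fraction of TRIM[[2]])", "Energy"}]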
Is there a way to solve this?
Note: In the Manipulate above it is also possible to see some randomness in the sign of the coordinates of the eigenvectors; is it possible to fix this as well?
You have to take care of which eigenvector belongs to which eigenvalue. In addition, for kx == 0 we have two degenerate eigenvalues, and the corresponding eigenvectors are not fixed but may be chosen as any basis of the corresponding two-dimensional subspace.
According to the manual, the output of Eigenvalues is ordered by decreasing absolute value. However, our eigenvalues appear in pairs with opposite signs, so their order is undefined. And indeed, the first two and the last two eigenvalues change position randomly with kx. See e.g.:
Manipulate[Eigenvalues[H[kx TRIM[[2]], 0]], {kx, -.001, .001}]
If you assign the eigenvectors to the correct eigenvalue, you will see that they change smoothly with kx, with the exception of kx == 0, where we have degenerate eigenvalues. There the eigenvectors are not uniquely determined. However, in this case you may find linear combinations of the two given vectors belonging to the same eigenvalue that make the vectors vary smoothly in the vicinity of kx == 0.
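One way to do this bookkeeping (a minimal sketch, assuming the definitions of H and TRIM from the question and t = 0; the helper name smoothEigensystem is just illustrative): sort the eigensystem by eigenvalue rather than by absolute value, and fix the overall phase of each eigenvector so that the signs do not flip from frame to frame.
smoothEigensystem[k_?NumericQ] :=
 Module[{vals, vecs, ord},
  {vals, vecs} = Eigensystem[H[k TRIM[[2]], 0]];
  ord = Ordering[Re[vals]];   (* order by eigenvalue, not by |eigenvalue| *)
  vals = vals[[ord]]; vecs = vecs[[ord]];
  (* make the largest component of each vector real and positive, so the
     arbitrary overall sign/phase does not jump between frames *)
  vecs = Map[#/Exp[I Arg[First[MaximalBy[#, Abs]]]] &, vecs];
  {vals, vecs}]

Manipulate[smoothEigensystem[kx][[2]] // Chop // MatrixForm, {kx, 0.01, 1}]
Starting the slider slightly above kx == 0 avoids the degenerate point itself, where any basis of the two-dimensional subspaces is equally valid.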
Answered by Daniel Huber on December 3, 2020
For non-degenerate eigenvalues there is always one unique eigenvector (up to normalization) for every eigenvalue. However, in the degenerate case the eigenvectors are not uniquely determined. Now imagine a system where the eigenvectors depend on a parameter, say eps, and that for eps != 0 we have the non-degenerate case and for eps == 0 the degenerate case. It would be nice if the eigenvectors changed smoothly for eps in -1..1. However, this is NOT the case: for eps == 0 the eigenvectors are arbitrary within the degenerate subspace! If you want a smooth dependence on eps, you have to calculate the correct linear combinations of the given vectors yourself.
Here is an example:
For eps != 0 we have:
m = {{1., eps}, {eps, 1.}};
es = Eigensystem[m]
(*{{-1. (-1. + 1. eps), 1. (1. + 1. eps)}, {{-1., 1.}, {1., 1.}}}*)
You see the eigenvectors are {-1., 1.} and {1., 1.}. However, for eps == 0:
m = {{1., eps}, {eps, 1.}} /. eps -> 0;
es = Eigensystem[m]
(*{{1., 1.}, {{-1., 0.}, {0., 1.}}}*)
Now the eigenvectors are {-1., 0.} and {0., 1.}. To get a smooth transition we need to replace these vectors by new vectors that are linear combinations of the old ones. In this case:
new1 = {-1., 1.} = {-1., 0.} + {0., 1.} = old1 + old2
new2 = { 1., 1.} = -{-1., 0.} + {0., 1.} = -old1 + old2
In this simple example the eigenvectors do not depend on eps. However, in the general case they do. To get the correct eigenvectors in the degenerate case, you calculate the eigenvectors for the non-degenerate case and then simply set eps = 0.
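As a short illustration of this recipe (a sketch using the same 2x2 example, kept symbolic in eps; the names mSym, valsSym, vecsSym are just illustrative):
(* compute the eigenvectors symbolically for eps != 0 (non-degenerate case),
   then set eps -> 0: this gives a basis that connects smoothly to eps == 0 *)
mSym = {{1, eps}, {eps, 1}};
{valsSym, vecsSym} = Eigensystem[mSym];
vecsSym              (* the eigenvectors {-1, 1} and {1, 1}, here independent of eps *)
vecsSym /. eps -> 0  (* the smooth choice of basis at the degenerate point *)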
Now you may ask what the reason for this behaviour is. Well, for eps = 0 the eps disappears, and MMA has no way to know which parameter you are interested in. Some other parameter might approach the degenerate subspace from a different direction.
Answered by Daniel Huber on December 3, 2020