Mathematica question, asked by Luis.Satoni on November 4, 2020
I am in a tricky situation where I have two equations:
eq1 = α1 + αt12.t1 + αr11.r1 == 0;
eq2 = γ1 + γt12.t1 + γr11.r1 == 0;
Where each variable is a square matrix (6×6 in the code below), the gamma and alpha terms are predefined matrices, and I need to solve for t1 and r1.
I know that I can predefine r1 and t1 as arrays
r1 = Array[R, {6, 6}];
t1 = Array[T, {6, 6}];
and then use Solve, followed by ArrayReshape, to get the matrices:
Sol = Solve[{eq1, eq2}, Flatten[{r1, t1}]];
r11 = ArrayReshape[r1 /. Sol[[1]], {6, 6}];
t12 = ArrayReshape[t1 /. Sol[[1]], {6, 6}];
This gives me the correct solution, but it does not seem computationally efficient: the time to solve increases greatly as the dimensions of the matrices or the number of equations grow.
Is there a way to obtain a set of predefined matrix operations to solve for r1 and t1?
This is not so easy. The problem here is that matrix multiplication is not commutative. You could define a non-commutative algebra and write a solver for this algebra, but let's try something simpler. If I am allowed to speculate a bit, we may try to generalize the "general" method of solving linear equations. Toward this aim, let's assume that our variables are now matrices and that the equations in these variables are linear.
Formally we still have m.x == y, where m is now a matrix of matrices, and x and y are vectors of matrices. We must search for the left inverse of m, and we may do this formally using MMA. As an example, take 4 square matrices e that build a "super" matrix m:
m = Array[Subscript[e, #1, #2] &, {2, 2}];
Inverse[m]
But note that the result has products in the denominators, which may be wrong because MMA pays no attention to non-commutativity. So we need to take care of the order. Toward this aim, I use two different names for the elements of m: a for the elements of the matrix we want to invert, and e for the original matrix. It is the same matrix, but we want to make the order of the factors visible. The inverse times the matrix must give the unit matrix of matrices:
ma = Array[Subscript[a, #1, #2] &, {2, 2}];
im = Inverse[ma];
MatrixForm[im.m]
This should now be the unit matrix of matrices. Writing coef = a11.a22 - a12.a21 for the common denominator and clearing it, we get the following equations:
a22.e12 - a12.e22 == 0
a11.e21 - a21.e11 == 0
a22.e11 - a12.e21 == coef
a11.e22 - a21.e12 == coef
Remember, 0 is a zero matrix, 1 is a unit matrix, and the a's are the same elements as the e's. From the first 2 equations we see that e12 (which is the same as a12) must commute with e22, and e21 must commute with e11; otherwise the inverse is not defined. Further, the 4th equation is the commuted 3rd equation. This implies that e11 commutes with e22 and e12 commutes with e21. And this in addition tells us that "coef", the determinant of m, can be calculated without any ordering problem.
The result of this is: provided that the matrices e12 and e22, e21 and e11, e11 and e22, e12 and e21 commute, we can calculate x from above by:
x == (Inverse[coef] ⊗ {{e22, -e12}, {-e21, e11}}).y
where "⊗" means that the left expression (a simple matrix) multiplies each of the matrices e11, e12, e21, e22 from the left.
Correct answer by Daniel Huber on November 4, 2020
To keep it simple, I use Latin characters instead of Greek ones: a1, at, ar and g1, gt, gr. Note that these variables are now square matrices of any dimension. Then we may calculate t1 and r1 by the time-honoured "manual" method (I assume that the matrices are invertible):
Clear[a1, at, ar, g1, gt, gr, t1, r1];
(* eq1: a1 + at.t1 + ar.r1 == 0 *)
(* left-multiply eq1 by Inverse[ar]: *)
Inverse[ar].a1 + Inverse[ar].at.t1 + r1 == 0;
(* solve for r1: *)
r1 == -Inverse[ar].a1 - Inverse[ar].at.t1;
(* substitute r1 into eq2, g1 + gt.t1 + gr.r1 == 0: *)
g1 + gt.t1 - gr.Inverse[ar].a1 - gr.Inverse[ar].at.t1 == 0;
(* collect t1 and left-multiply by the inverse of its coefficient: *)
(gt - gr.Inverse[ar].at).t1 == -g1 + gr.Inverse[ar].a1;
t1 == Inverse[gt - gr.Inverse[ar].at].(-g1 + gr.Inverse[ar].a1);
r1 == -Inverse[ar].a1 - Inverse[ar].at.t1;
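A quick numeric sanity check of the last two formulas (my addition, not part of the answer; random 6×6 matrices stand in for the predefined alpha and gamma blocks):
n = 6;
{a1, at, ar, g1, gt, gr} = RandomReal[{-1, 1}, {6, n, n}]; (* generically invertible stand-ins *)
t1 = Inverse[gt - gr.Inverse[ar].at].(-g1 + gr.Inverse[ar].a1);
r1 = -Inverse[ar].a1 - Inverse[ar].at.t1;
Max@Abs[{a1 + at.t1 + ar.r1, g1 + gt.t1 + gr.r1}]
(* of order 10^-14: both matrix equations are satisfied *)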
Answered by Daniel Huber on November 4, 2020