Mathematica Asked by anderstood on September 19, 2020
Consider two square matrices $A_1$ and $A_2$. Consider the following matrix involving matrix trigonometric functions:
\begin{equation}
M_1(t)=\begin{bmatrix} \cos(tA_1) & t\,\mathrm{sinc}(t A_1) \\ -A_1\sin(tA_1) & \cos(tA_1) \end{bmatrix}
\end{equation}
and similarly $M_2(t)$ defined by changing $A_1$ to $A_2$. Using the double-angle identities, it can be shown that
\begin{align}
\delta &= M_1(2t_1)-M_2(2t_2) \\
&= 2\begin{bmatrix} t_1\mathrm{sinc}(t_1A_1) & -t_2\mathrm{sinc}(t_2A_2) \\ \cos(t_1 A_1) & -\cos(t_2A_2) \end{bmatrix} \begin{bmatrix} -A_1\sin(t_1A_1) & \cos(t_1 A_1) \\ -A_2\sin(t_2A_2) & \cos(t_2 A_2) \end{bmatrix}
\end{align}
which provides a factorization of the difference $\delta$.
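To see why the factorization holds, write $s_i=\sin(t_iA_i)$ and $c_i=\cos(t_iA_i)$, and note that $t_i\,\mathrm{sinc}(t_iA_i)\,A_i = s_i$. Multiplying out the right-hand side entrywise gives
\begin{equation}
2\begin{bmatrix} t_1\mathrm{sinc}(t_1A_1) & -t_2\mathrm{sinc}(t_2A_2) \\ c_1 & -c_2 \end{bmatrix}\begin{bmatrix} -A_1s_1 & c_1 \\ -A_2s_2 & c_2 \end{bmatrix} = \begin{bmatrix} 2s_2^2-2s_1^2 & 2t_1\mathrm{sinc}(t_1A_1)c_1-2t_2\mathrm{sinc}(t_2A_2)c_2 \\ -2A_1s_1c_1+2A_2s_2c_2 & 2c_1^2-2c_2^2 \end{bmatrix}
\end{equation}
and each entry reduces to the corresponding entry of $M_1(2t_1)-M_2(2t_2)$ via $\cos(2x)=1-2\sin^2 x=2\cos^2 x-1$, $\sin(2x)=2\sin x\cos x$, and $2t\,\mathrm{sinc}(tA)\cos(tA)=2t\,\mathrm{sinc}(2tA)$.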
This equality can be checked in MMA using random values for $A_1,A_2,t_1,t_2$:
M1[t_] := ArrayFlatten[{
    {MatrixFunction[Cos, t*A1], t*MatrixFunction[Sinc, t*A1]},
    {-A1.MatrixFunction[Sin, t*A1], MatrixFunction[Cos, t*A1]}}]
M2[t_] := ArrayFlatten[{
    {MatrixFunction[Cos, t*A2], t*MatrixFunction[Sinc, t*A2]},
    {-A2.MatrixFunction[Sin, t*A2], MatrixFunction[Cos, t*A2]}}]
delta := M1[2 t1] - M2[2 t2]
zero := delta - 2 ArrayFlatten[{
      {t1*MatrixFunction[Sinc, t1*A1], -t2*MatrixFunction[Sinc, t2*A2]},
      {MatrixFunction[Cos, t1*A1], -MatrixFunction[Cos, t2*A2]}}].ArrayFlatten[{
      {-A1.MatrixFunction[Sin, t1*A1], MatrixFunction[Cos, t1*A1]},
      {-A2.MatrixFunction[Sin, t2*A2], MatrixFunction[Cos, t2*A2]}}]
Block[{A1 = RandomReal[{-1, 1}, {2, 2}],
A2 = RandomReal[{-1, 1}, {2, 2}], t1 = RandomReal[10],
t2 = RandomReal[10]}, zero] // Chop
(* {{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}} *)
My question is: would it have been possible to find this factorization using MMA?
I did the problem in reverse and showed that the matrix product (which I called δ2) reduces to the difference of the two matrices with

δ2 // FunctionExpand // Simplify

Only then could I reduce Sinc[A] to Sin[A]/A. The result is the difference of the two matrices.
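In the scalar (1×1) case the reverse check is quick to reproduce. A minimal sketch, assuming symbolic scalars a1, a2 in place of the matrices (the names delta2 and target are mine):

delta2 = 2 {{t1 Sinc[t1 a1], -t2 Sinc[t2 a2]}, {Cos[t1 a1], -Cos[t2 a2]}}.
    {{-a1 Sin[t1 a1], Cos[t1 a1]}, {-a2 Sin[t2 a2], Cos[t2 a2]}};
target = {{Cos[2 t1 a1], 2 t1 Sinc[2 t1 a1]}, {-a1 Sin[2 t1 a1], Cos[2 t1 a1]}} -
    {{Cos[2 t2 a2], 2 t2 Sinc[2 t2 a2]}, {-a2 Sin[2 t2 a2], Cos[2 t2 a2]}};
delta2 - target // FunctionExpand // Simplify
(* should give {{0, 0}, {0, 0}} *)

FunctionExpand is what rewrites Sinc in terms of Sin, after which Simplify can apply the double-angle identities.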
Answered by Gwanguy on September 19, 2020
Here is a question with some answers that might interest you:
mathematica program for PLUR decomposition of a symbolic matrix using full pivot.
According to @daniel-lichtblau, nobody needs it and nobody wants it.
For numerical decompositions, Mathematica offers the choices described in the tutorial AdvancedMatrixOperations.
The major problem on top of that is that

InverseFunction[Sinc]

is not part of a function pair the way Sin/ArcSin are. And it is even worse here, because what is needed is the MatrixFunction of Sinc.
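The asymmetry already shows at the scalar level: Sin has a named inverse, while Sinc does not, so InverseFunction simply returns unevaluated:

InverseFunction[Sin]
(* ArcSin *)
InverseFunction[Sinc]
(* InverseFunction[Sinc] *)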
Simple example: rotation
{Q, R} = QRDecomposition[{{1, 1}, {-1, 1}}]
(* {{{1/Sqrt[2], -(1/Sqrt[2])}, {1/Sqrt[2], 1/Sqrt[2]}}, {{Sqrt[2],0}, {0, Sqrt[2]}}} *)
and
Solve[RotationMatrix[\[CurlyPhi]] == Q, \[CurlyPhi]][[1]] /. C[1] -> 0
(* {\[CurlyPhi] -> \[Pi]/4} *)
A simple example: a regular (invertible) matrix M, via polar decomposition
S = MatrixPower[Transpose[M].M, 1/2]; R = Inverse[Transpose[M]].S (* rotation matrix: Transpose[R].R == IdentityMatrix[2] *)
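A concrete check, reusing the matrix from the QR example above (the assignment to M is mine):

M = {{1, 1}, {-1, 1}};
S = MatrixPower[Transpose[M].M, 1/2]   (* {{Sqrt[2], 0}, {0, Sqrt[2]}} *)
R = Inverse[Transpose[M]].S;
Simplify[Transpose[R].R]               (* {{1, 0}, {0, 1}} *)

Here R comes out as {{1/Sqrt[2], 1/Sqrt[2]}, {-1/Sqrt[2], 1/Sqrt[2]}}, a rotation by -\[Pi]/4.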
A variation:
m = {{1, 1}, {-1, 1}};
FullSimplify @ Solve[ConjugateTranspose[RotationMatrix[θ]].ScalingMatrix[{s1, s2}] == m,
{s1, s2, θ}, Reals] /. C[1] -> 0
As far as I understand the given source:
periodic solutions of n-dof autonomous vibro-impact oscillators with one lasting contact phase
There are three matrices:
M = {{m1, 0}, {0, m2}} (* mass matrix *)
and
K = {{k1 + k2, -k2}, {-k2, k2}} (* coupling matrix of the harmonic oscillators *)
both from formula (2.5), and
A = {{0, I}, {-L^2, 0}}
from (3.9).
This matrix A is then put into MatrixExp to give (3.11) and (3.12). In both of them, L is a real scalar, and that remains so in all the following sections. Together with the comment above, this seems to be more or less the solution path for the proof of Theorem 3.1. Please reformulate.
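The scalar version of the MatrixExp step is easy to reproduce. A sketch, under my assumptions L > 0 and a (1,2) entry of 1 in the scalar analogue of A:

FullSimplify[MatrixExp[t {{0, 1}, {-L^2, 0}}], Assumptions -> {L > 0, t > 0}]
(* should reduce to {{Cos[L t], Sin[L t]/L}, {-L Sin[L t], Cos[L t]}} *)

which is exactly the cos/sinc block structure of the matrix $M_1(t)$ in the question.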
Answered by Steffen Jaeschke on September 19, 2020