Physics Asked on August 26, 2021
The angular velocity vector of a rigid body is defined as $\vec{\omega}=\frac{\vec{r}\times\vec{v}}{|\vec{r}|^2}$. But I’d like to show that that’s equivalent to how most people intuitively think of angular velocity.
Euler’s theorem of rotations states that any rigid body motion with one point fixed is equivalent to a rotation about some axis passing through the fixed point. So let’s consider a rigid body undergoing some motion with one point fixed, and for any times $t_1$ and $t_2$ let $\vec{\theta}(t_1,t_2)$ denote the “rotation vector” of the rotation that’s equivalent to the rigid body’s motion between time $t_1$ and time $t_2$. For those who don’t know, the rotation vector of a rotation is a vector whose magnitude is equal to the angle of the rotation and which points along the axis of the rotation; see this Wikipedia article.
Now my question is, how do we prove that the limit of $\frac{\vec{\theta}(t_1,t_2)}{t_2-t_1}$ as $t_2$ goes to $t_1$ exists, and that it’s equal to the angular velocity vector?
This would all be much simpler if rotations were commutative, since then the angular velocity would just equal the derivative of $\vec{\theta}(t_0,t)$ with respect to time. But since rotations are non-commutative, $\vec{\theta}(t_1,t_2)$ does not equal $\vec{\theta}(t_0,t_2)-\vec{\theta}(t_0,t_1)$, and thus the relation between angular velocity and the time derivative of $\vec{\theta}(t_0,t)$ is considerably more complicated; see this journal paper for details.
Note: This is a follow-up to my question here.
EDIT: Note that what this journal paper calls $\vec{\alpha}(t)$ would in my notation be written as $\vec{\theta}(t_0,t)$. The paper discusses the fact that the angular velocity vector $\vec{\omega}(t)$ is not equal to the time derivative of $\vec{\alpha}(t)$. This means that the limit of $\frac{\vec{\theta}(t_0,t_2)-\vec{\theta}(t_0,t_1)}{t_2-t_1}$ does not equal $\vec{\omega}(t_1)$. But my question is about proving a slightly different statement, which is that the limit of $\frac{\vec{\theta}(t_1,t_2)}{t_2-t_1}$ as $t_2$ goes to $t_1$ DOES equal $\vec{\omega}(t_1)$. Note that the expressions $\frac{\vec{\theta}(t_0,t_2)-\vec{\theta}(t_0,t_1)}{t_2-t_1}$ and $\frac{\vec{\theta}(t_1,t_2)}{t_2-t_1}$ are not equal, because $\vec{\theta}(t_0,t_1)+\vec{\theta}(t_1,t_2)$ does not equal $\vec{\theta}(t_0,t_2)$ due to the non-commutativity of rotations. So none of what I’m saying contradicts or seeks to disprove the journal paper.
If you take $t_0 = t_1$ in the EDIT part of your question, with $\vec{\theta}(t_0,t_1) = \vec{\theta}(t_1,t_1) = 0$, you are in the special case $\omega = \dot{\alpha}$ from Asher Peres's paper you have mentioned, which then proves your statement according to your observation from the EDIT part, because in that case you have $$\frac{\vec{\theta}(t_0,t_2)-\vec{\theta}(t_0,t_1)}{t_2-t_1} = \frac{\vec{\theta}(t_1,t_2)}{t_2-t_1},$$ whose limit when $t_2 \to t_1$ is equal to $\dot{\alpha}(t_1) = \omega(t_1)$.
Indeed, according to Asher Peres, we have: $$ \omega = \dot{\alpha} + \frac{1 - \cos\alpha}{\alpha^2}\,(\alpha\times\dot{\alpha}) + \frac{\alpha - \sin\alpha}{\alpha^3}\,(\alpha\times(\alpha\times\dot{\alpha})),$$ which, for $\alpha(t) = \vec{\theta}(t_1,t)$ and $t = t_1$, and using $\alpha(t_1) = \vec{\theta}(t_1,t_1) = 0$ (see above), reduces to $$ \omega(t_1) = \dot{\alpha}(t_1) + 0.$$ Note that luckily $1 - \cos\alpha = \mathrm{O}(\alpha^2)$ and $\alpha - \sin\alpha = \mathrm{O}(\alpha^3)$, hence there is no problem when passing to the limit $\alpha \to 0$.
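For readers who want to check this numerically: below is a minimal sketch (my own, not from Peres's paper; the trajectory `alpha(t)` and all function names are placeholders I invented) that compares the quoted formula against the angular velocity read off from the skew part of $\dot R R^T$, with $R = \exp([\alpha]_\times)$, and confirms that it reduces to $\dot\alpha$ when $\alpha = 0$. I'm assuming the usual convention that $\dot R R^T$ gives the space-frame angular velocity.

```python
import numpy as np

def skew(u):
    """Cross-product matrix: skew(u) @ x == np.cross(u, x)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def rot(a_vec):
    """Rotation matrix exp(skew(a_vec)), via the Rodrigues formula."""
    a = np.linalg.norm(a_vec)
    if a < 1e-12:
        return np.eye(3)
    K = skew(a_vec / a)
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * K @ K

def omega_formula(a_vec, a_dot):
    """Peres-type expression for omega in terms of alpha and d(alpha)/dt."""
    a = np.linalg.norm(a_vec)
    if a < 1e-12:
        return a_dot.copy()                      # the alpha -> 0 limit: omega = alphadot
    return (a_dot
            + (1.0 - np.cos(a)) / a**2 * np.cross(a_vec, a_dot)
            + (a - np.sin(a)) / a**3 * np.cross(a_vec, np.cross(a_vec, a_dot)))

alpha = lambda t: np.array([0.3 * t, 0.5 * t**2, 0.2 * np.sin(t)])   # made-up rotation-vector history

t, h = 0.7, 1e-6
a_dot = (alpha(t + h) - alpha(t - h)) / (2 * h)                      # numerical d(alpha)/dt

# omega from the rotation matrices themselves: skew(omega) = dR/dt R^T
dR = (rot(alpha(t + h)) - rot(alpha(t - h))) / (2 * h)
W = dR @ rot(alpha(t)).T
omega_matrices = np.array([W[2, 1], W[0, 2], W[1, 0]])

print(omega_matrices)                       # from dR/dt R^T
print(omega_formula(alpha(t), a_dot))       # the quoted formula: agrees
print(omega_formula(np.zeros(3), a_dot))    # alpha = 0 case: just alphadot
```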
Correct answer by user130529 on August 26, 2021
To counter your question with another: how, specifically, would the non-commutativity of a set of operations interfere with its having a derivative?
If a set of rotations does not commute, changing their order may cause the resulting orientations of the object to differ. But in each case the object still has an angular velocity defined at each point in time. It's just that between the two cases the specific angular velocity at a given time may NOT be the same. An angular velocity still exists for each sequence of rotations, just not necessarily the same resultant one for each.
Answered by IntuitivePhysics on August 26, 2021
The two concepts do seem to be similar. The key, I think, lies in the fact that one can express an infinitesimal angle as the arc length divided by the radius $$ \delta\theta = \frac{\delta s}{r} . $$ If the radius is expressed as a vector and the arc length as another vector indicating the direction of motion during rotation, then one can express the infinitesimal rotation as a cross product $$ \vec{\delta\theta} = \frac{\vec{r}\times\vec{\delta s}}{|\vec{r}|^2} . $$ Now we just need to divide by the difference in time to make the connection $$ \frac{\vec{\delta\theta}}{\delta t} = \frac{\vec{r}\times\vec{\delta s}}{|\vec{r}|^2\,\delta t} . $$ In the appropriate limit, this becomes $$ \vec{\omega}=\frac{d\vec{\theta}}{dt} = \frac{\vec{r}}{|\vec{r}|^2}\times \frac{d\vec{s}}{dt} = \frac{\vec{r}\times\vec{v}}{|\vec{r}|^2} . $$
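As a quick sanity check of the final identity (my own illustration, not part of the answer; the numbers are arbitrary), uniform circular motion gives back the expected angular velocity from $\frac{\vec r\times\vec v}{|\vec r|^2}$:

```python
import numpy as np

# Uniform circular motion about the z axis (a made-up test case):
# r(t) traces a circle of radius R0, and (r x v)/|r|^2 should return omega_true.
omega_true = np.array([0.0, 0.0, 1.7])          # rad/s
R0 = 2.5

def r(t):
    phi = omega_true[2] * t
    return R0 * np.array([np.cos(phi), np.sin(phi), 0.0])

t, h = 0.4, 1e-6
v = (r(t + h) - r(t - h)) / (2 * h)             # numerical velocity dr/dt

print(np.cross(r(t), v) / np.dot(r(t), r(t)))   # ~ [0, 0, 1.7] = omega_true
```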
Answered by flippiefanus on August 26, 2021
Consider a point fixed to a rigid body, with location $\vec{a}$.
To prove the result, first establish that $$ {\rm d}\vec{a} = {\rm d} \vec{\theta} \times \vec{a} \tag{1}$$
This can be done with just geometry, given the small angle approximations. For example, the change in the x-direction is ${\rm d}a_x = a_z\, {\rm d} \theta_y - a_y\, {\rm d}\theta_z $.
The expression can be written as $$\vec{v} =\frac{{\rm d}\vec{a}}{{\rm d}t} = \vec{\omega} \times \vec{a} \tag{2} $$
The last part is to calculate $$\vec{a} \times \vec{v} = \vec{a} \times (\vec{\omega} \times \vec{a}) = \vec{\omega}\, ( \vec{a} \cdot \vec{a} ) - \vec{a}\, (\vec{a} \cdot \vec{\omega}) \tag{3}$$
Now take $\vec{r}$ to be the projection of the location perpendicular to the rotation axis, so that $\vec{r} \cdot \vec{\omega}=0$; then
$$\require{cancel} \vec{r} \times \vec{v} = \vec{\omega}\, | \vec{r} |^2 - \cancel{\vec{r}\,(\vec{r} \cdot \vec{\omega})} $$ $$ \boxed{\vec{\omega} = \frac{\vec{r} \times \vec{v}}{| \vec{r} |^2} } \tag{4}$$
Edit 1
A more rigorous treatment involves creating a 3×3 rotation matrix and applying the small angle approximation to it. Use $\vec{\theta} = (\theta_x,\theta_y,\theta_z)$ as successive rotations
$$\mathtt{R}=\mathtt{R}_x(\theta_x)\,\mathtt{R}_y(\theta_y)\,\mathtt{R}_z(\theta_z) = \begin{vmatrix} \cos\theta_y \cos\theta_z & -\cos\theta_y \sin\theta_z & \sin\theta_y \\ \cos\theta_x \sin\theta_z + \sin\theta_x \sin\theta_y \cos\theta_z & \cos\theta_x \cos\theta_z - \sin\theta_x\sin\theta_y\sin\theta_z & -\sin\theta_x \cos\theta_y \\ \sin\theta_x \sin\theta_z - \cos\theta_x \sin\theta_y \cos\theta_z & \sin\theta_x \cos\theta_z + \cos\theta_x\sin\theta_y\sin\theta_z & \cos\theta_x \cos\theta_y \end{vmatrix} $$
All of this is now applied to a small angle to form ${\rm d}\vec{a} =( {\rm d}\mathtt{R})\vec{a} -\vec{a}$, using $\sin(\square)=\square$ and $\cos(\square)=1$
$${\rm d}\mathtt{R}= \mathtt{R}_x({\rm d}\theta_x)\,\mathtt{R}_y({\rm d}\theta_y)\,\mathtt{R}_z({\rm d}\theta_z) = \begin{vmatrix} 1 &-{\rm d}\theta_z & {\rm d}\theta_y \\ {\rm d}\theta_z + {\rm d}\theta_x {\rm d}\theta_y & 1 - {\rm d}\theta_x {\rm d} \theta_y {\rm d} \theta_z & -{\rm d}\theta_x \\ -{\rm d}\theta_y+{\rm d}\theta_x {\rm d}\theta_z & {\rm d}\theta_x + {\rm d}\theta_y {\rm d}\theta_z & 1\end{vmatrix} =\begin{vmatrix} 1 &-{\rm d}\theta_z & {\rm d}\theta_y \\ {\rm d}\theta_z & 1 & -{\rm d}\theta_x \\ -{\rm d}\theta_y & {\rm d}\theta_x & 1\end{vmatrix} \tag{6} $$
So with the small angle approximation $${\rm d}\vec{a} = ( {\rm d} \mathtt{R})\vec{a} -\vec{a} = \left({\rm d}\mathtt{R} - \mathtt{1}\right) \vec{a} ={\rm d} \vec{\theta} \times \vec{a}$$
$$ [{\rm d} \vec{\theta} \times] = \begin{vmatrix} 0 &-{\rm d}\theta_z & {\rm d}\theta_y \\ {\rm d}\theta_z & 0 & -{\rm d}\theta_x \\ -{\rm d}\theta_y & {\rm d}\theta_x & 0\end{vmatrix}$$
$$ \frac{{\rm d} \vec{\theta} }{{\rm d}t} \times = \begin{vmatrix} 0 & -\omega_z & \omega_y \\ \omega_z &0&-\omega_x \\ -\omega_y&\omega_x&0\end{vmatrix}$$
The last 3×3 matrix is called the vector cross product operator matrix. It is skew symmetric and it is used widely in computer graphics and in dynamics.
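Here is a small numerical illustration of the two facts used above (my own sketch; the helper names are mine, not the answer's): the cross product operator matrix reproduces $\vec\omega\times\vec a$, and the product of three small rotations agrees with $I + [{\rm d}\vec\theta\,\times]$ up to second-order terms:

```python
import numpy as np

def cross_matrix(w):
    """The cross product operator matrix [w x]: cross_matrix(w) @ a == np.cross(w, a)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([0.3, -1.2, 0.7])
a = np.array([1.0, 2.0, -0.5])
print(np.allclose(cross_matrix(w) @ a, np.cross(w, a)))          # True

# Small-angle check: Rx(dtx) Ry(dty) Rz(dtz) ~ I + [dtheta x], with only second-order error.
def Rx(t): return np.array([[1, 0, 0], [0, np.cos(t), -np.sin(t)], [0, np.sin(t), np.cos(t)]])
def Ry(t): return np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])
def Rz(t): return np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])

dtheta = np.array([1e-4, -2e-4, 3e-4])
R_small = Rx(dtheta[0]) @ Ry(dtheta[1]) @ Rz(dtheta[2])
print(np.max(np.abs(R_small - (np.eye(3) + cross_matrix(dtheta)))))  # ~ 1e-8, i.e. O(dtheta^2)
```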
Answered by John Alexiou on August 26, 2021
$\newcommand{\rr}{\mathbb R}\newcommand{\abs}[1]{\left|#1 \right|}$I'll try to be as rigorous as possible, but before diving into the problem I want to explain my notation. I've dropped vector arrows if it is clear from the expression which quantity is a vector and which is a scalar. Furthermore, sometimes I've dropped the time dependence, but one can deduce from the context which quantity is time dependent. Note that differentiability is a local condition, that is, it doesn't care about what the function does far away but only about what it does around the point where you are taking the derivative. This is apparent from the definition of a differentiable function. Let $f: \rr \to \rr^n$ be a function. We say that the function $f$ is differentiable at a (fixed) point $x_0 \in \rr$ if and only if the following limit exists:
$$ \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x-x_0} = \lim_{h \to 0} \frac{f(x_0+h) - f(x_0)}{h}$$
if we substitute $x-x_0 =h$. This is probably the usual definition of a derivative that you know. However, for our purposes I want to use another, equivalent definition, which might look weird at first glance. The function $f$ is differentiable at $x_0 \in \rr$ if and only if there exist $J \in \rr^n$ and a function $H: \rr \to \rr^n$, which is continuous at $x_0$ with the value $H(x_0) =0 \in \rr^n$, such that the following holds for all $x\in \rr$:
$$ f(x) - f(x_0) - J (x-x_0) = H(x) \cdot \abs{x-x_0}=:o(x-x_0) $$
There is an intuitive way to think about this definition, which basically tells you that the tangent line with "slope" $J$ that you attach to the function should have an error that vanishes faster than linearly. If you think about it in terms of Taylor series, the definition becomes clearer. Note again that differentiability is a local condition. You can see this clearly from the second definition. There are absolutely no conditions on how the function $H$ should behave (except around $x_0$, where we require it to be continuous, since continuity is also a local condition; remember the $\epsilon,\delta$ definition of continuity). Thus, far away from $x_0$, we just define $H$ to be:
$$H(x) := \frac{f(x) - f(x_0) - J (x-x_0) }{\abs{x-x_0}}$$
So with this intro let's return to our problem. We can only prove that the angular velocity vector is given by $\omega = \dot \theta (t_0)$ locally around $t_0$. Note that we will choose $t_0 \in \rr$ to be a fixed but arbitrary point. Thus you can show that $\dot \theta (t_0) = \omega(t_0)$ for every $t_0\in \rr$, using only local information around $t_0$. Of course I am assuming here that $\theta$ is a differentiable function over all of $\rr$, which would be the case if you consider something "physical", since you can make the slope of $\theta$ arbitrarily large but in practice you cannot make it non-differentiable. I assume that you know that the Lie algebra of $\rm SO(3)$ is $\mathfrak{so}(3)$, which consists of all $3 \times 3$ skew-symmetric matrices. I'll choose $t_0 = 0$ without loss of generality, since you can translate the time axis and redefine your functions. Note that $r(t) = R(t)\, r(0)$, where of course my origin is the fixed point and $R : \rr \to {\rm SO(3)}$ is a differentiable function with the property that $R(0) = I$. Thus the derivative of $r$ at zero is given by:
$$ r(t) - r(0) - v \cdot t = o(t) $$
for some $v \in \rr^3$, which is the velocity vector. We want to figure out what $v$ is in terms of $r$ and $R$. Note that $ R_{ij}(t) = I_{ij} -t \cdot \epsilon_{ijk} \, \omega_k + o(t)$ for some $\omega_k \in \rr$, where summation over $k$ is implied. You might think that I'm doing something shady by calling these numbers $\omega_k$. Note that we are doing maths at this moment, thus if you want you can call them $w_k$. I'll explain later why physically $\omega_k$ is the angular velocity, but for now we have:
$$ -t \cdot \epsilon_{ijk} \, \omega_k\, r_j(0) - v_i \cdot t = o(t) $$
At this point I think it is quite obvious what you should choose as $v_i$: namely $v_i = -\epsilon_{ijk}\, r_j(0)\,\omega_k $, which you can also write as $\vec v(0) = \vec \omega \times \vec r(0)$. Assuming for the moment that $\omega$ is the angular velocity, let me calculate your identity. We just take the cross product with $\vec r$ from the left:
$$\vec r \times \vec v = \vec r \times ( \vec \omega \times \vec r) = r^2 \vec \omega -(\vec r \cdot \vec \omega)\, \vec r = r^2 \vec \omega$$
where I used the fact that $\omega \perp r$.
Our main problem now is to show that the numbers $\omega_k$ are in fact the angular velocity. Sadly there is no rigorous proof of this fact, because now we are leaving the realm of maths and entering the realm of physics. I'll try to convince you of this fact by giving an example. You know that the position vector of a point mass rotating around the $z$ axis with angular velocity $\omega$ is given by:
$$\vec r = \begin{pmatrix} r \cos \omega t \\ -r \sin \omega t \\ 0 \end{pmatrix} = R(t) \begin{pmatrix} r \\ 0 \\ 0 \end{pmatrix} = R(t)\, \vec r(0)$$
where
$$R(t) = \begin{pmatrix} \cos \omega t & \sin \omega t & 0 \\ -\sin \omega t & \cos \omega t & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
is the usual rotation matrix. By our definition the angular velocity vector has to be exactly the vector of length $\omega$ in the $-z$ direction, because I have chosen the wrong si(g)n (pun intended!). Note that we have:
$$R(t) = \begin{pmatrix} \cos 0 & \sin 0 & 0 \\ -\sin 0 & \cos 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} + \omega t \begin{pmatrix} -\sin 0 & \cos 0 & 0 \\ -\cos 0 & -\sin 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} +o(t) $$
comparing this with the equation $R_{ij}(t) = I_{ij} -t \cdot \epsilon_{ijk} \, \omega_k + o(t)$, we see that:
$$ \begin{pmatrix} 0 & \omega t & 0 \\ -\omega t & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -\omega_3 t & \omega_2 t \\ \omega_3 t & 0 & -\omega_1 t \\ -\omega_2 t & \omega_1 t & 0\end{pmatrix} $$
and now you see $\omega_1 = \omega_2 = 0$ and $\omega_3 = -\omega$, as promised. Now obviously this is not a proof that this holds in general, but if you want you can also try this for rotations about the $x$ and $y$ axes, and you get any combination thereof if you remember the vaguely written identity (see Wikipedia for the Baker–Campbell–Hausdorff formula): $$\exp[\mathfrak{so}(3) + \mathfrak{so}(3)] ={\rm SO(3) \cdot SO(3) }$$
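The sign bookkeeping in this example is easy to verify numerically. The sketch below (mine, not the answer's; the value of $\omega$ is an arbitrary placeholder) extracts $\omega_k$ from $\dot R_{ij}(0)$ via $\omega_m = -\tfrac{1}{2}\epsilon_{ijm}\dot R_{ij}(0)$, which follows from $R_{ij}(t) = I_{ij} - t\,\epsilon_{ijk}\,\omega_k + o(t)$, and indeed returns $\omega_3 = -\omega$:

```python
import numpy as np

w = 2.0                       # the scalar omega of the example (placeholder value, my choice)

def R(t):
    # the answer's matrix: rotation about z with the deliberately "wrong" sign
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, s, 0.0],
                     [-s, c, 0.0],
                     [0.0, 0.0, 1.0]])

# R_ij(t) = I_ij - t eps_ijk omega_k + o(t)  implies  omega_m = -1/2 eps_ijm dR_ij/dt|_{t=0}
h = 1e-6
dR0 = (R(h) - R(-h)) / (2 * h)

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

print(-0.5 * np.einsum('ijm,ij->m', eps, dR0))   # ~ [0, 0, -2], i.e. omega_3 = -w
```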
I had to gloss over some aspects, like the Lie algebra/group correspondence and the locality of continuity of a function, because I assumed that you are familiar with these ideas; if not, feel free to say so in the comments and I'll edit my answer accordingly.
Answered by Gonenc on August 26, 2021
There are some differences between the notation of the article and that used in the original question. This answer is given in the notation of the article.
Key point: The article states that $\dot{\vec{\alpha}}\neq\vec{\omega}$, with $\vec\alpha$ corresponding to the angle of rotation.
Now we can equate $\dot{\vec{r}}=\vec{\omega}\times \vec{r} $ with $\dot{\vec{r}}={\Omega} \vec{r}$, where $\Omega$ is an antisymmetric matrix defined by $\Omega_{ik}=\epsilon_{ijk}\omega_j$. THESE are correct regarding positional changes of the object in terms of its angular velocity.
(I suspect what you are referring to as "$\lim \frac{\Theta(t_1,t_2)}{t_2-t_1}$..." is ACTUALLY $\Omega$ in the article, one reason I have migrated to the article's notation, so as not to confuse this object with $\dot{\alpha}$ or $\dot{\Theta}$ in your notation.)
Your question as stated then reads to me: "How do we prove THIS OBJECT (i.e. $\Omega$) exists and that it equates to the operation $\vec\omega\times$?"
First of all, it has been defined in terms of the antisymmetric tensor $\epsilon_{ijk}$ and the angular velocity $\omega$; $\epsilon_{ijk}\omega_j r_k=\vec{\omega}\times\vec{r}$ by definition.
But my intuition is that you are interested in how to arrive at this object starting from $\alpha$ instead. I believe this is demonstrated sufficiently in the first column of the article.
To summarize: letting the orthogonal matrix $S=e^{\vec{\alpha}\times}$, so that $\vec{r}=S\vec{r}_0$, it follows that
$\dot{\vec{r}}=\dot{S}S^{-1}\vec{r}$. Therefore $\Omega=\dot{S}S^{-1}$. If $\alpha$ exists, $S$ exists and $\Omega$ may be expressed in terms of it. It works from both angles. (Also, the derivatives of $S$ given in the article are calculable.)
To put it another way, equations of the same form have the same solution. On one hand we have $\dot{\vec{r}}=\dot{S}S^{-1}\vec{r}$; on the other we have $\dot{\vec{r}}=\vec{\omega}\times \vec{r} $. Thereby it is trivial to make the interpretive association $\Omega=\dot{S}S^{-1}$.
EDIT: In more explicit fashion
1) We have two items to consider initially.
A) The first is $\dot{\vec{r}}=\vec{\omega}\times \vec{r}$, the time derivative of the position vector $\vec{r}$ as a function of itself. Let $\Omega$ be the matrix object that fulfills this relation. $\Omega$ BY DEFINITION is $\Omega_{ik}\equiv \epsilon_{ijk}\omega_j$, where $\omega_j$ are the components of the angular velocity.
B) The second is $\dot{\vec{r}}= f(\vec{\alpha})$, the time derivative of the position vector $\vec{r}$ as a function of the rotation angle vector $\vec\alpha$.
Logical connection: A=B
Since $\dot{\vec{\alpha}}\neq\vec\omega$, as the article demonstrates, $\dot{\vec{r}}(\alpha)\neq\Omega\vec{r}$. QUESTION: what is the relation between these items, and consequently what is $\dot{\alpha}$ and/or $\omega(\dot{\alpha})$? HINT: $\omega$ is NOT $\dot{\vec{\alpha}}$. Let's see what we find instead.
BY DEFINITION of the derivative, $\dot{\vec{\alpha}}=\lim_{t_2\to t_1}\frac{\alpha(t_2)-\alpha(t_1)}{t_2-t_1}$ ...
Letting $S_{lm}\equiv e^{\epsilon_{lmn}\alpha_n}=I_{lm}+\epsilon_{lmn}\alpha_n+(\epsilon_{lmn}\alpha_n)^2+...$, it follows that
$\dot{\vec{r}}(\alpha)=\dot{S}S^{-1}\vec{r}$ (see article)
Therefore, $\epsilon_{ijk}\omega_j=\left(\frac{d}{dt}S_{ia}\right)S^{-1}_{ak}$ ... provides a proper relation between $\omega_j$ and $\dot{\alpha}$.
One may calculate the terms of $\dot{S}_{ia}$ if they wish. This is the only object involving time derivatives of $\alpha$ at this stage. You may write those in terms of the definition of the derivative if you desire. I will use a shorthand.
$\frac{d}{dt}S_{ia}=\epsilon_{ima}\dot{\alpha}_m+\text{h.c.}$
$S^{-1}\rightarrow I+\text{h.c.}$
So we obtain, after removing the antisymmetric tensor from each side,
$\omega_j=\dot{\alpha}_j+\text{h.c.}$
QED
I feel the rationale for the correct relations is sufficiently demonstrated at this point, but I may include a couple of further edits if I'm inclined at some point.
Also, please lay down what you explicitly define as $\theta(t_1,t_2)$ in terms of the notation of the article. If you can do that, your question is nearly answered.
EDIT: OP has since posted his definition as "Note that what this journal paper calls $\vec{\alpha}(t)$ would in my notation be written as $\vec{\theta}(t_0,t)$. The paper discusses the fact that the angular velocity $\vec{\omega}(t)$ is not equal to the time derivative of $\vec{\alpha}(t)$. This means that the limit of $\frac{\vec{\theta}(t_0,t_2)-\vec{\theta}(t_0,t_1)}{t_2-t_1}$ does not equal..." See the edit to the original question for the rest.
...I thought there was something peculiar at hand :)! Actually (I believe, tell me if you disagree), correctly speaking $\vec{\alpha}(t,t_0)=\vec{\theta}(t,t_0)$. Can we agree that the meaning of $t_0$ is that an initial condition has been applied? In general, $\vec{\alpha}(t,t_0)\neq \vec{\alpha}(t)$. So your correlation of your notation with that of the article is not quite right!
Properly speaking, the limit of $\frac{\vec{\theta}(t_0,t_2)-\vec{\theta}(t_0,t_1)}{t_2-t_1}$ does not equal $\vec{\omega}(t_1)$... No, it does not. But the limit of $\frac{\vec{\theta}(t_2)-\vec{\theta}(t_1)}{t_2-t_1}$ DOES... indeed.
And $\vec{\theta}(t_0,t_2)-\vec{\theta}(t_0,t_1) \neq \vec{\theta}(t_2,t_1)$, but $\vec{\theta}(t_2)-\vec{\theta}(t_1)$ IS equal to it.
Answered by IntuitivePhysics on August 26, 2021
(NOTE: Posted as a new answer instead of an edit to the previous answer because we will look at this scenario slightly differently.)
One important fact is that finite rotations do not commute in general; however, infinitesimal rotations always commute. Let's revisit this point later.
But first let's invoke some proper statements regarding theorems.
Euler's Theorem (formally stated): For any general proper, orthogonal operator $\mathcal{R}$, there exists a fixed axis $\hat{\bf{n}}$ and an angle $\Phi$ in the range $0\leq\Phi\leq\pi$ such that $\mathcal{R}[{\bf\Phi\hat{n}}]=\mathcal{R}$.
(Source: Analytical Mechanics for Relativity and Quantum Mechanics, Oliver Davis Johns, Oxford University Press, 2005)
(General Theorem: Angular Velocity of Parametrized Operators) Any "time-varying" rotated vector (fixed axis or not) may be written as ${\bf V}(t)=\mathcal{R}[\Phi(t)\hat{n}(t)]\,{\bf V}$ (Source: Analytical Mechanics for Relativity and Quantum Mechanics, Oliver Davis Johns, Oxford University Press, 2005)
Consequence of the second theorem: If the operation of interest relates to the rotational velocity as ${d\over dt}\mathcal{R}=\Omega_{ik}\equiv\epsilon_{ijk}\,\omega_j$ and the vector of interest is the position vector ${\bf r}$, then the derivative of these items is ${d\over dt}{\bf V}(t)=\vec\omega\times{\bf V}$
then the explicit form of the rotational velocity in terms of a general rotation angle that satisfies these criteria (also see the derivation in the article) turns out to be $$\omega(t)=\dot{\Phi}\,\hat{n}+\sin(\Phi)\,{d\hat{n}\over dt}+(1-\cos(\Phi))\,\hat{n}\times{d\hat{n}\over dt}$$
But... for a fixed axis (as in the restrictions of Euler's theorem), ${d\over{dt}}\hat{n}$ turns out to be zero. Consequently, for a fixed axis of rotation
$$\omega(t)\big|_{\text{fixed axis}}=\dot{\Phi}\,\hat{n}$$
QED
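A quick numerical check of the fixed-axis statement (my own sketch; the axis and angle history are made up): for $R(t)=\exp(\Phi(t)\,[\hat n]_\times)$ with $\hat n$ constant, the angular velocity read off from $\dot R R^T$ is exactly $\dot\Phi(t)\,\hat n$, even when $\Phi(t)$ is not small:

```python
import numpy as np

def skew(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def rot(axis, angle):
    """Rotation by `angle` about the fixed unit vector `axis` (Rodrigues form of exp)."""
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

n = np.array([1.0, 2.0, 2.0]) / 3.0       # a fixed unit axis
Phi = lambda t: 0.8 * t + 0.3 * t**2      # a finite, growing angle (not small)

t, h = 1.3, 1e-6
dR = (rot(n, Phi(t + h)) - rot(n, Phi(t - h))) / (2 * h)
W = dR @ rot(n, Phi(t)).T                 # skew(omega) = dR/dt R^T
omega = np.array([W[2, 1], W[0, 2], W[1, 0]])

Phi_dot = 0.8 + 0.6 * t
print(omega)                              # matches Phi_dot * n despite the finite angle
print(Phi_dot * n)
```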
Now let us return to the first point, which has not yet been demonstrated...
If $\omega(t)=\dot{\Phi}\,\hat{n}$ is valid for any fixed-axis rotation, a simple deduction is that it should apply to a sequence of fixed-axis rotations. This is somewhat of an intuitive conundrum, however; one reason the truth of the initial statement is important.
Let us consider a sequence of two fixed-axis rotations a and b.
Allegedly $\vec{\omega}(t_a+t_b)=\omega(t_a)\,\hat{n}_a+\omega(t_b)\,\hat{n}_b$, and the operation should commute if the rotations are infinitesimal ... still EDITING.
Answered by IntuitivePhysics on August 26, 2021
If you take for the rigid body the Rodrigues rotation matrix, which is:
$$S=I_3 +\tilde{d}\,\sin(\varphi)+\tilde{d}\,\tilde{d}\,(1-\cos(\varphi))\tag 1$$
where $\vec{d}$ is the $3\times 1$ instantaneous rotation axis with $\vec{d}\cdot\vec{d}=1$, $~\varphi$ is the rotation angle about this axis, $~I_3$ is the $3\times 3$ identity matrix, and the tilde operator is defined by $\vec{a}\times \vec{b}=\tilde{a}\,\vec{b}$,
then from equation (1) you obtain $$\tilde{\omega}=\dot{S}\,S^T\quad\Rightarrow\quad\vec{\omega}_1=\vec{d}\,\dot{\varphi}\tag 2$$
thus equation (2) must be equal to:
$$\vec\omega_2=\frac{\vec{r}\times \vec{v} }{|\vec{r}|^2}\tag 3$$
with $ds=|\vec r|\,d\varphi~,~\frac{ds}{dt}=|\vec r|\,\frac{d\varphi}{dt}~\Rightarrow ~v=|\vec r|\,\dot{\varphi}$ and
$$\vec{v}=|\vec r|\,\dot{\varphi}\,\vec{e}_v\tag 4$$ with $\vec{e}_v\cdot\vec{e}_v=1$
thus equation (3) is now:
$$\vec\omega_2=\frac{\vec{r}\times (|\vec r|\,\dot{\varphi}\,\vec{e}_v) }{|\vec{r}|^2}= \frac{\vec{r}\times \vec{e}_v}{|\vec{r}|}\,\dot{\varphi}\tag 5$$
but $\vec{d}_2=\frac{\vec{r}\times \vec{e}_v}{|\vec{r}|}~$ is the rotation axis $\vec{d}$, thus
$$\vec\omega_2=\vec\omega_1~\surd$$
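A short numerical sketch of equations (1) and (2) (my own check, not part of the answer; the angle history $\varphi(t)$ is a placeholder): the Rodrigues matrix of (1) really does rotate vectors about $\vec d$, and the axial vector of $\dot S\,S^T$ comes out as $\vec d\,\dot\varphi$:

```python
import numpy as np

def tilde(u):
    """The tilde operator of the answer: tilde(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def S(d, phi):
    """Equation (1): S = I3 + tilde(d) sin(phi) + tilde(d) tilde(d) (1 - cos(phi))."""
    D = tilde(d)
    return np.eye(3) + np.sin(phi) * D + (1.0 - np.cos(phi)) * D @ D

d = np.array([0.0, 0.0, 1.0])          # unit rotation axis
phi = lambda t: 0.5 * t**2             # made-up angle history, dphi/dt = t

# sanity check of (1): a quarter turn about z sends x-hat to y-hat
print(S(d, np.pi / 2) @ np.array([1.0, 0.0, 0.0]))      # ~ [0, 1, 0]

# equation (2): the axial vector of dS/dt S^T equals d * dphi/dt
t, h = 0.9, 1e-6
W = ((S(d, phi(t + h)) - S(d, phi(t - h))) / (2 * h)) @ S(d, phi(t)).T
print(np.array([W[2, 1], W[0, 2], W[1, 0]]), d * t)     # both ~ [0, 0, 0.9]
```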
Answered by Eli on August 26, 2021