Physics Asked by Julian Kintobor on February 17, 2021
Electromagnetic waves are frequently described as "self-propagating", implying a mode of propagation distinct from that of electrostatic fields; but as I understand things, both have strength proportional to the inverse square of the distance from their source. Let me lay out what someone ignorant of wave propagation (and ignoring the magnetic field) would expect to see from a moving charge:
Edit: Rephrased the below because I forgot that I was dealing with inverses.
In both situations (2) and (3) the electric field where I stand is the sum of a constant and a periodic function (in case (3), two periodic functions along perpendicular axes), purely as a result of the oscillation of the source charge; no magnetic or special "propagation" effects are needed. Obviously I have neglected the finite speed of light in these calculations, which would introduce a tiny bit of distortion.
The periodic component is something like the multiplicative inverse of a squared sine wave, shifted so as to stay finite; some fancy trig likely makes it sinusoidal, since it's pretty dang close. Here are graphs of, respectively, the transverse and longitudinal components of (3), using $r=1$, $P=1$, and $A=0.1$:
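For concreteness, if I write the longitudinal component of (3) as just the instantaneous inverse-square field of the displaced charge (my naive picture, with oscillation amplitude $A$, period $P$, and mean distance $r$), I get something like
$$E(t) \propto \frac{1}{\left(r - A\sin(2\pi t/P)\right)^2} \approx \frac{1}{r^2}\left(1 + \frac{2A}{r}\sin\frac{2\pi t}{P}\right) \quad\text{for } A \ll r,$$
so the oscillating part of this "inverse wave" has amplitude of order $2A/r^3$.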
Is it the case that the electromagnetic wave produced by Maxwell’s equations in (2) and (3) will lose amplitude at precisely the same rate as this "inverse wave" that derives trivially from the inverse square law and the charge’s motion? How, then, do we consider the wave "self-propagating" if it has no special powers to resist decay and acts just like the rest of the electric field?
A related point I would like elaborated: apparently the Maxwellian wave will have the same frequency as the inverse wave, so how/why do their phases and amplitudes differ? And where does the energy for this extra wave come from?
The description of EM waves as self-propagating is misleading. There is no causal connection between changing/curved electric and curved/changing magnetic fields: Maxwell's equations simply state that whenever you detect a changing electric field in empty space, there is also a curved magnetic field at the same spacetime point, and vice versa; both have common sources, namely charges and currents.
This fact is nicely summarized by Jefimenko's equations, which express the EM fields (and potentials) as functions of the charges and currents at retarded times, with the fields and potentials completely independent of each other.
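For reference, Jefimenko's equation for the electric field in SI units reads (with $t_r = t - |\mathbf r - \mathbf r'|/c$ the retarded time and $\hat{\mathbf n} = (\mathbf r - \mathbf r')/|\mathbf r - \mathbf r'|$):
$$\mathbf E(\mathbf r, t) = \frac{1}{4\pi\epsilon_0}\int\left[\frac{\rho(\mathbf r', t_r)}{|\mathbf r - \mathbf r'|^2}\,\hat{\mathbf n} + \frac{\dot\rho(\mathbf r', t_r)}{c\,|\mathbf r - \mathbf r'|}\,\hat{\mathbf n} - \frac{\dot{\mathbf J}(\mathbf r', t_r)}{c^2\,|\mathbf r - \mathbf r'|}\right]\mathrm d^3 r',$$
with an analogous expression for $\mathbf B$ in terms of $\mathbf J$ and $\dot{\mathbf J}$. Only charges and currents appear on the right-hand side, and the last term, which falls off as $1/|\mathbf r - \mathbf r'|$, is the radiation part.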
Correct answer by Ruslan on February 17, 2021
The inverse $r^2$ intensity you are talking about is just geometry. Whether it is light intensity, gravitational field intensity, or electric field intensity, the amount of the field intercepted by a detector falls off as inverse $r^2$. The intensity summed over the entire sphere of radius $r$ equals the output of the source, unless there is something between the source and the detector to attenuate it. The inverse $r^2$ intensity has nothing to do with the properties of light, gravitational force, or electrical force.
In the case of light, this is easy to see because the power collected by a detector is directly proportional to the detector's area. Integrating over the entire $4\pi r^2$ spherical area, you get the same constant for all $r$. The inverse $r^2$ intensity fall-off is strictly due to the geometric spreading of the beam and has nothing to do with the wave nature of light.
In the case of gravitational and electric fields, the geometric nature is easily seen with Gauss's law. In the case of the electric field:
$E A = q/\epsilon_0$
where for a spherically symmetric charge distribution, $A$ is the same $4\pi r^2$ area that light spreads its energy into.
Gauss's law for gravitation has the same form, with $F/m$ replacing $E$ and $4\pi G M$ replacing $q/\epsilon_0$.
In all three cases the field intensity falls off as inverse $r^2$, because the field is spreading over an area that increases as $r^2$.
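Spelling that out for a point charge: putting $A = 4\pi r^2$ into Gauss's law gives
$$E = \frac{q}{4\pi\epsilon_0 r^2},$$
and the total flux $E \cdot 4\pi r^2 = q/\epsilon_0$ comes out the same for every $r$, just as the total light power crossing any sphere around a steady source is the same.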
If you were able to focus a beam of light so that it never spread out (a laser comes pretty close), the intensity would stay the same with distance.
Answered by Bill Watts on February 17, 2021
Wave intensity falls off as $r^{-2}$ because of energy conservation. The field of a point charge falls off as $r^{-2}$ because it is the gradient of the potential, which falls off as $r^{-1}$ as described by Coulomb's law, not because of a conservation law.
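In formulas (filling in the standard expressions this answer leaves implicit): the static field follows from the potential,
$$\phi = \frac{q}{4\pi\epsilon_0 r}, \qquad \mathbf E = -\nabla\phi = \frac{q}{4\pi\epsilon_0 r^2}\,\hat{\mathbf r},$$
whereas for a wave it is the requirement that the power $\oint \mathbf S \cdot \mathrm d\mathbf A$ through every sphere be the same that forces the intensity to scale as $r^{-2}$, and hence the radiated field as $r^{-1}$.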
Answered by my2cts on February 17, 2021
Electromagnetic waves are frequently described as "self-propagating", implying a mode of propagation distinct from that of electrostatic fields; but as I understand things, both have strength proportional to the inverse square of the distance from their source.
You seem to have a misunderstanding. EM radiation fields fall off as $r^{-1}$, not $r^{-2}$. The energy density is proportional to the square of the fields, so for radiation it is the energy density, not the field, that falls off as $r^{-2}$. In contrast, the energy density of a Coulombic field falls off as $r^{-4}$. More importantly, for radiated fields the energy flux falls off as $r^{-2}$, while for electrostatic fields it is 0.
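As a concrete comparison (using the textbook far field of an oscillating dipole as the radiation example): in the far zone $E \sim 1/r$, the energy density $u \propto E^2 \sim 1/r^2$, and the Poynting flux $S \sim 1/r^2$, so the total radiated power $\oint \mathbf S \cdot \mathrm d\mathbf A$ is independent of $r$. For a static Coulomb field, $E \propto 1/r^2$, $u \propto 1/r^4$, and the Poynting flux through any closed surface is zero, so no energy is carried away.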
Answered by Dale on February 17, 2021