Asked by DinoManPhyLab on August 2, 2020
I have studied physics up to 12th grade and I noticed that whenever new equations are introduced for certain entities, such as a simple harmonic wave, we never prove that it’s continuous everywhere or differentiable everywhere before using these properties.
For instance we commonly use the property that $v^2 \cdot \frac{\partial^2 f}{\partial x^2} = \frac{\partial^2 f}{\partial t^2}$ must hold for the equation to be a wave, and personally I’ve used this condition dozens of times to check if a function is a wave or not, but I’ve never been asked to check whether the function I’m analyzing is itself defined everywhere and has a defined double derivative everywhere.
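For example, the kind of check I mean can be written out symbolically like this (just a sketch, assuming SymPy; the travelling sine wave is only one example):

```python
# Sketch of the usual check (assuming SymPy): does f(x, t) = A*sin(k*x - w*t)
# satisfy v^2 * f_xx = f_tt with v = w/k?
import sympy as sp

x, t, A, k, w = sp.symbols('x t A k omega', positive=True)
f = A * sp.sin(k * x - w * t)
v = w / k

lhs = v**2 * sp.diff(f, x, 2)
rhs = sp.diff(f, t, 2)
print(sp.simplify(lhs - rhs))    # prints 0, so the condition holds
```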
Is there a reason for this? There are many more examples, but this is the first one that comes to mind.
A lot of physicists would tell you that it doesn't matter if solutions to physical equations are smooth, as long as you can get meaningful predictions from them. Such a view is overly simplistic. There are circumstances where non-smooth features crop up in solutions to physical equations and are themselves very meaningful. The reason why high school physics classes don't worry about such matters is simply that they are typically beyond the scope of what can be taught in such a class.
A classic example of a meaningful discontinuity in a physical system is a shock wave. In certain (nonlinear) wave equations, you can have a solution that starts out smooth but eventually becomes discontinuous in finite time. These discontinuities tell you something useful: they can show up in real life as rogue waves in fluid dynamics or traffic jams in models of traffic. An example from Burgers' equation is shown below.
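A minimal numerical sketch of this steepening (assuming NumPy; the scheme, grid, and initial profile are arbitrary choices for the illustration):

```python
# Rough sketch (assumed scheme and parameters): the inviscid Burgers' equation
# u_t + u*u_x = 0, solved with a first-order Godunov method on a periodic grid.
# The initial condition is infinitely smooth, yet the profile steepens into a
# (grid-resolved) jump by t ~ 2.
import numpy as np

nx, L = 400, 2 * np.pi
x = np.linspace(0.0, L, nx, endpoint=False)
dx = L / nx
u = 1.0 + 0.5 * np.sin(x)                    # smooth initial data, max slope 0.5
t, t_end = 0.0, 3.0

while t < t_end:
    dt = 0.4 * dx / np.max(np.abs(u))        # CFL-limited time step
    ul, ur = u, np.roll(u, -1)               # left/right states at each interface
    # Godunov flux for the convex flux f(u) = u^2 / 2
    flux = np.where(ul > ur,
                    np.maximum(ul**2, ur**2) / 2,            # shock
                    np.where(ul * ur < 0, 0.0,               # transonic rarefaction
                             np.minimum(ul**2, ur**2) / 2))  # ordinary rarefaction
    u = u - dt / dx * (flux - np.roll(flux, 1))
    t += dt

# The slope has blown up from 0.5 to something limited only by the grid spacing:
print("max |du/dx| at t = 3:", np.max(np.abs(np.gradient(u, dx))))
```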
Discontinuities can form in many other systems, especially condensed matter systems, and indicate the presence of defects. Examples include vortices in superfluids (shown below) and dislocations in crystals. The ways that these defects behave often play a dominant role in the overall behavior (i.e. thermodynamics) of the material.
One of the major reasons why it is useful to examine what happens when equations of physics break down is that these are precisely the circumstances where we can learn about new physics. For example, the behavior near discontinuities in nonlinear wave equations can be either diffusive (where the discontinuity gets smeared out in time) or dispersive (where the discontinuity radiates away as smaller waves), and knowing which it is tells you something about the microscopic structure of the fluid. For this reason, identifying where physical equations fail to be well-posed or self-consistent is really important. There is a famous open problem in mathematics known as Navier-Stokes existence and smoothness, whose importance can be thought of in this way. If the Navier-Stokes equations turn out to generate discontinuities in finite time, it could have profound implications for understanding turbulent phenomena.
One physical theory where mathematical rigor is especially far from established is quantum field theory. QFT famously has lots of calculations that spit out $\infty$ if done naively. The reasons for this are not fully understood, but we think it has something to do with the fact that there are more fundamental, as yet unknown theories that kick in at very small length scales. Another historical problem related to mathematical nonsense in QFT has to do with the Higgs boson: in the absence of a Higgs boson, certain calculations in QFT give probabilities which are greater than 1, which is of course impossible. The energy scale at which these calculations started to break down not only told us that there was some physics we didn't understand yet--namely, there existed a new particle to be discovered--but also told us roughly what the particle's mass had to be.
So understanding the well-posedness of mathematical theories of physics is important. Why then don't people worry about this in high school physics? The answer is simply that our current theories of physics have been so well refined that our models for most everyday phenomena are totally consistent and produce no discontinuities. And the reason they never ask you to check that your solutions are sensible is just that they don't want you to get bored, because the answer is always yes.
In fact, there are some very general results in the mathematical fields of dynamical systems and partial differential equations which guarantee that most physics equations have unique, smooth solutions. Once you know some of these theorems, you don't even need to check that most solutions are smooth--you are guaranteed this by the structure of the equations themselves. (For example, the Picard–Lindelöf theorem accomplishes this for most problems in Newtonian particle dynamics.)
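To illustrate the flavour of such a result, here is a rough sketch of Picard iteration for the toy problem $x'(t) = x(t)$, $x(0) = 1$ (assuming NumPy; the equation and the number of iterations are arbitrary choices): the iterates converge quickly to the unique smooth solution $e^t$.

```python
# Rough sketch (assumed toy problem): Picard iteration for x'(t) = x(t), x(0) = 1.
# The Picard-Lindelof theorem guarantees these iterates converge to the unique
# solution whenever the right-hand side is Lipschitz in x; here that solution
# is exp(t).
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
x0 = 1.0
x = np.full_like(t, x0)                          # zeroth iterate: constant

for n in range(10):
    integrand = x                                # f(s, x_n(s)) = x_n(s)
    cumulative = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2) * dt))
    x = x0 + cumulative                          # x_{n+1}(t) = x0 + integral of f

print("max |x - exp(t)|:", np.max(np.abs(x - np.exp(t))))   # tiny after 10 steps
```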
Correct answer by Yly on August 2, 2020
Short answer: we don't know, but it works.
As the question linked in the comments points out, we still don't know whether the world can be assumed to be smooth and differentiable everywhere. It may just as well be discrete. We really don't have an answer for that (yet). So what do physicists do when they don't have a theoretical answer for something? They use Newton's flaming laser sword, a philosophical razor that says "if it works, it's right enough". You can perform experiments on waves and harmonic oscillators, and the equation you wrote works. As you learn more physics there are other equations, and by now we can perform experiments on pretty much every kind of thing; until you get to really, really weird regimes, such as black holes or scales smaller than electrons, the equations that we have give us the correct answer, so we keep using them.
Bonus question: let's suppose that, next year, we have a Theory of Everything that says that the universe is discrete and non-differentiable. Do you think the applicability of the wave equation would change? And what about the results, would they be less right?
Answered by Mauro Giliberti on August 2, 2020
The answer by @MauroGiliberti is great, but we do work with discontinuities in physics as the answer here says. In fact, a lot of careful and rigorous analysis is going on in general relativity, as there smoothness/singularity problems easily arise.
Newtonian physics, however, is very intuitive and easy. You do not have just some random mathematical entities; you have entities that are meant to describe the real world. The math represents some mechanism, and from intuition you know how the math should behave.
Take for example a rock falling from height $h_0$. The equation of motion is $m\,d^2h/dt^2 = F$, where $F$ is the force. Do we need to show that $h$ is twice differentiable everywhere and that $F$ is a function? Of course not, since we know how the system is supposed to behave. And in fact $h$ is not twice differentiable everywhere (and the force is not, strictly speaking, a function), since the motion of the rock is described by this function: $$h(t)=\left(h_0-\frac{1}{2}gt^2\right)H\!\left(\sqrt{2h_0/g}-t\right),$$ where $H$ is the Heaviside step function.
From the mechanism of gravitation we know that before the rock hits the ground the system is supposed to be well behaved, and we also know what happens when the rock hits the ground. Because of this, you never see an analysis like this in a physics class, one where the discontinuous Heaviside step function appears in the solution to a simple falling rock.
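Just to make the formula above concrete, here is a minimal sketch (assuming NumPy; the values of $g$ and $h_0$ are arbitrary example numbers):

```python
# Illustrative sketch (assuming NumPy; g and h0 are arbitrary example values):
# the trajectory above, written with the Heaviside step function.  h(t) is
# continuous, but the velocity jumps at the impact time, so the "force" there
# contains a delta-function kick rather than an ordinary function value.
import numpy as np

g, h0 = 9.81, 20.0
t_impact = np.sqrt(2 * h0 / g)

def h(t):
    return (h0 - 0.5 * g * t**2) * np.heaviside(t_impact - t, 1.0)

t = np.linspace(0.0, 1.5 * t_impact, 7)
print(np.round(h(t), 2))        # height decreases, then stays 0 after impact
```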
"I've never been asked to check whether the function I'm analyzing itself is defined everywhere"
Why would it need to be defined everywhere? When you analyse a wave, you care about the thing you observe. You do not care what is going on with this wave on the other side of the universe. The computation had better be independent of what goes on there.
The physicist just has some idea of the mechanism by which the universe is supposed to work, and some intuitive understanding of why the math he is using should correctly represent it. Then he can just assume the functions are well behaved, as the physics demands. Sometimes he even uses the math in knowingly incorrect ways, because he might have reasons to think the incorrect manipulation still represents the mechanism he has in mind.
Then he just checks whether the results agree with experiments. If they do, he will have created work for many, many mathematicians trying to make sense of what he did. And they are not always successful. Take for example statistical physics: it is 100 years old and has produced an enormous amount of evidence that it works, yet mathematicians are still struggling to show that its calculations are in fact a consequence of the known laws of physics.
Answered by Umaxo on August 2, 2020
Generally speaking, you can assume that the functions you deal with in high school physics are suitably well behaved. This is taken as given and most students will never question it, or even realise that there is anything to question - so well done to you for thinking about this issue.
Even in more advanced physics, there is a tendency not to worry about the finer points of mathematical models as long as they produce physically realistic outcomes that match experimental results. Most physicists will not question the fundamental assumptions of a model until and unless it predicts a singularity or a paradox or some other "pathological" outcome. And even then the short-term solution is often to avoid pathological results by restricting the domain in which the model is applied.
Mathematicians, by inclination and training, tend to be more careful. What the physicist sees as a focus on reality, the mathematician perceives as a lack of rigour. What is rigorous to the mathematician is overly fussy and pedantic to the physicist.
As an example, engineers and physicists will happily use the Dirac delta function, whereas a mathematician will point out that $\delta(x)$ is not actually a function (technically, it is a distribution) and treating it as if it were a function can lead to incorrect results. The mathematician says "if $\delta(x)$ is a function then what is the value of $\displaystyle \int_{-1}^{1} \delta(x)^2 \, dx$?". The physicist says "in what physical situation would I ever need to use such a bizarre integral?".
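A quick numerical sketch of the mathematician's complaint (assuming NumPy; a normalized Gaussian is just one common way to approximate $\delta$): the integral of $\delta_\varepsilon$ stays 1, but the integral of $\delta_\varepsilon^2$ blows up as the approximation sharpens, so "$\delta$ squared" has no sensible value.

```python
# Illustrative sketch (assuming NumPy): replace delta(x) by normalized Gaussians
# of width eps.  The integral of delta_eps stays 1, but the integral of
# delta_eps**2 grows like 1/(2*eps*sqrt(pi)) and diverges as eps -> 0.
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

for eps in [0.1, 0.01, 0.001]:
    d = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    print(eps, np.sum(d) * dx, np.sum(d**2) * dx)
```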
Answered by gandalf61 on August 2, 2020
Just to follow up slightly on @MauroGiliberti, one of the main reasons for the use of Newton's flaming laser sword is the context in which most physicists are working. Mathematical physics is often concerned with models of the real world. A model by its very nature is not a perfectly accurate representation of the phenomenon in question but a useful approximation. This is still true even if the model is highly accurate.
Therefore, even if the underlying system is discrete, if its granularity is such that it can be reasonably modeled as a continuous process then a continuous function is fit for purpose.
This occurs in other fields as well. Economics and mathematical finance borrow and re-purpose a great deal of physical models for modelling the flow of money in an economy or for pricing financial instruments. Technically speaking, money is discrete. Yet when the sums are vast enough, it may as well be a continuous quantity as its grain becomes so fine it's practically smooth.
Answered by Garry Cotton on August 2, 2020
For an enjoyable (from the physics side of the coin) "survey" of this issue, see this video on YouTube:
https://www.youtube.com/watch?v=xPzR_D9qKeo
I believe the basic obliviousness shown neatly captures the interaction of the question and the comment "... there is a tendency to not worry about the finer points of mathematical models as long as they produce physically realistic outcomes that match experimental results." in gandalf61's answer.
The sad thing is that the interesting physics usually takes place precisely where something that is well behaved almost everywhere fails to be well behaved somewhere.
I suppose though, every physics fellow secretly yearns to be able to act like an engineer fellow so...
Answered by Jeorje on August 2, 2020
I would disagree with @MauroGiliberti that we don't know. In your example of the classical wave equation, the reason we don't bother checking the continuity and differentiability of solutions is that we require that these properties be satisfied. To explore this notion further, consider the following: the theory of classical mechanics tells us that certain physical phenomena (such as waves on strings) will follow the equation $\square f = 0$. The main questions we want to ask about this equation in order to use it are as follows:
1. What physically observable phenomena does this equation predict?
2. Are experimental observations consistent with those predictions?
Note that we do not ask if this is what really happens on a fundamental level.
To address the first question, it is trivially true that any solution of a second-order differential equation is twice differentiable, so it is unnecessary to show this explicitly. Regarding the second question, it may seem that you need to check that your experimental data trace out a twice differentiable function, but this is not so, because you cannot directly measure $f$ (a function defined at uncountably many points, which would require uncountably many measurements). Instead, you make finitely many measurements, note that your theory requires a twice differentiable function, and choose a twice differentiable function to fit to your data to check against the equation. In addition, each of your data points has some uncertainty associated with it, while the condition of continuity requires infinite precision.
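As a minimal sketch of this procedure (assuming NumPy; the two-term model, sample count, and noise level are made up for the demonstration): finitely many noisy measurements are fitted with a function that is smooth by construction, and it is that fitted function, not the raw data, that gets compared with the equation.

```python
# Illustrative sketch (assumed model and noise): fit finitely many noisy samples
# with a smooth model; the fitted function is trivially twice differentiable.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2 * np.pi, 25)                      # 25 measurement positions
y = 0.7 * np.sin(x) + 0.2 * np.cos(x) \
    + 0.05 * rng.standard_normal(x.size)                 # noisy "data"

# Least-squares fit of y ~ a*sin(x) + b*cos(x)
design = np.column_stack([np.sin(x), np.cos(x)])
(a, b), *_ = np.linalg.lstsq(design, y, rcond=None)
print(f"fitted a = {a:.3f}, b = {b:.3f}")                # close to 0.7 and 0.2
```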
Answered by Sandejo on August 2, 2020
Remember, the ideas of calculus were motivated by physics. Think of situations where non-differentiable functions come up: e.g. $\theta(x)$, the Heaviside step function, defined as 1 when $x \geq 0$ and 0 otherwise. How would you differentiate this function? Using the properties of the Dirac delta distribution it can be shown that $\frac{d}{dx}\theta(x) = \delta(x)$. This intuitively makes sense: $\delta(x)$ is zero when $x$ is nonzero, but it spikes up at 0 such that its integral over any range that includes 0 is 1. A mathematician would look at that and say ‘Hey, you can’t do that!’ and from his point of view, he’d be right. But the reason this works for a physicist is the same reason that communicating with incorrect grammar and spelling still works: you have an intuition for what the speaker or writer is trying to say.
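To make that intuition concrete, here is a small numerical sketch (assuming NumPy; the sigmoid is just one convenient smooth approximation to $\theta$): its derivative becomes a narrower, taller spike as the step sharpens, but the integral over an interval containing 0 stays 1.

```python
# Illustrative sketch (assuming NumPy): differentiate ever-steeper smooth
# approximations to theta(x).  The derivative becomes a narrow spike whose peak
# grows like a/4, but its integral over an interval containing 0 stays ~ 1,
# which is the defining property of delta(x).
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

for a in [10, 100, 1000]:                       # steepness of the sigmoid
    theta_a = 1.0 / (1.0 + np.exp(-a * x))      # smooth approximation to theta(x)
    d = np.gradient(theta_a, dx)                # its derivative: a spike at 0
    print(a, d.max(), np.sum(d) * dx)           # peak ~ a/4, integral ~ 1
```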
Furthermore, pedagogically speaking, there’s always the issue of practicality. It is not practicable to teach every physics and engineering student the amount of rigorous math it would require to prove every theorem that they are going to use. Some (especially theorists) might not mind, but the vast majority of students are going to find having to learn functional analysis as a prerequisite to quantum mechanics burdensome.
Now, this is not to say that all of physics is non-rigorous. People are working on the mathematical foundations of Quantum Field Theory, and mathematicians are very interested in fields such as string theory. But this is another specialised field, and even most people who work with QFT are not going to rigorously prove everything as they learn and apply their work. What is important is gaining a working intuition of how different parts work together to make a coherent whole.
Answered by saad on August 2, 2020
I just want to add my 2¢ into the discussion and mention a more mathematical view of this problem.
In physics, we're often very interested in Lebesgue-integrable functions, which is a very reasonable constraint: on a finite interval, a bounded function is Lebesgue integrable iff it is measurable – and every sane function that could correspond to anything real certainly is! Non-measurable functions are really broken on the infinitesimal level and their construction is considered “physically not possible”. To reject non-measurable functions is to postulate that physics is not pure chaos and madness.
Functions that are not bounded are a lot more common and reasonable in physics. The nice ones are also Lebesgue-integrable and most of the rest comes from non-physical idealizations, but we've developed a lot of techniques to deal with the physical infinities that can't be tamed otherwise.
Now, how does this relate to differentiability? Well, let's consider the nicest space of functions that you can imagine: infinitely differentiable functions that decrease faster than any polynomial at infinity. This is the Schwartz space $\mathcal{S}$. With these functions you can do almost literally whatever you want. A remarkable fact about the Schwartz space is that it is dense in $L^p$ for all $p \in [1, \infty)$ – that means you can approximate any integrable function by a function from $\mathcal{S}$ with arbitrary precision. So you can describe your model using infinitely differentiable functions, and as long as the model itself is continuous, you can always generalize it to $L^p$ just by taking the limit. Don't you find this incredible?
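As a small numerical illustration of this density statement (assuming NumPy; the tanh profile is merely one convenient smooth approximant, not literally a Schwartz function, but it makes the point): the $L^2$ distance from a discontinuous step to a smooth function shrinks to zero as the smoothing width does.

```python
# Illustrative sketch (assuming NumPy): smooth functions come arbitrarily close,
# in the L^2 norm, to a discontinuous step.  Here the smooth approximant is a
# tanh profile of width eps; the L^2 distance shrinks roughly like sqrt(eps).
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
step = (x > 0).astype(float)                    # discontinuous target function

for eps in [0.1, 0.01, 0.001]:
    smooth = 0.5 * (1 + np.tanh(x / eps))       # infinitely differentiable
    l2_err = np.sqrt(np.sum((smooth - step)**2) * dx)
    print(eps, l2_err)                          # error shrinks as eps -> 0
```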
But often, working with $\mathcal{S}$ and then finding the limit can get quite laborious. For example, in electrodynamics you want to talk about charge densities as well as point charges and even charged surfaces – in order to describe such systems, you'd have to approximate the charge density with a smooth function and solve the Maxwell equations for it. Luckily, something called distribution theory was invented. This theory gives us a rigorous mathematical framework in which we can talk about the limits themselves, in a sense.
For example, if you imagine taking the derivative of a sigmoid function and then taking the limit that turns it into a Heaviside function, the derivative would explode to infinity, like in this video. But if your model is a good representation of reality, you're probably not interested in the derivative itself; you're using it as an intermediate result, maybe in an integral. Then you can just as well avoid doing the limit altogether and take the weak derivative of the Heaviside distribution, which equals the delta distribution. Weak derivatives are defined for all integrable functions, so the differential equation you wrote in your question can be evaluated even for an arbitrary integrable function. However, remember that this always gives the same result as doing the limit, just in a fancy, simplified way.
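A minimal numerical sketch of that weak-derivative statement (assuming NumPy; the Gaussian test function is an arbitrary choice): pairing $H'$ with a smooth test function via integration by parts returns the test function's value at zero, exactly what $\delta$ would give.

```python
# Illustrative sketch (assuming NumPy): the weak derivative of the Heaviside
# function H is defined by integration by parts,
#   <H', phi> := -(integral of H(x) * phi'(x) dx),
# and for a smooth test function phi vanishing at infinity this equals phi(0),
# which is exactly the action of the delta distribution.
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
phi = np.exp(-(x - 1.0)**2)                     # a smooth test function
H = (x > 0).astype(float)

weak_pairing = -np.sum(H * np.gradient(phi, dx)) * dx
print(weak_pairing, "vs phi(0) =", np.exp(-1.0))  # both approximately 0.3679
```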
In the previous paragraphs I was talking about functions that have a specific physical meaning. That is, however, not the case for the famous wavefunction in quantum mechanics. Wavefunctions are special in the sense that QM is naturally modelled as a (possibly ∞-dimensional) vector space, and functions are really convenient ∞-dimensional vectors. However, because ∞-dimensional spaces are weird, not all covectors have a representation as a vector. You probably have an intuition for this already: distributions are the “covectors” of differentiable functions, and while differentiable functions are distributions, the delta distribution is not a differentiable function. Because of this non-conventional nature of QM, distributions are perfectly valid objects of the theory, not only intermediate results. For example, you could have $\psi(p) = \delta(p)$.
Answered by m93a on August 2, 2020