Mathematics Asked on November 26, 2021
Let’s say that $f(x)$ is a function and $dx$ is a slight change in the input $x$. Following standard calculus notation, $dx \rightarrow 0$. Again by standard notation, $df$, or $df(x)$, is the change in the value of $f(x)$ produced by the slight change $dx$ in $x$, i.e. $df = f(x+dx) - f(x)$.
Now, I want to know if there is some general form for $df$ in terms of $dx$.
I have stated some examples below to make it clearer.
$$\text{If }f(x) = x^2 \implies df = (2x\cdot dx) + (dx)^2$$
$$\text{If }f(x) = x^3 \implies df = 3x^2(dx) + 3x(dx)^2 + (dx)^3$$
$$\text{If }f(x) = \sin(x) \implies df = (\cos(x))\cdot dx$$
$$\text{If }f(x) = \cos(x) \implies df = (-\sin(x))\cdot dx$$
As far as I’ve noticed, for most functions, $df = a(dx) + b(dx)^2 + \dots + k(dx)^n$, where $a \neq 0$. But can there exist a function for which $df$ includes a constant term?
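As a sketch of how these expansions behave, one can expand $f(x+dx)-f(x)$ symbolically (this uses sympy, which is an assumption on my part; any computer algebra system would do):

```python
import sympy as sp

x, dx = sp.symbols("x dx")

def df(f):
    """Expand the increment f(x + dx) - f(x) as a polynomial in dx."""
    return sp.expand(f.subs(x, x + dx) - f)

print(df(x**2))   # contains the terms 2*x*dx and dx**2
print(df(x**3))   # contains the terms 3*x**2*dx, 3*x*dx**2 and dx**3

# For sin(x) the increment is not a polynomial in dx; a Taylor series
# in dx shows the linear term cos(x)*dx plus higher-order corrections.
print(sp.series(sp.sin(x + dx) - sp.sin(x), dx, 0, 3))
```

Note that every term carries at least one factor of $dx$, and that the trigonometric examples above actually keep only the linear term of such a series.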
Let’s assume that such a function exists, and call it $f_2$. Now, let’s say that $df_2 = c + a(dx)$, where $c \in \Bbb R$ and $c \neq 0$. How would we evaluate $f_2'(x)$ in this situation?
$f_2'(x) = \dfrac{df_2}{dx} = \dfrac{c+a(dx)}{dx} = \dfrac{c}{dx} + a$, and $\dfrac{c}{dx} \rightarrow \infty$ as $dx \rightarrow 0$. So, does that mean that $f_2'(x) = \infty$?
Edit (more on the intentions for posting this question)
While deriving the power rule in one of his videos, Grant Sanderson (3Blue1Brown) treats $(df)(dg)$ as negligible, but that didn’t satisfy me much. So, while deriving $\dfrac{d(x^2)}{dx}$, I first expanded the numerator: $\dfrac{d(x^2)}{dx} = \dfrac{2x(dx)+(dx)^2}{dx} = 2x+dx$, which tends to $2x$ as $dx\rightarrow 0$. Now, I thought of using the same approach for $\dfrac{d(f(x)\cdot g(x))}{dx}$, given that $d(f(x)\cdot g(x)) = f(x)(dg) + g(x)(df) + (df)(dg)$.
Now, if I can somehow establish that $dx$ appears as a factor in every single term of the expansion of $df$, then I can take it out as a common factor and cancel it with the $dx$ in the denominator, leaving at least one term free of any power of $dx$. All the other terms would then collapse, since they would all contain $(dx)^n$ for $n \geq 1$ and would approach $0$ as $dx \rightarrow 0$. This would give me a clearer view of the derivation and of differentiation itself.
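The product-rule step described above can be checked symbolically for concrete functions; here is a minimal sketch (assuming sympy, with $f=\sin x$ and $g=e^x$ as arbitrary smooth choices):

```python
import sympy as sp

x, dx = sp.symbols("x dx")
f, g = sp.sin(x), sp.exp(x)

# Increment of the product: d(fg) = f*dg + g*df + df*dg.
d_fg = f.subs(x, x + dx) * g.subs(x, x + dx) - f * g

# Divide by dx; the terms that still carry a factor of dx
# (including the df*dg cross term) vanish in the limit dx -> 0.
derivative = sp.limit(sp.expand(d_fg) / dx, dx, 0)

print(sp.simplify(derivative - (sp.diff(f, x) * g + f * sp.diff(g, x))))  # 0
```

The limit recovers exactly $f'g + fg'$, which is the sense in which $(df)(dg)$ is negligible.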
Thanks!
You need to be careful with how you're defining the differential $df$. Usually, it is defined as $df(x,\Delta x) \equiv f'(x)\,\Delta x$. For small enough increments $\Delta x$ this becomes $df=f'(x)\,dx$. All terms which are nonlinear in $dx$ are discarded by definition, so $f(x)=x^2 \implies df = 2x\cdot dx$. This is motivated by considering $\Delta y \equiv f(x+\Delta x) - f(x)$, which satisfies $\Delta y = f'(x)\Delta x + \epsilon = df(x) + \epsilon$, where the error term $\epsilon$ satisfies $\lim_{\Delta x \to 0} \frac{\epsilon}{\Delta x} = 0$. Defined as such, the differential of a function represents the principal (linear) part of the increment of the function.
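A small numeric illustration of that error term (a sketch; $f(x)=x^2$ at $x=1$ is an arbitrary choice):

```python
# Check that eps = Delta_y - f'(x)*Delta_x shrinks faster than Delta_x,
# i.e. eps/Delta_x -> 0, for f(x) = x**2 at x = 1 (so f'(1) = 2).
def eps_over_dx(delta_x, x=1.0):
    delta_y = (x + delta_x) ** 2 - x ** 2
    eps = delta_y - 2 * x * delta_x   # subtract the linear part f'(x)*Delta_x
    return eps / delta_x              # equals delta_x exactly for this f

for step in (1e-1, 1e-3, 1e-5):
    print(step, eps_over_dx(step))
```

Each printed ratio shrinks in proportion to the step itself, which is precisely the $\epsilon/\Delta x \to 0$ condition.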
If you want a more explicit treatment of differentials as infinitesimal quantities, you should check out nonstandard analysis which makes use of the hyperreal numbers.
Answered by Mithrandir on November 26, 2021
I had similar questions when I saw these notations in physics classes when I started studying maths. The most mathematically accurate way is to interpret $df$ and $dx$ in terms of differential forms. Say $f$ is a smooth function and $x$ a coordinate. The differential of $f$ at some $x_0$, $df(x_0)$, is defined as a linear map $df(x_0):\mathbb{R}\rightarrow\mathbb{R}$ such that $$f(x_0+h) - f(x_0)= df(x_0)\cdot h + \phi(h),$$ where $\phi$ is some function such that $\phi(h)/h \to 0$ as $h\to 0$. Hence $df(x_0)$ is the linear approximation of $f$ around $x_0$.

Now interpret $x$ as the identity function $\text{id}(x) = x$. Replacing $f$ by $\text{id}$ in the formula above, we find $$h = dx(x_0)\cdot h+\phi(h),$$ and it follows that $dx(x_0) = \text{id}$ and $\phi = 0$. Thus, the differential $dx$ of $x$ is nothing but the identity again, and we may omit the point $x_0$. We can verify the following relation between $df(x_0)$ and $dx$: $$df(x_0) = f'(x_0)\,dx.$$ This is what eventually justifies writing $\frac{df}{dx}=f'(x_0)$, and it is indeed the most accurate definition of the derivative of a function/map.

There is no obvious way to define powers $(dx)^2, (dx)^3,\dots$ mathematically, but it is possible of course. To make sense of them, one needs the notion of tensor product and/or wedge product, which I will skip here. A very intuitive introduction to the subject is Do Carmo's "Differential Forms and Applications", but one needs some basic analysis and linear algebra to read it.
To answer your question: saying that $df_2 = c+a\,dx$ doesn't make sense unless $c=0$. The formal mathematical reason is that $df_2$ must be a linear function, and the additive constant breaks linearity. Intuitively, we want to think of $df_2$ as an infinitely small perturbation, so we must have $df_2\to 0$ as $dx\to 0$.
Also, your examples are incorrect: it is $d(x^2) = 2x\,dx$, $d(x^3) = 3x^2\,dx$, and so on (just apply the formula $df = f'(x_0)\,dx$). (Tensor-)powers of differentials come into play if one wants to define derivatives of higher order formally.
Answered by Teddyboer on November 26, 2021
As I commented on a previous post of yours, one should keep in mind that writing expressions such as $\frac{c+a\cdot dx}{dx}$ really is taking the notation too seriously, since the derivative is not a fraction, and so on.
However, with the right interpretation one can kind of make sense of (at least some of) such manipulations. In this view, the equation $$df=c+a(dx)$$ (where $c$ is not an infinitesimal) doesn't make sense, because on the left-hand side you have an infinitesimal and on the right-hand side you have a nonzero constant (which cannot be infinitesimal).
Edit. Regarding the question of whether $df$ is always of the form $a\,dx + b(dx)^2+\dots+k(dx)^n$.
As I mentioned before, Grant makes a series of informal but intuitive statements. One could choose different methods to formalize these informal statements, and the answer to the form of $df$ really depends on which method you use. For example, in synthetic analysis we always have $(dx)^2=0$, and hence $(dx)^n=0$ for $n>1$.
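One can mimic the synthetic rule $(dx)^2=0$ by simply truncating an expansion after the linear term (a sketch using sympy; the truncation helper here is my own illustration, not part of any synthetic-analysis library):

```python
import sympy as sp

x, dx = sp.symbols("x dx")

def truncate_nilpotent(expr):
    """Keep only the dx-free and linear-in-dx parts, i.e. impose dx**2 = 0."""
    expr = sp.expand(expr)
    return expr.coeff(dx, 0) + expr.coeff(dx, 1) * dx

d_cube = truncate_nilpotent((x + dx)**3 - x**3)
print(d_cube)  # the surviving term is 3*x**2 * dx
```

Under this rule the increment of $x^3$ collapses directly to $3x^2\,dx$, with no limit needed.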
There is also something called nonstandard analysis, which is a system that also can be used to formalize the intuitive statements made by Grant. I'm not familiar with that formalism, though.
And of course there is the usual framework that uses limits to formalize calculus, and in this context the expressions $df$ and $dx$ by themselves don't make any sense.
Edit. Knowing more about the motivation behind the question:
Since you want to know this in order to "cancel a factor $dx$ with the $dx$ in the denominator", you should know that, in all treatments, be it standard limit-based calculus, synthetic analysis or nonstandard analysis, every function which has a derivative can be said to "admit a differential which has at least one factor of $dx$ in every term" (this needs to be made rigorous in each of the frameworks, though). In this vague sense, the answer to your question is "yes" for every differentiable function; if this fails, the function is not differentiable.
For example, here is a calculation that shows the case you mention, as it happens in the usual limit-based framework.
Let $f:\mathbb R\to\mathbb R$ be defined by $f(x)=1$ for $x\neq 0$ and $f(0)=0$. Graphically, this is the constant line $y=1$ with the single point at the origin moved down to $0$.
Then for $x=0$ and any $hneq 0$ you have
$$f(x+h)-f(x)=f(h)-f(0)=1-0=1$$
which you could interpret as telling you that "$df=1$ at $x=0$". What happens, then? Well, the function is simply not differentiable at $x=0$, so the derivative does not exist.
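Numerically, the difference quotient for this $f$ at $0$ is just $1/h$, which blows up (a minimal sketch):

```python
# f has a jump at the origin: f(0) = 0, f(x) = 1 otherwise.
def f(x):
    return 0.0 if x == 0 else 1.0

# The difference quotient at x = 0 equals 1/h, so it diverges as h -> 0
# and f'(0) does not exist.
for h in (2**-2, 2**-10, 2**-20):
    print(h, (f(0 + h) - f(0)) / h)
```

The constant term $1$ in the increment is exactly what prevents factoring out $dx$ and is exactly what destroys differentiability.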
Answered by Jackozee Hakkiuz on November 26, 2021