Mathematics Asked on December 23, 2021
Typically, in Taylor series, I see an expansion about $x=0$ for some function $f(x)$ that we're approximating. I always thought this was done just for simplicity, since expanding about $0$ makes a lot of terms drop out of the infinite/truncated series. Is this actually the reason, or is it more sophisticated?
In general, how do you best determine the point of expansion? If a function’s domain is all positive, then I don’t think it makes sense to expand about 0.
Very often, you do a Taylor series approximation to replace some function that is non-trivial to analyze with an easy low-degree polynomial. To keep things manageable, you typically truncate the expansion after the first few terms. This means that you accept some error. This error is zero at the point around which you did the expansion, and it tends to grow the farther you move away from that point.
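As a rough illustration (the choice of $e^x$ and degree $3$ here is arbitrary, just to make the behavior visible), here is a small Python sketch of how the truncation error grows with distance from the expansion point:

```python
import math

# Degree-3 Taylor polynomial of exp(x) around a = 0:
# P(x) = 1 + x + x^2/2 + x^3/6
def taylor_exp_deg3(x):
    return 1 + x + x**2 / 2 + x**3 / 6

# The truncation error vanishes at the expansion point x = 0
# and grows as x moves away from it.
for x in [0.0, 0.5, 1.0, 2.0]:
    err = abs(math.exp(x) - taylor_exp_deg3(x))
    print(f"x = {x:4.1f}   |error| = {err:.6f}")
```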
This implies that you should center the Taylor expansion roughly in the middle of the interval over which you want to evaluate the approximation. Thus, if you are interested in the values of $f(x)$ close to $x=0$, expand around $x=0$. If, on the other hand, you are interested in the behavior of $f(x)$ around $x=100$, expand $f$ around that point.
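This also answers the question about functions with a restricted domain. Taking $\ln(x)$ as an example (an arbitrary illustrative choice), there is no expansion about $0$ at all, since $\ln$ is not even defined there; you expand about a point inside the domain, near the values you care about:

```python
import math

# First-order Taylor polynomial of ln(x) around a point a > 0.
# ln(x) has no Taylor expansion at 0, so the expansion point must
# lie inside the domain, near the x-values of interest.
def taylor_log(x, a):
    return math.log(a) + (x - a) / a

a = 100.0
for x in [90.0, 100.0, 110.0]:
    err = abs(math.log(x) - taylor_log(x, a))
    print(f"x = {x:5.1f}   |error| = {err:.6f}")
```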
Finally, the reason you mainly see approximations around $x=0$ is that many people first make a coordinate transformation so that the point they want to expand around lies at the origin, and only then do the approximation. In many cases, this (slightly) simplifies the algebra.
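Concretely, if $a$ is the point of interest, substitute $u = x - a$ and define $g(u) = f(u + a)$. The expansion of $f$ around $x = a$ is then just the expansion of $g$ around the origin:
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n = \sum_{n=0}^{\infty} \frac{g^{(n)}(0)}{n!}\,u^n.$$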
Answered by NeitherNor on December 23, 2021