Mathematics Educators Asked on December 28, 2021
When teaching partial fractions, there's probably no way to escape the heavy algebra involved, but I'm wondering how to introduce the idea in a way that is intuitive or geometric. (Just as you can introduce integration by parts with a picture before doing examples, could you do the same with partial fractions?)
I've found an example of an answer here. I could start teaching by showing a simple graph, like $$\frac{2x}{x^2-1} = \frac{1}{x-1} + \frac{1}{x+1},$$ and I could see how students might believe that each of those denominators contributes a vertical asymptote, but it doesn't seem easy to generalize to the overall concept of partial fractions.
How about an introduction along the lines of adding two shapes together? Start with adding fractions to have a picture in mind:
For example, one could add the fractions $\frac{1}{5}$ and $\frac{1}{7}$ by first drawing a rectangle with width $\frac{1}{5}$ and height $1 = \frac{7}{7}$, and then a rectangle with width $\frac{1}{7}$ and height $1 = \frac{5}{5}$. Each of the smallest pieces (shaded red) has area $\frac{1}{35}$. Then, the sum $\frac{1}{5} + \frac{1}{7}$ is just the combined area of all of the small pieces: seven of the $\frac{1}{35}$'s from the first picture, and five of the $\frac{1}{35}$'s from the second picture. Result: $\frac{7+5}{35} = \frac{12}{35}$.
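If a quick numeric check alongside the picture is useful, the piece-counting can be verified with Python's standard `fractions` module. This is just a sketch of the same computation, not part of the original picture:

```python
from fractions import Fraction

# Each small piece has area 1/35; the first rectangle contributes seven
# of them, the second contributes five, for 12/35 in total.
piece = Fraction(1, 35)
total = 7 * piece + 5 * piece
assert total == Fraction(1, 5) + Fraction(1, 7)
print(total)  # 12/35
```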
You can change this picture for other examples, like the one you mentioned: $\frac{1}{x-1} + \frac{1}{x+1}$.
Again, $\frac{1}{x-1}$ is the area of a rectangle with width $\frac{1}{x-1}$ and height $1 = \frac{x+1}{x+1}$, and $\frac{1}{x+1}$ is the area of a rectangle with width $\frac{1}{x+1}$ and height $1 = \frac{x-1}{x-1}$. It is easy to sum these together, as each small piece has area $\frac{1}{(x-1)(x+1)}$. You just add the $x+1$ of them from the first picture and the $x-1$ of them from the second picture. Answer: $\frac{x+1+x-1}{(x-1)(x+1)} = \frac{2x}{(x-1)(x+1)}$.
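The same combination can be checked symbolically. Here is a minimal sketch using SymPy's `together`, which combines the two fractions over the common denominator just as the picture does:

```python
import sympy as sp

x = sp.symbols('x')
# Combine 1/(x-1) + 1/(x+1) over the common denominator (x-1)(x+1),
# mirroring the piece-counting in the picture.
combined = sp.together(1/(x - 1) + 1/(x + 1))
assert sp.simplify(combined - 2*x/((x - 1)*(x + 1))) == 0
```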
Of course, what you really want to do is motivate the reverse process -- partial fraction decomposition. While I don't know how to let the picture solve this problem, it does show the objects we're counting when trying to find $A$ and $B$ in $\frac{A}{x-a} + \frac{B}{x-b}$.
For example, suppose you wanted to decompose $\frac{3x+4}{(x-1)(x+1)}$ into partial fractions. From the picture, it is clear that we need some number of the smallest pieces from the left-hand side and the right-hand side. Since the fraction $\frac{1}{x-1}$ is made of all $x+1$ of the $\frac{1}{(x-1)(x+1)}$'s, we will need some multiple $A$ of them: $A(x+1)$. Similarly, $\frac{1}{x+1}$ takes $x-1$ of the $\frac{1}{(x-1)(x+1)}$'s, so we will take some multiple $B$ of those: $B(x-1)$. In total, we need $3x+4$ of these little pieces, so we have an equation: $$A(x+1) + B(x-1) = 3x+4$$
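As a sketch of finishing this piece-counting equation, one can let SymPy match coefficients and solve for $A$ and $B$; the resulting values $7/2$ and $-1/2$ come from this particular example, not from the original answer:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
# Matching coefficients in A(x+1) + B(x-1) = 3x + 4 gives A+B = 3, A-B = 4.
eqs = sp.Poly(A*(x + 1) + B*(x - 1) - (3*x + 4), x).all_coeffs()
sol = sp.solve(eqs, [A, B])
print(sol)  # {A: 7/2, B: -1/2}
```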
I agree it would be cool to have a picture method to actually find the new coefficients, but I suspect we can't have that. The integration-by-parts picture does a nice job of motivating that the formula should seem reasonable/intuitive, but of course it doesn't actually perform the calculus.
Answered by Nick C on December 28, 2021
I wasn't taught the partial fractions decomposition (PFD) in calculus. We didn't cover it in high school, and when I went to college, they assumed we all knew it. Somehow it was when I read the proof in van der Waerden's Modern Algebra that I understood why it was called partial fractions. I looked at it again a couple of days ago, and interestingly, his presentation is not exactly the idea I came away with. But that's the way of learning: sometimes you learn it in your own way. Suppose the degree of a polynomial $g$ is less than the degree of a product of two polynomials $pq$. We can consider a decomposition into fractions $$ f(x) = \frac{g(x)}{p(x)\,q(x)} = \frac{A(x)}{p(x)} + \frac{B(x)}{q(x)} $$ as partial if $A/p$ or $B/q$ can be further resolved into fractions (for instance, if $p$ or $q$ is not a power of an irreducible polynomial). The standard PFD can be found inductively: take one of the factors $p$ to be the power of an irreducible appearing in the denominator; find $A/p$ and subtract it from $f$, leaving $B/q$; and repeat until no factors are left. It has a procedural form similar to partial integration (now called "integration by parts" to avoid confusing the two).
We'll summarize the takeaways for understanding the PFD in the perspectives below. They don't all meet the OP's goal of a picture, but they provide some insight into how things work. First of all, the PFD can be viewed as a weighted sum (with the weights $A$ and $B$ in the formula above). This can be connected to a weighted average, a center of mass, barycentric coordinates, and interpolation.
Let's consider visualizing the PFD. Say we have a curve going off to infinity at an asymptote; how it goes to infinity is very important to the form of the PFD. But, by eye, one cannot really see the differences precisely enough as the curve goes to infinity, except whether the power in the denominator is even or odd. So I think there is some limit to how well we can see what is going on, and we'll have to use our imagination.
The algebraic structure of the partial fractions problem is not simple, except in the case of two simple poles. It is complicated by the way that multiplication of polynomials is a sort of convolution, not a simple operation like addition or multiplication by a constant. Just how to picture it is not easy. One could model linear and quadratic factors as lengths and areas, but it quickly gets out of hand. Indeed, driven by Pappus' problem, Descartes went the other direction and introduced algebra into geometry to derive necessary relations that could not (easily) be pictured.
1.1 One approach, which leads to a picture, is an extension of the Methodus Incrementorum (1715), of Brook Taylor, who used interpolation to derive Taylor's Theorem. Once, finite differences were a quite common analytical tool. Euler based calculus on them. One application is still quite common, if not universal: The use of the secant line to develop the notion of the tangent line and the derivative. Some teachers may give a similar development of the second derivative in terms of evaluating the function at $x$, $x+h$, $x+2h$, which corresponds to a quadratic interpolant. Taylor's approach extends the notion of an interpolant approximating the tangent line to higher orders to develop the notion of Taylor series.
The method may be adapted to determining a fractional part of the partial fractions decomposition. The idea is to interpolate $f(x)$ near an asymptote $x=a$ of order $k$, where the interpolation has the form $$I(x)=\frac{p(x)}{(x-a)^k}$$ with the degree of the polynomial $p(x)$ less than $k$. The interpolation conditions are, for $h\ne0$, $$ f(x_j) = \frac{p(x_j)}{(x_j-a)^k}\,,\quad x_j = a+jh\,,\quad j=1,\dots,k\,. $$ Equivalently, $p(x)$ is the polynomial that interpolates $(x-a)^k f(x)$ at the $x_j$. As $h \rightarrow 0$, $p(x)/(x-a)^k$ approaches the partial fractions part associated with $x=a$: $$ \frac{A_1}{x-a}+\cdots+\frac{A_k}{(x-a)^k}\,. $$
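This limiting process can be sketched symbolically. The example function below is my own (a double pole at $x=1$ with partial part $3/(x-1)+4/(x-1)^2$, plus a simple pole at $x=-2$); the code builds the degree-1 Lagrange interpolant of $(x-1)^2 f(x)$ at $x_j = 1 + jh$ and lets $h \to 0$:

```python
import sympy as sp

x, h = sp.symbols('x h')
a = 1  # pole of order k = 2 at x = a
# Hypothetical example: partial part at x=1 should be 3/(x-1) + 4/(x-1)^2.
f = (3*x + 1)/(x - 1)**2 + 1/(x + 2)

g = sp.simplify((x - a)**2 * f)      # g(x) = (x-1)^2 f(x)
x1, x2 = a + h, a + 2*h              # interpolation points x_j = a + j h
# Degree-1 Lagrange interpolant of g through (x1, g(x1)) and (x2, g(x2)):
p = g.subs(x, x1)*(x - x2)/(x1 - x2) + g.subs(x, x2)*(x - x1)/(x2 - x1)
p0 = sp.simplify(sp.limit(sp.expand(p), h, 0))
# p0/(x-1)^2 is then the partial fractions part at x = 1.
assert sp.simplify(p0 - (3*x + 1)) == 0
```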
A. On the left are shown the function $f(x)$ (blue) and the interpolant $I(x)$ (gold); the interpolation points are shown in red ($h<0$). B. On the right are shown the difference between the interpolant $I(x)$ and the function $f(x)$ (blue), which is zero at the interpolation points, and the difference between the interpolant $I(x)$ and the fractional part $p(x)/(x-a)^k$ (gold), which approaches zero as $h\rightarrow0$. Link to Desmos version of graphic.
1.2 From (Newton) interpolation theory, one can derive a formula for the partial fractions decomposition, although it might not give a clear intuitive picture. If $$f(x) = \frac{g(x)}{(x-a_1)^{n_1} \cdots (x-a_m)^{n_m}},$$ where the $a_r$ are distinct (possibly complex) roots of the denominator and the degree of $g$ is less than the degree of the denominator, then the partial fractions decomposition is $$f(x)=\sum_r \sum_{k=0}^{n_r-1} \frac{A_{r,k}}{(x-a_r)^{n_r-k}}\,,\quad A_{r,k}= \frac{1}{k!}\,\frac{d^k}{dx^k} \left[f(x)(x-a_r)^{n_r}\right]\Big|_{a_r}$$ The appearance of the derivatives intuitively comes from the several points $a_r+jh$ approaching the same root $a_r$, similar to what was mentioned in the introduction of this section.
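The coefficient formula can be sketched directly in SymPy. The example function and the helper name `pf_coeff` below are my own, not from the text:

```python
import sympy as sp

x = sp.symbols('x')

def pf_coeff(f, a, n, k):
    """A_{r,k} = (1/k!) (d/dx)^k [f(x) (x - a)^n] evaluated at x = a."""
    return sp.diff(sp.cancel(f*(x - a)**n), x, k).subs(x, a) / sp.factorial(k)

# Hypothetical example: pole of order 2 at x = 1 and a simple pole at x = -1.
f = (3*x + 4)/((x - 1)**2*(x + 1))
A10 = pf_coeff(f, 1, 2, 0)   # coefficient of 1/(x-1)^2
A11 = pf_coeff(f, 1, 2, 1)   # coefficient of 1/(x-1)
B0  = pf_coeff(f, -1, 1, 0)  # coefficient of 1/(x+1)
recon = A11/(x - 1) + A10/(x - 1)**2 + B0/(x + 1)
assert sp.simplify(recon - f) == 0
```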
2.1 Let's say we have two distinct linear factors: $\alpha(x-c)/[(x-a)(x-b)]$. Then $c$ is the weighted average of $a$ and $b$, or center of mass: $c = (Ab + Ba)/(A+B)$ when $\alpha(x-c)/[(x-a)(x-b)] = A/(x-a) + B/(x-b)$.
The linear factor in the numerator disappears, as in $\alpha/[(x-a)(x-b)]$, when $c=\infty$, that is, when $A+B=0$.
Thus we can think of $[A:B]$ as (non-normalized) barycentric coordinates of the point $c$ on the projective line, relative to the points $a$ and $b$.
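As a small check of the center-of-mass formula, take the decomposition of $(3x+4)/[(x-1)(x+1)]$ from the example above, which works out to $A = 7/2$, $B = -1/2$; this sketch lets SymPy do the algebra:

```python
import sympy as sp

x = sp.symbols('x')
a, b = 1, -1
A, B = sp.Rational(7, 2), sp.Rational(-1, 2)  # from (3x+4)/((x-1)(x+1))
c = (A*b + B*a)/(A + B)                       # center of mass of a and b
assert c == sp.Rational(-4, 3)                # indeed 3x + 4 = 3*(x + 4/3)
lhs = sp.together(A/(x - a) + B/(x - b))
assert sp.simplify(lhs - (A + B)*(x - c)/((x - a)*(x - b))) == 0
```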
2.2 With Lagrange interpolation, we can connect 1 and 2 in the case that $f(x)=g(x)/h(x)$ with $h(x)$ having distinct (possibly complex) roots. One can, if desired, restrict the class of problems further and just deal with distinct real roots.
Let $a_r$, $r=1,\dots,n$, be distinct real or complex numbers. Define the Lagrange polynomial and weights by $$ \ell(x) = \prod_{r=1}^n (x-a_r)\,,\quad w_r = \frac{1}{\prod_{j \ne r} (a_r-a_j)}\,. $$ The polynomials $\ell(x)$ and $h(x)$, having the same roots, differ by a constant factor equal to the leading coefficient of $h(x)$. We can call this constant $m$; thus $h(x) = m\,\ell(x)$. One form of the barycentric Lagrange interpolation formula is $$g(x) = f(x)h(x) = \ell(x) \sum_{r=1}^n \frac{w_r}{x-a_r}\,g(a_r)\,.$$ Therefore $$f(x) = \frac{1}{m} \sum_{r=1}^n w_r\,g(a_r)\,\frac{1}{x-a_r}\,,$$ which is at once an interpolating formula between poles, a weighted sum, and the partial fractions decomposition. See this answer by David Speyer, where it is worked out another way with four poles.
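Here is a sketch of the barycentric formula in SymPy, using the running example $g(x) = 3x+4$ with simple poles at $\pm 1$ (so $m = 1$); the weight computation follows the definition of $w_r$ above:

```python
import sympy as sp

x = sp.symbols('x')
g = 3*x + 4
roots = [1, -1]   # distinct roots of h(x) = (x-1)(x+1), leading coefficient m = 1
m = 1

# Barycentric weights w_r = 1 / prod_{j != r} (a_r - a_j)
w = [sp.Integer(1)/sp.prod([a - aj for aj in roots if aj != a]) for a in roots]
# f(x) = (1/m) sum_r w_r g(a_r) / (x - a_r) -- already the PFD.
pfd = sum(w_r*g.subs(x, a_r)/(x - a_r) for w_r, a_r in zip(w, roots))/m
f = g/sp.prod([(x - a) for a in roots])
assert sp.simplify(pfd - f) == 0
```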
The connections with the interpolation in 1 and the weighted sum in 2 are worth contrasting, because they are not analogies per se. In 1, interpolation is used to resolve high-order poles, one at a time; here it was used to resolve distinct simple poles. The approaches are somewhat orthogonal or complementary to each other. In 2, the focus was on the roots $a$, $b$, and $c$, which is possible only in the case of two distinct simple poles. It comes from the weighted sum, though: $$ \frac{A}{x-a} + \frac{B}{x-b} = \frac{A(x-b) + B(x-a)}{(x-a)(x-b)} = (A+B)\,\frac{x-(Ab + Ba)/(A+B)}{(x-a)(x-b)} $$
3.1 Alternatively, an analogy with number theory: If $a$ and $b$ are distinct primes, and you have a fraction $p/(a^j b^k)$, you can decompose it as $p/(a^j b^k) = \alpha/a^j + \beta/b^k$ with $2|\alpha|\le a^j$, $2|\beta|\le b^k$, and perhaps expand $|\alpha| = \alpha_0+\alpha_1 a+\alpha_2 a^2+\cdots+\alpha_{j-1}a^{j-1}$ and $|\beta|=\beta_0+\beta_1 b+\beta_2 b^2+\cdots+\beta_{k-1}b^{k-1}$ with $0\le\alpha_m<a$, $0\le\beta_n<b$.
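A sketch of this numeric analogy, for the hypothetical example $5/36 = 5/(2^2 \cdot 3^2)$: the extended Euclidean algorithm finds one $(\alpha, \beta)$, and shifting by multiples of $(4, -9)$ meets the size bounds.

```python
from fractions import Fraction

def egcd(a, b):
    """Extended Euclid: returns (g, s, t) with a*s + b*t = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b)*t

# 5/36 = alpha/4 + beta/9  <=>  9*alpha + 4*beta = 5
g, s, t = egcd(9, 4)            # 9*s + 4*t = 1
alpha, beta = 5*s, 5*t          # one solution: alpha = 5, beta = -10
m = round(alpha / 4)            # shift by (−4, +9) multiples to shrink |alpha|
alpha, beta = alpha - 4*m, beta + 9*m
assert Fraction(alpha, 4) + Fraction(beta, 9) == Fraction(5, 36)
assert 2*abs(alpha) <= 4 and 2*abs(beta) <= 9
print(alpha, beta)  # 1 -1
```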
3.2 To extend the number theory analogy, I would have said that the right intuition for partial fractions is the Chinese Remainder Theorem, not complex analysis. It is essentially an algebra problem, not a geometry/analysis problem.
Suppose $f(x)=g(x)/h(x)$ is a rational function in a polynomial ring $F[x]$ over a field $F$, and $h(x) = p(x)q(x)$ with $p$ and $q$ relatively prime. There is no need to factor them into irreducibles, although it is done for the purposes of integration. If we can decompose $$ f(x) = \frac{g(x)}{p(x)\,q(x)} = \frac{A(x)}{p(x)} + \frac{B(x)}{q(x)}\,, $$ we can continue decomposing each partial fraction inductively until the denominators are irreducible.
The partial fractions decomposition can be viewed in terms of the isomorphism $$ F[x]/(pq) \cong F[x]/(p) \oplus F[x]/(q) $$ given by $$ g \mapsto (A,B) $$ where $A$ and $B$ satisfy the congruences $$ g \equiv Aq \pmod p\,,\quad g \equiv Bp \pmod q\,. $$ Since $p$ and $q$ are relatively prime, we can solve for the projections $A$ and $B$ onto the $(p)$ and $(q)$ components as $$ A \equiv q^{-1}_p\,g \pmod p\,,\quad B \equiv p^{-1}_q\,g \pmod q\,, $$ where $q^{-1}_p$ is the inverse of $q$ mod $(p)$ and $p^{-1}_q$ is the inverse of $p$ mod $(q)$. These may be computed via Euclid's algorithm.
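The congruence computation can be sketched with SymPy, whose `invert` computes polynomial inverses modulo a polynomial via the extended Euclidean algorithm; the example $g = 3x+4$, $p = x-1$, $q = x+1$ is mine:

```python
import sympy as sp

x = sp.symbols('x')
g, p, q = 3*x + 4, x - 1, x + 1   # g/(p*q), with gcd(p, q) = 1

# A ≡ q^{-1} g (mod p),  B ≡ p^{-1} g (mod q)
A = sp.rem(sp.invert(q, p, x) * g, p, x)
B = sp.rem(sp.invert(p, q, x) * g, q, x)
print(A, B)  # 7/2 -1/2
assert sp.simplify(A/p + B/q - g/(p*q)) == 0
```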
4.1 Treat $F[x]$ as a graded algebra, by degree, and assume the linear algebra works out so that there is always a unique solution to the partial fractions decomposition. In the solution to $$f(x) = \frac{A(x)}{p(x)} + \frac{B(x)}{q(x)}\,,$$ if the degree of $f$ is less than the degree of $pq$, the degrees of $A$ and $B$ can turn out to be anything less than the degrees of $p$ and $q$, respectively. For $F = {\Bbb R}$, the field of real numbers, we can consider the cases where $p$ is of the form $(x-a)^k$ or $(x^2+ax+b)^k$. In both cases $A(x)/p(x)$ can be put in standard form by some simple algebra: $$ \frac{A_1}{x-a}+\cdots+\frac{A_k}{(x-a)^k} $$ $$ \frac{A_1 x + B_1}{x^2+ax+b}+\cdots+\frac{A_k x + B_k}{(x^2+ax+b)^k} $$ The algebraic steps in the first case are to replace $x$ by $a+u$, expand, and then replace $u$ by $x-a$. In the second case, you successively reduce any power of $x$ greater than or equal to $2$ by replacing factors of $x^2$ with $u-ax-b$, expanding all the while, until no more replacements are possible; then, with the numerator expanded, replace $u$ by $x^2+ax+b$.
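The first-case reduction (substitute $x = a+u$, expand, read off the coefficients of powers of $u$) can be sketched in SymPy; the numerator $A(x) = 3x+1$ is an arbitrary example of mine:

```python
import sympy as sp

x, u = sp.symbols('x u')
a, k = 1, 2
A = 3*x + 1                          # numerator, with deg A < k
# Replace x by a + u and expand; each power u^j contributes one term
# c_j / (x-a)^(k-j) of the standard form.
Au = sp.expand(A.subs(x, a + u))     # 3*u + 4
coeffs = sp.Poly(Au, u).all_coeffs() # descending: [3, 4]
standard = sum(c/(x - a)**(k - j)
               for j, c in enumerate(reversed(coeffs)))
# standard = 3/(x-1) + 4/(x-1)^2
assert sp.simplify(standard - A/(x - a)**k) == 0
```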
As I said in the introduction, this is the way I looked at partial fractions intuitively as a young mathematician, even before I formally learned or connected it with "graded algebras": how degrees of polynomials work is just obvious.
Answered by Raciquel on December 28, 2021