Mathematics Asked by Orlin Aurum on December 15, 2021
I know how to prove the equality when $m$ is a rational number and $n$ is an integer, but do not know how to go about proving this for real numbers. On a semi-related note, I was trying to prove this when both $m$ and $n$ are rational, and found out that I have to prove that $\left(\frac{1}{z}\right)^{\frac{1}{y}}=\frac{1}{z^{\frac{1}{y}}}$. Does this need to be proven, or can I accept it as a definition?
The very first thing you need to do is ask yourself what the definitions are. Without proper definitions, you'll never have a complete proof. So, if $a>0$ and $m\in \Bbb{R}$, how are you even supposed to define $a^m$? This is not at all a trivial task.
For example, here's one possible approach to things:

1. Define the exponential function $\exp:\Bbb{R} \to \Bbb{R}$ (for instance by its power series) and establish its basic properties: it is continuous, strictly positive, and satisfies $\exp(x+y) = \exp(x)\cdot\exp(y)$.
2. Show that $\exp:\Bbb{R} \to (0,\infty)$ is bijective, and define $\log:(0,\infty) \to \Bbb{R}$ to be its inverse.
3. For $a>0$ and $m\in\Bbb{R}$, define $a^m := \exp(m\log(a))$.
From this point, it is a simple matter to use the various properties of the exponential and logarithmic functions: for any $a>0$ and $m,n \in \Bbb{R}$,
\begin{align}
a^{m+n} &:= \exp((m+n)\log (a)) \\
&= \exp[m\log(a) + n \log (a)]\\
&= \exp[m \log(a)] \cdot \exp[n\log(a)] \\
&:= a^m \cdot a^n \tag{$*$}
\end{align}
Similarly,
\begin{align}
(a^m)^n &:= \exp[n \log(a^m)] \\
&:= \exp[n \log(\exp(m \log(a)))] \\
&= \exp[nm \log(a)] \tag{since $\log \circ \exp = \text{id}_{\Bbb{R}}$} \\
&:= a^{nm} \\
&= a^{mn}
\end{align}
where in the last line, we make use of commutativity of multiplication of real numbers.
Note that steps 1,2,3 are not at all trivial, and indeed there are entire chapters of calculus/analysis textbooks devoted to proving these facts carefully. So, while I only listed out various statements, if you want the proofs for the statements I made, you should take a look at any analysis textbook, for example, Rudin's Principles of Mathematical Analysis, or Spivak's Calculus (I recall Spivak motivating these things pretty nicely).
As for your other question, yes, it is something which needs to be proven. This result can be easily deduced from two other facts: that $1^x = 1$ for every $x \in \Bbb{R}$, and that $(ab)^x = a^x \cdot b^x$ for all $a,b > 0$.
Now, if $z>0$, then for any $x\in \Bbb{R}$,
\begin{align}
z^x \cdot \left(\frac{1}{z}\right)^x &= \left(z\cdot \frac{1}{z}\right)^x = 1^x = 1
\end{align}
Hence, $\left(\frac{1}{z}\right)^x = \frac{1}{z^x}$. In particular, you can take $x=1/y$ to prove what you wanted.
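This identity is easy to sanity-check numerically; here is a minimal sketch (the sample values are arbitrary choices of mine):

```python
import math

# Numerically spot-check (1/z)^x == 1/(z^x) for a few z > 0 and real x.
for z in (0.5, 2.0, 7.3):
    for x in (-1.5, 0.25, 3.0):
        lhs = (1.0 / z) ** x
        rhs = 1.0 / (z ** x)
        assert math.isclose(lhs, rhs, rel_tol=1e-12), (z, x, lhs, rhs)
```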
Edit: Motivating the definition $a^x := exp(xlog(a))$, for $a>0, x in Bbb{R}$.
The long story short: this definition is unique in a certain sense, and is almost forced upon us once we impose a few regularity conditions.
Now, let me once again stress that you should be careful to distinguish between definitions, theorems, and motivation. Different authors have different starting points: Author 1 may have one set of definitions and motivations, and hence one set of theorems, while Author 2 may have a completely different set of definitions, and hence different theorems and motivation.
So, let's start with some motivating remarks. Fix a number $a>0$. Then, we usually start by defining $a^1 = a$. Next, given a positive integer $m\in \Bbb{N}$, we define $a^m = \underbrace{a\cdots a}_{\text{$m$ times}}$ (if you want to be super formal, then ok, this is actually a recursive definition: $a^1 := a$, and then for any integer $m\geq 2$, we recursively define $a^{m} := a\cdot a^{m-1}$).
Now, at this point what we observe from the definition is that for any positive integers $m,n\in \Bbb{N}$, we have $a^{m+n} = a^m \cdot a^n$. The proof of this fact follows very easily by induction.
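The recursive definition and the law $a^{m+n} = a^m \cdot a^n$ can be transcribed and spot-checked directly; a small sketch (the function name `pow_nat` is mine, not standard):

```python
def pow_nat(a: float, m: int) -> float:
    """a^m for a positive integer m, via the recursive definition
    a^1 := a and a^m := a * a^(m-1)."""
    if m == 1:
        return a
    return a * pow_nat(a, m - 1)

# The law a^(m+n) == a^m * a^n (provable by induction); exact for powers of 2:
for m in range(1, 6):
    for n in range(1, 6):
        assert pow_nat(2.0, m + n) == pow_nat(2.0, m) * pow_nat(2.0, n)
```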
Next, we typically define $a^0 = 1$. Why do we do this? One answer is that it is a definition, so we can do whatever we want. Another answer is that we are almost forced to do so. Why? Notice that for any $m\in \Bbb{N}$, we have $a^m = a^{m+0}$, so if we want this to be equal to $a^m \cdot a^0$, then we had better define $a^0 = 1$.
Next, if $m>0$ is an integer, then we usually define $a^{-m} := \dfrac{1}{a^{m}}$. Once again, this is just a definition, so we can do whatever we want. The motivation for making this definition is that we have $1 =: a^0 = a^{-m+m}$ for any positive integer $m$. So, if we want the RHS to equal $a^{-m}\cdot a^m$, then we had better define $a^{-m} := \frac{1}{a^m}$.
Similarly, if $m>0$, then we define $a^{1/m} = \sqrt[m]{a}$ (assuming you've somehow proven the existence of $m^{\text{th}}$ roots of positive real numbers). Again, this is just a definition. But why do we do this? Because we have $a =: a^1 = a^{\frac{1}{m} + \dots +\frac{1}{m}}$, so if we want the RHS to equal $(a^{\frac{1}{m}})^m$, then of course, we had better define $a^{1/m} := \sqrt[m]{a}$.
Finally, we define $a^{\frac{m}{n}}$, for $m,n \in \Bbb{Z}$ and $n >0$, as $a^{m/n} = (a^{1/n})^m$. Once again, this is just a definition, so we can do whatever we want, but the reason we do this is to ensure the equality $a^{m/n} = a^{1/n + \dots + 1/n} = (a^{1/n})^m$ is true.
Now, let's reflect for a moment on what we have done. We started with a number $a>0$, we defined $a^1 := a$, and we managed to define $a^x$ for every rational number $x$, simply by the requirement that the equation $a^{x+y} = a^x a^y$ hold true for all rational $x,y$. So, if you actually read through everything once again, what we have actually done is prove the following theorem:
Given $a>0$, there exists a unique function $F_a:\Bbb{Q} \to \Bbb{R}$ such that $F_a(1) = a$, and such that for all $x,y\in \Bbb{Q}$, $F_a(x+y) = F_a(x)\cdot F_a(y)$.
(Note that rather than writing $a^x$, I'm writing $F_a(x)$, just to emphasize the function notation.)
Our motivation has actually been to preserve the functional equation $F_a(x+y) = F_a(x)\cdot F_a(y)$ as much as possible. Now, we can ask whether we can extend the domain from $\Bbb{Q}$ to $\Bbb{R}$ while preserving the functional equation, and if such an extension is unique. If the answer is yes, then we just define $a^x := F_a(x)$ for all real numbers $x$, and then we are happy. It turns out that if we impose a continuity requirement, then the answer is yes; i.e. the following theorem is true:
Given $a>0$, there exists a unique continuous function $F_a:\Bbb{R} \to \Bbb{R}$ such that $F_a(1) = a$, and such that for all $x,y\in \Bbb{R}$, $F_a(x+y) = F_a(x)\cdot F_a(y)$.
Uniqueness is pretty easy (because $\Bbb{Q}$ is dense in $\Bbb{R}$ and $F_a$ is continuous). The tough part is showing the existence of such an extension.
Of course, if you already know about the $\exp$ function and its basic properties like 1, 2, 3, then you'll see that the function $F_a:\Bbb{R} \to \Bbb{R}$ defined by $F_a(x) := \exp(x \ln(a))$ has all the nice properties (i.e. it is continuous, it satisfies the functional equation, and $F_a(1) = a$). Because of this existence and uniqueness result, this is the only reasonable way to define $a^x \equiv F_a(x) := \exp(x \log(a))$; anything other than this would be pretty absurd.
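As a quick numerical sanity check (a sketch of mine, not part of the original argument), this definition reproduces ordinary exponentiation and satisfies the required properties:

```python
import math

def F(a: float, x: float) -> float:
    """The definition a^x := exp(x * log(a)), valid for a > 0."""
    return math.exp(x * math.log(a))

for a in (0.3, 2.0, 10.0):
    for x in (-2.0, 0.5, 3.7):
        # agrees with built-in exponentiation...
        assert math.isclose(F(a, x), a ** x, rel_tol=1e-12)
        # ...and satisfies F(a, 1) = a and the functional equation
        assert math.isclose(F(a, 1.0), a, rel_tol=1e-12)
        assert math.isclose(F(a, x + 0.5), F(a, x) * F(a, 0.5), rel_tol=1e-12)
```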
The purpose of the rest of my answer is to try to motivate how anyone could even come up with the function $F_a(x) = \exp(x\ln(a))$; sure, the existence and uniqueness result is very nice and powerful, but how could you come up with it by yourself? It certainly doesn't come from thin air (though at some points we have to take certain leaps of faith, and then check that everything works out nicely).
To do this, let's start with a slightly more restrictive requirement. Let's try to find a function $f:\Bbb{R} \to \Bbb{R}$ with the following properties:

1. for all $x,y\in \Bbb{R}$, $f(x+y) = f(x)\cdot f(y)$
2. $f$ is non-zero at some point $x_0 \in \Bbb{R}$
3. $f$ is differentiable at the origin
The first two conditions seem reasonable; the third one may seem a little strange, but let's just impose it for now (it's mainly there to simplify the argument and to convince you that $x\mapsto \exp(x\ln(a))$ didn't come from thin air).
First, we shall deduce some elementary consequences of properties 1,2,3:
In (2), we assumed $f$ is non-zero at a single point. We'll now show that $f$ is nowhere vanishing, and that $f(0)=1$. Proof: for any $x\in\Bbb{R}$, we have $f(x) \cdot f(x_0-x) = f(x_0) \neq 0$. Hence, $f(x) \neq 0$. In particular, $f(0) = f(0+0) = f(0)^2$. Since $f(0)\neq 0$, we can divide it on both sides to deduce $f(0) = 1$.
We also have, for every $x \in \Bbb{R}$, $f(x)>0$. Proof: We have
\begin{align}
f(x) = f(x/2 + x/2) = f(x/2)\cdot f(x/2) = f(x/2)^2 > 0,
\end{align}
where the last step is because $f(x/2) \neq 0$ (this is why in real analysis, we always impose the condition $a = f(1) > 0$).
$f$ is actually differentiable on $\Bbb{R}$ (not just at the origin). This is because for $t\neq 0$, we have
\begin{align}
\dfrac{f(x+t) - f(x)}{t} &= \dfrac{f(x)\cdot f(t) - f(x) \cdot f(0)}{t} = f(x) \cdot \dfrac{f(0+t) - f(0)}{t}
\end{align}
Now, the limit as $t\to 0$ exists by hypothesis, since $f'(0)$ exists. This shows that $f'(x)$ exists and $f'(x) = f'(0) \cdot f(x)$. As a result, it immediately follows that $f$ is infinitely differentiable.
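To see the identity $f'(x) = f'(0)\cdot f(x)$ in action, here is a small numerical sketch using $f = \exp$ (for which $f'(0)=1$), a choice of mine rather than something forced by the argument:

```python
import math

# For f = exp we have f'(0) = 1, so f'(x) = f'(0) * f(x) predicts that the
# difference quotient (f(x+t) - f(x)) / t approaches exp(x) as t -> 0.
t = 1e-6
for x in (-1.0, 0.0, 2.0):
    quotient = (math.exp(x + t) - math.exp(x)) / t
    assert math.isclose(quotient, math.exp(x), rel_tol=1e-4)
```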
Now, we consider two cases. Case (1) is that $f'(0) = 0$. Then, we have $f'(x) = 0$ for all $x$, and hence $f$ is a constant function: $f(x) = f(0) = 1$ for all $x$. This is clearly not very interesting. We want a non-constant function with all these properties, so let's assume in addition that $f'(0) \neq 0$. With this, we have that $f'(x) = f'(0)\cdot f(x)$; this is a product of a non-zero number and a strictly positive number, so the derivative $f'$ always has the same sign, and hence $f$ is either strictly increasing or strictly decreasing. Next, notice that $f''(x) = [f'(0)]^2 f(x)$ is always strictly positive; this, coupled with $f(x+y) = f(x)f(y)$, implies that $f$ is injective and has image equal to $(0,\infty)$, i.e. $f:\Bbb{R} \to (0,\infty)$ is bijective.
Theorem 1.
Let $f:\Bbb{R} \to \Bbb{R}$ be a function such that:
- for all $x,y\in \Bbb{R}$, $f(x+y) = f(x)f(y)$
- $f$ is non-zero
- $f$ is differentiable at the origin, with $f'(0) \neq 0$
Suppose $g:\Bbb{R} \to \Bbb{R}$ is a function which also satisfies all these properties. Then, there exists a number $c\in \Bbb{R}$ such that for all $x\in \Bbb{R}$, $g(x) = f(cx)$. In other words, such functions are uniquely determined by a constant $c$.
Conversely, for any non-zero $c\in \Bbb{R}$, the function $x\mapsto f(cx)$ satisfies the three properties above.
Proof
To prove this, we use a standard trick: notice that
\begin{align}
\dfrac{d}{dx}\dfrac{g(x)}{f(cx)} &= \dfrac{f(cx) g'(x) - g(x) \cdot cf'(cx)}{[f(cx)]^2} \\
&= \dfrac{f(cx) g'(0) g(x) - g(x) c f'(0) f(cx)}{[f(cx)]^2} \\
&= \dfrac{g'(0) - c f'(0)}{f(cx)} \cdot g(x)
\end{align}
Therefore, if we choose $c = \dfrac{g'(0)}{f'(0)}$, then the derivative of the function on the LHS is always zero, so it must be a constant. To evaluate the constant, plug in $x=0$, and you'll see the constant is $1$. Thus, $g(x) = f(cx)$, where $c = \frac{g'(0)}{f'(0)}$. This completes the proof of the forward direction. The converse is almost obvious.
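The theorem can be illustrated concretely: take $f = \exp$ (so $f'(0)=1$) and $g(x) = 2^x$ (so $g'(0) = \ln 2$); the predicted constant is $c = g'(0)/f'(0) = \ln 2$, and indeed $2^x = \exp(x\ln 2)$. A quick numerical sketch of this instance:

```python
import math

# f = exp has f'(0) = 1; g(x) = 2^x has g'(0) = ln 2.
# Theorem 1 predicts g(x) = f(c*x) with c = g'(0)/f'(0) = ln 2.
c = math.log(2.0)
for x in (-3.0, 0.5, 4.2):
    assert math.isclose(2.0 ** x, math.exp(c * x), rel_tol=1e-12)
```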
Remark
Notice also that from $g(x) = f(cx)$, by plugging in $x=1$, we get $g(1) = f(c)$, and hence $c = (f^{-1} \circ g)(1) = \frac{g'(0)}{f'(0)}$ (recall that we already stated that such functions are invertible from $\Bbb{R}$ to $(0,\infty)$). It is this relation $c = (f^{-1} \circ g)(1)$ which is the key to understanding where $x\mapsto \exp(x\ln(a))$ comes from. We're almost there.
Now, once again, recall that we have been assuming the existence of a function $f$ with all these properties; we haven't proven existence yet. So how do we go about trying to find such a function $f$? Well, recall that we have the fundamental differential equation $f'(x) = f'(0) f(x)$. From this, it follows that for every positive integer $n$, $f^{(n)}(0) = [f'(0)]^n$. We may WLOG suppose that $f'(0) = 1$ (otherwise consider the function $x\mapsto f\left(\frac{x}{f'(0)}\right)$), so that $f^{(n)}(0) = 1$. Finally, if we make the leap of faith that our function $f$ (which we initially assumed to be differentiable only at $0$ with $f'(0) = 1$, and then proved to be $C^{\infty}$ on $\Bbb{R}$) is actually analytic on $\Bbb{R}$, then we know that $f$ must equal its Taylor series:
\begin{align}
f(x) &= \sum_{n=0}^{\infty} \dfrac{f^{(n)}(0)}{n!} x^n = \sum_{n=0}^{\infty}\dfrac{x^n}{n!}
\end{align}
This is one of the many ways one might guess the form of the exponential function, $\exp$. So, we now take this as a definition: $\exp(x) := \sum_{n=0}^{\infty}\frac{x^n}{n!}$. Of course, using basic power series techniques, we can show that $\exp$ is differentiable everywhere, and satisfies the functional equation with $\exp(0)=\exp'(0) = 1$.
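The Taylor series above is easy to evaluate numerically; here is a minimal sketch (the helper name `exp_series` is my own) comparing partial sums against the library exponential:

```python
import math

def exp_series(x: float, terms: int = 40) -> float:
    """Partial sum of sum_{n>=0} x^n / n!, the Taylor series above."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # turns x^n/n! into x^(n+1)/(n+1)!
    return total

for x in (-2.0, 0.0, 1.0, 5.0):
    assert math.isclose(exp_series(x), math.exp(x), rel_tol=1e-12)
```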
So, now, back to our original problem. Given any $a>0$, we initially wanted to find a function $F_a:\Bbb{R} \to \Bbb{R}$ such that $F_a$ satisfies the functional equation, $F_a(1) = a$, and $F_a$ is differentiable at $0$ with $F_a'(0) \neq 0$. Well, in this case, both $F_a$ and $\exp$ satisfy the hypotheses of Theorem 1. Thus, there exists a constant $c \in \Bbb{R}$ such that for all $x\in \Bbb{R}$, $F_a(x) = \exp(cx)$. To evaluate the constant $c$, we just plug in $x=1$, to get $c = (\exp^{-1}\circ F_a)(1) := \log(a)$. Therefore, we get $F_a(x) = \exp(x \log(a))$. This is why we come up with the definition $a^x := \exp(x\log(a))$.
Answered by peek-a-boo on December 15, 2021
The rule is merely a specific case of the addition law of indices/powers. Consider: $$(a^m)^n=a^m\times a^m \times a^m \times \dots \times a^m$$ where there are $n$ lots of $a^m$ multiplied together. We have the law that $$a^p \times a^q=a^{p+q}$$ Applying this to your question, we have: $$(a^m)^n=a^{m+m+m+\dots+m}$$ where we have $n$ lots of $m$. Now, $n$ lots of $m$ is equal to $n\times m$, which is equal to $mn$. So, finally, we have that $$(a^m)^n=a^{mn}$$ as required.
Edit: Since writing the above proof for positive integer exponents, I have been shown a proof that $\ln a^x=x\ln a$ which does not use what we are trying to prove in the question itself. This allows us to use the laws of logs, etc., to answer the question, as previous answers have done:
Consider $$f(x)=\ln x^n -n\ln x\implies f'(x)=\frac{nx^{n-1}}{x^n}-\frac{n}{x}=\frac{n}{x}-\frac{n}{x}=0$$ This means that $f(x)$ must be equal to some constant, $c$, as only constants differentiate to $0$. Let's try to find $c$. We have $$\ln x^n -n\ln x=c$$ Let $x=1$: $$\ln 1-n\ln 1=c=0-0=0$$ So we have $c=0$, leaving us with $$\ln x^n =n\ln x$$ For the sake of completeness, I'll now go on to answer the question for all real exponents. Note that $\ln x^n =n\ln x$ is true for all $n\in\mathbb R$.
Let $x=a^m$: $$e^{\ln x^n}=e^{\ln (a^m)^n}=(a^m)^n$$ But we also have $$e^{\ln x^n}=e^{n\ln x}=e^{n\ln a^m}=e^{mn\ln a}=e^{\ln a^{mn}}=a^{mn}$$ So, at last, we have: $$(a^m)^n=a^{mn}$$ Thanks to peek-a-boo, among others, who attempted to make me understand this in more detail.
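As a final numerical sanity check of the identity just derived (the sample values are arbitrary):

```python
import math

# Spot-check (a^m)^n == a^(m*n) for positive a and real exponents m, n.
for a in (0.7, 3.0):
    for m in (-1.3, 2.5):
        for n in (0.4, 3.0):
            assert math.isclose((a ** m) ** n, a ** (m * n), rel_tol=1e-12)
```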
Answered by A-Level Student on December 15, 2021
I am not yet allowed to comment, so I write here. For the case $a>0$, you can easily prove the identity using logarithms. For negative $a$, you need complex analysis to prove the same thing.
Use the following: $$\ln (a^m)^n=n\ln (a^m)=nm\ln a= \ln a^{mn}.$$
New edit for comments:
You have $y=a^x$ where $a>0$. By definition, $x=\log_a y$.
Now, instead, in your problem statement you have $y=(a^m)^n$, so choose $b=a^m$ so that $y=b^n$. Then, by using the definition, you get $n=\log_b y=\frac{\ln y}{\ln b}=\frac{\ln y}{\ln a^m}=\frac{\ln y}{m\ln a}=\frac{1}{m}\log_a y$, which gives you $mn=\log_a y$.
Choose $x=mn$ and use the definition again and you have proven the result.
Answered by Mikael Helin on December 15, 2021