MathOverflow Asked by MathCrawler on November 3, 2021
All rings considered will be commutative and unitary. Let $A$ be a ring, $S \subseteq A$ a multiplicatively closed subset. The localization $\lambda_S : A \longrightarrow A[S^{-1}]$ can be characterized as a ring homomorphism $\lambda : A \longrightarrow B$ with the following three properties:
(LC1) $\lambda$ localizes $S$, i.e. $\lambda(s)$ is invertible in $B$ for all $s \in S$;
(LC2) for every $b \in B$ there is $s \in S$ such that $sb \in \operatorname{im} \lambda$;
(LC3) $\ker \lambda = \{a \in A \mid \exists s \in S : sa = 0\}$.
One way to achieve this is to define the localization by means of generators and relations: take an indeterminate $T_s$ for each $s \in S$, form the polynomial ring $A[T] = A[T_s \mid s \in S]$ over $A$ in these indeterminates, and quotient out the ideal generated by the $sT_s - 1$, $s \in S$, thus defining the localization $A[S^{-1}]$:
\begin{equation*}
A[S^{-1}] := A[T_s \mid s \in S]\,/\,(sT_s - 1 \mid s \in S).
\end{equation*}
The structure map $\lambda_S : A \longrightarrow A[S^{-1}]$ then comes along as the composite
\begin{equation*}
A \longrightarrow A[T_s \mid s \in S] \longrightarrow A[S^{-1}].
\end{equation*}
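For instance, with $A = \mathbb{Z}$ and $S = \{1, 2, 4, 8, \dots\}$ the powers of $2$, this construction yields
\begin{equation*}
\mathbb{Z}[S^{-1}] = \mathbb{Z}[T_s \mid s \in S]\,/\,(sT_s - 1 \mid s \in S) \cong \mathbb{Z}[\tfrac{1}{2}] \subseteq \mathbb{Q},
\end{equation*}
with $\lambda_S$ the inclusion: each class $[T_s]$ is the inverse of $s$ (LC1), every element of $\mathbb{Z}[\tfrac{1}{2}]$ lands in $\mathbb{Z}$ after multiplication by a suitable $s \in S$ (LC2), and $\ker \lambda_S = 0$, in accordance with (LC3), since $\mathbb{Z}$ is a domain and $0 \notin S$.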
See [1], pp. I-7-8, for this construction. The question is how to verify properties (LC1)-(LC3) for it. In fact, (LC1) and (LC2) are straightforward, but (LC3) seems hard. It is known to be true, since it holds in the other widespread model of localization, given by $\mu_S : A \longrightarrow S^{-1}A$ with
\begin{equation*}
S^{-1}A := A \times S / \sim,
\end{equation*}
where $\sim$ denotes the equivalence relation
\begin{equation*}
(a,s) \sim (b,t) :\iff \exists u \in S:\, u(ta - sb) = 0,
\end{equation*}
and
\begin{equation*}
\mu_S(a) := a/1,
\end{equation*}
where, for $(a,s) \in A \times S$, $a/s$ denotes its equivalence class in $S^{-1}A$. Here, (LC3) is trivial for $\mu_S$, holding by construction. Since both $\lambda_S$ and $\mu_S$ are universal among the ring homomorphisms localizing $S$, it holds for $\lambda_S$, too. But to show this directly for $\lambda_S$, using its definition, is surprisingly difficult: if $\lambda_S(a) = 0$, this means there are $s_1, \dots, s_n \in S$ and polynomials $p_1(T), \dots, p_n(T) \in A[T]$ such that ($T_i := T_{s_i}$)
\begin{equation*}
a = \sum_{i=1}^n p_i(T)(s_iT_i - 1).
\end{equation*}
From this I can conclude, by setting all the indeterminates equal to $0$,
\begin{equation*}
a = -\sum_{i=1}^n a_i \quad,\quad a_i := p_i(0),
\end{equation*}
but this is, for the time being, the end of the flagpole. In the best
of all possible worlds, I would have $p_i(T) = a_i$; this would give
$a_is_i = 0$ for $i = 1, \dots, n$, and so $sa = 0$ with $s := s_1 \cdots s_n$, but I see no reason for that.
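(The best case does occur: for $A = \mathbb{Z}/6\mathbb{Z}$, $S = \{1, 2, 4\}$ and $a = 3$ one has
\begin{equation*}
3 = 3\,(2T_2 - 1) \quad \text{in $A[T_2]$},
\end{equation*}
with $p_1 = a_1 = 3$ constant, $a_1s_1 = 3 \cdot 2 = 0$ and $sa = 2 \cdot 3 = 0$; but in general the $p_i$ will involve the indeterminates.)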
So does somebody know what is needed to make progress towards (LC3)?
[1] Serre, J.-P., Algèbre locale – Multiplicités (Lecture Notes in Mathematics 11), Springer, 1965.
The proof of (LC3), in the given setting, is surprisingly difficult or, at least, elaborate. Let $a \in A$ with $\lambda_S(a) = [a] = 0$ in $A[S^{-1}]$, i.e. one has
\begin{equation}
\tag{1} a \in (sT_s - 1 \,|\, s \in S).
\end{equation}
What is to be shown is that
\begin{equation}
\tag{2} sa = 0
\end{equation}
for some $s \in S$. Because of (1), there are elements $s_1, \dots, s_n \in S$ and polynomials $p_1(T), \dots, p_n(T) \in A[T]$ such that
\begin{equation*}
a = \sum_{i=1}^n p_i(T)(s_iT_i - 1) \quad \text{in $A[T]$}, \quad T_i := T_{s_i}.
\end{equation*}
As a first reduction, we may assume $p_i(T) = p_i(T_1, \dots, T_n)$ for all $i$, so that
\begin{equation}
\tag{3} a = \sum_{i=1}^n p_i(T_1, \dots, T_n)(s_iT_i - 1) \quad \text{in $A[T]$}.
\end{equation}
Namely, let $T' \subseteq T$ be those indeterminates which either equal some $T_i$ or appear in at least one $p_i(T)$, $i = 1, \dots, n$, so that we may write $T' = \{T_1, \dots, T_n, T_{n+1}, \dots, T_q\}$. By introducing dummy terms with coefficient $0$ if necessary, we may assume $p_i(T) = p_i(T') = p_i(T_1, \dots, T_q)$, so that
\begin{equation*}
a = \sum_{i=1}^n p_i(T_1, \dots, T_q)(s_iT_i - 1).
\end{equation*}
Putting $p_i(T) := 0$ for $i = n+1, \dots, q$ then gives
\begin{equation*}
a = \sum_{i=1}^q p_i(T_1, \dots, T_q)(s_iT_i - 1) \quad \text{in $A[T]$},
\end{equation*}
which upon renaming $q$ by $n$ gives (3).
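To illustrate the dummy-term device in the simplest situation: if $a = p(T_u)(sT_s - 1)$ with $u, s \in S$, $u \ne s$, and $T_u$ actually occurring in $p$, then one writes
\begin{equation*}
a = p(T_u)(sT_s - 1) + 0 \cdot (uT_u - 1),
\end{equation*}
so that the indeterminates occurring are exactly those attached to the generators in the sum.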
To prove that $sa = 0$ for some $s \in S$ we proceed by induction on $n$. For $n = 1$ we start with
\begin{equation*}
a = p(T_s)(sT_s - 1) \quad \text{in $A[T]$}
\end{equation*}
for some indeterminate $T_s \in T$. We abbreviate notation by writing $u := T_s$, so that we have the equation
\begin{equation*}
a = p(u)(su - 1) \quad \text{in $A[T]$}.
\end{equation*}
Let $p(u) = \sum_{k=0}^d a_k u^k$; then
\begin{equation*}
\begin{split}
p(u)(su - 1)
&= \sum_{k=0}^d sa_k u^{k+1} - \sum_{k=0}^d a_k u^k\\
&= \sum_{k=1}^{d+1} sa_{k-1} u^k - \sum_{k=0}^d a_k u^k\\
&= sa_d u^{d+1} + \sum_{k=1}^d (sa_{k-1} - a_k) u^k - a_0\\
&= a,
\end{split}
\end{equation*}
so that
\begin{equation*}
a_0 = -a \quad,\quad a_k = sa_{k-1}\,,\; k = 1, \dots, d
\quad,\quad sa_d = 0,
\end{equation*}
hence
\begin{equation*}
a_k = -s^k a\,,\; k = 0, \dots, d \quad,\quad
sa_d = 0,
\end{equation*}
so that
\begin{equation*}
s^{d+1}a = -sa_d = 0,
\end{equation*}
as was to be shown. This establishes the base clause of the induction.
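As a quick check of these formulas, take $A = \mathbb{Z}/12\mathbb{Z}$, $s = 2$, $a = 3$ and $p(u) = 9 + 6u$, so $d = 1$: then
\begin{equation*}
p(u)(2u - 1) = 12u^2 + 12u - 9 = 3 \quad \text{in $A[u]$},
\end{equation*}
with $a_0 = 9 = -a$, $a_1 = 6 = -sa$, $sa_1 = 12 = 0$, and indeed $s^{d+1}a = 4 \cdot 3 = 0$ in $A$.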
We now assume that $n \ge 1$, and that, with $k < n$,
\begin{equation*}
a = \sum_{i=1}^k p_i(T_1, \dots, T_n)(s_iT_i - 1) \quad \text{in $A[T]$}
\end{equation*}
implies that $sa = 0$ for some $s \in S$, and we want to show that the same is true for $k = n$. So we assume, with a given ring $A$, that $a \in \ker \lambda_S$ and (3) holds. We put $A' := A[T_n]/(s_nT_n - 1)$. The projection $A \longrightarrow A'$ then realizes(!) the localization
\begin{equation*}
\lambda_{S'} : A \longrightarrow A[S'^{-1}]
\end{equation*}
with $S' := \{s_n\}$; in particular, $A' = A[S'^{-1}]$. The canonical map
\begin{equation*}
A[T_n] \longrightarrow A[T] \longrightarrow A[S^{-1}]
\end{equation*}
induces, by passing to the quotient,
\begin{equation*}
A' = A[S'^{-1}] \longrightarrow A[S^{-1}] = (A[S'^{-1}])[S^{-1}],
\end{equation*}
which realizes the localization
\begin{equation*}
\lambda'_S : A[S'^{-1}] \longrightarrow (A[S'^{-1}])[S^{-1}].
\end{equation*}
The localization map $\lambda_S : A \longrightarrow A[S^{-1}]$ then factors as the composite of localizations
\begin{equation*}
A \longrightarrow A' \longrightarrow A[S^{-1}] \;=\; A \longrightarrow A[S'^{-1}] \longrightarrow (A[S'^{-1}])[S^{-1}].
\end{equation*}
Let $\overline{a} \in A' = A[S'^{-1}]$ be the image of $a \in A$ under $A \longrightarrow A'$. Then $\lambda_S(a) = \lambda'_S(\overline{a}) = 0$, and so, by (3),
\begin{equation*}
\overline{a} = \sum_{i=1}^{n-1} \overline{p_i}(T_1, \dots, T_{n-1})(s_iT_i - 1) \quad \text{in $A'[T]$}
\end{equation*}
with $\overline{p_i}(T_1, \dots, T_{n-1}) = p_i(T_1, \dots, T_{n-1}, 1/s_n)$, $i = 1, \dots, n-1$, since $s_nT_n - 1 = 0$ in $A' = A[S'^{-1}]$. Therefore, by the induction hypothesis, $s\overline{a} = \overline{sa} = 0$ for some $s \in S$. Thus $sa \in \ker \lambda_{S'}$, and so, by the base clause $n = 1$ applied to $\lambda_{S'}$,
\begin{equation*}
s_n^{d+1}(sa) = (s_n^{d+1}s)a = 0
\end{equation*}
for some $d \ge 0$, which finishes the proof. As a byproduct of the proof we obtain that $s$ in (2) may be chosen as a product of the $s_i$'s (with repeated factors), i.e. as an element of the multiplicative closure of $\{s_1, \dots, s_n\}$.
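As a small illustration of (3) and of this byproduct, take $A = \mathbb{Z}[x]/(6x)$, $S = \{2^i3^j \mid i, j \ge 0\}$, $s_1 = 2$, $s_2 = 3$ and $a = x$. Since $6x = 0$, one has
\begin{equation*}
x = (-x)(2T_1 - 1) + (-2xT_1)(3T_2 - 1) \quad \text{in $A[T_1, T_2]$},
\end{equation*}
so $x \in \ker \lambda_S$, and an annihilating element as produced by the proof is indeed a product of the $s_i$'s, namely $s_1s_2 = 6$ with $6x = 0$.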
Answered by MathCrawler on November 3, 2021
Here is a proof that $$\ker \lambda = \{ a \in A \,\vert\, sa = 0 \text{ for some } s \in S\} \quad (LC_3)$$ holds true assuming that the following definition is in use: $$A[S^{-1}] = A[T_s \vert s \in S] /\left(sT_s - 1 \vert s \in S\right).$$
If $ta = 0$ for some $a \in A$ and some $t \in S$, then we have $$a = -(tT_t - 1)a \in (sT_s - 1 \vert s \in S),$$ since $-(tT_t - 1)a = -taT_t + a = a$. Hence the inclusion $\{ a \in A \,\vert\, sa = 0 \text{ for some } s \in S\} \subseteq \ker \lambda$ comes (almost) for free.
To prove the reverse inclusion, consider $a = \sum_{i=1}^n p_i(T_1, \dots, T_n)(s_iT_i - 1) \in A[T_1, \dots, T_n]$ and reason by induction on $n \ge 1$.
Let us suppose that $n = 1$, i.e., $a = p_1(T_1)(s_1T_1 - 1)$. Replacing simultaneously $a$ by $s_1^m a$ and $p_1(T_1)$ by $s_1^m p_1(T_1)$ for some $m > 0$ if need be, we can assume that either $p_1(T_1) = 0 = a$, or $\deg(s_1 p_1(T_1)) = \deg(p_1(T_1))$. As the latter identity is impossible (it would force $a = p_1(T_1)(s_1T_1 - 1)$ to have degree $\deg(p_1(T_1)) + 1 \ge 1$, whereas $a \in A$ is a constant), the induction base is settled.
Suppose now that $n > 1$ and let $\overline{a}$ be the image of $a$ in $\overline{A}[T_1, \dots, T_{n-1}] \simeq A[T_1, \dots, T_n]/(s_nT_n - 1)$, where $\overline{A} = A[T_n]/\left(s_nT_n - 1\right)$ (see claim below). Since $\overline{a} = \sum_{i=1}^{n-1} \overline{p_i}(T_1, \dots, T_{n-1})(s_iT_i - 1)$, where $\overline{p_i} \in \overline{A}[T_1, \dots, T_{n-1}]$ is obtained from $p_i$ by substituting for the last indeterminate $T_n$ its image in $\overline{A}$, the induction hypothesis yields $s\overline{a} = 0$ for some $s \in S$. This means that $sa \in (s_nT_n - 1) \subset A[T_n]$, so that we can conclude by resorting to the case $n = 1$. $\square$
Note that we have used the following:
Claim. Let $R$ be a commutative and unital ring. Let $R[T_1, \dots, T_n]$ be the ring of multivariate polynomials over $R$ in the $n$ indeterminates $T_1, \dots, T_n$. Let $P_1, \dots, P_k \in R[T_n]$ with $k \ge 0$. Then the natural isomorphism $R[T_1, \dots, T_n] \rightarrow (R[T_n])[T_1, \dots, T_{n-1}]$ induces a ring isomorphism $R[T_1, \dots, T_n]/(P_1, \dots, P_k) \rightarrow \overline{R}[T_1, \dots, T_{n-1}]$, where $\overline{R} \Doteq R[T_n]/(P_1, \dots, P_k)$.
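For instance, with $R = \mathbb{Z}$, $n = 2$, $k = 1$ and $P_1 = 2T_2 - 1$, the claim gives
\begin{equation*}
\mathbb{Z}[T_1, T_2]/(2T_2 - 1) \;\cong\; \bigl(\mathbb{Z}[T_2]/(2T_2 - 1)\bigr)[T_1] \;\cong\; \mathbb{Z}[\tfrac{1}{2}][T_1].
\end{equation*}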
Answered by Luc Guyot on November 3, 2021