Contents

1 Rings
  1.1 Definition and Examples
  1.2 Domains, Fields, and Division Rings
  1.3 Homomorphisms and Subrings
  1.4 Products of Rings
  1.5 Algebras

2 Ideals
  2.1 Ideals and Quotient Rings
  2.2 Generating Sets
  2.3 The Unit Ideal and Maximal Ideals
  2.4 Prime Ideals in a Commutative Ring
  2.5 The Chinese Remainder Theorem

3 Modules
  3.1 Definition and Examples
  3.2 The Usual Constructions
  3.3 Products, Sums, and Free Modules
  3.4 Mixing Ideals and Submodules

4 Categories
  4.1 Definitions and Examples
  4.2 Universal Properties
  4.3 Change of Rings

5 Some Tools
  5.1 Localization
  5.2 Determinants
  5.3 Exact Sequences
  5.4 Projective and Injective Modules
  5.5 Noetherian Rings and Modules
  5.6 Graded Rings

6 Principal Ideal Domains
  6.1 Euclidean Domains
  6.2 Unique Factorization
  6.3 Modules over a Euclidean Domain
  6.4 Modules over a Principal Ideal Domain
  6.5 Canonical Forms for Matrices

A A Brief Introduction to Metric Spaces
  A.1 The Basics
  A.2 Tietze’s Extension Theorem

B Rank of Free Modules, Infinite Case

C Unique Factorization of Polynomials
  C.1 Groups of Divisibility
  C.2 Divisibility in Polynomial Rings

D The Intersection of Two Plane Curves
  D.1 Associated Primes and Primary Decompositions
  D.2 Euler Characteristics and Hilbert Functions
  D.3 Monomial Bases


Chapter 1: Rings
§1.1 Definition and Examples

We’ll begin with what you should already know:
Definition: A group is a set G with a binary operation (which we will express as multiplication) that
satisfies the following three properties:
(i) For every x, y, z ∈ G, (xy)z = x(yz).
(ii) There is an e ∈ G such that ex = x = xe for every x ∈ G.

(iii) For every x ∈ G there is a y ∈ G such that xy = e = yx.
If the operation is also commutative, then we say that G is an abelian group.
From the abstract point of view, a ring is simply a group with an extra operation. Formally:
Definition: A ring is a set R with two binary operations (which we will express as addition and multiplication) that satisfy the following properties:
(i) The addition makes R an abelian group (and we write the identity element as 0).
(ii) For any x, y, z ∈ R we have x(y + z) = xy + xz and (y + z)x = yx + zx.
(iii) For any x, y, z ∈ R we have x(yz) = (xy)z.
(iv) There is an element 1 ∈ R such that 1x = x = x1 for all x ∈ R.
If the multiplication is also commutative, then we say that R is a commutative ring.
For the sake of completeness, I should mention that while there is no debate whatsoever about the
definition of group, there are a number of people who use a weaker definition of ring, dropping property (iii)
or (iv). They would then call my rings “associative rings with identity”. However, for this course we will
always assume that rings satisfy properties (iii) and (iv).
Example 1.1.1: Z, Q, R, and C are all commutative rings.

Example 1.1.2: Let R be any ring (usually taken to be commutative). The polynomial ring with coefficients
in R and variable x is the set

R[x] = { \sum_{k=0}^{\infty} a_k x^k | each a_k ∈ R and a_k = 0 for all but finitely many k }

with addition

\sum_{k=0}^{\infty} a_k x^k + \sum_{k=0}^{\infty} b_k x^k = \sum_{k=0}^{\infty} (a_k + b_k) x^k

and multiplication

\left( \sum_{k=0}^{\infty} a_k x^k \right) \left( \sum_{k=0}^{\infty} b_k x^k \right) = \sum_{k=0}^{\infty} c_k x^k

where c_k = \sum_{i=0}^{k} a_i b_{k-i} — this multiplication comes from the distributive axiom and the idea that (a_i x^i)(b_j x^j)
should be a_i b_j x^{i+j}. Note in particular that this means that the variable x commutes with every element
of the coefficient ring R; this will be important later on.
You should keep in mind that the “x” here is just a symbol — the elements of R[x] are basically
just sequences of elements of R, and we only use the powers of x as place holders to say where we are
in the sequence (and to motivate the definition of the multiplication). In particular, two elements of the
polynomial ring are equal if and only if all of their coefficients are equal.
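
To make the "sequence of coefficients" picture concrete, here is a minimal Python sketch (my own illustration, not part of the text; the helper names poly_add and poly_mul are invented). It stores an element of Z[x] as a list [a_0, a_1, . . .] and multiplies using the convolution formula c_k = \sum_{i=0}^{k} a_i b_{k-i} given above.

```python
# A polynomial is stored as its list of coefficients [a_0, a_1, a_2, ...];
# here the coefficients are ordinary Python integers, i.e. we work in Z[x].

def poly_add(f, g):
    """Coefficientwise sum of two coefficient lists."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    """Product via the convolution formula c_k = sum_i a_i * b_(k-i)."""
    if not f or not g:
        return []
    c = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] += a * b
    return c

# (1 + x)(1 - x) = 1 - x^2 in Z[x]
print(poly_mul([1, 1], [1, -1]))   # [1, 0, -1]
```

Two coefficient lists are equal exactly when all their entries are equal, which mirrors the statement that two polynomials are equal if and only if all of their coefficients are equal.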


Of course, given any f(x) = \sum_{k=0}^{\infty} a_k x^k ∈ R[x] we will get a function R → R which takes an r ∈ R to
f(r) = \sum_{k=0}^{\infty} a_k r^k. If R is one of the rings Z, Q, R, or C then two polynomials f and g are the same as
elements of R[x] if and only if they give the same function R → R — for these coefficient rings, a non-zero
polynomial can have only finitely many roots but if f and g define the same function then f − g will have
infinitely many roots. However, this is not true for all choices of coefficient ring. For example, suppose
R = {0, 1} with addition and multiplication given by the tables:
  +  | 0  1            ×  | 0  1
  ---+------           ---+------
  0  | 0  1            0  | 0  0
  1  | 1  0            1  | 0  1

Then r2 = r for all r ∈ R, so x2 and x give the same function R → R, but they are not the same as
elements of R[x].

Example 1.1.3: We can modify the previous example to give polynomial rings in more than one variable.
At first glance, it would make sense to do this by letting R[x_1, . . . , x_n] be the set of all formal sums
\sum a_{k_1,...,k_n} x_1^{k_1} · · · x_n^{k_n} with the k_j being non-negative integers and all but finitely many of the a_{k_1,...,k_n}
being zero. However, this makes a formal definition of multiplication quite complicated — with a bit of
fussing, you could come up with one, but is it really something you want to use to prove theorems?
Instead, we will define R[x1 , . . . , xn ] inductively, letting
R[x1 , . . . , xn ] = R[x1 , . . . , xn−1 ][xn ]
— that is, R[x1 , . . . , xn ] is a polynomial ring in one variable with the coefficients being polynomials in
n − 1 variables. This inductive definition of polynomial rings makes it relatively easy to prove theorems
of the type “If R is nice then so is R[x1 , . . . , xn ]” — all you really have to do is prove that R[x] is nice
and then use induction on the number of variables. For example, if you want to call my bluff and check
all of the ring axioms, you only have to check them for R[x]; you then get R[x1 , . . . , xn ] for free.
To see how this fancy definition fits with the more straightforward one, consider the two variable case:


An element of R[x, y] = R[x][y] looks like g = \sum_{k=0}^{\infty} f_k y^k where the coefficient f_k is an element of R[x]. If we
write this f_k as \sum_{j=0}^{\infty} a_{k,j} x^j, then our element g of R[x][y] becomes g = \sum_{k≥0, j≥0} a_{k,j} x^j y^k, which looks more
like what we think an element of R[x, y] should be. Conversely, given a formal sum \sum_{k≥0, j≥0} b_{k,j} x^j y^k, we
can make it look like an element of R[x][y] just by grouping together terms that involve the same power
of y. To multiply two elements in R[x][y] we use the distributive axiom together with the formula:

\bigl( (a_{i,j} x^i) y^j \bigr) \bigl( (b_{k,l} x^k) y^l \bigr) = \bigl( (a_{i,j} x^i)(b_{k,l} x^k) \bigr) y^{j+l} = (a_{i,j} b_{k,l} x^{i+k}) y^{j+l}

which hopefully matches what you think the product of two polynomials in two variables should be.
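
As an illustration of the regrouping just described, here is a small sketch (mine, not the text's; the representation and the helper names are invented). It stores an element of Z[x, y] sparsely by exponent pairs and then collects terms by the power of y to view it as an element of Z[x][y].

```python
# An element of Z[x, y] stored sparsely as a dict mapping an exponent pair
# (j, k) -- the power of x and the power of y -- to its integer coefficient.

def poly2_mul(g, h):
    """Multiply: exponent pairs add, coefficients multiply in Z."""
    c = {}
    for (j1, k1), a in g.items():
        for (j2, k2), b in h.items():
            key = (j1 + j2, k1 + k2)
            c[key] = c.get(key, 0) + a * b
    return {e: v for e, v in c.items() if v != 0}   # drop cancelled terms

def regroup_by_y(g):
    """Group a sparse polynomial by powers of y, i.e. view it in R[x][y]."""
    grouped = {}
    for (j, k), a in g.items():
        grouped.setdefault(k, {})[j] = a
    return grouped

x_plus_y  = {(1, 0): 1, (0, 1): 1}    # x + y
x_minus_y = {(1, 0): 1, (0, 1): -1}   # x - y
prod = poly2_mul(x_plus_y, x_minus_y)
print(prod)                 # {(2, 0): 1, (0, 2): -1}, i.e. x^2 - y^2
print(regroup_by_y(prod))   # {0: {2: 1}, 2: {0: -1}} -- the y-coefficients live in Z[x]
```
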
Example 1.1.4: Let R be a ring (again usually taken to be commutative) and let G be a group. The
group ring of G with coefficients in R is the set

R[G] = { \sum_{g∈G} a_g g | each a_g ∈ R and a_g = 0 for all but finitely many g }

with addition

\sum_{g∈G} a_g g + \sum_{g∈G} b_g g = \sum_{g∈G} (a_g + b_g) g

and multiplication

\left( \sum_{g∈G} a_g g \right) \left( \sum_{g∈G} b_g g \right) = \sum_{g∈G} c_g g

where c_g = \sum_{h∈G} a_h b_{h^{-1}g}.

This multiplication formula should look a lot like the one for a polynomial ring, with "h^{-1}g" taking
the place of "k − i". In fact, we're doing basically the same thing, except that instead of adding powers of
some fixed variable, we're multiplying group elements — that is, when we multiply two single terms a_h h
and b_k k we get (a_h b_k)(hk). Again, note that this means that we are forcing the coefficients to commute
with the group elements.
If G is written additively instead of multiplicatively, then this formula would become (a_h h)(b_k k) =
(a_h b_k)(h + k), which looks a bit ridiculous. To make it look a bit more natural we instead write elements of
R[G] like polynomials with the powers taken from G — that is, an element of R[G] has the form \sum_{g∈G} a_g x^g
where x is just a symbol. Then the multiplication formula

\left( \sum_{g∈G} a_g x^g \right) \left( \sum_{g∈G} b_g x^g \right) = \sum_{g∈G} \left( \sum_{h∈G} a_h b_{-h+g} \right) x^g

amounts to saying x^h x^k = x^{h+k}.
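
Here is a small sketch of the group-ring multiplication formula (my own illustration; the dict representation and the name group_ring_mul are not from the text). I take R = Z and G = Z/3 written additively, so the formula reads (a_h x^h)(b_k x^k) = (a_h b_k) x^{h+k} with exponents added mod 3.

```python
# An element of the group ring R[G] is stored as {group element: coefficient}.
# Here R = Z and G = Z/3 written additively.

def group_ring_mul(u, v, op):
    """Multiply two elements of R[G]; `op` is the group operation."""
    c = {}
    for h, a in u.items():
        for k, b in v.items():
            g = op(h, k)
            c[g] = c.get(g, 0) + a * b
    return {g: coeff for g, coeff in c.items() if coeff != 0}

add_mod_3 = lambda h, k: (h + k) % 3

# (1 + x^1)(1 - x^1) = 1 - x^2 in Z[Z/3]
print(group_ring_mul({0: 1, 1: 1}, {0: 1, 1: -1}, add_mod_3))   # {0: 1, 2: -1}

# (e - g)(e + g + g^2) = 0: a zerodivisor in Z[Z/3] (compare Example 1.2.3 below)
print(group_ring_mul({0: 1, 1: -1}, {0: 1, 1: 1, 2: 1}, add_mod_3))   # {}
```
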
Example 1.1.5: Let A be an abelian group. Recall that an endomorphism of A is a group homomorphism
from A to itself. The set End A of endomorphisms of A is a ring where the addition is defined point-wise
and the multiplication is given by composition: Given f, g ∈ End A, the sum f + g is the function defined
by (f + g)(a) = f (a) + g(a) and the product f g is the function defined by (f g)(a) = f (g(a)).
Similarly, if V is a real vector space then the set EndR V of linear transformations V → V is a ring
with point-wise addition and composition for multiplication.
Example 1.1.6: Let R be a ring (usually taken to be commutative) and let n be a positive integer. Let
Mn (R) be the set of n × n matrices whose entries are elements of R. Given two such matrices A and B,
we can define the sum A + B and product AB just as for real matrices, and these operations make the set
Mn (R) into a ring.
You can think of this example as a variation on the previous example: If V is a finite dimensional
real vector space (say with dimension dim V = n) and you have a specific basis for V in mind, then you
learned in linear algebra that any endomorphism of V is given by an element of Mn (R); also, the sum
and product operations on matrices correspond to the sum and product on endomorphisms. Similarly,
any endomorphism of the free abelian group Zn = Z ⊕ · · · ⊕ Z is given by an element of Mn (Z). (If you
haven’t seen this before, start with the case n = 1 and show that any group homomorphism f : Z → Z is
given by multiplication by f (1). For the general case, given a group homomorphism g : Zn → Zn , and two
integers i and j between 1 and n, let gi,j be the composition πi ◦ g ◦ ιj of g with the j th inclusion Z → Zn
and the ith projection Zn → Z. Letting B ∈ Mn (Z) be the matrix whose ij th entry is gij (1), you should
be able to prove that g(a) = Ba for every a ∈ Zn — viewing a as a column vector, of course.)
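
The recipe in parentheses is easy to try out; here is a short sketch (mine, purely illustrative) that recovers the matrix of an endomorphism of Z^2 from its values on the standard basis vectors and checks that the endomorphism is then given by multiplication by that matrix.

```python
# Recover the matrix of an endomorphism of Z^n from its values on the
# standard basis vectors e_1, ..., e_n, as described in the text.

def matrix_of(g, n):
    """Column j of the matrix is g(e_j)."""
    cols = [g(tuple(1 if i == j else 0 for i in range(n))) for j in range(n)]
    # entry (i, j) is the i-th coordinate of g(e_j)
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def apply(B, a):
    """Multiply the matrix B by the column vector a."""
    return tuple(sum(B[i][j] * a[j] for j in range(len(a))) for i in range(len(B)))

# an endomorphism of Z^2: g(a1, a2) = (a1 + 2*a2, 3*a1)
g = lambda a: (a[0] + 2 * a[1], 3 * a[0])

B = matrix_of(g, 2)
print(B)                              # [[1, 2], [3, 0]]
print(g((4, 5)), apply(B, (4, 5)))    # (14, 12) (14, 12)
```
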
Example 1.1.7: Let R be a ring (usually taken to be commutative) and let X be any set. The set F (X; R)
of functions from X to R is a ring where both addition and multiplication are defined pointwise.
Traditionally, groups are used mostly to describe symmetries, in some sense of the word. On the other
hand, the examples in this section (and variations that will appear later in this course) show rings in three
different contexts:
• Like groups, they can be used to describe “symmetries” of some object, provided that this object itself
has an algebraic structure: For example, the endomorphism ring of an abelian group A can be thought
of as describing the symmetries of A. Also, we will see later that the group ring Z[G] gives information
about how the group G can act on abelian groups (see Example 3.1.6); likewise, R[G] tells us how G
can act on real vector spaces.
• Rings also appear in number theory, giving us a context in which to talk about divisibility and prime
factorizations; in fact parallels between factoring integers and factoring polynomials — that is, parallels
between the rings Z and R[x] — were among the first reasons for creating the concept of an abstract
ring. (See Section 6.2, for example.)
• Finally, rings show up in geometry where certain subrings of F (X; R) can tell us about the geometry
of the set X (as in Example 1.4.3). Likewise, viewing elements of the ring R[x1 , . . . , xn ] as functions
from Rn to R lets us use the polynomial ring to describe the geometry of Rn ; we will come back to
this theme often.
You should also keep in mind that the “symmetry” rings (R[G], End A, and Mn (R)) are usually noncommutative while the “arithmetic” rings (Z and R[x]) and “geometry” rings (R[x1 , . . . , xn ] and F (X; R))
are commutative (as long as the “coefficient” ring R is commutative).

§1.2 Domains, Fields, and Division Rings

It is easy to prove that if R is a ring and x is any element of R, then x0 = 0 = 0x (just use distributivity
and the equation 0 + 0 = 0). However, it is not always true that xy = 0 implies x = 0 or y = 0:
Example 1.2.1: Let R be the ring of functions F ({0, 1}, Z). Let f, g ∈ R be defined by f (0) = 1, f (1) = 0,
g(0) = 0, and g(1) = 1. Then (f g)(0) = 1 · 0 = 0 and (f g)(1) = 0 · 1 = 0, so f g = 0, but neither f nor g
is zero.
Definitions: Let R be a ring and x ∈ R.
(i) We say that x is a left zerodivisor if there is a y ∈ R, y ≠ 0, such that xy = 0. Similarly, x is a right
zerodivisor if zx = 0 for some z ≠ 0.
(ii) A left inverse for x is a y ∈ R with yx = 1. Similarly, a right inverse for x is a z ∈ R with xz = 1.
(iii) We say that x is a unit if it has both a left inverse and a right inverse.
If R is commutative, then the distinctions “left” and “right” are meaningless: x is a left zerodivisor if and
only if it is right zerodivisor, and similarly for inverses; so, if we are in a situation where we are assuming R
is commutative we will just talk about zerodivisors and inverses without any reference to sidedness.
Lemma 1.2.1 Let R be a ring and x ∈ R.
(i) If x has a left (resp. right) inverse then x is not a left (resp. right) zerodivisor.
(ii) If x is a unit then any left inverse of x is also a right inverse and vice versa. Furthermore, this inverse
is unique.
Proof:
(i) Suppose y is a left inverse of x. If xz = 0 then yxz = 0 as well, but yx = 1, so this says z = 0.
Thus x is not a left zerodivisor. The proof of the “right” case is similar.
(ii) Since x is a unit, we have a left inverse y and a right inverse z. Using the fact that yx = 1 we get
yxz = z, and using the fact that xz = 1 we get yxz = y. Therefore y = z, which shows that our
left inverse y is also a right inverse for x and our right inverse z is also a left inverse. It further
shows that any two inverses for x have to be the same.
Part (ii) of this lemma lets us speak of “the” inverse of a unit u; we write this inverse as u−1 .
The existence of a left inverse does not keep a ring element from being a right zero-divisor:
Example 1.2.2:

Let A be a countably generated free abelian group — A = Z ⊕ Z ⊕ · · · — and let

R = End A. Let f be the “shift right” endomorphism of A, g the “shift left”, and h the projection on the
first component. That is:
f(a_1, a_2, . . .) = (0, a_1, a_2, . . .)
g(a_1, a_2, . . .) = (a_2, a_3, . . .)
h(a_1, a_2, . . .) = (a_1, 0, 0, . . .).


Then gf = 1, so f has a left inverse, but hf = 0, so f is a right zerodivisor. This also shows that in part
(ii) of the lemma we definitely had to assume that our x had inverses on both sides before we could prove
that a left inverse was also a right inverse.
If a is a unit of a ring R, then for any b ∈ R, the equation ax = b will have a unique solution — namely
x = a−1 b. More generally, if a has a right inverse then there will always be at least one solution, and if a
has a left inverse there will be at most one solution. In fact, to get solutions to be unique, one only needs to
know that a is not a left zerodivisor. (Make sure you can prove all these statements and formulate analogs
for equations of the form xa = b — start by working with the case where a is a unit and see what you use
to prove that a−1 b is a solution and what you use to prove that it is the only solution.)
Definitions: Let R be a ring in which 1 ≠ 0.
(i) R is a division ring if every non-zero element of R is a unit.
(ii) R is a field if it is both commutative and a division ring.
(iii) R is an integral domain (or sometimes just domain) if it is commutative and no non-zero element of
R is a zerodivisor.
In this definition we explicitly assumed that 1 ≠ 0. If 1 = 0 then R has only that one element, since
for any x ∈ R we have x = 1x = 0x = 0. This is called the zero ring and it is frequently a troublemaker
— there are a lot of theorems which are true for all rings except the zero ring. So, to keep from having to
worry about that case when we talk about division rings, fields, and domains (which are supposed to be nice
rings), we simply declare that these classes of rings do not include the zero ring.
By Lemma 1.2.1, any field is also an integral domain. The converse, of course, is not true, since Z is a
domain but not a field.
Proposition 1.2.2 If R is an integral domain, then the polynomial ring R[x] is also an integral domain.
Proof: If f and g are two non-zero elements of R[x] then we can write them as f = an xn + · · · + a0
and g = bm xm + · · · + b0 with an and bm both non-zero. Then the coefficient of xn+m in the product f g
is an bm — all the other terms of f and g are too small to contribute to the xn+m part of the product.
Since R is an integral domain, an bm ≠ 0, which makes f g ≠ 0.
Note that this proof also shows that if R is a domain and f and g are two elements of R[x], then

deg f g = deg f + deg g. This should let you prove that R[x] will never be a field. You should be careful
with polynomial rings R[x] where the coefficient ring R is not an integral domain — if a and b are non-zero
elements of R with ab = 0, then deg ab = −∞ while deg a + deg b = 0 + 0 = 0, so the degree of a product
need not be the same as the sum of the degrees.
Group rings usually have zerodivisors:
Example 1.2.3:

Suppose G is a group and g is a non-trivial torsion element of G. Let n be the smallest

power with g n = e. Then (e − g)(e + g + · · · + g n−1 ) = e − g n = 0 while e − g and e + g + · · · + g n−1 are
both non-zero since they are sums of distinct group elements.
On the other hand, if G is a torsionfree abelian group, then R[G] will be an integral domain whenever
R is: Suppose ( \sum a_g g )( \sum b_g g ) = 0. Let H be the subgroup generated by those g ∈ G with either
a_g ≠ 0 or b_g ≠ 0. There can only be finitely many such g, so H is finitely generated; as a subgroup of
a torsionfree group, H is also torsionfree. Therefore, H ≅ Z^n for some n. By Exercise 1.1, R[H] is a
domain. However, we can look at ( \sum a_g g )( \sum b_g g ) as a product of elements of R[H], so this tells us
that either \sum a_g g = 0 or \sum b_g g = 0.

The traditional example of a non-commutative division ring is the ring of quaternions:
Example 1.2.4:

Let H be a real vector space with basis {1, i, j, k}. We define a multiplication on H by

first setting i^2 = j^2 = k^2 = −1, ij = −(ji) = k, jk = −(kj) = i, and ki = −(ik) = j, and then “extending
by linearity” — that is, we want the product on more complicated elements of H to be distributive, we
want 1 to be the identity element, and we want real numbers to commute with quaternions. This makes
the product of two arbitrary quaternions
(a0 + a1 i + a2 j + a3 k)(b0 + b1 i + b2 j + b3 k) =

(a0 b0 − a1 b1 − a2 b2 − a3 b3 )
+(a1 b0 + a0 b1 − a3 b2 + a2 b3 )i
+(a2 b0 + a3 b1 + a0 b2 − a1 b3 )j
+(a3 b0 − a2 b1 + a1 b2 + a0 b3 )k.

If we fix a specific x = a0 + a1 i + a2 j + a3 k ∈ H, then given any y ∈ H we can find the coordinates of
xy relative to the basis {1, i, j, k} from the coordinates of y by multiplying by the matrix

\begin{pmatrix}
  a_0 & -a_1 & -a_2 & -a_3 \\
  a_1 &  a_0 & -a_3 &  a_2 \\
  a_2 &  a_3 &  a_0 & -a_1 \\
  a_3 & -a_2 &  a_1 &  a_0
\end{pmatrix}.

Assuming x ≠ 0, this matrix will be invertible — you can check that its determinant is (a_0^2 + a_1^2 + a_2^2 + a_3^2)^2 ≠
0 — so the linear transformation µx : H → H given by µx (y) = xy is a bijection. In particular, that means
that there is a y with xy = 1, so x has a right inverse. Now, right-multiplication by x — that is, the
function H → H given by z → zx is also a linear transformation. Since x has a right inverse, Lemma 1.2.1
says that this linear transformation is injective. Since H is finite dimensional, this linear transformation
must also be surjective, which says that x has a left inverse. Thus x is a unit. Since this works for any
non-zero x ∈ H, that shows that H is a division ring.
Note that the degree 2 polynomial x^2 + 1 ∈ H[x] has at least 3 roots: i, j, and k. So, even in a “nice” noncommutative ring, one cannot say that the number of roots of a polynomial is at most its degree. (This is
true for integral domains, however; see Exercise 1.2b.)
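
The explicit product formula above can be checked mechanically; the following sketch (my own, not from the text) multiplies quaternions stored as 4-tuples and verifies that i, j, and k each square to −1, so all three are roots of x^2 + 1.

```python
# Quaternions stored as 4-tuples (a0, a1, a2, a3) meaning a0 + a1*i + a2*j + a3*k,
# multiplied with the explicit formula from Example 1.2.4.

def quat_mul(x, y):
    a0, a1, a2, a3 = x
    b0, b1, b2, b3 = y
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a1*b0 + a0*b1 - a3*b2 + a2*b3,
            a2*b0 + a3*b1 + a0*b2 - a1*b3,
            a3*b0 - a2*b1 + a1*b2 + a0*b3)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

print(quat_mul(i, j), quat_mul(j, i))        # (0, 0, 0, 1) (0, 0, 0, -1): ij = k, ji = -k
print([quat_mul(q, q) for q in (i, j, k)])   # three copies of (-1, 0, 0, 0)
```
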
One can modify the last part of the proof that H is a division ring to get the following fact:
Proposition 1.2.3 Let R be a finite ring. Then every element of R is either a unit or a two-sided zero
divisor. In particular, every finite integral domain is a field.
Proof: Let x ∈ R. We will show that if x has no right inverse then x is a left zerodivisor, which will
imply that x has no left inverse either. An analogous proof will show that if x has no left inverse then
it is a right zerodivisor and thus has no right inverse. That means that either x has inverses on both
sides or it is a zerodivisor on both sides.
Suppose x has no right inverse and let µx : R → R be the function given by left-multiplication by
x — that is, µx (y) = xy. Since x has no right inverse, 1 is not in the image of µx , so this function is
not surjective. Since R is finite, the pigeonhole principle tells us that there are two distinct elements

y, z ∈ R which get mapped to the same thing. From xy = xz we can conclude that x(y − z) = 0; since
y − z ≠ 0, that makes x a left zerodivisor, as claimed.
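
A quick brute-force illustration of this dichotomy (my own sketch; Z/12 is just a convenient finite commutative example): every element of Z/12 is either a unit or a zerodivisor, and never both.

```python
# In the finite ring Z/12, every element is either a unit or a zerodivisor.
n = 12
for x in range(n):
    is_unit = any((x * y) % n == 1 for y in range(n))
    is_zerodivisor = any((x * y) % n == 0 for y in range(1, n))
    print(x, "unit" if is_unit else "zerodivisor")
    assert is_unit != is_zerodivisor   # exactly one of the two holds
```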

§1.3 Homomorphisms and Subrings

Definition: Let R and S be two rings. A function f : R → S is a ring homomorphism if it satisfies the
following properties:
(i) For all x, y ∈ R, f (x + y) = f (x) + f (y).
(ii) For all x, y ∈ R, f (xy) = f (x)f (y).
(iii) f (1) = 1.
Note that property (i) says that f gives a homomorphism of abelian groups (R, +) → (S, +). Since
group homomorphisms always preserve the identity of the group, we automatically get f (0) = 0. However,
property (ii) is not enough to guarantee f (1) = 1:
Example 1.3.1: Let X be a set and let Y be a proper subset of X. Let R be any non-zero ring. We define
a function ϕ : F (Y ; R) → F (X; R) as follows: Given f ∈ F (Y ; R) we let ϕ(f ) be the function X → R
defined by
ϕ(f)(x) = \begin{cases} f(x) & \text{if } x ∈ Y \\ 0 & \text{otherwise.} \end{cases}

Then ϕ satisfies properties (i) and (ii). Now let 1X and 1Y be the identity elements of F (X; R) and
F (Y ; R) respectively. Then 1X (x) = 1 for all x ∈ X, but ϕ(1Y )(z) = 0 for all z ∈ X \ Y . Since we are
assuming that Y is a proper subset of X, there is at least one such z, so ϕ(1Y ) ≠ 1X .
On the other hand, you should be able to prove that the function ψ : F (X; R) → F (Y ; R) defined by
ψ(f ) = f |Y is a ring homomorphism (where “f |Y ” means the restriction of f to the subset Y ).
The difference between the ring theory and group theory situations is that we don’t know that f (1) has
an inverse — if f (1) has an inverse on either side, then from f (1) = f (1)f (1) we could conclude f (1) = 1.
So, for example, if S is a division ring and we know f (1) ≠ 0, then property (ii) will guarantee that f (1) = 1.

Homomorphisms from the ring of integers are very easy to describe:
Proposition 1.3.1 Let R be any ring. Then there is precisely one ring homomorphism from Z to R.
Proof: We define f : Z → R by
• f (0) = 0.
• For n > 0, we define f (n) inductively by f (n) = f (n − 1) + 1.
• For n < 0, we let f (n) = −f (−n).
To see that this f preserves addition, let m, n ∈ Z. If n = 0 (and m is arbitrary) we have
f (m + 0) = f (m) = f (m) + 0 = f (m) + f (0).
If m + n = 0 and n > 0, then m = −n < 0 and
f (m) + f (n) = −f (−m) + f (n) = −f (n) + f (n) = 0 = f (m + n).
Next we consider the case where both n and m + n are positive, and prove that f (m + n) = f (m) + f (n)
by induction on n. We’ve already dealt with the base case, and the inductive step is:
f (m + n) = f (m + n − 1) + 1 = f (m) + f (n − 1) + 1 = f (m) + f (n).

If m < 0 and n > 0 then f(n) = f((m + n) + (−m)) = f(m + n) + f(−m) (since both −m and (m + n) + (−m)
are positive), so
f (m + n) = −f (−m) + f (n) = f (m) + f (n).
Finally, if m and n are both negative, then
f (m) + f (n) = −f (−m) − f (−n) = −(f (−m) + f (−n)) = −f(−(m + n)) = f (m + n).
To see that f preserves multiplication, we again start with the easy verification that f (m0) =
f (m)f (0) for any m ∈ Z. By induction, given m, n ∈ Z with n > 0 we have
f (m)f (n) = f (m)(f (n − 1) + 1) = f (m)f (n − 1) + f (m) = f (mn − m) + f (m) = f (mn).
On the other hand, if n < 0 and m is still any integer, we have
f (m)f (n) = f (m)(−f (−n)) = −f (−mn) = f (mn)
(the last equation works because f , being an abelian group homomorphism, preserves additive inverses).
Lastly, f (1) = f (0) + 1 = 1, so f is a ring homomorphism.
Conversely, if g : Z → R is any ring homomorphism, then we must have g(0) = 0 = f (0) and
g(1) = 1 = f (1). Then for any n > 0 induction gives g(n) = g(n − 1) + g(1) = f (n − 1) + f (1) = f (n).
If n < 0, then since g preserves additive inverses, we also have g(n) = −g(−n) = −f (−n) = f (n). Thus
f is unique.
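
In effect, the unique homomorphism sends n to the n-fold sum ±(1 + · · · + 1) in R. Here is a small sketch (mine, not the text's) that computes it exactly as in the proof, taking R = Z/5 with its elements represented as Python integers.

```python
# The unique ring homomorphism Z -> R, computed as in the proof:
# f(0) = 0, f(n) = f(n-1) + 1 for n > 0, and f(n) = -f(-n) for n < 0.

def hom_from_Z(n, zero, one, add, neg):
    if n == 0:
        return zero
    if n > 0:
        return add(hom_from_Z(n - 1, zero, one, add, neg), one)
    return neg(hom_from_Z(-n, zero, one, add, neg))

# the ring R = Z/5
f = lambda n: hom_from_Z(n, 0, 1, lambda a, b: (a + b) % 5, lambda a: (-a) % 5)

print([f(n) for n in range(-3, 8)])   # [2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2]
# f preserves multiplication as well, e.g. f(6*7) == f(6)*f(7) in Z/5
assert f(6 * 7) == (f(6) * f(7)) % 5
```
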
Definition: Let ϕ : R → S be a homomorphism of rings. We say that ϕ is an isomorphism if there is a ring
homomorphism ψ : S → R such that ϕ ◦ ψ = idS and ψ ◦ ϕ = idR . In this case we say that R and S are
isomorphic, written R ≅ S.
As with groups, to see that ϕ is an isomorphism it is enough to show that it is both injective and
surjective: if we know that much, then ϕ has an inverse function ψ : S → R on the level of sets, and one can
prove that this inverse is automatically a ring homomorphism (the proof is essentially the same as in the
group theory case).
Example 1.3.2: In Example 1.1.6, I claimed that the rings Mn (Z) and End Zn were the same. To make
this explicit, we consider the ring homomorphism ϕ : Mn (Z) → End Zn defined as follows: Given any
A ∈ Mn (Z) we let ϕ(A) be the group homomorphism Zn → Zn with ϕ(A)(x) = Ax for all x ∈ Zn . This

ϕ is a ring homomorphism because matrix multiplication is distributive and associative.
In Example 1.1.6, I showed you how to prove that any endomorphism of Zn is given by a matrix, which
is to say that ϕ is surjective.
To see that ϕ is injective, first note that as a ring homomorphism ϕ is also a homomorphism of the
underlying additive groups; that means that to show that ϕ is injective we only need to show that ϕ(A) = 0
implies A = 0. So, suppose A ≠ 0. That means that there must be a non-zero entry in A, say in the j th
column. Letting ej be the j th standard basis vector in Zn — ej is 1 in the j th spot and 0 everywhere else
— Aej is the j th column of A. Since we are assuming that A has a non-zero element in that column, this
says that ϕ(A) is not the zero homomorphism.
Therefore ϕ is both surjective and injective, which makes it an isomorphism.
Definition: Let R be a ring and let S be a subset of R. We say that S is a subring of R provided
(i) If x and y are in S then so are x + y and xy.
(ii) 1 ∈ S.
This says not only that S is a ring in its own right but that the inclusion S → R is a ring homomorphism.
Example 1.3.3:

Let X be any set, let Y be a proper subset of X, and let R be any non-zero ring. Let

S = {f ∈ F (X; R) | f (x) = 0 for all x ∉ Y }. Then S is closed under addition and multiplication, but S is
not a subring of F (X; R) because the constant function 1 is not in S. However, S is still a ring in its own
right: its identity is the “indicator function” IY defined by



IY(x) = \begin{cases} 1 & \text{if } x ∈ Y \\ 0 & \text{otherwise.} \end{cases}
Example 1.3.4: Let X be a metric space. The set C(X; R) of continuous, real-valued functions on X is a
subring of F (X; R). (If you are unfamiliar with metric spaces, see Appendix A.)
Example 1.3.5:

Let f : R → S be a homomorphism of rings. Then im f = {f (x) | x ∈ R} is a subring

of S. However, as long as S is not the zero ring, ker f = {x ∈ R | f (x) = 0} will not be a subring of R
because f (1) = 1 ≠ 0.
If f : R → S is an injective ring homomorphism, then we often identify R with the image of f and think
of R as a subring of S, particularly in the following cases:
Example 1.3.6:

Let R be a ring. Then the set of constant polynomials is a subring of R[x] which is

isomorphic to R. In fact, the function ι : R → R[x] which takes an r ∈ R to the corresponding constant

polynomial is an injective ring homomorphism and its image is the set of all constant polynomials. We
call this ι the “natural” or “canonical” inclusion of R into R[x].
Example 1.3.7: Similarly, if G is a group with identity element e, then {re | r ∈ R} is a subring of R[G]
and this subring can be expressed as the image of an obvious ring homomorphism R → R[G]; just as in the
polynomial case, we call this ring homomorphism the “natural” or “canonical” inclusion of R into R[G].
Definition: Let R be a ring. The center of R is Z(R) = {x ∈ R | xy = yx for every y ∈ R}.
It is an easy exercise to show that Z(R) is always a subring of R.
Example 1.3.8:


Let R be a ring and n a positive integer. Then the center of Mn (R) is {rI | r ∈ Z(R)}.

To see why this is true, we start by looking at the matrix Eij whose entries are all zeros except for a 1 in
the ith row and j th column. Given A ∈ Mn (R), the effect of multiplying A by Eij on the left is to move
the j th row of A into the ith position and to annihilate all the other rows of A. That is (writing aij for
the ij th entry of A)

Eij A = \begin{pmatrix}
  0      & \cdots & 0      \\
  \vdots &        & \vdots \\
  a_{j1} & \cdots & a_{jn} \\
  \vdots &        & \vdots \\
  0      & \cdots & 0
\end{pmatrix}

with the row (a_{j1}, . . . , a_{jn}) sitting in the ith position and every other row zero.

Similarly, the effect of multiplying on the right by Eij is to move the ith column of A into the j th position
and to annihilate all the other columns of A.
Now suppose A ∈ Z(Mn (R)). Let i and j be two indices with i ≠ j. The ij th entry of Eii A is aij
— the ith row of Eii A is the same as the ith row of A — while the ij th entry of AEii is 0 — the entire
j th column of AEii is zero. Since A is central, we must have Eii A = AEii , so aij = 0. Thus A is a
diagonal matrix. Similarly, the ij th entry of Eij A is ajj while the ij th entry of AEij is aii . Thus to get
Eij A = AEij we need aii = ajj which shows that A = rI for some r ∈ R. Finally, if s is any element of
R then (rI)(sI) = (rs)I while (sI)(rI) = (sr)I, so to get rI ∈ Z(Mn (R)) we need r ∈ Z(R).
This shows that any element of Z(Mn (R)) has the desired form. Conversely, given any r ∈ Z(R) we
can see that rI ∈ Z(Mn (R)) by noting that for any matrix A, (rI)A is what we get by multiplying each
entry of A by r on the left, while A(rI) is what we get by multiplying each entry of A by r on the right.
Since r is central, it doesn’t matter which side we multiply on, so (rI)A = A(rI).
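
Since M_2(F_2) has only sixteen elements, the claim can even be verified by brute force; the following sketch (my own illustration, not from the text) computes the center of M_2(F_2) directly and finds exactly the scalar matrices 0 and I.

```python
# Brute-force the center of M_2(F_2); it should be exactly {0, I},
# i.e. the scalar matrices r*I with r in the coefficient ring F_2.
from itertools import product

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

all_matrices = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
center = [A for A in all_matrices
          if all(mat_mul(A, B) == mat_mul(B, A) for B in all_matrices)]
print(center)   # [((0, 0), (0, 0)), ((1, 0), (0, 1))] -- only 0 and I
```
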
As with groups, we can generalize the idea of the center of a ring:
Definition: Let R be a ring and X a subset of R. The centralizer of X in R is the set
CR (X) = {r ∈ R | rx = xr for all x ∈ X}.
Again, the centralizer of any subset is always a subring.
Example 1.3.9: Let R = M2 (R) and X = \left\{ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \right\}. Then you can check that the centralizer of X is the
set of all matrices of the form \begin{pmatrix} a & b \\ b & a \end{pmatrix}.

When I introduced the polynomial ring R[x] with coefficients in R, I made a big deal about the fact that
while elements of R[x] do define functions R → R, we shouldn’t think of two polynomials as being the same
just because they define the same function. We make this distinction because we want to be able to evaluate
a polynomial not just at elements of R, but at elements of rings that can be reached from R. For example,


given f ∈ Z[x], we can talk about f(√2) even though √2 ∉ Z.
We can formalize this property of polynomial rings as follows:
Theorem 1.3.2 Let ϕ : R → S be a ring homomorphism and let s ∈ CS (im ϕ). Then there exists a unique
ring homomorphism ϕs : R[x] → S with the property that ϕs (x) = s and the diagram
              ϕs
    R[x] - - - - - -> S
      ↑             ↗
    ι │           ϕ
      │         ↗
      R
commutes (where ι is the natural inclusion R → R[x]).


Proof: First I’ll prove that there can only be one such function: If ψ : R[x] → S makes the diagram
commute and satisfies ψ(x) = s, then for any an xn + · · · + a1 x + a0 ∈ R[x] we have to have
ψ(an xn + · · · + a1 x + a0 ) = ψ(an )ψ(x)n + · · · + ψ(a1 )ψ(x) + ψ(a0 )
= ϕ(an )sn + · · · + ϕ(a1 )s + ϕ(a0 ).
This shows that our only option is to let ϕs (an xn + · · · + a1 x + a0 ) = ϕ(an )sn + · · · + ϕ(a1 )s + ϕ(a0 );
all that remains is to show that this ϕs is a ring homomorphism.
It’s easy to see that ϕs is additive and ϕs (1) = 1. To show that ϕs is multiplicative, we need to use
the fact that s commutes with the image of ϕ: If we consider two single terms axn , bxm ∈ R[x], we have
ϕs (axn )ϕs (bxm ) = ϕ(a)sn · ϕ(b)sm = ϕ(a)ϕ(b)sn sm = ϕ(ab)sn+m = ϕs (axn · bxm ).
The general case then follows from additivity of ϕs and distributivity:
ϕs (an xn + · · · + a0 ) ϕs (bm xm + · · · + b0 )
  = (ϕs (an xn ) + · · · + ϕs (a0 )) (ϕs (bm xm ) + · · · + ϕs (b0 ))
  = \sum_{i=0}^{n} \sum_{j=0}^{m} ϕs (ai xi ) ϕs (bj xj )
  = \sum_{i=0}^{n} \sum_{j=0}^{m} ϕs (ai xi · bj xj )
  = ϕs \left( \sum_{i=0}^{n} \sum_{j=0}^{m} ai xi · bj xj \right)
  = ϕs ((an xn + · · · + a0 ) · (bm xm + · · · + b0 ))
The commutativity part of the hypothesis cannot be dropped:
Example 1.3.10:

The inclusion C → H (where H is the ring of quaternions from Example 1.2.4) is a

ring homomorphism, but that inclusion cannot be extended to a ring homomorphism ϕj : C[x] → H which
takes x to j: On the one hand, if we think of the term ix as i · x, then we would have
ϕj (ix) = ϕj (i)ϕj (x) = ij = k.
On the other hand, ix is also x · i, which says ϕj (ix) would have to be j · i = −k.

Theorem 1.3.2 can be interpreted as saying that given any ring homomorphism ϕ : R → S, we can think

of every polynomial f ∈ R[x] as giving a function f : CS (im ϕ) → CS (im ϕ), where f (s) = ϕs (f ). The
fact that ϕs is a ring homomorphism tells us that addition and multiplication of polynomials translates to
addition and multiplication of functions — that is, for every s ∈ CS (im ϕ) and every pair f, g ∈ R[x] we
have (f + g)(s) = f (s) + g(s) and (f g)(s) = f (s)g(s).
Example 1.3.11:

Let R be a commutative ring and let n be any positive integer. There is a ring

homomorphism ∆ : R → Mn (R) given by ∆(r) = rI, the diagonal matrix with all r’s on the main
diagonal. As we saw in Example 1.3.8, the matrices in the image of ∆ commute with every n × n
matrix, so CMn (R) (im ∆) = Mn (R). Thus, given any matrix A ∈ Mn (R) there is a unique homomorphism
∆A : R[x] → Mn (R) with ∆A (x) = A. This allows us to evaluate a polynomial at a matrix: given p ∈ R[x],
we let p(A) = ∆A (p).
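
Concretely, ∆A(p) is obtained by substituting A for x and multiplying the constant term by the identity matrix. A short sketch (mine; numpy is used only for the matrix arithmetic, and the example polynomial and matrix are arbitrary):

```python
# Evaluate a polynomial at a square matrix: p(A) = a_n A^n + ... + a_1 A + a_0 I.
# The constant term becomes a_0 * I because the structure map sends r to r*I.
import numpy as np

def eval_at_matrix(coeffs, A):
    """coeffs = [a_0, a_1, ..., a_n]; evaluated by Horner's rule."""
    n = A.shape[0]
    result = np.zeros_like(A)
    for a in reversed(coeffs):
        result = result @ A + a * np.eye(n, dtype=A.dtype)
    return result

A = np.array([[2, 1],
              [0, 3]])
# p(x) = x^2 - 5x + 6 = (x - 2)(x - 3); since A is triangular with diagonal 2, 3,
# p(A) = (A - 2I)(A - 3I) works out to the zero matrix.
print(eval_at_matrix([6, -5, 1], A))   # the zero matrix
```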

§1.4 Products of Rings

Let R and S be two rings. We can make the cartesian product R × S into a ring by letting (r, s) · (r′, s′) =
(rr′, ss′) and (r, s) + (r′, s′) = (r + r′, s + s′).
The sets R × 0 = {(r, 0) | r ∈ R} and 0 × S = {(0, s) | s ∈ S} are not subrings of R × S — neither
contains the identity element (1, 1) — but they are rings in their own right, with R × 0 ≅ R and 0 × S ≅ S.
Furthermore, every element of R × S is the sum of an element of R × 0 and an element of 0 × S, and this
fact makes it easy to prove that certain properties of R and S are inherited by R × S. For example, we have
the following fact (I leave the proof to you; use the fact that every element of R × 0 will commute with every
element of 0 × S):
Proposition 1.4.1 Let R and S be two rings. Then Z(R × S) = Z(R) × Z(S). In particular, R × S is
commutative if and only if R and S are both commutative.

However, you do need to be careful about other properties of R and S passing to R × S. For example,
the product of an element of R × 0 with an element of 0 × S will always be zero; thus, assuming that neither
R nor S is the zero ring, R × S will have lots of zerodivisors, even if R and S are both fields.

Note that if S is the zero ring, then R × S = R × 0 ≅ R; likewise if R = 0 then R × S ≅ S.
Given a ring R we sometimes want to know if we can write R ≅ S1 × S2 for two non-zero rings S1 and
S2 . The key idea is the following:
Definition: Let R be a ring. An idempotent in R is an e ∈ R with e2 = e. If e ∈ Z(R) then we say that e
is a central idempotent.
We will see in Theorem 1.4.3 that every central idempotent lets us write R as a product of two rings.
First, though, we need to know how the two rings come into being:
Lemma 1.4.2 Let R be a ring and e ∈ R a central idempotent of R. Let Re = {re | r ∈ R}. Then Re is a
ring (with the operations inherited from R) and the function r → re is a ring homomorphism.
Proof: First of all, you should check that Re is closed under addition and multiplication. Since the
operations in R satisfy the abelian group, distributivity, and associativity axioms, Re will satisfy them
as well. So, the only thing we need to check is that Re has an identity. However, because e is central
and an idempotent we have e(re) = (re)e = re2 = re for all re ∈ Re; thus e is our identity. This shows
that Re is a ring; I leave it to you to check that r → re gives a ring homomorphism.
Theorem 1.4.3 Let R be a ring. If e is a central idempotent of R then so is 1 − e and R ≅ Re × R(1 − e).
Conversely, if R ≅ S1 × S2 , then there is a central idempotent e ∈ R such that S1 ≅ Re and S2 ≅ R(1 − e).
Proof: Let e be an idempotent. Then e(1 − e) = e − e2 = 0, so (1 − e)2 = (1 − e) − e(1 − e) = 1 − e,

which says that (1 − e) is also an idempotent. Since Z(R) is a subring of R, 1 − e will be a central
idempotent whenever e is.
Now define a function ϕ : R → Re × R(1 − e) by ϕ(r) = (re, r(1 − e)) and a function ψ : Re × R(1 − e) → R
by ψ(re, s(1 − e)) = re + s(1 − e). It should be easy to verify that ϕ is a ring homomorphism, so all we
have to do is verify that ψ is the set-theoretic inverse of ϕ. For every r ∈ R we have
(ψ ◦ ϕ)(r) = re + r(1 − e) = r,

so ψ ◦ ϕ = id. To get ϕ ◦ ψ = id, we start with the equation e(1 − e) = 0 from the first paragraph;
combining that with the fact that 1 − e is an idempotent, we find that for any r, s ∈ R
(re + s(1 − e))(1 − e) = s(1 − e).

Similarly, (1 − e)e = 0 gives

(re + s(1 − e))e = re.

Putting these pieces together,

(ϕ ◦ ψ)(re, s(1 − e)) = ( (re + s(1 − e))e, (re + s(1 − e))(1 − e) ) = (re, s(1 − e)).

Therefore ϕ : R → Re × R(1 − e) is an isomorphism.
On the other hand, suppose we have an isomorphism θ : S1 × S2 → R. Looking at the previous
paragraph, we would like to match this θ with our old ψ. This tells us how we should pick our idempotent
e — in the old case we have e = ψ(1e, 0(1 − e)), and (1e, 0(1 − e)) corresponds to (1, 0) ∈ S1 × S2 . So,
we let e = θ(1, 0). Since θ(1, 1) has to be 1, we also have θ(0, 1) = 1 − e, which matches with the old
formula 1 − e = ψ(0e, 1(1 − e)).
Now, to build an isomorphism θ1 : S1 → Re, we start by noticing that for any s ∈ S1 we have
θ(s, 0) = θ((s, 0) · (1, 0)) = θ(s, 0)θ(1, 0) ∈ Re. So, we at least have a function θ1 : S1 → Re defined by
θ1 (s) = θ(s, 0). Since θ commutes with addition and multiplication, so does θ1 . Also, our definition of
e makes θ1 (1) = e, and e is the identity of Re, so θ1 is a ring homomorphism. To see that θ1 is an

isomorphism, we first note that it is injective because θ is injective. To see that θ1 is surjective, consider
an arbitrary re ∈ Re. Since θ is surjective, we can write r = θ(s1 , s2 ). Then from e = θ(1, 0) we have
re = θ((s1 , s2 ) · (1, 0)) = θ(s1 , 0) = θ1 (s1 ). Thus S1 ≅ Re.
A similar argument shows that we have an isomorphism θ2 : S2 → R(1−e) defined by θ2 (s) = θ(0, s).
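
A small commutative example makes the theorem easy to watch in action (my own illustration, not from the text): in R = Z/6 the element e = 3 is a central idempotent, Re = {0, 3}, R(1 − e) = {0, 2, 4}, and r ↦ (re, r(1 − e)) is a bijection onto their product.

```python
# Decompose R = Z/6 using the central idempotent e = 3 (since 3*3 = 9 = 3 in Z/6).
n, e = 6, 3
assert (e * e) % n == e                  # e is idempotent
f = (1 - e) % n                          # the complementary idempotent 1 - e = 4

Re = sorted({(r * e) % n for r in range(n)})   # [0, 3]
Rf = sorted({(r * f) % n for r in range(n)})   # [0, 2, 4]

# phi(r) = (re, r(1-e)) is a bijection R -> Re x R(1-e)
phi = {r: ((r * e) % n, (r * f) % n) for r in range(n)}
print(Re, Rf)
print(phi)          # six distinct pairs, one for each element of Z/6
assert len(set(phi.values())) == n and len(Re) * len(Rf) == n
```

Here Re ≅ Z/2 and R(1 − e) ≅ Z/3, recovering Z/6 ≅ Z/2 × Z/3, a special case of the Chinese Remainder Theorem treated in Section 2.5.
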
Example 1.4.1: If R is an integral domain, then the equation e(1−e) = 0 shows that the only idempotents
of R are 1 and 0. Therefore the theorem says that the only way to have R ≅ S1 × S2 is to have S1 = 0
(e = 0) or S2 = 0 (e = 1).


Example 1.4.2: Let R be an integral domain and X any set. A function f ∈ F (X; R) is an idempotent if
and only if f (x) is an idempotent of R for every x ∈ X. Since R is an integral domain, f is an idempotent
if and only if each f (x) is either 0 or 1. Letting Y = {x ∈ X | f (x) = 1} we see that f is just the indicator
function IY (see Example 1.3.3). In this case the other idempotent 1 − IY is just the indicator function
IX\Y of the complement of Y .
Now, given a function f ∈ F (X; R), multiplying f by IY doesn’t change the value of the function at
points in Y , but does make the function 0 outside of Y ; thus
F (X; R)IY = {f ∈ F (X; R) | f (x) = 0 for all x ∉ Y }.

The ring homomorphism F (X; R) → F (Y ; R) given by f → f |Y then induces an isomorphism F (X; R)IY ≅
F (Y ; R) (compare with Example 1.3.1). So, the first part of the theorem effectively says that F (X; R) ≅
F (Y ; R) × F (X \ Y ; R).
You shouldn’t really need a theorem to tell you this — it’s easy to check that the function θ : F (X; R) →
F (Y ; R) × F (X \ Y ; R) defined by θ(f ) = (f |Y , f |X\Y ) is an isomorphism. The important part of the
theorem is that it tells us that these are the only decompositions of F (X; R). That is, writing F (X; R) ≅
S1 × S2 always amounts to writing X as a disjoint union of two subsets.
This example shows that it is very easy to decompose F (X; R); this happens because we are completely
ignoring any geometric properties of the set X. If X happens to be a metric space and we restrict our
attention to just the continuous functions, then we will get fewer decompositions:
Example 1.4.3:

Let X be a metric space. Since C(X; R) is a subring of F (X; R), we see as in the

previous example that every idempotent in C(X; R) has to be the indicator function IY of some Y ⊂ X.
Now, if IY is continuous, then Y , being IY−1 ({1}) has to be a closed subset of X; likewise X \ Y = IY−1 ({0})
must be closed. Conversely, if Y and X \ Y are both closed, then IY will be continuous: To see this, first
consider x ∈ Y . Since X \ Y is closed, Y is open, which means there is a δ > 0 such that any x′ with
d(x, x′) < δ will be in Y . Then IY (x′) = 1 = IY (x), so in particular |IY (x′) − IY (x)| < ε for any ε > 0,
which says that IY is continuous at x. A similar proof works for x ∈ X \ Y .

This shows that writing C(X; R) as S1 × S2 is the same as writing X as a disjoint union of two closed
subsets. At this point I’d like to remind you of a definition from analysis (or topology): a metric space X
is said to be connected if the only way to write X = Y ∪ Z where Y and Z are disjoint closed subsets is to
have Y = ∅ or Z = ∅. Now, we just saw that writing X = Y ∪ Z with Y and Z disjoint closed sets is the
same as writing C(X; R) as a product of two rings, (and those two rings will be C(Y ; R) and C(Z; R)).
Also, given any metric space Ξ, we have C(Ξ; R) = 0 if and only if Ξ = ∅. We therefore have an algebraic
condition for connectedness: X is connected if and only if C(X; R) is indecomposable — that is, if the
only way to write C(X; R) as a product of two rings is to have one of the rings be the zero ring.
Going back to the non-geometric case, if we let X be the two element set {1, 2} and R be any ring, then
we can factor F (X; R) as F ({1}; R) × F ({2}; R). However, defining a function on a one-point set amounts
to just picking an element of R, so F ({1}; R) ≅ R ≅ F ({2}; R). Thus F ({1, 2}; R) ≅ R × R. This suggests
a way to define the product of a large number of rings: given an arbitrary set Λ, we can think of F (Λ; R) as
the product of copies of R, one copy for each λ ∈ Λ. Generalizing this to products where not all the factors
are the same ring, we have:
Definition: Let Λ be any set and suppose that for every λ ∈ Λ we have a ring Rλ . Then the product
\prod_{λ∈Λ} Rλ of the rings Rλ is

{ f : Λ → \bigcup_{λ∈Λ} Rλ | f (λ) ∈ Rλ for every λ ∈ Λ }.

We give \prod_{λ∈Λ} Rλ a ring structure by letting f + g be the function with (f + g)(λ) = f (λ) + g(λ) and f g be
the function (f g)(λ) = f (λ)g(λ), both of which are well-defined since f (λ) and g(λ) both belong to the
same ring Rλ .

I will stick to this version of a product as long as Λ is an arbitrary “weird” set. However, if Λ = {1, . . . , n},
then I will usually treat elements of \prod_{λ∈Λ} Rλ in a more conventional manner as n-tuples (r1 , . . . , rn ).


§1.5 Algebras

If you go back to the list of examples of rings in the first section, you will see that most of the examples
started with “Let R be a ring (usually taken to be commutative).” In all of those examples it is possible to
view the “coefficient” ring R as sitting inside of the new ring S: In R[x] the elements of R are the constant
polynomials; in R[G] they are multiples of the group’s identity; in Mn (R) they are the diagonal matrices with
all the diagonal entries the same; and in F (X; R) they are the constant functions. In all of these cases, as
long as R is itself commutative the elements of R will commute with all of the elements of S. We generalize
these examples with the following definition:
Definition: Let K be a commutative ring. A K-algebra (or an algebra over K) is a ring R together with a
ring homomorphism K → Z(R) (called the structure morphism of the algebra).
It’s traditional to use the letter K here because one most often talks about algebras over a field, and
the German word for field is Körper. (Yes, I know that Körper literally means body, but that’s what the
Germans call them. The French and Spanish agree, calling fields corps and cuerpos, respectively, but the only
time I’ve seen fields mentioned in Italian they were called campos, so we English-speakers aren’t completely
alone.)
In addition to the examples already mentioned, we have:
Example 1.5.1:

Let R be any ring. Recall (Proposition 1.3.1) that there is a unique homomorphism

Z → R. This also applies to the subring Z(R) — that is we have a unique homomorphism Z → Z(R) —
so R is automatically a Z-algebra in a unique way. (Note, by the way, that this also says that the image
of the homomorphism Z → R has to lie in Z(R) since the composite Z → Z(R) → R has to be our unique

homomorphism Z → R.)
A homomorphism of algebras should “preserve the algebra structure” which we express by using a commutative diagram:
Definition: Let K be a commutative ring and let R and S be K-algebras, with ιR : K → Z(R) and ιS : K →