Difference Algebra
Algebra and Applications
Volume 8
Managing Editor:
Alain Verschoren
University of Antwerp, Belgium
Series Editors:
Alice Fialowski
Eötvös Loránd University, Hungary
Eric Friedlander
Northwestern University, USA
John Greenlees
Sheffield University, UK
Gerhard Hiss
Aachen University, Germany
Ieke Moerdijk
Utrecht University, The Netherlands
Idun Reiten
Norwegian University of Science and Technology, Norway
Christoph Schweigert
Hamburg University, Germany
Mina Teicher
Bar-Ilan University, Israel
Algebra and Applications aims to publish well-written and carefully refereed
monographs with up-to-date expositions of research in all fields of algebra, including
its classical impact on commutative and noncommutative algebraic and
differential geometry, K-theory and algebraic topology, and further applications
in related domains, such as number theory, homotopy and (co)homology theory
through to discrete mathematics and mathematical physics.
Particular emphasis will be put on state-of-the-art topics such as rings of differential
operators, Lie algebras and super-algebras, group rings and algebras, Kac-Moody theory,
arithmetic algebraic geometry, Hopf algebras and quantum groups, as well as their
applications within mathematics and beyond. Books dedicated to computational aspects
of these topics will also be welcome.
Alexander Levin
Difference Algebra
Alexander Levin
The Catholic University of America
Washington, D.C.
USA
ISBN 978-1-4020-6946-8
e-ISBN 978-1-4020-6947-5
Library of Congress Control Number: 2008926109
© 2008 Springer Science+Business Media B.V.
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by
any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written
permission from the Publisher, with the exception of any material supplied specifically for the purpose
of being entered and executed on a computer system, for exclusive use by the purchaser of the work.
Printed on acid-free paper
9 8 7 6 5 4 3 2 1
springer.com
Preface
Difference algebra as a separate area of mathematics was born in the 1930s when
J. F. Ritt (1893 - 1951) developed the algebraic approach to the study of systems
of difference equations over functional fields. In a series of papers published
during the decade from 1929 to 1939, Ritt worked out the foundations of both
differential and difference algebra, the theories of abstract algebraic structures
with operators that reflect the algebraic properties of derivatives and shifts of
arguments of analytic functions, respectively. One can say that differential and
difference algebra grew out of the study of algebraic differential and difference
equations with coefficients from functional fields in much the same way as the
classical algebraic geometry arose from the study of polynomial equations with
numerical coefficients.
Ritt’s research in differential algebra was continued and extended by
H. Raudenbush, H. Levi, A. Seidenberg, A. Rosenfeld, P. Cassidy, J. Johnson,
W. Keigher, W. Sit and many other mathematicians, but the most important
role in this area was played by E. Kolchin, who recast the whole subject in
the style of modern algebraic geometry with the additional presence of derivation
operators. In particular, E. Kolchin developed the contemporary theory
of differential fields and created the differential Galois theory, where finite
dimensional algebraic groups play the same role as finite groups play in the
theory of algebraic equations. Kolchin’s monograph [105] is the deepest and most
complete book on the subject; it contains many ideas that have determined the
main directions of research in differential algebra for the last thirty years.
The rate of development of difference algebra after Ritt’s pioneering works and
the works by F. Herzog, H. Raudenbush and W. Strodt published in the 1930s (see
[82], [166], [172], and [173]) was much slower than the rate of expansion of its
differential counterpart. The situation began to change in the 1950s due to R.
Cohn, whose works [28]–[44] not only raised difference algebra to a level
comparable with that of differential algebra, but also clarified why many ideas
that are fruitful in differential algebra cannot be successfully applied in the
difference case, and why many methods of difference algebra have no differential
analogs. R. Cohn’s book [41] hitherto remains the only fundamental monograph on
difference algebra. Since the 1960s various problems of difference algebra have
been developed by A. Babbitt [4], I. Balaba [5]–[7], I. Bentsen [10],
A. Bialynicki-Birula [11], R. Cohn [45]–[53], P. Evanovich [60],
[61], C. Franke [67]–[73], B. Greenspan [75], P. Hendrics [78]–[81], R. Infante
[86]–[91], M. Kondrateva [106]–[110], B. Lando [117], [118], A. Levin [109],
[110] and [120]–[136], A. Mikhalev [109], [110], [131]–[136] and [139]–[141],
E. Pankratev [106]–[110], [139]–[141] and [151]–[154], M. van der Put and
M. Singer [159], and some other mathematicians. Nowadays difference algebra
appears as a rich theory with its own methods that are very useful in the study
of systems of equations in finite differences, functional equations, differential
equations with delay, algebraic structures with operators, and group and semigroup
rings. A number of interesting applications of difference algebra in the theory
of discrete-time nonlinear systems can be found in the works by M. Fliess
[62]–[66], E. Aranda-Bricaire, U. Kotta and C. Moog [2], and some other authors.
This book contains a systematic study of both ordinary and partial difference
algebraic structures and their applications. R. Cohn’s monograph [41] was
limited to the ordinary case, which seemed reasonable because just a few works
on partial difference algebra had been published by 1965. Nowadays, due to the
efforts of I. Balaba, I. Bentsen, R. Cohn, P. Evanovich, A. Levin, A. Mikhalev,
E. Pankratev and some other mathematicians, partial difference algebra possesses
a body of results comparable to the main stem of ordinary difference algebra.
Furthermore, various applications of the results on partial difference fields,
modules and algebras (such as the theory of strength of systems of difference
equations or the dimension theory of algebraic structures with operators) show
the importance of partial difference algebra for the related areas of mathematics.
This monograph is intended to help researchers in the area of difference
algebra and algebraic structures with operators; it could also serve as a working
textbook for a graduate course in difference algebra. The book is almost entirely
self-contained: the reader only needs to be familiar with basic ring-theoretic and
group-theoretic concepts and elementary properties of rings, fields and modules.
All more or less advanced results in ring theory, commutative algebra and
combinatorics, as well as the basic notation and conventions, can be found in
Chapter 1. This chapter contains many results (especially in field theory, the
theory of numerical polynomials, and the theory of graded and filtered algebraic
objects) that cannot be found in any single textbook.
Chapter 2 introduces the main objects of difference algebra and discusses
their basic properties. Most of the constructions and results are presented in
the maximal possible generality; however, the chapter includes some results on
ordinary difference structures that cannot be generalized to the partial case
(or for which the existence of such generalizations is an open problem). One of
the central topics of Chapter 2 is the study of rings of difference and inversive
difference polynomials. We introduce the concept of reduction of such polynomials
and present the theory of difference characteristic sets in the spirit of the
corresponding Ritt-Kolchin scheme for the rings of differential polynomials.
This chapter is also devoted to the study of perfect difference ideals and rings
that satisfy the ascending chain condition for such ideals. Another important
part of Chapter 2 is the investigation of Ritt difference rings, that is,
difference rings satisfying the ascending chain condition for perfect difference
ideals. In particular, we prove the Ritt-Raudenbush basis theorem for partial
difference polynomial rings and decomposition theorems for perfect difference
ideals and difference varieties.
Chapter 3 is concerned with properties of difference and inversive difference
modules and their applications in the theory of linear difference equations. We
prove difference versions of the Hilbert and Hilbert-Samuel theorems on
characteristic polynomials of graded and filtered modules, introduce the concept
of a difference dimension polynomial, and describe invariants of such a polynomial.
In this chapter we also present a generalization of the classical Gröbner basis
method that allows one to compute multivariable difference dimension polynomials
of difference and inversive difference modules. In addition to the applications
in difference algebra, our construction of generalized Gröbner bases, which
involves several term orderings, gives a tool for computing dimension polynomials
in many other cases. In particular, it can be used for the computation of Hilbert
polynomials associated with multi-filtered modules over polynomial rings, rings
of differential and difference-differential operators, and group algebras.
Chapter 4 is devoted to the theory of difference fields. The main concepts and
objects studied in this chapter are difference transcendence bases, the difference
transcendence degree of a difference field extension, dimension polynomials
associated with finitely generated difference and inversive difference field
extensions, and the limit degree of a difference field extension. The last
concept, introduced by R. Cohn [39] for ordinary difference fields, is generalized
here and used in the proof of the fundamental fact that a subextension of a
finitely generated partial difference field extension is finitely generated. The
chapter also contains some specific results on ordinary difference field
extensions and a discussion of difference algebras.
Chapter 5 treats the problems of compatibility, replicability, and monadicity
of difference field extensions, as well as properties of difference specializations.
The main results of this chapter are the fundamental compatibility and replicability theorems, the stepwise compatibility condition for partial difference fields,
Babbitt’s decomposition theorem and its applications to the study of finitely
generated pathological difference field extensions. The chapter also contains basic notions and results of the theory of difference kernels over ordinary difference
fields, as well as the study of prolongations of difference specializations.
In Chapter 6 we introduce the concept of a difference kernel over a partial
difference field and study properties of such difference kernels and their prolongations. In particular, we discuss generic prolongations and realizations of
partial difference kernels. In this chapter we also consider difference valuation
rings, difference places and related problems of extensions of difference specializations.
Chapter 7 is devoted to the study of varieties of difference polynomials,
algebraic difference equations, and systems of such equations. One of the central
results of this chapter is R. Cohn’s existence theorem stating that every
nontrivial ordinary algebraically irreducible difference polynomial has an
abstract solution. We then discuss some consequences of this theorem and some
related statements for partial difference polynomials. (As is shown in [10],
Cohn’s theorem cannot be directly extended to the partial case; however, some
versions of the existence theorem can be obtained in this case as well.)
Most of the results on varieties of ordinary difference polynomials were obtained
in R. Cohn’s works [28]–[37]; many of them have quite technical proofs which
hitherto have not been simplified. Trying not to overload the reader with
technical details, we devote a section of Chapter 7 to a review of such results,
which are formulated without proofs. The other parts of this chapter contain a
discussion of Greenspan’s and Jacobi’s bounds for systems of ordinary difference
polynomials and the concept of the strength of a system of partial difference
equations. The last concept arises as a difference version of the notion of the
strength of a system of differential equations studied by A. Einstein [55]. We
define the strength of a system of equations in finite differences and treat it
from the point of view of the theory of difference dimension polynomials. We
also give some examples where we determine the strength of some well-known
systems of difference equations.
The last chapter of the book gives an overview of difference Galois theory. In
the first section we consider the Galois correspondence for difference field
extensions and related problems of compatibility and monadicity. The other two
sections are devoted to a review of the classical Picard-Vessiot theory of linear
homogeneous difference equations and some results on Picard-Vessiot rings. We
do not consider the Galois theory of difference equations based on Picard-Vessiot
rings; it is perfectly covered in the book by M. van der Put and M. Singer [159],
and we would highly recommend this monograph to the reader who is interested
in the subject.
I am very grateful to Professor Richard Cohn for his support, encouragement
and help. I wish to thank Professor Alexander V. Mikhalev, who introduced
me to the subject, and my colleagues P. Cassidy, R. Churchill, W. Keigher,
M. Kondrateva, J. Kovacic, S. Morrison, E. Pankratev, W. Sit and all participants
of the Kolchin Seminar in Differential Algebra at the City University of
New York for many fruitful discussions and advice.
Contents

1 Preliminaries . . . 1
1.1 Basic Terminology and Background Material . . . 1
1.2 Elements of the Theory of Commutative Rings . . . 15
1.3 Graded and Filtered Rings and Modules . . . 37
1.4 Numerical Polynomials . . . 47
1.5 Dimension Polynomials of Sets of m-tuples . . . 53
1.6 Basic Facts of the Field Theory . . . 64
1.7 Derivations and Modules of Differentials . . . 89
1.8 Gröbner Bases . . . 96

2 Basic Concepts of Difference Algebra . . . 103
2.1 Difference and Inversive Difference Rings . . . 103
2.2 Rings of Difference and Inversive Difference Polynomials . . . 115
2.3 Difference Ideals . . . 121
2.4 Autoreduced Sets of Difference and Inversive Difference Polynomials. Characteristic Sets . . . 128
2.5 Ritt Difference Rings . . . 141
2.6 Varieties of Difference Polynomials . . . 149

3 Difference Modules . . . 155
3.1 Ring of Difference Operators. Difference Modules . . . 155
3.2 Dimension Polynomials of Difference Modules . . . 157
3.3 Gröbner Bases with Respect to Several Orderings and Multivariable Dimension Polynomials of Difference Modules . . . 166
3.4 Inversive Difference Modules . . . 185
3.5 σ*-Dimension Polynomials and their Invariants . . . 195
3.6 Dimension of General Difference Modules . . . 232

4 Difference Field Extensions . . . 245
4.1 Transformal Dependence. Difference Transcendental Bases and Difference Transcendental Degree . . . 245
4.2 Dimension Polynomials of Difference and Inversive Difference Field Extensions . . . 255
4.3 Limit Degree . . . 274
4.4 The Fundamental Theorem on Finitely Generated Difference Field Extensions . . . 292
4.5 Some Results on Ordinary Difference Field Extensions . . . 295
4.6 Difference Algebras . . . 300

5 Compatibility, Replicability, and Monadicity . . . 311
5.1 Compatible and Incompatible Difference Field Extensions . . . 311
5.2 Difference Kernels over Ordinary Difference Fields . . . 319
5.3 Difference Specializations . . . 328
5.4 Babbitt’s Decomposition. Criterion of Compatibility . . . 332
5.5 Replicability . . . 352
5.6 Monadicity . . . 354

6 Difference Kernels over Partial Difference Fields. Difference Valuation Rings . . . 371
6.1 Difference Kernels over Partial Difference Fields and their Prolongations . . . 371
6.2 Realizations of Difference Kernels over Partial Difference Fields . . . 376
6.3 Difference Valuation Rings and Extensions of Difference Specializations . . . 385

7 Systems of Algebraic Difference Equations . . . 393
7.1 Solutions of Ordinary Difference Polynomials . . . 393
7.2 Existence Theorem for Ordinary Algebraic Difference Equations . . . 402
7.3 Existence of Solutions of Difference Polynomials in the Case of Two Translations . . . 412
7.4 Singular and Multiple Realizations . . . 420
7.5 Review of Further Results on Varieties of Ordinary Difference Polynomials . . . 425
7.6 Ritt’s Number. Greenspan’s and Jacobi’s Bounds . . . 433
7.7 Dimension Polynomials and the Strength of a System of Algebraic Difference Equations . . . 440
7.8 Computation of Difference Dimension Polynomials in the Case of Two Translations . . . 455

8 Elements of the Difference Galois Theory . . . 463
8.1 Galois Correspondence for Difference Field Extensions . . . 463
8.2 Picard-Vessiot Theory of Linear Homogeneous Difference Equations . . . 472
8.3 Picard-Vessiot Rings and the Galois Theory of Difference Equations . . . 486

Bibliography . . . 495

Index . . . 507
Chapter 1

Preliminaries

1.1 Basic Terminology and Background Material
1. Sets, relations, and mappings. Throughout the book we keep the standard
notation and conventions of set theory. The union, intersection, and
difference of two sets A and B are denoted by A ∪ B, A ∩ B, and A \ B,
respectively. If A is a subset of a set B, we write A ⊆ B or B ⊇ A; if the
inclusion is proper (that is, A ⊆ B, A ≠ B), we use the notation A ⊂ B (or
A ⊊ B) and A ⊃ B (or B ⊋ A). If A is not a subset of B, we write A ⊈ B or
B ⊉ A. As usual, the empty set is denoted by ∅.
If F is a family of sets, then the union and intersection of all sets of the
family are denoted by ⋃{X | X ∈ F} (or ⋃_{X∈F} X) and ⋂{X | X ∈ F} (or
⋂_{X∈F} X), respectively. The power set of a set X, that is, the set of all
subsets of X, is denoted by P(X).
The set of elements of a set X satisfying a condition φ(x) is denoted by
{x ∈ X | φ(x)}. If a set consists of finitely many elements x1, . . . , xk, it is
denoted by {x1, . . . , xk} (sometimes we abuse the notation and write x for the
set {x} consisting of one element). Throughout the book Z, N, N+, Q, R, and C
will denote the sets of integers, non-negative integers, positive integers,
rational numbers, real numbers, and complex numbers, respectively. For any
positive integer m, the set {1, . . . , m} will be denoted by Nm.

In what follows, the cardinality of a set X is denoted by Card X or |X| (the
last notation is convenient when one considers relationships involving
cardinalities of several sets). In particular, if X is finite, Card X (or |X|)
will denote the number of elements of X. We shall often use the following
principle of inclusion and exclusion (its proof can be found, for example, in
[17, Section 6.1]).
Theorem 1.1.1 Let P1, . . . , Pm be m properties referring to the elements of a
finite set S, let Ai = {x ∈ S | x has property Pi} and let Āi = S \ Ai =
{x ∈ S | x does not have property Pi} (i = 1, . . . , m). Then the number of
elements of S which have none of the properties P1, . . . , Pm is given by

|Ā1 ∩ Ā2 ∩ · · · ∩ Ām| = |S| − Σ_{i=1}^{m} |Ai| + Σ_{1≤i<j≤m} |Ai ∩ Aj|
− Σ_{1≤i<j<k≤m} |Ai ∩ Aj ∩ Ak| + · · · + (−1)^m |A1 ∩ A2 ∩ · · · ∩ Am|.   (1.1.1)
As a consequence of formula (1.1.1) we obtain that if M1, . . . , Mq are finite
sets, then

|M1 ∪ · · · ∪ Mq| = Σ_{i=1}^{q} |Mi| − Σ_{1≤i<j≤q} |Mi ∩ Mj|
+ Σ_{1≤i<j<k≤q} |Mi ∩ Mj ∩ Mk| − · · · + (−1)^{q+1} |M1 ∩ · · · ∩ Mq|.
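The alternating sum above can be checked mechanically on small sets. The following sketch (our own illustration in Python; the example sets are made up) computes the right-hand side of the union formula and compares it with a direct count.

```python
from itertools import combinations

def inclusion_exclusion(sets):
    """Compute |M1 ∪ ... ∪ Mq| via the alternating sum over all
    nonempty intersections, as in the formula above."""
    total = 0
    q = len(sets)
    for r in range(1, q + 1):
        sign = (-1) ** (r + 1)          # +, -, +, ... for r = 1, 2, 3, ...
        for combo in combinations(sets, r):
            total += sign * len(set.intersection(*combo))
    return total

# Hypothetical example: three overlapping sets of integers.
M = [{1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}]
assert inclusion_exclusion(M) == len(set().union(*M))  # both equal 7
```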
Given sets A and B, we define their Cartesian product A × B to be the set
of all ordered pairs (a, b) where a ∈ A, b ∈ B. (An ordered pair (a, b) is
formally defined as the collection of sets {{a}, {a, b}}; a and b are said to be
the first and the second coordinates of (a, b). Two ordered pairs (a, b) and
(c, d) are equal if and only if a = c and b = d.) A subset R of a Cartesian
product A × B is called a (binary) relation from A to B. We usually write aRb
instead of (a, b) ∈ R and say that a is in the relation R to b. The domain of a
relation R is the set of all first coordinates of members of R, and its range is
the set of all second coordinates.
A relation f ⊆ A × B is called a function or a mapping from A to B if for
every a ∈ A there exists a unique element b ∈ B such that (a, b) ∈ f. In this
case we write f : A → B and f(a) = b (or a ↦ b) if (a, b) ∈ f. The element
f(a) is called the image of a under the mapping f or the value of f at a.
If A0 ⊆ A, then the set {f(a) | a ∈ A0} is said to be the image of A0 under f;
it is denoted by f(A0). The image f(A) of the whole set A is also denoted by
Im f. If B0 ⊆ B, then the set {a ∈ A | f(a) ∈ B0} is called the inverse image
of B0 under f; it is denoted by f⁻¹(B0).
A mapping f :A → B is said to be injective (or one-to-one) if for any a1 , a2 ∈
A, the equality f (a1 ) = f (a2 ) implies a1 = a2 . If f (A) = B, the mapping is
called surjective (or a mapping of A onto B). An injective and surjective mapping
is called bijective. The identity mapping of a set A into itself is denoted by idA .
If f : A → B and g : B → C, then the composition of these mappings is denoted
by g ◦ f or gf . (Thus, g ◦ f (a) = g(f (a)) for every a ∈ A.)
If f : A → B is a mapping and X ⊆ A, then the restriction of f to X is
denoted by f|X. (This is the mapping from X to B defined by f|X(a) = f(a)
for every a ∈ X.)
If I is a nonempty set and there is a function that associates with every
i ∈ I some set Ai, we say that we have a family of sets {Ai}_{i∈I} (also denoted
by {Ai | i ∈ I}) indexed by the set I. The set I is called the index set of the
family. The union and intersection of all sets of a family {Ai}_{i∈I} are denoted,
respectively, by ⋃_{i∈I} Ai (or ⋃{Ai | i ∈ I}) and ⋂_{i∈I} Ai (or ⋂{Ai | i ∈ I}).
If I = {1, . . . , m} for some positive integer m, we also write ⋃_{i=1}^{m} Ai
and ⋂_{i=1}^{m} Ai for the union and intersection of the sets A1, . . . , Am,
respectively.
A family of sets is said to be (pairwise) disjoint if for any two sets A and B
of this family, either A ∩ B = ∅ or A = B. In particular, two different sets A
and B are disjoint if A ∩ B = ∅. A family {Ai}_{i∈I} of nonempty subsets of a
set A is said to be a partition of A if it is disjoint and ⋃_{i∈I} Ai = A.

The Cartesian product of a family of sets {Ai}_{i∈I}, that is, the set of all
functions f from I to ⋃_{i∈I} Ai such that f(i) ∈ Ai for all i ∈ I, is denoted
by ∏_{i∈I} Ai. For any f ∈ ∏_{i∈I} Ai, the element f(i) (i ∈ I) is called the
ith coordinate of f; it is also denoted by fi, and the element f is written as
(fi)_{i∈I} or {fi | i ∈ I}. If Ai = A for every i ∈ I, we write A^I for
∏_{i∈I} Ai.
If I = N+, the Cartesian product of the sets of the sequence A1, A2, . . . is
denoted by ∏_{i=1}^{∞} Ai. The Cartesian product of a finite family of sets
{A1, . . . , Am} is denoted by ∏_{i=1}^{m} Ai. Elements of this product are
called m-tuples. If A1 = · · · = Am = A, the elements of ∏_{i=1}^{m} Ai are
called m-tuples over the set A.
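For a finite index set the Cartesian product can be enumerated directly. A minimal Python sketch (the factor sets A1, A2, A3 are invented for illustration):

```python
from itertools import product

A1, A2, A3 = {0, 1}, {"a", "b"}, {True}
# Elements of the product are 3-tuples (f(1), f(2), f(3)) with f(i) ∈ Ai.
triples = list(product(A1, A2, A3))
# The product of finite sets has cardinality |A1| * |A2| * |A3|.
assert len(triples) == len(A1) * len(A2) * len(A3)  # 2 * 2 * 1 = 4
```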
Let M and I be two sets. By an indexing in M with the index set I we mean
an element a = (ai)_{i∈I} of the Cartesian product ∏_{i∈I} Mi where Mi = M for
all i ∈ I. Thus, an indexing in M with the index set I is a function a : I → M;
the image of an element i ∈ I is denoted by ai and called the ith coordinate of
the indexing. If J ⊆ I, then the restriction of the function a to J is called a
subindexing of the indexing a; it is denoted by (ai)_{i∈J}. A finite indexing a
with an index set {1, . . . , m} is also referred to as an m-tuple
a = (a1, . . . , am).
If A is a set and R ⊆ A × A, we say that R is a relation on A. It is called
- reflexive if aRa for every a ∈ A;
- symmetric if aRb implies bRa for every a, b ∈ A;
- antisymmetric if for any elements a, b ∈ A, the conditions aRb and bRa
imply a = b;
- transitive if aRb and bRc imply aRc for every a, b, c ∈ A. (A transitive
relation on a set A is also called a preorder on A.)
If R is reflexive, symmetric and transitive, it is called an equivalence relation
on A. For every a ∈ A, the set [a] = {x ∈ A | xRa} is called the (R-) equivalence
class of the element a. The fundamental result on equivalence relations states
that the family {[a] ∈ P(A) | a ∈ A} of all equivalence classes forms a partition
of A and, conversely, every partition of A determines an equivalence relation
R on A such that xRy if and only if x and y belong to the same set of the
partition. (In this case every equivalence class [a] (a ∈ A) coincides with the set
of the partition containing a.)
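The correspondence between equivalence relations and partitions is easy to see computationally. The sketch below (our own example, not from the text) groups the integers 0, . . . , 6 by the relation aRb if and only if a ≡ b (mod 3); the resulting classes are pairwise disjoint and cover the whole set.

```python
def equivalence_classes(elements, related):
    """Partition `elements` into classes of the equivalence relation `related`."""
    classes = []
    for a in elements:
        for cls in classes:
            # a belongs to an existing class iff it is related to any member.
            if related(a, next(iter(cls))):
                cls.add(a)
                break
        else:
            classes.append({a})   # a starts a new equivalence class
    return classes

A = range(7)
classes = equivalence_classes(A, lambda a, b: a % 3 == b % 3)
assert sorted(map(sorted, classes)) == [[0, 3, 6], [1, 4], [2, 5]]
```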
Let R be a relation on a set A. The reflexive closure of R is the relation
R′ on A such that aR′b (a, b ∈ A) if and only if aRb or a = b (the last
equality means that a and b denote the same element of A). The relation R″ on A
such that aR″b if and only if aRb or bRa is called the symmetric closure of R.
A relation R⁺ on A is said to be a transitive closure of the relation R if it
satisfies the following condition: aR⁺b (a, b ∈ A) if and only if there exist
finitely many elements a1, . . . , an ∈ A such that aRa1, a1Ra2, . . . , anRb.
It is easy to see that R′, R″, and R⁺ are, respectively, the smallest reflexive,
symmetric, and transitive relations on A containing R. Combining the foregoing
constructions, one arrives at the concepts of reflexive-symmetric,
reflexive-transitive, symmetric-transitive, and reflexive-symmetric-transitive
closures of a relation R. For example, the reflexive-symmetric closure R* of R
is defined by the following condition: aR*b if and only if aR′b or bR′a (that
is, aRb or bRa or a = b).
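Representing a relation on a finite set as a set of ordered pairs, the closures above can be computed directly. A sketch (our own, assuming a finite set A; the example relation is made up):

```python
def reflexive_closure(R, A):
    """R together with all pairs (a, a) for a in A."""
    return R | {(a, a) for a in A}

def symmetric_closure(R):
    """R together with all reversed pairs."""
    return R | {(b, a) for (a, b) in R}

def transitive_closure(R):
    """Repeatedly add (a, d) whenever (a, b) and (b, d) are present."""
    closure = set(R)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

R = {(1, 2), (2, 3)}
assert transitive_closure(R) == {(1, 2), (2, 3), (1, 3)}
assert symmetric_closure(R) == {(1, 2), (2, 1), (2, 3), (3, 2)}
```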
A relation R on a set A is called a partial order on A if it is reflexive,
transitive, and antisymmetric. Partial orders are usually denoted by ≤
(occasionally we shall also use other symbols). If a ≤ b, we also write this as
b ≥ a; if a ≤ b and a ≠ b, we write a < b (or b > a).
If ≤ is a partial order on a set A, the ordered pair (A, ≤) is called a partially
ordered set. We also say that A is a partially ordered set with respect to the order ≤. If A0 ⊆ A, we usually treat A0 as an ordered set with the same partial
order ≤.
An element m of a partially ordered set (A, ≤) is called minimal if for every
x ∈ A the condition x ≤ m implies x = m. Similarly, an element s ∈ A is called
maximal if for every x ∈ A, the condition s ≤ x implies x = s. An element a ∈ A
is the smallest element (least element or first element) in A if a ≤ x for every
x ∈ A, and b ∈ A is the greatest element (largest element or last element) in A
if x ≤ b for every x ∈ A. Obviously, a partially ordered set A has at most one
greatest and one smallest element, and the smallest (the greatest) element of A,
if it exists, is the only minimal (respectively, maximal) element of A.
Let (A, ≤) be a partially ordered set and B ⊆ A. An element a ∈ A is called
an upper bound for B if b ≤ a for every b ∈ B. Also, a is called a least upper
bound (or supremum) for B if a is an upper bound for B and a ≤ x for every
upper bound x for B. Similarly, c ∈ A is a lower bound for B if c ≤ b for every
b ∈ B. If, in addition, for every lower bound y for B we have y ≤ c, then the
element c is called a greatest lower bound (or infimum) for B. We write sup(B)
and inf(B) to denote the supremum and infimum of B, respectively.
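A concrete illustration (our own, not from the text): in the set of positive integers partially ordered by divisibility (a ≤ b iff a divides b), the supremum of {a, b} is the least common multiple and the infimum is the greatest common divisor.

```python
from math import gcd

def lcm(a, b):
    """Least common multiple: sup{a, b} in the divisibility order."""
    return a * b // gcd(a, b)

a, b = 12, 18
sup_ab, inf_ab = lcm(a, b), gcd(a, b)   # 36 and 6
# sup{a, b} is an upper bound (both a and b divide it) ...
assert sup_ab % a == 0 and sup_ab % b == 0
# ... and a lower bound for every common multiple, e.g. 72:
assert 72 % sup_ab == 0
```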
If ≤ is a partial order on a set A and for any two distinct elements a, b ∈ A
either a ≤ b or b ≤ a, then ≤ is called a linear or total order on A. In this
case we say that the set A is linearly (or totally) ordered by ≤ (or with
respect to ≤). A linearly ordered set is also called a chain. Clearly, if (A, ≤)
is a linearly
≤). A linearly ordered set is also called a chain. Clearly, if (A, ≤) is a linearly
ordered set, then every minimal element in A is the smallest element and every
maximal element in A is the greatest element. In particular, linearly ordered
sets can have at most one maximal element and at most one minimal element.
If (A, ≤) is a partially ordered set and for every a, b ∈ A there is an element
c ∈ A such that a ≤ c and b ≤ c, then A is said to be a directed set.
The proof of the following well-known fact can be found, for example, in
[102, Chapter 0].
Proposition 1.1.2 Let (A, ≤) be a partially ordered set. Then the following
conditions are equivalent.
(i) (The condition of minimality.) Every nonempty subset of A contains a
minimal element.
(ii) (The induction condition.) Suppose that all minimal elements of A have
some property Φ, and for every a ∈ A the fact that all elements x ∈ A, x < a
have the property Φ implies that the element a has this property as well. Then
all elements of the set A have the property Φ.
(iii) (Descending chain condition) If a1 ≥ a2 ≥ . . . is any chain of elements
of A, then there exists n ∈ N+ such that an = an+1 = . . . .
A linearly ordered set (A, ≤) is called well-ordered if it satisfies the
equivalent conditions of the last proposition. In particular, a linearly ordered
set A is well-ordered if and only if every nonempty subset of A contains a
smallest element.
Throughout the book we shall use the axiom of choice and its alternative
versions contained in the following proposition, whose proof can be found, for
example, in [102, Chapter 0].
Proposition 1.1.3 The following statements are equivalent.
(i) (Axiom of Choice.) Let A be a set, let Ω be a collection of nonempty
subsets of a set B, and let φ be a function from A to Ω. Then there is a function
f : A → B such that f (a) ∈ φ(a) for every a ∈ A.
(ii) (Zermelo Postulate) If Ω is a disjoint family of nonempty sets, then there
is a set C such that A ∩ C consists of a single element for every A ∈ Ω.
(iii) (Hausdorff Maximal Principle.) If Ω is a family of sets and Σ is a chain
in Ω (with respect to inclusion as a partial order), then there is a maximal chain
in Ω which contains Σ.
(iv) (Kuratowski Lemma) Every chain in a partially ordered set is contained
in a maximal chain.
(v) (Zorn Lemma) If every chain in a partially ordered set has an upper bound,
then there is a maximal element of the set.
(vi) (Well-ordering Principle) Every set can be well-ordered.
2. Dependence relations. Let X be a nonempty set and ∆ ⊆ X × P(X)
a binary relation from X to the power set of X. We write x ≺ S and say that x
is dependent on S if (x, S) ∈ ∆. Otherwise we write x ⊀ S and say that x does
not depend on S or that x is independent of S. If S and T are two subsets of X,
we write S ≺ T and say that the set S is dependent on T if s ≺ T for all s ∈ S.
Definition 1.1.4 With the above notation, a relation ∆ ⊆ X × P(X) is said
to be a dependence relation if it satisfies the following properties.
(i) S ≺ S for any set S ⊆ X.
(ii) If x ∈ X and x ≺ S, then there exists a finite set S0 ⊆ S such that x ≺ S0 .
(iii) Let S, T , and U be subsets of X such that S ≺ T and T ≺ U . Then S ≺ U .
(iv) Let S ⊆ X and s ∈ S. If x is an element of X such that x ≺ S, x ⊀ S\{s},
then s ≺ (S \ {s}) ∪ {x}.
In what follows we shall often abuse the language and refer to ≺ as a relation
on X.
www.pdfgrip.com
CHAPTER 1. PRELIMINARIES
6
Definition 1.1.5 Let S be a subset of a set X. A set S is called dependent
if there exists s ∈ S such that s ≺ S \ {s} (equivalently, if S ≺ S \ {s}). If
s ⊀ S \ {s} for all s ∈ S, the set S is called independent. (In particular, the
empty set is independent.)
The proof of the following two propositions can be found in [167, Chapter 4].
Proposition 1.1.6 Let S and U be subsets of a set X and let T ⊆ U .
(i) If S ≺ T , then S ≺ U .
(ii) If T is dependent, so is U .
(iii) If U is independent, so is T .
(iv) If S is dependent, then some finite subset S0 of S is dependent. Equivalently, if every finite subset of S is independent, then S is independent.
(v) Let S be independent and let x be an element of X such that x ⊀ S. Then
S ∪ {x} is independent.
(vi) Let S be finite and dependent, and let S′ be an independent subset of S.
Then there exists s ∈ S \ S′ such that S ≺ S \ {s}.
Definition 1.1.7 If X is a nonempty set, then a set B ⊆ X is called a basis
for X if B is independent and X ≺ B.
Proposition 1.1.8 Let X be a nonempty set with a dependence relation ≺.
(i) B ⊆ X is a basis for X if and only if it is a maximal independent set
in X.
(ii) B ⊆ X is a basis for X if and only if B is minimal with respect to the
property X ≺ B.
(iii) Let S ⊆ T ⊆ X where S is an independent set (possibly empty) and
X ≺ T . Then there is a basis B for X such that S ⊆ B ⊆ T .
(iv) Any two bases for X have the same cardinality.
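As a concrete instance of these notions (not from the text), take X to be a finite set of vectors with rational coordinates and let x ≺ S mean that x is a linear combination of the vectors in S. Proposition 1.1.8(iii) then becomes the familiar basis-extension procedure, which the following sketch implements; the names `rank`, `depends`, and `extend_to_basis` are illustrative.

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of rational vectors, by exact Gaussian elimination."""
    rows = [[Fraction(c) for c in v] for v in vectors]
    rk = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def depends(x, S):
    """x ≺ S for the linear-dependence relation: x lies in the span of S."""
    return rank(list(S) + [x]) == rank(list(S))

def extend_to_basis(S, T):
    """Given an independent set S contained in a spanning set T, grow S to a
    basis B with S ⊆ B ⊆ T, as in Proposition 1.1.8(iii)."""
    B = list(S)
    for t in T:
        if not depends(t, B):   # adding t keeps B independent (Prop. 1.1.6(v))
            B.append(t)
    return B

S = [(1, 0, 0)]
T = [(1, 0, 0), (2, 0, 0), (0, 1, 1), (1, 1, 1), (0, 0, 5)]
B = extend_to_basis(S, T)
print(B)        # → [(1, 0, 0), (0, 1, 1), (0, 0, 5)]
print(rank(B))  # → 3
```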
3. Categories and functors. A category C consists of a class of objects, obj C, together with sets of morphisms which arise as follows. There is a function Mor which assigns to every pair A, B ∈ obj C a set of morphisms Mor(A, B) (also written as MorC (A, B)). Elements of Mor(A, B) are called morphisms from A to B, and the inclusion f ∈ Mor(A, B) is also indicated as f : A → B or A −f→ B. The sets Mor(A, B) and Mor(C, D) are disjoint unless A = C and B = D, in which case they coincide. Furthermore, for any three objects A, B, C ∈ obj C there is a mapping (called composition) Mor(B, C) × Mor(A, B) → Mor(A, C), (g, f ) → gf , with the following properties:
1) The composition is associative: if f ∈ Mor(C, D), g ∈ Mor(B, C), and h ∈ Mor(A, B), then (f g)h = f (gh).
2) For every object A there exists a morphism 1A ∈ Mor(A, A) (called an identity morphism) such that f = f 1A = 1B f for any object B and any morphism f ∈ Mor(A, B).
If f : A → B is a morphism in a category C, we say that the objects A and
B are the domain and codomain (or range) of f , respectively. The morphism f
is called an equivalence (or an isomorphism or an invertible morphism) if there
exists a morphism g ∈ Mor(B, A) such that gf = 1A and f g = 1B . (Then g is unique; it is called the inverse of f .)
A category C is said to be a subcategory of a category D if (i) every object of C is an object of D; (ii) for all objects A and B in C, MorC (A, B) ⊆ MorD (A, B); (iii) the composite of two morphisms in C is the same as their composite in D; (iv) for every object A in C, the identity morphism 1A is the same in D as it is in C.
We now list several examples of categories.
(a) The category Set: the class of objects is the class of all sets, the morphisms
are functions (with the usual composition).
(b) The category Grp: the class of objects is the class of all groups and
morphisms are group homomorphisms.
(c) The category Ab where the class of objects is the class of all Abelian
groups, and morphisms are group homomorphisms.
(d) The category Ring: the class of objects is the class of all rings, and the
morphisms are ring homomorphisms.
(e) The category R Mod of left modules over a ring R: the class of objects
is the class of all left R-modules, the morphisms are module homomorphisms.
The category ModR of right modules over R is defined in the same way.
(f) The category Top: the class of objects is the class of all topological spaces,
the morphisms are continuous functions between them.
It is easy to see that the categories in examples (b)–(f) are subcategories of Set, Ab is a subcategory of Grp, etc. In what follows, we adopt the standard notation of the categories of algebraic objects and denote the sets of morphisms Mor(A, B) in the categories Grp, Ab, and Ring by Hom(A, B); the corresponding notation in the categories R Mod and ModR is HomR (A, B).
Let C and D be two categories. A covariant functor F from C to D is a pair
of mappings (denoted by the same letter F ): the mapping that assigns to every
object A of C an object F (A) in D, and the mapping defined on morphisms of
C which assigns to every morphism α : A → B in C a morphism F (α) : F (A) →
F (B) in D. This pair of mappings should satisfy the following two conditions:
(a) F (1A ) = 1F (A) for every object A in C;
(b) If α and β are two morphisms in C such that the composition αβ is defined,
then F (αβ) = F (α)F (β).
If F and G are two covariant functors from a category C to a category D (in this case we write F : C → D and G : C → D), then a natural transformation of functors h : F → G is a mapping that assigns to every object A in C a morphism h(A) : F (A) → G(A) in D such that for every morphism α : A → B in C we have the following commutative diagram in the category D:
F (A) −h(A)→ G(A)
F (α) ↓             ↓ G(α)
F (B) −h(B)→ G(B)

that is, G(α) h(A) = h(B) F (α).
If in addition each h(A) is an equivalence, we say that h is a natural isomorphism.
A contravariant functor G from a category C to a category D consists of
an object function, which assigns to each object A in C an object G(A) in D,
and a mapping function which assigns to each morphism α : A → B in C a
morphism G(α) : G(B) → G(A) in D (both object and mapping functions are
denoted by the same letter G). This pair of functions must satisfy the following
two conditions: G(1A ) = 1G(A) for any object A in C, and G(αβ) = G(β)G(α)
whenever the composition of two morphisms α and β in C is defined. A natural
transformation and natural isomorphism of contravariant functors are defined
in the same way as in the case of their covariant counterparts.
An object A in a category C is called an initial object (respectively, a terminal or final object) if for any object B in C, Mor(A, B) (respectively, Mor(B, A)) consists of a single morphism. If A is both an initial object and a terminal object, it is called a zero (or null) object in C. For example, ∅ is an initial object in Set, while the trivial group is a zero object in Grp. If a category C has a zero object 0, then a morphism f ∈ Mor(A, B) in C is called a zero morphism if there exist morphisms g : A → 0 and h : 0 → B such that f = hg. This zero morphism is denoted by 0AB or 0. It is easy to see that if C is a category with a zero object, then there is exactly one zero morphism from each object A to each object B.
A morphism f : A → B in a category C is called a monomorphism (we also say that f is monic) if whenever f g = f h for some morphisms g, h ∈ Mor(C, A) (C an object in C), then g = h. A morphism f : A → B in C is called an epimorphism (we also say that f is epi) if whenever gf = hf for some morphisms g, h ∈ Mor(B, D) (D an object in C), then g = h. A morphism f : A → B is called a bimorphism if f is both monic and epi.
Two monomorphisms f : A → C and g : B → C in a category C are
called equivalent if there exists an isomorphism τ : A → B such that f = gτ .
An equivalence class of monomorphisms with codomain C is called a subobject
of the object C. The dual notion is the notion of quotient (or factor object)
of C. Note that in the categories Grp, Ab, Ring, R Mod, and ModR (R is a
ring) the monomorphisms and epimorphisms are simply injective and surjective
homomorphisms, respectively. Bimorphisms in these categories are precisely the
equivalences (that is, isomorphisms of the corresponding structures); the concept
of a subobject of an object becomes the concept of a subgroup or a subring or
a submodule in the categories of groups or rings or modules, respectively.
Let I be a set and let {Ai }i∈I be a family of objects in a category C. A product of this family is a pair (P, {pi }i∈I ) where P is an object of C and the pi (i ∈ I) are morphisms in Mor(P, Ai ) satisfying the following condition: if Q is an object of C and for every i ∈ I there is a morphism qi : Q → Ai , then there exists a unique morphism η : Q → P such that qi = pi η for all i ∈ I. Similarly, a coproduct of a family {Ai }i∈I of objects of C is defined as a pair (S, {si }i∈I ) where S is an object of C and the si (i ∈ I) are morphisms in Mor(Ai , S) with the following property: if T is an object of C and for every i ∈ I there is a morphism ti : Ai → T , then there exists a unique morphism ζ : S → T such that ti = ζsi for all i ∈ I.
The product and coproduct of a family of objects {Ai }i∈I are denoted by ∏i∈I Ai and ∐i∈I Ai , respectively. It is easy to see that if a product (or coproduct) of a family of objects exists, then it is unique up to isomorphism. Notice that the concept of a coproduct in the categories Grp, Ab, Ring, R Mod, and ModR coincides with the concept of a direct sum of the corresponding algebraic structures. In this case we use the symbol ⊕ rather than ∐.
A category C is called preadditive if for every pair A, B of its objects, Mor(A, B) is an Abelian group satisfying the following axioms:
(a) The composition of morphisms Mor(B, C) × Mor(A, B) → Mor(A, C) is bilinear, that is, f (g1 + g2 ) = f g1 + f g2 and (f1 + f2 )g = f1 g + f2 g whenever the compositions are defined;
(b) C contains a zero object 0.
If C is a preadditive category, then the following conditions are equivalent:
(1) C has an initial object; (2) C has a terminal object; (3) C has a zero object.
The proof of this fact, as well as the proof of the following statement, can be
found, for example, in [15, Vol. 2, Section 1.2].
Proposition 1.1.9 Given two objects A and B in a preadditive category C, the
following conditions are equivalent.
(i) The product (P, pA , pB ) exists.
(ii) The coproduct (S, sA , sB ) exists.
(iii) There exist an object P and morphisms pA : P → A, pB : P → B, sA : A → P , and sB : B → P such that pA sA = 1A , pB sB = 1B , pA sB = 0, pB sA = 0, and sA pA + sB pB = 1P .
If A and B are two objects in a preadditive category C, then a quintuple
(P, pA , pB , sA , sB ) satisfying condition (iii) of the last proposition is called a
biproduct of A and B.
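The biproduct identities of Proposition 1.1.9(iii) can be checked concretely in Ab with P = Z ⊕ Z, elements modeled as pairs of integers; this is an illustrative sketch, not part of the text.

```python
# Biproduct P = A ⊕ B of two copies of Z, with elements modeled as pairs.
p_A = lambda pair: pair[0]          # projection p_A : P → A
p_B = lambda pair: pair[1]          # projection p_B : P → B
s_A = lambda a: (a, 0)              # injection  s_A : A → P
s_B = lambda b: (0, b)              # injection  s_B : B → P

a, b = 7, -3
assert p_A(s_A(a)) == a             # p_A s_A = 1_A
assert p_B(s_B(b)) == b             # p_B s_B = 1_B
assert p_A(s_B(b)) == 0             # p_A s_B = 0
assert p_B(s_A(a)) == 0             # p_B s_A = 0

x = (a, b)
# s_A p_A + s_B p_B = 1_P, the sum taken componentwise in the group P
total = tuple(u + v for u, v in zip(s_A(p_A(x)), s_B(p_B(x))))
assert total == x
print("biproduct identities verified")
```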
A preadditive category which satisfies one of the equivalent conditions of the
last proposition is called additive. A functor F from an additive category C to
an additive category D is called additive if for any two objects A and B in C
and for any two morphisms f, g ∈ M or(A, B), F (f + g) = F (f ) + F (g).
Let C be a category with a zero object and let f : A → B be a morphism in
C. We say that a monomorphism α : M → A (or a pair (M, α)) is a kernel of f
if (i) f α = 0 and (ii) whenever f ν = 0 for some morphism ν : N → A, then
there exists a unique morphism τ : N → M such that ν = ατ . In this case M is
called the kernel object; it is often referred to as the kernel of f as well. Dually,
a cokernel of a morphism f : A → B is an epimorphism β : B → C (or a pair (C, β)) such that βf = 0 and for any morphism γ : B → D with γf = 0 there exists a unique morphism µ : C → D such that γ = µβ. The object C is called a cokernel object; it is often referred to as the cokernel of f as well.
An additive category C is called Abelian if it satisfies the following conditions.
(a) Every morphism in C has a kernel and cokernel.
(b) Every monomorphism in C is the kernel of its cokernel.
(c) Every epimorphism in C is the cokernel of its kernel.
It can be shown (see [15, Vol. II, Proposition 1.5.1]) that a morphism in an
Abelian category is a bimorphism if and only if it is an isomorphism.
As follows from the classical theories of groups, rings, and modules, the categories Grp, Ab, Ring, R Mod, and ModR (R is a fixed ring) are Abelian.
The concepts of kernel and cokernel of a morphism in each of these categories
coincide with the concepts of kernel and cokernel of a homomorphism. Indeed,
consider, for example, the category of all left modules over a ring R, where
a kernel and a cokernel of a homomorphism f : A → B are defined as left
R-modules N = {a ∈ A | f (a) = 0} and P = B/f (A), respectively. Then
the embedding N → A and the canonical epimorphism B → B/f (A) can be
naturally treated as the kernel and the cokernel of the morphism f in R Mod (in the sense of category theory).
It is easy to prove (see, for example, [19, Propositions 5.11, 5.12]) that if A is an object of an Abelian category C, (A′, i) is a subobject of A (that is, i : A′ → A is a representative of an equivalence class of monomorphisms) and (A″, j) is the factor object defined by the cokernel of i, then Ker j = (A′, i). Dually, if (A″, j) is a factor object of A (that is, j : A → A″ is a representative of an equivalence class of epimorphisms) and (A′, i) is the subobject of A defined by the kernel of j, then Coker i = (A″, j). These two statements give a one-to-one correspondence between the classes of all subobjects and all factor objects of an object A. In what follows, the factor object corresponding to a subobject A′ of A will be denoted by A/A′. (This notation agrees with the usual notation for algebraic structures; for example, if A is an object of the category R Mod, then A/A′ is the factor module of a module A by its submodule A′.) The proof
of the following classical result can be found, for example, in [83, Chapter 2,
Section 9].
Proposition 1.1.10 In an Abelian category C, every morphism f : A → B has a factorization f = me, with m : N → B monic and e : A → N epi; moreover, m = Ker(Coker f ) and e = Coker(Ker f ). Furthermore, given any other factorization f ′ = m′e′ (A′ −e′→ N ′ −m′→ B′) with m′ monic and e′ epi, and given two morphisms g : A → A′ and h : B → B′ such that f ′g = hf , there exists a unique morphism k : N → N ′ such that e′g = ke and m′k = hm.
An object P in a category C is called projective if for any epimorphism
π : A → B and for any morphism f : P → B in C, there exists a morphism
h : P → A such that f = πh. An object E in C is called injective if for any
monomorphism i : A → B and for any morphism g : A → E, there exists
a morphism λ : B → E such that g = λi. An Abelian category C is said to have enough projective (respectively, injective) objects if for every object A of C there exists an epimorphism P → A (respectively, a monomorphism A → Q) where P is a projective object in C (respectively, Q is an injective object in C). The importance of Abelian categories with enough projective and injective objects lies in the fact that one can develop homological algebra in such categories (see [15] or [19]). Furthermore (see, for example, [149, Chapter 2]), every module is a factor module of a projective module and every module can be embedded into an injective module, so R Mod and ModR are Abelian categories with enough projective and injective objects.
Let C be an Abelian category. By a (descending) filtration of an object A of
C we mean a family (An )n∈Z of subobjects of A such that An ⊇ Am whenever
m ≥ n. The subobjects An are said to be the components of the filtration. An
object A together with some filtration of this object is said to be a filtered object
or an object with filtration. (While considering filtered objects in categories of
rings and modules we shall often consider ascending filtrations, which are defined in a dual way.)
If B is another filtered object of C with a filtration (Bn )n∈Z , then a morphism f : A → B is said to be a morphism of filtered objects if f (An ) ⊆ Bn for every n ∈ Z. It is easy to see that filtered objects of C and their morphisms form an additive (but not necessarily Abelian) category. An associated graded object for a filtered object A with a filtration (An )n∈Z is the family {grn A}n∈Z where grn A = An /An+1 .
A spectral sequence in C is a system of the form E = (E_r^{p,q}, E^n), p, q, r ∈ Z, r ≥ 2, together with a family of morphisms between these objects described as follows.
(i) {E_r^{p,q} | p, q, r ∈ Z, r ≥ 2} is a family of objects of C.
(ii) For every p, q, r ∈ Z, r ≥ 2, there is a morphism d_r^{p,q} : E_r^{p,q} → E_r^{p+r,q−r+1}. Furthermore, d_r^{p+r,q−r+1} d_r^{p,q} = 0.
(iii) For every p, q, r ∈ Z, r ≥ 2, there is an isomorphism α_r^{p,q} : Ker(d_r^{p,q})/Im(d_r^{p−r,q+r−1}) → E_{r+1}^{p,q}.
(iv) {E^n | n ∈ Z} is a family of filtered objects in C.
(v) For every fixed pair (p, q) ∈ Z², d_r^{p,q} = 0 and d_r^{p−r,q+r−1} = 0 for all sufficiently large r ∈ Z (that is, there exists an integer r0 such that these equalities hold for all r ≥ r0). It follows that for sufficiently large r, the object E_r^{p,q} (p, q ∈ Z) does not depend on r; this object is denoted by E_∞^{p,q}.
(vi) For every n ∈ Z, the components E_i^n of the filtration of E^n are equal to 0 for all sufficiently large i ∈ Z; also E_i^n = E^n for all sufficiently small i.
(vii) There are isomorphisms β^{p,q} : E_∞^{p,q} → gr_p E^{p+q}.
The family {E^n}n∈Z (without filtrations) is said to be the limit of the spectral sequence E. A morphism of a spectral sequence E = (E_r^{p,q}, E^n) to a spectral sequence F = (F_r^{p,q}, F^n) is defined as a family of morphisms u_r^{p,q} : E_r^{p,q} → F_r^{p,q} and filtered morphisms u^n : E^n → F^n which commute with the morphisms d_r^{p,q}, α_r^{p,q}, and β^{p,q}. The spectral sequences in an Abelian category form an additive (but not Abelian) category. An additive functor from an Abelian category to a category of spectral sequences is called a spectral functor. A spectral sequence is called cohomological if E_r^{p,q} = 0 whenever at least one of the integers p, q is negative. In this case E_r^{p,q} = E_∞^{p,q} for r > max{p, q + 1}, E^n = 0 for n < 0, and the m-th component of the filtration of E^n is 0, if m > 0, and E^n, if m ≤ 0.
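For a cohomological spectral sequence, the filtrations on the low-degree limit terms yield the standard five-term exact sequence; this well-known consequence is stated here only for orientation and is not part of the text:

```latex
0 \longrightarrow E_2^{1,0} \longrightarrow E^{1} \longrightarrow
E_2^{0,1} \xrightarrow{\;d_2\;} E_2^{2,0} \longrightarrow E^{2}.
```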
4. Algebraic structures. Throughout the book, by a ring we always
mean an associative ring with an identity (usually denoted by 1). Every ring
homomorphism is unitary (maps an identity onto an identity), every subring of
a ring contains the identity of the ring, every module, as well as every algebra
over a commutative ring, is unitary (the multiplication by the identity of the
ring is the identity mapping of the module or algebra). By a proper ideal of a ring R we mean an ideal I of R such that I ≠ R.
An injective (respectively, surjective or bijective) homomorphism of rings, modules or algebras is called a monomorphism (respectively, an epimorphism or an isomorphism) of the corresponding algebraic structures. An isomorphism of any two algebraic structures is denoted by ≅ (if A is isomorphic to B, we write A ≅ B). The kernel, image, and cokernel of a homomorphism f : A → B are denoted by Ker f , Im f (or f (A)), and Coker f , respectively. (This terminology agrees with the corresponding terminology for objects in Abelian categories.)
If A and B are (left or right) modules over a ring R or if they are algebras over
a commutative ring R, the set of all homomorphisms from A to B is denoted by
HomR (A, B). This set is naturally considered as an Abelian group with respect
to addition (f, g) → f +g defined by (f +g)(x) = f (x)+g(x) (x ∈ A). As usual,
a homomorphism of an algebraic structure into itself is called an endomorphism,
and a bijective endomorphism is called an automorphism.
A pair of homomorphisms of modules (rings, algebras) A −f→ B −g→ C is said to be exact at B if Ker g = Im f . A sequence (finite or infinite) of homomorphisms · · · −fn−1→ An−1 −fn→ An −fn+1→ An+1 → · · · is exact if it is exact at each An , i.e., for each successive pair fn , fn+1 , Im fn = Ker fn+1 . For example, every homomorphism f : A → B produces the exact sequence 0 → Ker f −i→ A −f→ B −π→ Coker f → 0, where i is the inclusion mapping (also called an embedding) and π is the natural epimorphism B → B/Im f . An exact sequence of the form 0 → A −f→ B −g→ C → 0 is called a short exact sequence.
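As a worked finite example (not from the text), the sequence 0 → Z/2 → Z/4 → Z/2 → 0 with f(x) = 2x mod 4 and g(y) = y mod 2 can be checked for exactness directly:

```python
# Exactness check for 0 → Z/2 --f--> Z/4 --g--> Z/2 → 0,
# where f(x) = 2x mod 4 and g(y) = y mod 2.
Z2, Z4 = range(2), range(4)
f = lambda x: (2 * x) % 4
g = lambda y: y % 2

im_f  = {f(x) for x in Z2}                 # image of f: {0, 2}
ker_g = {y for y in Z4 if g(y) == 0}       # kernel of g: {0, 2}

assert len(im_f) == len(list(Z2))          # f injective: exact at the first Z/2
assert im_f == ker_g                       # Im f = Ker g: exact at Z/4
assert {g(y) for y in Z4} == set(Z2)       # g surjective: exact at the last Z/2
print("0 → Z/2 → Z/4 → Z/2 → 0 is a short exact sequence")
```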
The direct product and direct sum of a family of (left or right) modules (Mi )i∈I over a ring R are denoted by ∏i∈I Mi and ⊕i∈I Mi , respectively. (If the index set I is finite, I = {1, . . . , n}, we also write ∏_{i=1}^n Mi and ⊕_{i=1}^n Mi ; if I = N+, we write ∏_{i=1}^∞ Mi and ⊕_{i=1}^∞ Mi , respectively.)
Let R and S be two rings. A functor F : R Mod → S Mod is called left exact if whenever 0 → A −i→ B −j→ C → 0 is an exact sequence in R Mod, the sequence 0 → F (A) −F (i)→ F (B) −F (j)→ F (C) is exact, if F is covariant, or the sequence 0 → F (C) −F (j)→ F (B) −F (i)→ F (A) is exact, if F is contravariant. Similarly, a functor G : R Mod → S Mod is right exact if whenever 0 → A −i→ B −j→ C → 0 is an exact sequence in R Mod, the sequence G(A) −G(i)→ G(B) −G(j)→ G(C) → 0 is exact, if G is covariant, or the sequence G(C) −G(j)→ G(B) −G(i)→ G(A) → 0 is exact, if G is contravariant. The exactness of functors between categories of right modules is defined similarly.
A chain complex A is a sequence of R-modules and homomorphisms · · · → An+1 −dn+1→ An −dn→ An−1 → · · · (n ∈ Z) such that dn dn+1 = 0 for each n. The submodule Ker dn of An is denoted by Zn (A); its elements are called cycles. The submodule Im dn+1 of An is denoted by Bn (A); its elements are called boundaries. Bn (A) ⊆ Zn (A) since dn dn+1 = 0, and Hn (A) = Zn (A)/Bn (A) is called the nth homology module of A. The chain complex A is called exact if it is exact at each An (in this case Hn (A) = 0 for all n).
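As an illustration (not from the text), over a field the homology of a chain complex can be computed from ranks, since dim Hn = dim Ker dn − rank dn+1 and dim Ker dn = dim An − rank dn. The sketch below does this over Q for the simplicial circle (three vertices, three edges), recovering H0 ≅ H1 ≅ Q:

```python
from fractions import Fraction

def rank(mat):
    """Rank of a rational matrix (list of rows) by Gaussian elimination."""
    rows = [[Fraction(c) for c in r] for r in mat]
    rk = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

# Boundary d1 : Q^3 → Q^3 for a triangle (vertices v0, v1, v2; edges e0, e1, e2),
# rows are d1(e_i) = v_{i+1} − v_i.
d1 = [[-1, 1, 0],
      [0, -1, 1],
      [1, 0, -1]]
d2 = []                                   # no 2-cells, so d2 = 0

dim_A1, dim_A0 = 3, 3
h1 = (dim_A1 - rank(d1)) - rank(d2)       # dim Ker d1 − rank d2
h0 = dim_A0 - rank(d1)                    # dim Ker d0 − rank d1 (d0 = 0)
print(h0, h1)                             # → 1 1
```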
As we have mentioned, if R is a ring then R Mod and ModR (as well as Ab) are Abelian categories. The projective and injective objects of these categories are called, respectively, projective and injective (right or left) R-modules. The following two propositions summarize basic properties of these concepts. The statements, whose proofs can be found, for example, in [176, Chapter 3], are formulated for left R-modules, called “R-modules”; the results for right R-modules are similar.
Proposition 1.1.11 Let R be a ring. Then
(i) An R-module P is projective if and only if P is a direct summand of a
free R-module. In particular, every free R-module is projective.
(ii) A direct sum of R-modules ⊕j∈J Pj is projective if and only if every Pj (j ∈ J) is a projective R-module.
(iii) An R-module P is projective if and only if the covariant functor HomR (P, ·) : R Mod → Ab is exact, that is, every exact sequence of R-modules 0 → A −i→ B −j→ C → 0 induces the exact sequence of Abelian groups 0 → HomR (P, A) −i∗→ HomR (P, B) −j∗→ HomR (P, C) → 0 (here i∗ (φ) = iφ and j∗ (ψ) = jψ for any φ ∈ HomR (P, A), ψ ∈ HomR (P, B)).
(iv) Every projective module is flat, that is, · ⊗R P is an exact functor from ModR to Ab. (In the case of right R-modules, flatness means the exactness of the functor P ⊗R · : R Mod → Ab.)
(v) Any R-module A has a projective resolution, that is, there exists an exact sequence P : · · · → Pn+1 −dn+1→ Pn −dn→ Pn−1 → · · · −d1→ P0 → A → 0 in which every Pn is a projective R-module.
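As a standard example of Proposition 1.1.11(v) (not stated in the text), take R = Z and A = Z/nZ. Multiplication by n gives a projective, indeed free, resolution of length one:

```latex
\mathbf{P}:\quad 0 \longrightarrow \mathbb{Z} \xrightarrow{\;n\;} \mathbb{Z}
\longrightarrow \mathbb{Z}/n\mathbb{Z} \longrightarrow 0,
```

with P1 = P0 = Z free (hence projective by Proposition 1.1.11(i)) and all other Pn = 0.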
Proposition 1.1.12 Let R be a ring. Then
(i) (Baer’s Criterion) A left (respectively, right) R-module M is injective if
and only if for any left (respectively, right) ideal I of R every R-homomorphism
I → M can be extended to an R-homomorphism R → M .
(ii) A direct product of R-modules ∏j∈J Ej is injective if and only if every Ej (j ∈ J) is an injective R-module.
(iii) An R-module E is injective if and only if the contravariant functor
HomR (·, E) : R Mod → Ab is exact.
(iv) Every R-module can be embedded into an injective R-module.
(v) Any R-module M has an injective resolution, that is, there exists an exact sequence E : 0 → M −η→ E0 −d0→ E1 −d1→ · · · → En−1 −dn−1→ En −dn→ En+1 → · · · in which every En is an injective R-module.
Let R and S be two rings and F : R Mod → S Mod a covariant functor. Let · · · → Pn+1 −dn+1→ Pn −dn→ Pn−1 → · · · −d1→ P0 → M → 0 be a projective resolution of a left R-module M (we shall denote it by (P, dn )). If we apply F to this resolution and delete F (M ), we obtain a chain complex (F (P), F (dn )) : · · · → F (Pn+1 ) −F (dn+1)→ F (Pn ) −F (dn)→ F (Pn−1 ) → · · · −F (d1)→ F (P0 ) −F (d0)→ 0 (with d0 = 0). It can be shown (see, for example, [176, Chapter 5]) that every homology of the last complex is independent (up to isomorphism) of the choice of a projective resolution of M . We set Ln F (M ) = Hn (F (P)) = Ker F (dn )/Im F (dn+1 ). Also, if N is another left R-module and (P′, d′n ) is any projective resolution of N , then every R-homomorphism φ : M → N induces homomorphisms of homologies Ln φ : Hn (F (P)) → Hn (F (P′)). We obtain a functor Ln F called the nth left derived functor of F .
If we apply F to an injective resolution (E, dn ) of an R-module M and drop F (M ), then the n-th homology of the resulting complex is, up to isomorphism, independent of the injective resolution. This homology is denoted by Rn F (M ). As in the case of projective resolutions, every homomorphism of left R-modules φ : M → N induces homomorphisms of homologies Rn φ : Rn F (M ) → Rn F (N ), so we obtain a functor Rn F called the nth right derived functor of F . The concepts of left and right derived functors for a contravariant functor can be introduced in a similar way (see [176, Chapter 5] for details).
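As a standard illustration (not from the text), applying the contravariant functor F = Hom_Z(·, M) to the free resolution 0 → Z −n→ Z → Z/nZ → 0 and dropping the term at Z/nZ gives the complex

```latex
0 \longrightarrow \operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}, M)
\xrightarrow{\;n\;} \operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}, M)
\longrightarrow 0
\;\cong\;
0 \longrightarrow M \xrightarrow{\;n\;} M \longrightarrow 0,
```

whose homologies are the right derived functors of F here, the classical Ext groups:

```latex
\operatorname{Ext}^{0}_{\mathbb{Z}}(\mathbb{Z}/n\mathbb{Z}, M)
\cong \{\, m \in M : nm = 0 \,\},
\qquad
\operatorname{Ext}^{1}_{\mathbb{Z}}(\mathbb{Z}/n\mathbb{Z}, M)
\cong M/nM.
```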
The following result will be used in Chapter 3 where we consider the category
of inversive difference modules.
Theorem 1.1.13 Let C = A Mod, C′ = B Mod, and C″ = C Mod be the categories of left modules over rings A, B, and C, respectively. Let F : C → C′ and G : C′ → C″ be two covariant functors such that G is left exact and F maps every injective A-module M to a B-module F (M ) annulled by every right derived functor R^q G (q > 0) of the functor G. Then for every left A-module N , there exists a spectral sequence in the category C″ which converges to R^{p+q}(GF )(N ) (p, q ∈ N) and whose second term is of the form E_2^{p,q} = R^p G(R^q F (N )).
If R is a commutative ring and Σ ⊆ R, then (Σ) will denote the ideal of R
generated by the set Σ, that is, the smallest ideal of R containing Σ. If R0 is a
subring of R, we say that R is a ring extension or an overring of R0 . In such a
case, if B ⊆ R, then the smallest subring of R containing R0 and B is denoted
by R0 [B]; if B is finite, B = {b1 , . . . , bm }, this smallest subring is also denoted
by R0 [b1 , . . . , bm ]. If R = R0 [B], we say that B is a set of generators of R over R0 or that R is generated over R0 by the set B. In this case every element of