Universitext
Juan Ramon Ruiz-Tolosa
Enrique Castillo
From Vectors
to Tensors
Springer
Authors
Juan Ramon Ruiz-Tolosa
Enrique Castillo
Universidad de Cantabria
Depto. Matemática Aplicada
Avenida de los Castros
39005 Santander, Spain
castie@unican.es
Library of Congress Control Number: 20041114044
Mathematics Subject Classification (2000): 15A60, 15A72
ISBN 3-540-22887-X Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication
of this publication or parts thereof is permitted only under the provisions of the German Copyright
Law of September 9, 1965, in its current version, and permission for use must always be obtained from
Springer. Violations are liable for prosecution under the German Copyright Law.
Springer-Verlag is a part of Springer Science+Business Media
springeronline.com
© Springer-Verlag Berlin Heidelberg 2005
Printed in Germany
The use of designations, trademarks, etc. in this publication does not imply, even in the absence of a
specific statement, that such names are exempt from the relevant protective laws and regulations and
therefore free for general use.
Cover Design: Erich Kirchner, Heidelberg
Printed on acid-free paper 41/3142XT 543210
To the memory of Bernhard Riemann and Albert Einstein
Preface
It is true that there exist many books dedicated to linear algebra and some-
what fewer to multilinear algebra, written in several languages, and perhaps
one can think that no more books are needed. However, it is also true that in
algebra many new results are continuously appearing, different points of view
can be used to see the mathematical objects and their associated structures,
and different orientations can be selected to present the material, and all of
them deserve publication.
Under the leadership of Juan Ramon Ruiz-Tolosa, Professor of multilin-
ear algebra, and the collaboration of Enrique Castillo, Professor of applied
mathematics, both teaching at an engineering school in Santander, a tensor
textbook has been born, written from a practical point of view and free from
the esoteric language typical of treatises written by algebraists, who are not
interested in descending to numerical details. The balance between follow-
ing this line and keeping the rigor of classical theoretical treatises has been
maintained throughout this book.
The book assumes a certain knowledge of linear algebra, and is intended as
a textbook for graduate and postgraduate students and also as a consultation
book. It is addressed to mathematicians, physicists, engineers, and applied
scientists with a practical orientation who are looking for powerful tensor
tools to solve their problems.
The book bridges an existing chasm between the classic theory of tensors
and the possibility of solving tensor problems with a computer. In fact, the
computational algebra is formulated in matrix form to facilitate its implemen-
tation on computers.
The book includes 197 examples and end-of-chapter exercises, which make
it especially suitable as a textbook for tensor courses. This material combines
classic matrix techniques with novel methods, and in many cases the questions
and problems are solved using several different methods, which confirms the
applied orientation of the book.
A computer package, written in Mathematica, accompanies this book
(available on: ). In it, most of the novel methods developed in the book
have been implemented. We note that existing general computer software
packages (Mathematica, Matlab, etc.) for tensors are very poor, to the point
that some problems cannot be dealt with using computers because of the
lack of computer programs to perform these operations.
The main contributions of the book are:
1. The book employs a new technique that permits one to extend (stretch)
   tensors into one-column matrices, solve the desired problems on these
   matrices, and recover the initial format of the tensor (condensation). This
   technique, applied in all chapters, is described and used to solve matrix
   equations in Chapter 1.
2. An important criterion is established in Chapter 2 for all the components
   of a tensor to have a given ordering, by the definition of a unique canonical
   tensor basis. This permits the mentioned technique to be applied.
3. Chapter 3 illustrates factors that have led to important confusion in
   tensor books, due to inadequate notation for tensors or tensor operations.
4. In addition to dealing with the classical topics of tensor books, new tensor
   concepts are introduced in Chapter 5, such as the rotation of tensors, the
   transposer tensor, the eigentensors, and the permutation tensor structure.
5. A very detailed study of generalized Kronecker deltas is presented in
   Chapter 8.
6. Chapter 10 is devoted to mixed exterior algebras, analyzing the change-of-basis
   problem and the exterior product for this kind of tensor.
7. In Chapter 11 the rules for the "Euclidean contraction" are given in detail.
   This chapter ends by introducing geometric concepts through tensors.
8. The orientation and polar tensors in Euclidean spaces are dealt with in
Chapter 12.
9. In Chapter 13 the Gram matrices G(r) are established to connect exterior
tensors.
10. Chapter 14 is devoted to Euclidean tensors in $E^n(\mathbb{R})$, affine geometric
    tensors (homographies), and some important tensors in physics and mechanics,
    such as the stress and strain tensors, the elastic tensor and the inertial
    moment tensor. It is shown how tensors allow one to solve very interesting
    practical problems.
In summary, the book is not a standard book on tensors because of its
orientation, the many novel contributions included in it, the careful notation
and the stretching-condensing techniques used for most of the transformations
used in the book. We hope that our readers enjoy reading this book, discover
a new world, and acquire stimulating ideas for their applications and new
contributions and research.
The authors want to thank an anonymous French referee for the careful
reading of the initial manuscript, and Jeffrey Boys for the copyediting of
the final manuscript.
Santander, September 30, 2004
Juan Ramon Ruiz-Tolosa
Enrique Castillo
Contents
Part I Basic Tensor Algebra
1 Tensor Spaces 3
1.1 Introduction 3
1.2 Dual or reciprocal coordinate frames in affine Euclidean spaces 3
1.3 Different types of matrix products 8
1.3.1 Definitions 8
1.3.2 Properties concerning general matrices 10
1.3.3 Properties concerning square matrices 11
1.3.4 Properties concerning eigenvalues and eigenvectors 12
1.3.5 Properties concerning the Schur product 13
1.3.6 Extension and condensation of matrices 13
1.3.7 Some important matrix equations 17
1.4 Special tensors 26
1.5 Exercises 30
2 Introduction to Tensors 33
2.1 Introduction 33
2.2 The triple tensor product linear space 33
2.3 Einstein's summation convention 36
2.4 Tensor analytical representation 37
2.5 Tensor product axiomatic properties 38
2.6 Generalization 40
2.7 Illustrative examples 41
2.8 Exercises 46
3 Homogeneous Tensors 47
3.1 Introduction 47
3.2 The concept of homogeneous tensors 47
3.3 General rules about tensor notation 48
3.4 The tensor product of tensors 50

3.5 Einstein's contraction of the tensor product 54
3.6 Matrix representation of tensors 56
3.6.1 First-order tensors 56
3.6.2 Second-order tensors 57
3.7 Exercises 61
4 Change-of-basis in Tensor Spaces 65
4.1 Introduction 65
4.2 Change of basis in a third-order tensor product space 65
4.3 Matrix representation of a change-of-basis in tensor spaces 67
4.4 General criteria for tensor character 69
4.5 Extension to homogeneous tensors 72
4.6 Matrix operation rules for tensor expressions 74
4.6.1 Second-order tensors (matrices) 74
4.6.2 Third-order tensors 77
4.6.3 Fourth-order tensors 78
4.7 Change-of-basis invariant tensors: Isotropic tensors 80
4.8 Main isotropic tensors 80
4.8.1 The null tensor 80
4.8.2 Zero-order tensor (scalar invariant) 80
4.8.3 Kronecker's delta 80
4.9 Exercises 106
5 Homogeneous Tensor Algebra: Tensor Homomorphisms 111
5.1 Introduction 111
5.2 Main theorem on tensor contraction 111
5.3 The contracted tensor product and tensor homomorphisms 113
5.4 Tensor product applications 119
5.4.1 Common simply contracted tensor products 119
5.4.2 Multiply contracted tensor products 120
5.4.3 Scalar and inner tensor products 120

5.5 Criteria for tensor character based on contraction 122
5.6 The contracted tensor product in the reverse sense: The
quotient law 124
5.7 Matrix representation of permutation homomorphisms 127
5.7.1 Permutation matrix tensor product types in $K^n$ 127
5.7.2 Linear span of precedent types 129
5.7.3 The isomers of a tensor 137
5.8 Matrices associated with simply contracted homomorphisms 141
5.8.1 Mixed tensors of second order (r = 2): Matrices 141
5.8.2 Mixed tensors of third order (r = 3) 141
5.8.3 Mixed tensors of fourth order (r = 4) 142
5.8.4 Mixed tensors of fifth order (r = 5) 143
5.9 Matrices associated with doubly contracted homomorphisms . 144
5.9.1 Mixed tensors of fourth order (r = 4) 144
5.9.2 Mixed tensors of fifth order (r = 5) 145
5.10 Eigentensors 159
5.11 Generalized multilinear mappings 165
5.11.1 Theorems of similitude with tensor mappings 167
5.11.2 Tensor mapping types 168
5.11.3 Direct n-dimensional tensor endomorphisms 169
5.12 Exercises 183
Part II Special Tensors
6 Symmetric Homogeneous Tensors: Tensor Algebras 189
6.1 Introduction 189
6.2 Symmetric systems of scalar components 189
6.2.1 Symmetric systems with respect to an index subset 190

6.2.2 Symmetric systems. Total symmetry 190
6.3 Strict components of a symmetric system 191
6.3.1 Number of strict components of a symmetric system
with respect to an index subset 191
6.3.2 Number of strict components of a symmetric system . 192
6.4 Tensors with symmetries: Tensors with branched symmetry,
symmetric tensors 193
6.4.1 Generation of symmetric tensors 194
6.4.2 Intrinsic character of tensor symmetry: Fundamental
theorem of tensors with symmetry 197
6.4.3 Symmetric tensor spaces and subspaces. Strict
components associated with subspaces 204
6.5 Symmetric tensors under the tensor algebra perspective 206
6.5.1 Symmetrized tensor associated with an arbitrary
pure tensor 210
6.5.2 Extension of the symmetrized tensor associated with
a mixed tensor 210
6.6 Symmetric tensor algebras: The $\otimes_S$ product 212
6.7 Illustrative examples 214
6.8 Exercises 220
7 Anti-symmetric Homogeneous Tensors, Tensor and Inner
Product Algebras 225
7.1 Introduction 225
7.2 Anti-symmetric systems of scalar components 225
7.2.1 Anti-symmetric systems with respect to an index
subset 226
7.2.2 Anti-symmetric systems. Total anti-symmetry 228
7.3 Strict components of an anti-symmetric system and with
respect to an index subset 228

7.3.1 Number of strict components of an anti-symmetric
system with respect to an index subset 229
7.3.2 Number of strict components of an anti-symmetric
system 229
7.4 Tensors with anti-symmetries: Tensors with branched
anti-symmetry; anti-symmetric tensors 230
7.4.1 Generation of anti-symmetric tensors 232
7.4.2 Intrinsic character of tensor anti-symmetry:
Fundamental theorem of tensors with anti-symmetry . 236
7.4.3 Anti-symmetric tensor spaces and subspaces. Vector
subspaces associated with strict components 243
7.5 Anti-symmetric tensors from the tensor algebra perspective . . 246
7.5.1 Anti-symmetrized tensor associated with an
arbitrary pure tensor 249
7.5.2 Extension of the anti-symmetrized tensor concept
associated with a mixed tensor 249
7.6 Anti-symmetric tensor algebras: The $\otimes_H$ product 252
7.7 Illustrative examples 253
7.8 Exercises 265
8 Pseudotensors; Modular, Relative or Weighted Tensors 269
8.1 Introduction 269
8.2 Previous concepts of modular tensor establishment 269
8.2.1 Relative modulus of a change-of-basis 269
8.2.2 Oriented vector space 270
8.2.3 Weight tensor 270
8.3 Axiomatic properties for the modular tensor concept 270
8.4 Modular tensor characteristics 271
8.4.1 Equality of modular tensors 272
8.4.2 Classification and special denominations 272
8.5 Remarks on modular tensor operations: Consequences 272

8.5.1 Tensor addition 272
8.5.2 Multiplication by a scalar 274
8.5.3 Tensor product 275
8.5.4 Tensor contraction 276
8.5.5 Contracted tensor products 276
8.5.6 The quotient law. New criteria for modular tensor
character 277
8.6 Modular symmetry and anti-symmetry 280
8.7 Main modular tensors 291
8.7.1 $\epsilon$ systems, permutation systems or Levi-Civita tensor systems 291
8.7.2 Generalized Kronecker deltas: Definition 293
8.7.3 Dual or polar tensors: Definition 301
8.8 Exercises 310
Part III Exterior Algebras
9 Exterior Algebras:
Totally Anti-symmetric Homogeneous Tensor Algebras 315
9.1 Introduction and Definitions 315
9.1.1 Exterior product of two vectors 315
9.1.2 Exterior product of three vectors 317
9.1.3 Strict components of exterior vectors. Multivectors. 318
9.2 Exterior product of r vectors: Decomposable multivectors 319
9.2.1 Properties of exterior products of order r:
Decomposable multivectors or exterior vectors 321
9.2.2 Exterior algebras over $V^n(K)$ spaces: Terminology 323
9.2.3 Exterior algebras of order r=0 and r=l 324
9.3 Axiomatic properties of tensor operations in exterior algebras 324

9.3.1 Addition and multiplication by a scalar 324
9.3.2 Generalized exterior tensor product: Exterior
product of exterior vectors 325
9.3.3 Anti-commutativity of the exterior product $\wedge$ 331
9.4 Dual exterior algebras over $V^n(K)$ spaces 331
9.4.1 Exterior product of r linear forms over $V^n(K)$ 332
9.4.2 Axiomatic tensor operations in dual exterior algebras $\bigwedge_n^{*(r)}(K)$: Dual exterior tensor product 333
9.4.3 Observation about bases of primary and dual
exterior spaces 334
9.5 The change-of-basis in exterior algebras 337
9.5.1 Strict tensor relationships for $\bigwedge_n^{(r)}(K)$ algebras 338
9.5.2 Strict tensor relationships for $\bigwedge_n^{*(r)}(K)$ algebras 339
9.6 Complements of contramodular and comodular scalars 341
9.7 Comparative tables of algebra correspondences 342
9.8 Scalar mappings: Exterior contractions 342
9.9 Exterior vector mappings: Exterior homomorphisms 345
9.9.1 Direct exterior endomorphism 350
9.10 Exercises 383
10 Mixed Exterior Algebras 387
10.1 Introduction 387
10.1.1 Mixed anti-symmetric tensor spaces and their strict
tensor components 387
10.1.2 Mixed exterior product of four vectors 390
10.2 Decomposable mixed exterior vectors 394
10.3 Mixed exterior algebras: Terminology 397
10.3.1 Exterior basis of a mixed exterior algebra 397
10.3.2 Axiomatic tensor operations in the $\bigwedge_n(K)$ algebra 398

10.4 Exterior product of mixed exterior vectors 399
10.5 Anti-commutativity of the $\wedge$ mixed exterior product 403
10.6 Change of basis in mixed exterior algebras 404
10.7 Exercises 409
Part IV Tensors over Linear Spaces with Inner Product
11 Euclidean Homogeneous Tensors 413
11.1 Introduction 413
11.2 Initial concepts 413
11.3 Tensor character of the inner vector's connection in a $PSE^n(\mathbb{R})$ space 416
11.4 Different types of the fundamental connection tensor 418
11.5 Tensor product of vectors in $E^n(\mathbb{R})$ (or in $PSE^n(\mathbb{R})$) 421
11.6 Equivalent associated tensors: Vertical displacements of
indices. Generalization 422
11.6.1 The quotient space of isomers 426
11.7 Changing bases in $E^n(\mathbb{R})$: Euclidean tensor character criteria 427
11.8 Symmetry and anti-symmetry in Euclidean tensors 430
11.9 Cartesian tensors 433
11.9.1 Main properties of Euclidean $E^n(\mathbb{R})$ spaces in orthonormal bases 433
11.9.2 Tensor total Euclidean character in orthonormal
bases 434
11.9.3 Tensor partial Euclidean character in orthonormal
bases 436
11.9.4 Rectangular Cartesian tensors 436
11.10 Euclidean and pseudo-Euclidean tensor algebra 451
11.10.1 Euclidean tensor equality 451
11.10.2 Addition and external product of Euclidean
(pseudo-Euclidean) tensors 451

11.10.3 Tensor product of Euclidean (pseudo-Euclidean)
tensors 452
11.10.4 Euclidean (pseudo-Euclidean) tensor contraction 452
11.10.5 Contracted tensor product of Euclidean or
pseudo-Euclidean tensors 455
11.10.6 Euclidean contraction of tensors of order r = 2 457
11.10.7 Euclidean contraction of tensors of order r = 3 457
11.10.8 Euclidean contraction of tensors of order r = 4 457
11.10.9 Euclidean contraction of indices by the Hadamard
product 458
11.11 Euclidean tensor metrics 482
11.11.1 Inner connection 483
11.11.2 The induced fundamental metric tensor 484
11.11.3 Reciprocal and orthonormal basis 486
11.12 Exercises 504
12 Modular Tensors over $E^n(\mathbb{R})$ Euclidean Spaces 511
12.1 Introduction 511
12.2 Diverse cases of linear space connections 511
12.3 Tensor character of $\sqrt{|G|}$ 512
12.4 The orientation tensor: Definition 514
12.5 Tensor character of the orientation tensor 514
12.6 Orientation tensors as associated Euclidean tensors 515
12.7 Dual or polar tensors over $E^n(\mathbb{R})$ Euclidean spaces 516
12.8 Exercises 525
13 Euclidean Exterior Algebra 529
13.1 Introduction 529
13.2 Euclidean exterior algebra of order r = 2 529
13.3 Euclidean exterior algebra of order r (2 < r < n) 532
13.4 Euclidean exterior algebra of order r=n 535

13.5 The orientation tensor in exterior bases 535
13.6 Dual or polar tensors in exterior bases 536
13.7 The cross product as a polar tensor in generalized Cartesian
coordinate frames 538
13.8 $\sqrt{|G|}$ geometric interpretation in generalized Cartesian coordinate frames 539
13.9 Illustrative examples 540
13.10 Exercises 576
Part V Classic Tensors in Geometry and Mechanics
14 Affine Tensors 581
14.1 Introduction and Motivation 581
14.2 Euclidean tensors in $E^n(\mathbb{R})$ 582
14.2.1 Projection tensor 582
14.2.2 The momentum tensor 586
14.2.3 The rotation tensor 587
14.2.4 The reflection tensor 590
14.3 Affine geometric tensors, homographies 597
14.3.1 Preamble 597
14.3.2 Definition and representation 599
14.3.3 Affinities 600
14.3.4 Homothecies 604
14.3.5 Isometries 606
14.3.6 Product of isometries 623
14.4 Tensors in Physics and Mechanics 626
14.4.1 The stress tensor S 628
14.4.2 The strain tensor F 630
14.4.3 Tensor relationships between S and F. Elastic tensor. 635
14.4.4 The inertial moment tensor 647
14.5 Exercises 655

Bibliography 659
Index 663
Part I
Basic Tensor Algebra
1 Tensor Spaces
1.1 Introduction
In this chapter we give some concepts that are required in the remaining chap-
ters of the book. This includes the concepts of reciprocal coordinate frames,
contravariant and covariant coordinates of a vector, some formulas for changes
of basis, etc.
We also introduce different types of matrix products, such as the ordinary,
the tensor or the Schur products, together with their main properties, that
will be used extensively to operate and simplify long expressions throughout
this book.
Since we extend and condense tensors very frequently, i.e., we represent
tensors as vectors to take full advantage of vector theory and tools, and then
we recover their initial tensor representation, we present the corresponding
extension and condensation operators that permit moving from one of these
representations to the other, and vice versa.
These operators are used initially to solve some important matrix equa-
tions that are introduced, together with some interesting applications.
Finally, the chapter ends with a section devoted to special tensors that are
used to solve important physical problems.
1.2 Dual or reciprocal coordinate frames in affine Euclidean spaces
Let $E^n(\mathbb{R})$ be an $n$-dimensional affine linear space over the field $\mathbb{R}$, equipped
with an inner connection (inner or dot product) $\langle\cdot,\cdot\rangle$, and let $\{\vec{e}_\alpha\}$ be
a basis of $E^n(\mathbb{R})$. The vector $\vec{V}$ with components $\{x^\alpha\}$ in the initial basis
$\{\vec{e}_\alpha\}$, i.e., the vector $\vec{V}=\sum x^\alpha\vec{e}_\alpha$, will be represented in the following by the
symbolic matrix expression

$$\vec{V} = \|\vec{e}_\alpha\|\,X = [\vec{e}_1\ \vec{e}_2\ \cdots\ \vec{e}_n]\begin{bmatrix}x^1\\x^2\\\vdots\\x^n\end{bmatrix}. \qquad (1.1)$$
In this book vector matrices will always be represented as row matrices, denoted
by $\|\cdot\|$, and component matrices always as column matrices, denoted by $[\,\cdot\,]$.
So, when referring to columns of vectors or rows of components, we
must use the corresponding transpose matrices.
To every pair of vectors $\{\vec{V},\vec{W}\}$, the connection assigns a real number (a
scalar), given by the matrix relation

$$\langle\vec{V},\vec{W}\rangle = X^t\,G\,Y, \qquad (1.2)$$

where $X$ and $Y$ are the column matrices with the coordinates of vectors $\vec{V}$
and $\vec{W}$, respectively, and $G$ is the Gram matrix of the connection, which is
given by

$$G_n=[g_{\alpha\beta}]=[\langle\vec{e}_\alpha,\vec{e}_\beta\rangle];\quad g_{\alpha\beta}\in\mathbb{R};\quad G=G^t;\ |G|\neq 0. \qquad (1.3)$$
As is well known, if a new basis is selected, all the mathematical objects
associated with the linear space change representation. So, if

$$\|\vec{\hat{e}}_i\|_{1,n} = \|\vec{e}_\alpha\|_{1,n}\,C_{n,n} \qquad (1.4)$$

is the matrix representation of the change-of-basis, and the subindices refer
to the matrix dimensions (rows and columns, respectively), a vector $\vec{V}$ can be
written as $\vec{V}=\|\vec{e}_\alpha\|X_{n,1}$ and also as $\vec{V}=\|\vec{\hat{e}}_i\|\hat{X}_{n,1}$, where the initial $X_{n,1}$ and
new $\hat{X}_{n,1}$ components are related by

$$X_{n,1} = C\,\hat{X}_{n,1}. \qquad (1.5)$$

It is obvious that any change-of-basis can be performed with the only
constraint of having an associated non-singular matrix $C$ ($|C|\neq 0$).
However, there exists a very special change-of-basis, the one associated with
the matrix $G$:

$$C = G^{-1}, \qquad (1.6)$$

for which the resulting new basis will not be denoted by $\{\vec{\hat{e}}_i\}$, but with the
special notation $\{\vec{e}^{\,*\alpha}\}$, and it will be called the reciprocal or dual basis of
$\{\vec{e}_\alpha\}$.

The vector $\vec{V}=\|\vec{e}_\alpha\|X$ with components $\{x^\alpha\}$ in the initial basis now has
the components $\{x^*_\alpha\}$, that is,

$$\vec{V} = [\vec{e}^{\,*1}\ \vec{e}^{\,*2}\ \cdots\ \vec{e}^{\,*n}]\begin{bmatrix}x^*_1\\x^*_2\\\vdots\\x^*_n\end{bmatrix}.$$

Hence, taking into account (1.6), expression (1.5) leads to

$$X = G^{-1}X^* \iff X^* = G\,X \qquad (1.7)$$

and from (1.4) we get

$$\|\vec{e}^{\,*\alpha}\| = \|\vec{e}_\alpha\|\,G^{-1} \iff [\vec{e}^{\,*1}\ \vec{e}^{\,*2}\ \cdots\ \vec{e}^{\,*n}] = [\vec{e}_1\ \vec{e}_2\ \cdots\ \vec{e}_n]\,G^{-1}. \qquad (1.8)$$
Equation (1.7) gives the relation between the contravariant coordinates,
$X$, of vector $\vec{V}$ in the initial frame and the covariant coordinates, $X^*$, of the
same vector $\vec{V}$, when it is referred to a new frame that is the reciprocal or
dual of the initial frame. In short, in a punctual affine space we make use of
two frames simultaneously:

1. The $(O;\{\vec{e}_\alpha\})$ initial or primary frame (contravariant coordinates).
2. The $(O;\{\vec{e}^{\,*\alpha}\})$ reciprocal frame (covariant coordinates); in spherical
   three-dimensional geometry it is the polar trihedron of the given one.
Following the exposition, assume that the coordinates of two vectors $\vec{V}$
and $\vec{W}$ are given and that their dot product is sought.

1. If the two vectors are given in contravariant coordinates, we use expression (1.2):
   $\langle\vec{V},\vec{W}\rangle = X^t G Y$.
2. If $\vec{V}$ is given in contravariant coordinates (column matrix $X$) and $\vec{W}$ is
   given in covariant coordinates (column matrix $Y^*$), and the heterogeneous
   connection is not known, the product can be obtained by writing $\vec{W}$ in
   contravariant coordinates, $Y=G^{-1}Y^*$ (expression (1.7)), and using expression (1.2):
   $$\langle\vec{V},\vec{W}\rangle = X^t G(G^{-1}Y^*) = X^t I\,Y^* = X^t Y^*. \qquad (1.9)$$
   The surprising result is that with data vectors in contra-covariant coordinates
   the heterogeneous connection matrix is the identity matrix $I$, and
   the result can be obtained by a direct product of the data coordinates.
   From this result, one can begin to understand that the simultaneous use
   of data in contra and cova forms can greatly facilitate tensor operations.
3. If $\vec{V}$ is given in covariant coordinates ($X^*$) and $\vec{W}$ in contravariant
   coordinates (matrix $Y$), proceeding in a similar way with vector $\vec{V}$, and
   using (1.7), one gets
   $$\langle\vec{V},\vec{W}\rangle = (G^{-1}X^*)^t G Y = (X^*)^t(G^{-1})^t G Y = (X^*)^t G^{-1}G Y = (X^*)^t I\,Y, \qquad (1.10)$$
   where once more we observe that cova-contravariant data imply a unit
   connection matrix $I$.
4. Finally, if one has cova-covariant data, that is, $\vec{V}(X^*)$ and $\vec{W}(Y^*)$, the
   result will be
   $$\langle\vec{V},\vec{W}\rangle = X^t G Y = (G^{-1}X^*)^t G(G^{-1}Y^*) = (X^*)^t G^{-1}G\,G^{-1}Y^* = (X^*)^t G^{-1}Y^*, \qquad (1.11)$$
   which shows that for a reciprocal frame the Gram matrix is $G^*=G^{-1}$,
   that is,
   $$\langle\vec{V},\vec{W}\rangle = (X^*)^t G^{-1}Y^*. \qquad (1.12)$$
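These four coordinate combinations are easy to check numerically. What follows is a minimal sketch in Python/NumPy (independent of the book's accompanying Mathematica package); the basis and the two vectors are arbitrary choices made only for illustration.

```python
import numpy as np

# Columns of E are the basis vectors e_1, e_2, e_3 of E^3(R), expressed in an
# auxiliary orthonormal reference, so the Gram matrix (1.3) is G = E^t E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])
G = E.T @ E

# Contravariant components of two vectors V and W.
X = np.array([1.0, 2.0, -1.0])
Y = np.array([3.0, 0.0, 1.0])

# Covariant components, X* = G X, expression (1.7).
Xs = G @ X
Ys = G @ Y

dot = X @ G @ Y                                      # <V,W> = X^t G Y        (1.2)
assert np.isclose(dot, X @ Ys)                       # contra-cova: X^t Y*    (1.9)
assert np.isclose(dot, Xs @ Y)                       # cova-contra: (X*)^t Y  (1.10)
assert np.isclose(dot, Xs @ np.linalg.solve(G, Ys))  # cova-cova: (X*)^t G^-1 Y* (1.12)

# Reciprocal (dual) basis, expression (1.8): ||e*|| = ||e|| G^{-1}; its vectors
# are biorthogonal to the primary ones, <e*^i, e_j> = delta^i_j.
E_dual = E @ np.linalg.inv(G)
assert np.allclose(E_dual.T @ E, np.eye(3))
print("expressions (1.2) and (1.7)-(1.12) verified on this example")
```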
Example 1.1 (Change of basis). The Gram matrices associated with linear
spaces equipped with inner products (Euclidean, pre-Euclidean, etc.), when
changing bases, transform in a "congruent" way, i.e., $\hat{G}=C^t G\,C$. The proof
is as follows.

Proof: By definition we have

$$G = \|\vec{e}_\alpha\|^t\bullet\|\vec{e}_\alpha\| = \begin{bmatrix}\vec{e}_1\\\vec{e}_2\\\vdots\\\vec{e}_n\end{bmatrix}\bullet[\vec{e}_1\ \vec{e}_2\ \cdots\ \vec{e}_n]
= \begin{bmatrix}\vec{e}_1\cdot\vec{e}_1 & \vec{e}_1\cdot\vec{e}_2 & \cdots & \vec{e}_1\cdot\vec{e}_n\\ \vec{e}_2\cdot\vec{e}_1 & \vec{e}_2\cdot\vec{e}_2 & \cdots & \vec{e}_2\cdot\vec{e}_n\\ \vdots & \vdots & & \vdots\\ \vec{e}_n\cdot\vec{e}_1 & \vec{e}_n\cdot\vec{e}_2 & \cdots & \vec{e}_n\cdot\vec{e}_n\end{bmatrix}. \qquad (1.13)$$

If the scalar $\vec{e}_i\cdot\vec{e}_j$ is denoted by $g_{ij}$, we have $G=[g_{ij}]$, and since
$g_{ij}=\vec{e}_i\cdot\vec{e}_j=\vec{e}_j\cdot\vec{e}_i=g_{ji}$, we get $G=G^t$.

If in the linear space we consider the change-of-basis $\|\vec{\hat{e}}_i\|=\|\vec{e}_\alpha\|C$, then
we have

$$\hat{G} = \|\vec{\hat{e}}_i\|^t\bullet\|\vec{\hat{e}}_i\| = (\|\vec{e}_\alpha\|C)^t\bullet(\|\vec{e}_\alpha\|C) = C^t\,(\|\vec{e}_\alpha\|^t\bullet\|\vec{e}_\alpha\|)\,C,$$

and using (1.13), we finally get $\hat{G}=C^t G\,C$, which is the desired result. $\square$

Next, an example is given to clarify the above material.
Example 1.2 (Linear operator and scalar product). Assume that in the affine
linear space $E^n(\mathbb{R})$, referred to the basis $\{\vec{e}_\alpha\}$, a given linear operator (of
associated matrix $T$ given in the cited basis) transforms the vectors of the
affine linear space into vectors of the same space. In this situation, one performs
a change-of-basis in $E^n(\mathbb{R})$ (with given associated matrix $C$). We are
interested in finding the matrix $M$ associated with the linear operator, such
that taking vectors in contravariant coordinates of the initial basis it returns
the transformed vectors in "covariant coordinates" of the new basis.

We have the following well-known relations:

1. In the initial frame of reference, when changing bases, the Gram matrix
   (see the change-of-basis for bilinear forms) satisfies
   $$\hat{G} = C^t G\,C. \qquad (1.14)$$
2. It is known that the linear operator operates in $E^n(\mathbb{R})$ as
   $$Y = T\,X. \qquad (1.15)$$
3. According to (1.5) the change-of-basis for vectors leads to
   $$\hat{Y} = C^{-1}Y, \qquad (1.16)$$
   and entering with (1.7) for the transformed vector in the new basis gives
   $$(\hat{Y})^* = \hat{G}\hat{Y} \stackrel{(1.14)}{=} (C^t G C)\hat{Y} \stackrel{(1.16)}{=} (C^t G C)(C^{-1}Y) = C^t G\,Y \stackrel{(1.15)}{=} C^t G\,T\,X. \qquad (1.17)$$

Thus, we get $(\hat{Y})^* = M X$ with $M = C^t G\,T$, which is the sought result. $\square$
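Both examples can be checked numerically. Below is a minimal sketch, assuming an arbitrary (symmetric, positive definite) Gram matrix $G$, a regular change-of-basis matrix $C$ and an operator matrix $T$; only the identities $\hat{G}=C^tGC$ and $(\hat{Y})^*=C^tG\,T\,X$ obtained above are verified.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

S = rng.normal(size=(n, n))
G = S.T @ S + n * np.eye(n)      # symmetric positive definite Gram matrix
C = rng.normal(size=(n, n))      # change-of-basis matrix (almost surely regular)
T = rng.normal(size=(n, n))      # matrix of the linear operator, (1.15)

# Example 1.1: the Gram matrix transforms congruently, G_hat = C^t G C (1.14).
G_hat = C.T @ G @ C

# Example 1.2: start from contravariant coordinates X in the initial basis.
X = rng.normal(size=n)
Y = T @ X                        # transformed vector, initial basis (1.15)
Y_hat = np.linalg.solve(C, Y)    # its contravariant coordinates in the new basis
Y_hat_cov = G_hat @ Y_hat        # its covariant coordinates in the new basis, (1.7)

M = C.T @ G @ T                  # the matrix obtained in (1.17)
assert np.allclose(Y_hat_cov, M @ X)
print("M = C^t G T maps initial contravariant data to new covariant data")
```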
Finally, we examine in some detail how the axes scales of the reference
frame $\{\vec{e}^{\,*\alpha}\}$ are the "dual" or "reciprocal" of those of the given reference
frame $\{\vec{e}_\alpha\}$. Equation (1.8),

$$[\vec{e}^{\,*1}\ \vec{e}^{\,*2}\ \cdots\ \vec{e}^{\,*n}] = [\vec{e}_1\ \vec{e}_2\ \cdots\ \vec{e}_n]\,G^{-1},$$

declares that the director vector associated with the main direction $OX^i$ (in
the dual reference frame $(O;X^{*1},X^{*2},\ldots,X^{*n})$) is

$$\vec{e}^{\,*i} = g^{i1}\vec{e}_1 + g^{i2}\vec{e}_2 + \cdots + g^{ij}\vec{e}_j + \cdots + g^{in}\vec{e}_n, \qquad (1.18)$$

where $[g^{ij}]=G^{-1}$ is the inverse of $G$, and symmetric, and then

$$g^{ii} = \frac{G^{ii}}{|G|}, \qquad (1.19)$$

where $G^{ii}$ is the adjoint (cofactor) of $g_{ii}$ in $G$.

The modulus of the vector $\vec{e}^{\,*i}$ is

$$|\vec{e}^{\,*i}| = \sqrt{\langle\vec{e}^{\,*i},\vec{e}^{\,*i}\rangle} = \sqrt{g^{ii}} = \sqrt{\frac{G^{ii}}{|G|}}, \qquad (1.20)$$

which gives the scale of each main direction $OX^i$ in the reciprocal system;
these scales are the reciprocals of the scales of the fundamental (contravariant)
system when $G$ is diagonal.

Since $\langle\vec{e}^{\,*i},\vec{e}_j\rangle = 0$, $\forall i\neq j$, each $\vec{e}^{\,*i}$ has a direction that is orthogonal to
all the remaining vectors $\vec{e}_j$ ($j\neq i$). All this recalls the properties of the "polar
trihedron" of a given trihedron in spherical geometry.
Remark 1.1. If the reference frame is orthogonal but not orthonormal, that is, if

$$G = \begin{bmatrix}g_{11} & 0 & \cdots & 0\\ 0 & g_{22} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & g_{nn}\end{bmatrix},$$

expression (1.20) becomes

$$|\vec{e}^{\,*i}| = \sqrt{\frac{G^{ii}}{|G|}} = \sqrt{\frac{1}{g_{ii}}} = \frac{1}{\sqrt{g_{ii}}}. \qquad (1.21)\ \square$$
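The scale relations (1.19)-(1.21) can also be spot-checked; here is a minimal sketch for an arbitrary diagonal (orthogonal but not orthonormal) Gram matrix, as in Remark 1.1.

```python
import numpy as np

g = np.array([4.0, 9.0, 0.25])        # g_11, g_22, g_33 of a diagonal metric
G = np.diag(g)
G_inv = np.linalg.inv(G)              # [g^{ij}], the metric of the dual basis

# (1.20): |e*^i| = sqrt(g^{ii}); for a diagonal G these are the reciprocals
# of the primal scales |e_i| = sqrt(g_ii), which is (1.21).
dual_moduli = np.sqrt(np.diag(G_inv))
primal_moduli = np.sqrt(g)
assert np.allclose(dual_moduli, 1.0 / primal_moduli)

# (1.19): g^{ii} = G^{ii}/|G|, with G^{ii} the cofactor of g_ii in G.
detG = np.linalg.det(G)
for i in range(3):
    minor = np.delete(np.delete(G, i, axis=0), i, axis=1)
    cofactor = np.linalg.det(minor)   # diagonal cofactors carry a + sign
    assert np.isclose(G_inv[i, i], cofactor / detG)
print("dual scales are the reciprocals of the primal scales")
```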
1.3 Different types of matrix products
1.3.1 Definitions
In this section the most important matrix products are defined.
Definition 1.1 (Inner, ordinary or scalar matrix product). Consider
the following matrices:

$$A_{m,h}=[a_{ij}];\quad B_{h,n}=[b_{ij}];\quad P_{m,n}=[p_{ij}],$$

where the matrix subindices and the values within brackets refer to their
dimensions and the corresponding elements, respectively.

We say that the matrix $P$ is the inner, ordinary or scalar product of matrices
$A$ and $B$, and it is denoted by $A\bullet B$, iff (if and only if)

$$P = A\bullet B \Rightarrow p_{ij} = \sum_{\alpha=1}^{\alpha=h} a_{i\alpha}b_{\alpha j};\quad i=1,2,\ldots,m;\ j=1,2,\ldots,n.$$
Definition 1.2 (External product of matrices). Consider the following
matrices:

$$A_{m,h}=[a_{ij}];\quad P_{m,h}=[p_{ij}],$$

and the scalar $\lambda\in K$. We say that the matrix $P$ is the external product of the
scalar $\lambda$ and the matrix $A$, and it is denoted by $\lambda\circ A$, iff

$$P = \lambda\circ A \Rightarrow p_{ij} = \lambda\,a_{ij}.$$
Definition 1.3 (Kronecker, direct or tensor product of matrices).
Consider the following matrices:

$$A_{m,n}=[a_{\alpha\beta}];\quad B_{p,q}=[b_{\gamma\delta}];\quad P_{mp,nq}=[p_{ij}].$$

We say that the matrix $P$ is the Kronecker, direct or tensor product of matrices
$A$ and $B$, and it is denoted by $A\otimes B$, iff

$$P = A\otimes B \Rightarrow p_{ij} = a_{\alpha\beta}b_{\gamma\delta} = a_{\lfloor\frac{i-1}{p}\rfloor+1,\,\lfloor\frac{j-1}{q}\rfloor+1}\;b_{\,i-\lfloor\frac{i-1}{p}\rfloor p,\;j-\lfloor\frac{j-1}{q}\rfloor q}, \qquad (1.22)$$

where $i=1,2,\ldots,mp$; $j=1,2,\ldots,nq$; $\lfloor x\rfloor$ is the integer part of $x$, with an
order fixed by convention and represented by means of "submatrices":

$$P = A\otimes B = \begin{bmatrix} a_{11}\circ B & a_{12}\circ B & \cdots & a_{1n}\circ B\\ a_{21}\circ B & a_{22}\circ B & \cdots & a_{2n}\circ B\\ \vdots & \vdots & & \vdots\\ a_{m1}\circ B & a_{m2}\circ B & \cdots & a_{mn}\circ B \end{bmatrix},$$

where each partition has $p$ rows and $q$ columns.

It is interesting to know in which row $i$ and column $j$ the factor $a_{\alpha\beta}b_{\gamma\delta}$
appears in the tensor product $A_{m,n}\otimes B_{p,q}$, i.e., which is the corresponding
element $p_{ij}$. Its row and column are given by

$$i = (\alpha-1)p+\gamma;\qquad j = (\beta-1)q+\delta. \qquad (1.23)$$

Similarly, the reverse transformation, according to (1.22), is

$$\alpha = \left\lfloor\frac{i-1}{p}\right\rfloor+1;\quad \beta = \left\lfloor\frac{j-1}{q}\right\rfloor+1;\quad \gamma = i-\left\lfloor\frac{i-1}{p}\right\rfloor p;\quad \delta = j-\left\lfloor\frac{j-1}{q}\right\rfloor q. \qquad (1.24)$$
Some authors call this product the total product of matrices, which causes
confusion with the total product of linear spaces.
Definition 1.4 (Hadamard or Schur product of matrices). Consider
the following matrices:

$$A_{m,n}=[a_{ij}];\quad B_{m,n}=[b_{ij}];\quad P_{m,n}=[p_{ij}].$$

We say that the matrix $P$ is the Hadamard or Schur product of matrices $A$
and $B$, and it is denoted by $A\,\square\,B$, iff

$$P_{m,n} = A_{m,n}\,\square\,B_{m,n} \Rightarrow p_{ij} = a_{ij}b_{ij};\quad i=1,2,\ldots,m;\ j=1,2,\ldots,n.$$
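As a quick illustration of Definitions 1.1-1.4, the following minimal NumPy sketch computes the four products for small arbitrary matrices and verifies the placement formula (1.23) for the Kronecker product.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])                 # A_{2,3}
B = np.array([[7, 8],
              [9, 10]])                   # B_{2,2}
C = np.array([[1, 0],
              [0, 1],
              [2, 2]])                    # C_{3,2}
D = np.array([[1, -1, 1],
              [2, 0, 3]])                 # D_{2,3}, same dimensions as A

P_inner = A @ C            # Definition 1.1: inner (ordinary) product A . C
P_ext = 5 * A              # Definition 1.2: external product 5 o A
P_kron = np.kron(A, B)     # Definition 1.3: Kronecker (tensor) product A (x) B
P_had = A * D              # Definition 1.4: Hadamard (Schur) product of A and D

# Placement formula (1.23), with 1-based indices as in the text: the factor
# a_{alpha beta} b_{gamma delta} occupies row i = (alpha-1)p + gamma and
# column j = (beta-1)q + delta of A (x) B.
m, n = A.shape
p, q = B.shape
for alpha in range(1, m + 1):
    for beta in range(1, n + 1):
        for gamma in range(1, p + 1):
            for delta in range(1, q + 1):
                i = (alpha - 1) * p + gamma
                j = (beta - 1) * q + delta
                assert P_kron[i - 1, j - 1] == A[alpha - 1, beta - 1] * B[gamma - 1, delta - 1]
print("placement formula (1.23) verified for A (x) B")
```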
1.3.2 Properties concerning general matrices
The properties of the sum + and the ordinary product • of matrices, which
are perfectly developed in the linear algebra of matrices, are not developed
here.
Conversely, the most important properties of the other products, for general
matrices, are the following (a numerical check of Properties 5, 7 and 9 is sketched after this list):
1. $A\otimes(B\otimes C) = (A\otimes B)\otimes C$ (associativity of $\otimes$).
2. $A\otimes(B+C) = A\otimes B + A\otimes C$ (right distributivity of $\otimes$);
   $(A+B)\otimes C = A\otimes C + B\otimes C$ (left distributivity of $\otimes$).
3. $(A\otimes B)^t = A^t\otimes B^t$ (be aware of the order invariance).
4. $(A\otimes B)^* = A^*\otimes B^*$, where $X^*=(\bar{X})^t$ (complex fields).
5. Relation between scalar and tensor products. Let $A_{m,n}$, $B_{p,q}$, $C_{n,r}$ and
   $F_{q,s}$ be four data matrices. Then, we have
   $$(A\otimes B)\bullet(C\otimes F) = (A\bullet C)\otimes(B\bullet F).$$
   In fact, we have $A_{m,n}\otimes B_{p,q} = P_{mp,nq}$ and $C_{n,r}\otimes F_{q,s} = Q_{nq,rs}$, so
   that the scalar product $P\bullet Q$ is possible:
   $$(A_{m,n}\otimes B_{p,q})\bullet(C_{n,r}\otimes F_{q,s}) = P_{mp,nq}\bullet Q_{nq,rs} = R_{mp,rs}.$$
   In addition, we have $A\bullet C = A_{m,n}\bullet C_{n,r} = M_{m,r}$ and
   $B\bullet F = B_{p,q}\bullet F_{q,s} = N_{p,s}$, and then
   $$(A\bullet C)\otimes(B\bullet F) = M_{m,r}\otimes N_{p,s} = R_{mp,rs},$$
   where these formulas aim only to justify the dimensions of the data matrices.
6. Generalization of the relation between scalar and tensor products:
   $$(A_1\otimes B_1)\bullet(A_2\otimes B_2)\bullet\cdots\bullet(A_k\otimes B_k) = (A_1\bullet A_2\bullet\cdots\bullet A_k)\otimes(B_1\bullet B_2\bullet\cdots\bullet B_k).$$
   This is how one moves from several tensor products to a single one. This
   is possible only when the dimensions of the corresponding matrices allow
   the inner products.
7. There is another way of generalizing Property 5, which follows. Consider
   now the product
   $$P = (A_1\otimes B_1\otimes C_1)\bullet(A_2\otimes B_2\otimes C_2).$$
   Assuming that the matrix dimensions allow the products, and using Properties 1 and 5, one gets
   $$P = [(A_1\otimes B_1)\otimes C_1]\bullet[(A_2\otimes B_2)\otimes C_2] = [(A_1\otimes B_1)\bullet(A_2\otimes B_2)]\otimes(C_1\bullet C_2),$$
   and applying Property 5 again to the bracket in the second member, we have
   $$P = [(A_1\bullet A_2)\otimes(B_1\bullet B_2)]\otimes(C_1\bullet C_2),$$
   and using Property 1, the result is
   $$P = (A_1\bullet A_2)\otimes(B_1\bullet B_2)\otimes(C_1\bullet C_2).$$
   In summary, the following relation holds:
   $$(A_1\otimes B_1\otimes C_1)\bullet(A_2\otimes B_2\otimes C_2) = (A_1\bullet A_2)\otimes(B_1\bullet B_2)\otimes(C_1\bullet C_2),$$
   which after generalization leads to
   $$(A_1\otimes A_2\otimes\cdots\otimes A_k)\bullet(B_1\otimes B_2\otimes\cdots\otimes B_k) = (A_1\bullet B_1)\otimes(A_2\bullet B_2)\otimes\cdots\otimes(A_k\bullet B_k).$$
8. If we denote by $A^k$ the product $A\bullet A\bullet\cdots\bullet A$ and by $A^{[k]}$ the product
   $A\otimes A\otimes\cdots\otimes A$, with $k\in\mathbb{N}$, we have, for $A_{m,n}$ and $B_{n,r}$,
   $$(A\bullet B)^{[k]} = A^{[k]}\bullet B^{[k]}.$$
   We remind the reader that $(A\bullet B)^k\neq A^k\bullet B^k$, unless $A$ and $B$ commute.
9. $\operatorname{rank}(A\otimes B) = (\operatorname{rank}A)(\operatorname{rank}B) = (\operatorname{rank}B)(\operatorname{rank}A) = \operatorname{rank}(B\otimes A). \qquad (1.25)$
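A minimal numerical check of Properties 5, 7 and 9 above, using random matrices of compatible dimensions (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Property 5: (A (x) B) . (C (x) F) = (A . C) (x) (B . F).
A = rng.normal(size=(2, 3)); C = rng.normal(size=(3, 4))
B = rng.normal(size=(2, 2)); F = rng.normal(size=(2, 5))
assert np.allclose(np.kron(A, B) @ np.kron(C, F), np.kron(A @ C, B @ F))

# Property 7 (three factors): (A1 (x) B1 (x) C1) . (A2 (x) B2 (x) C2)
#                             = (A1.A2) (x) (B1.B2) (x) (C1.C2).
A1, A2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 2))
B1, B2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
C1, C2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 3))
lhs = np.kron(np.kron(A1, B1), C1) @ np.kron(np.kron(A2, B2), C2)
rhs = np.kron(np.kron(A1 @ A2, B1 @ B2), C1 @ C2)
assert np.allclose(lhs, rhs)

# Property 9: rank(A (x) B) = rank(A) * rank(B).
assert np.linalg.matrix_rank(np.kron(A, B)) == \
       np.linalg.matrix_rank(A) * np.linalg.matrix_rank(B)
print("Properties 5, 7 and 9 verified numerically")
```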
1.3.3 Properties concerning square matrices
Next we consider only square matrices, that is, of the form $A_{m,m}$ and $B_{p,p}$.
The most important properties for these matrices are (a numerical check follows the list):

1. $(A\otimes I_p)\bullet(I_m\otimes B) = (I_m\otimes B)\bullet(A\otimes I_p) = A\otimes B$.
2. $\det(A\otimes B) = (\det A)^p(\det B)^m = (\det B)^m(\det A)^p = \det(B\otimes A)$.
3. $\operatorname{trace}(A\otimes B) = (\operatorname{trace}A)(\operatorname{trace}B) = (\operatorname{trace}B)(\operatorname{trace}A) = \operatorname{trace}(B\otimes A)$.
4. $(A\otimes B)^{-1} = A^{-1}\otimes B^{-1}$, where one must be aware of the order, and $A$
   and $B$ must be regular matrices.
5. Remembering the meaning of the notation $A^k$ and $A^{[k]}$ introduced in
   Property 8 above, Property 6 of that section for square matrices becomes
   $(A\otimes B)^k = A^k\otimes B^k$.
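Here is the corresponding minimal sketch for the square-matrix identities above, with two random regular matrices of different orders:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 3, 2
A = rng.normal(size=(m, m)) + m * np.eye(m)   # almost surely regular
B = rng.normal(size=(p, p)) + p * np.eye(p)
K = np.kron(A, B)

# Property 1: (A (x) I_p) . (I_m (x) B) = A (x) B.
assert np.allclose(np.kron(A, np.eye(p)) @ np.kron(np.eye(m), B), K)

# Property 2: det(A (x) B) = (det A)^p (det B)^m.
assert np.isclose(np.linalg.det(K),
                  np.linalg.det(A) ** p * np.linalg.det(B) ** m)

# Property 3: trace(A (x) B) = trace(A) trace(B).
assert np.isclose(np.trace(K), np.trace(A) * np.trace(B))

# Property 4: (A (x) B)^(-1) = A^(-1) (x) B^(-1).
assert np.allclose(np.linalg.inv(K),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
print("square-matrix Kronecker properties verified")
```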
1.3.4 Properties concerning eigenvalues and eigenvectors

Let $\{\lambda_i\mid i=1,2,\ldots,m\}$ and $\{\mu_j\mid j=1,2,\ldots,p\}$ be the sets of eigenvalues
of $A_{m,m}$ and $B_{p,p}$, respectively. If $v_i$ (column matrix) is an eigenvector of $A_m$
of eigenvalue $\lambda_i$ and $w_j$ (column matrix) is an eigenvector of $B_p$ of eigenvalue
$\mu_j$, that is, if $A_m\bullet v_i = \lambda_i\circ v_i$ and $B_p\bullet w_j = \mu_j\circ w_j$, then we have:

1. The set of eigenvalues of the matrix $A\otimes B$ is the set
   $$\{\lambda_i\mu_j\mid i=1,2,\ldots,m;\ j=1,2,\ldots,p\}. \qquad (1.26)$$
2. The set of eigenvalues of the matrix $Z = (A\otimes I_p)+(I_m\otimes B)$ is the set
   $$\{\lambda_i+\mu_j\mid i=1,2,\ldots,m;\ j=1,2,\ldots,p\}. \qquad (1.27)$$

Remark 1.2. The matrix $A$ can be replaced by the matrix $A^t$ and the matrix
$B$ by the matrix $B^t$. $\square$

3. The set of eigenvectors of the matrix $A\otimes B$ is the set
   $$\{v_i\otimes w_j\mid i=1,2,\ldots,m;\ j=1,2,\ldots,p\}.$$
   Proof.
   $$(A\otimes B)\bullet(v_i\otimes w_j) = (A\bullet v_i)\otimes(B\bullet w_j) = (\lambda_i\circ v_i)\otimes(\mu_j\circ w_j) = (\lambda_i\mu_j)\circ(v_i\otimes w_j),$$
   which shows that the $v_i\otimes w_j$ are the eigenvectors of $A\otimes B$.
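Properties 1-3 can be illustrated numerically as well; a minimal sketch with two small symmetric matrices (chosen symmetric only so that all eigenvalues are real and easy to compare):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)); A = A + A.T      # symmetric => real spectrum
B = rng.normal(size=(2, 2)); B = B + B.T

lam, V = np.linalg.eigh(A)                    # A v_i = lambda_i v_i
mu, W = np.linalg.eigh(B)                     # B w_j = mu_j w_j

# Property 1: eigenvalues of A (x) B are the products lambda_i * mu_j  (1.26).
assert np.allclose(np.sort(np.outer(lam, mu).ravel()),
                   np.sort(np.linalg.eigvalsh(np.kron(A, B))))

# Property 2: eigenvalues of Z = (A (x) I) + (I (x) B) are the sums
# lambda_i + mu_j  (1.27).
Z = np.kron(A, np.eye(2)) + np.kron(np.eye(3), B)
assert np.allclose(np.sort(np.add.outer(lam, mu).ravel()),
                   np.sort(np.linalg.eigvalsh(Z)))

# Property 3: v_i (x) w_j is an eigenvector of A (x) B of eigenvalue lambda_i mu_j.
v, w = V[:, 0], W[:, 1]
assert np.allclose(np.kron(A, B) @ np.kron(v, w),
                   lam[0] * mu[1] * np.kron(v, w))
print("Kronecker eigenvalue and eigenvector properties verified")
```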
Example 1.3 (Eigenvalues). Consider the tridiagonal symmetric matrix $A_{n,n}$
of order $n$, which is also called the finite difference matrix of order $n$,

$$A_{n,n} = \begin{bmatrix} 2 & -1 & 0 & \cdots & 0 & 0\\ -1 & 2 & -1 & \cdots & 0 & 0\\ 0 & -1 & 2 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1\\ 0 & 0 & 0 & \cdots & -1 & 2 \end{bmatrix},$$

and let $I_n$ be the unit matrix. The matrix

$$L_{n^2,n^2} = (A_{n,n}\otimes I_n) + (I_n\otimes A_{n,n})$$

is called the Laplace discrete bidimensional matrix. Since the eigenvalues of
matrix $A_{n,n}$ are

$$\lambda_i = 4\sin^2\frac{\pi i}{2(n+1)};\quad i=1,2,\ldots,n,$$

and in this case $A=B=A_{n,n}$, according to Property 2 above, the set of
eigenvalues of $L_{n^2,n^2}$ is

$$\{\lambda_{ij}\} \equiv \left\{4\left(\sin^2\frac{\pi i}{2(n+1)} + \sin^2\frac{\pi j}{2(n+1)}\right)\right\};\quad i,j=1,2,\ldots,n.\ \square$$
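The example can be reproduced in a few lines; the following minimal sketch assembles $A_{n,n}$ and the Laplace discrete bidimensional matrix by Kronecker products and compares its spectrum with the closed-form eigenvalues above (n = 5 is an arbitrary choice).

```python
import numpy as np

n = 5
# Finite difference matrix of order n: 2 on the diagonal, -1 on the off-diagonals.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Laplace discrete bidimensional matrix L = (A (x) I_n) + (I_n (x) A).
L = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)

# Closed-form eigenvalues: 4 [ sin^2(pi i / (2(n+1))) + sin^2(pi j / (2(n+1))) ].
i = np.arange(1, n + 1)
lam_A = 4 * np.sin(np.pi * i / (2 * (n + 1))) ** 2
lam_L = np.sort((lam_A[:, None] + lam_A[None, :]).ravel())

assert np.allclose(np.sort(np.linalg.eigvalsh(L)), lam_L)
print("spectrum of the discrete 2-D Laplace matrix matches the closed form")
```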
