VIETNAM NATIONAL UNIVERSITY
UNIVERSITY OF SCIENCE
FACULTY OF MATHEMATICS, MECHANICS AND INFORMATICS
Azat Yazgulyyev
RIGHT INVERTIBLE OPERATORS
AND INTERPOLATION PROBLEMS
IN LINEAR SPACES
Undergraduate Thesis
Undergraduate Program in Mathematics
Hanoi - 2012
VIETNAM NATIONAL UNIVERSITY
UNIVERSITY OF SCIENCE
FACULTY OF MATHEMATICS, MECHANICS AND INFORMATICS
Azat Yazgulyyev
RIGHT INVERTIBLE OPERATORS
AND INTERPOLATION PROBLEMS
IN LINEAR SPACES
Undergraduate Thesis
Advanced Undergraduate Program in Mathematics
Thesis advisor: Prof. Dr. Hab. Nguyen Van Mau
Hanoi - 2012
Acknowledgements
First and foremost, I am deeply grateful to my advisor, Prof. Dr. Hab. Nguyen
Van Mau. His patient guidance and kind support made this work possible. I
am thankful for the chance to study with him and appreciate all the help he has
given me during the preparation of this thesis. Thank you for your dedication,
patience, motivation and immense knowledge, and for your encouragement to
finish this thesis!
I also want to thank the teachers at Horizon School for all the help they have
given me during my studies. I am very lucky to have their support.
I wish to thank the other teachers at the Mathematics Department of the
University of Science for their teaching, their continuous support, and the
tremendous research and study environment they have created. I also thank my
classmates for their friendship and suggestions. I will never forget their care
and kindness. Thank you for all the help and for making the class like a family.
Last but not least, I would like to express my deepest gratitude to my family.
Without their unconditional love and support, I would not have been able to
accomplish what I have.
Hanoi, November 13th, 2012
Student.
Azat Yazgulyyev.
Contents
Acknowledgements i
Contents ii
Introduction 1
1 Calculus of right invertible operators 4
1.1 Introduction to Algebraic Analysis . . . . . . . . . . . . . . . . . . . 4
1.2 D-polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Interpolation problems 10
2.1 Property (c) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Polynomial Interpolation Problems . . . . . . . . . . . . . . . . . . . 14
3 Some Applications in Analysis 16
3.1 The criterion for initial operators to possess the (generalized) c(R)-
property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2 Hermite interpolation problem . . . . . . . . . . . . . . . . . . . . . 20
3.3 Lagrange interpolation problem . . . . . . . . . . . . . . . . . . . . . 20
3.4 Newton interpolation problem . . . . . . . . . . . . . . . . . . . . . . 21
3.5 Taylor interpolation problem . . . . . . . . . . . . . . . . . . . . . . . 22
3.6 Logarithmic and Antilogarithmic mappings . . . . . . . . . . . . . . 24
3.7 Trigonometric elements and mappings . . . . . . . . . . . . . . . . . 26
Bibliography 32
Introduction
The theory of right invertible operators (algebraic analysis) began with the
works of D. Przeworska-Rolewicz and was then studied and developed by many
other mathematicians. The algebraic theory of (generalized) invertible operators
was studied by Ross Caradus, Zuhair Nashed, Otmar Scherzer, Nguyen Van Mau,
Nguyen Minh Tuan and other scientists.
This thesis concentrates on the theory of right invertible operators and in-
terpolation problems with right invertible operators in linear spaces.
Algebraic analysis is an algebra-based theory unifying many different gen-
eralizations of derivatives and integrals (not necessarily continuous ones). The
main concepts of this algebraic formulation are right invertible linear operators,
their right inverses and the associated initial operators. Right invertible operators
are considered to be algebraic counterparts of derivatives, and their right inverses
together with initial operators correspond to the idea of integration. For the
reader's convenience we give a brief account of the basic concepts and definitions
related to right invertible operators.
The word 'operator' is used in many fields, such as biology, physics, linguistics
and computer programming, with a different meaning in each. In basic
mathematics an operator is a symbol or function representing a mathematical
operation; here it means a function between vector spaces. In terms of vector
spaces, an operator is a mapping from one vector space (or module) to another.
Operators are of critical importance to both linear algebra and functional
analysis, and they have numerous applications in many other fields of pure and
applied mathematics.
Let us recall the definition of a linear operator. The most common kind of
operators encountered are linear operators. Let U and V be vector spaces over a
field 𝒦. An operator

A : U → V

is called linear if

A(αx + βy) = αAx + βAy, ∀x, y ∈ U, ∀α, β ∈ 𝒦.
Now, let us give a definition of right invertible operators.
Let X be a linear space over ℝ and L(X) the family of all linear operators
in X whose domains are linear subspaces of X. For any A ∈ L(X), let D_A
denote the domain of A and let L_0(X) = {A ∈ L(X) : D_A = X}. By the space of
constants of an operator D ∈ L(X) we shall mean the set Z_D = ker D.
A linear operator D ∈ L(X) is said to be right invertible if

DR = I

for some linear operator R ∈ L_0(X), called a right inverse of D, where I = id_X.
The family of all right invertible operators in X will be denoted by R(X).
Next we give a definition of the interpolation problem. In the mathematical
field of numerical analysis, interpolation is a method of constructing new data
points within the range of a discrete set of known data points.
One often has a number of data points, obtained by sampling or experimen-
tation, which represent the values of a function for a limited number of values
of the independent variable. It is often required to interpolate (i.e. estimate)
the value of that function at an intermediate value of the independent variable.
This may be achieved by curve fitting (if more than two data points are known)
or by regression analysis.
In other words, interpolation is simply the estimation of a value between two
known data points. A simple example is averaging two population counts made
10 years apart to estimate the population in the fifth year.
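The population example can be sketched as a one-line linear interpolation. This is a minimal illustration only; the numeric counts are made up for the example.

```python
def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate between the known points (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Population counted in year 0 and year 10; estimate year 5.
# At the midpoint, the interpolated value is exactly the mean of the two counts.
estimate = lerp(0, 10000, 10, 14000, 5)
assert estimate == 12000.0
```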
Interpolation problems are an important part of algebra and calculus. They
play a big role in mathematics not only as subjects of study but also as a
powerful tool in continuous and discrete models of the calculus, in the theory of
equations, in approximation theory, and elsewhere.
This thesis consists of three chapters:
∙ The first chapter introduces the theory of right invertible operators (in par-
ticular, examples, definitions and some properties) together with initial
operators and D-polynomials.
∙ The second chapter is about interpolation problems induced by right invert-
ible operators. We give the definition of the c(R)-property and discuss
interpolation problems induced by right invertible operators. Furthermore,
since the general problem is complicated, we consider some classical
interpolation problems (special cases of the general interpolation problem):
the Taylor, Lagrange, Newton and Hermite interpolation problems.
∙ In the last chapter, we give criteria for a system of initial operators to
possess the c(R)-property and the generalized c(R)-property. We present
some applications of interpolation problems with right invertible operators
in analysis, and give some properties of logarithmic and antilogarithmic
mappings in order to show how to solve equations with linear combinations
of right invertible operators in commutative algebras, with concrete exam-
ples using trigonometric elements.
[Photographs: Prof. Dr. Hab. Nguyen Van Mau and Prof. D. Przeworska-Rolewicz]
Sadly, Prof. Danuta Przeworska-Rolewicz passed away several months ago.
CHAPTER 1
Calculus of right invertible operators
D. Przeworska-Rolewicz developed an algebra-based theory of linear, not
necessarily continuous, operators D : X → X which admit a right inverse. The
term "algebraic analysis" is used by many authors to indicate an algebraic ap-
proach to analytic problems, and it is used in many different senses. In the
present thesis we use this term in the sense of D. Przeworska-Rolewicz, since
our main interest is the calculus of right invertible operators. To be precise,
below we state some fundamental facts of algebraic analysis.
1.1 Introduction to Algebraic Analysis
Let X be a linear space over ℝ and L(X) the family of all linear operators
in X whose domains are linear subspaces of X. For any A ∈ L(X), let D_A
denote the domain of A and let L_0(X) = {A ∈ L(X) : D_A = X}.
A linear operator D ∈ L(X) is said to be right invertible if DR = I for some
linear operator R ∈ L_0(X), called a right inverse of D, where I = id_X. The
family of all right invertible operators in X will be denoted by R(X). If two
right inverses commute with each other, then they are equal. It is known that a
power of a right invertible operator is again right invertible, as is a polynomial
in a right invertible operator under appropriate assumptions. However, a linear
combination of right invertible operators (their sum and/or difference) is in
general not right invertible.
Elements of ker D are said to be constants, since by definition Dz = 0 if and
only if z ∈ ker D. The kernel of D is called the space of constants: by the space
of constants of an operator D ∈ L(X) we shall mean the set Z_D = ker D. We
should point out that, in general, constants are different from scalars, since
they are elements of the space X.
Furthermore, by ℛ_D = {R_γ}_{γ∈Γ} we denote the family of all right inverses of
a given D ∈ R(X). If R ∈ ℛ_D is a given right inverse of D ∈ R(X), the family ℛ_D
is characterized by

ℛ_D = {R + (I − RD)A : A ∈ L_0(X)}.    (1.1)

Consider a family of right invertible operators D_i ∈ R(X) and a corresponding
family of their right inverses R_i ∈ ℛ_{D_i} for i = 1, . . . , n and some n ∈ ℕ. Then the
composition D = D_1 · · · D_n is right invertible, i.e. D ∈ R(X), and one of its right
inverses R ∈ ℛ_D is given by

R = R_n · · · R_1.    (1.2)
For any x, y ∈ X we say that y is a primitive element of x whenever Dy = x.
Thus the element Rx is a primitive element of x, for any x ∈ X and R ∈ ℛ_D. The
set

I(x) = {y ∈ X : Dy = x}    (1.3)

is called the indefinite integral of a given x ∈ X. One can easily check that

ℛ_D x = {Rx + (I − RD)Ax : A ∈ L_0(X)} = Rx + Z_D    (1.4)

for any R ∈ ℛ_D and any element x ∈ X. Hence we obtain

I(x) = ℛ_D x = Rx + Z_D    (1.5)

for any x ∈ X and R ∈ ℛ_D.
In other words, we can simply say that the invertibility of an operator
A ∈ L(X) means that the equation Ax = y has a unique solution for every y ∈ X.
An element y ∈ dom D is said to be primitive for an x ∈ X if y = Rx for some
R ∈ ℛ_D. Indeed, by definition, x = DRx = Dy. Again by definition, every x ∈ X
has a primitive.
Definition 1.1 An operator F ∈ L_0(X) is said to be an initial operator of
D ∈ R(X) if the following conditions are satisfied:

1. Im F = ker D and F² = F;

2. there exists an R ∈ ℛ_D such that FR = 0.

For every operator D ∈ R(X), denote by ℱ_D the set of all initial operators of D.
In other words, any projection F ∈ L_0(X) onto Z_D, i.e. any F with F² = F and
Im F = Z_D, is said to be an initial operator induced by D ∈ R(X), and the family
of all such operators is denoted by ℱ_D. For an initial operator F and x ∈ X, the
element Fx ∈ Z_D is called the initial value of x. Additionally, we say that an
initial operator F ∈ ℱ_D corresponds to R ∈ ℛ_D if FR = 0, or equivalently if

F = I − RD    (1.6)

on the domain of D. The two families ℛ_D and ℱ_D uniquely determine each other.
Indeed, for any R ∈ ℛ_D we define F = I − RD ∈ ℱ_D. On the other hand, for any
F ∈ ℱ_D we define R = R_1 − FR_1, where R_1 ∈ ℛ_D is arbitrary, since the result is
independent of the choice of R_1. Thus for any γ ∈ Γ we have F_γ = I − R_γ D and
consequently ℱ_D = {F_γ}_{γ∈Γ}.
By a simple calculation one can verify that F_α F_β = F_β and F_β R_α = R_α − R_β for
any α, β ∈ Γ. Hence, for any indices α, β, γ ∈ Γ, we have

F_β R_γ − F_α R_γ = F_β R_α,    (1.7)

which means that the left-hand side of equation (1.7) is in fact independent of γ.
This last property allows one to define the following definite integration operator:

I_α^β = F_β R_γ − F_α R_γ    (1.8)

for any α, β, γ ∈ Γ. Amongst the many properties of the operator I_α^β we mention
the most intuitive one, namely

I_α^β D = F_β − F_α.    (1.9)

Hence for any x ∈ X and an arbitrary primitive element y ∈ X of x, i.e. Dy = x, we
get

I_α^β x = F_β y − F_α y,    (1.10)

which is called the definite integral of x.
To intuitively demonstrate the basic concepts mentioned above, we give an
example using the usual derivative operator D = d/dx.

Example 1.2 (cf. [16]) Let X = 𝒞⁰(ℝ), the space of all continuous real functions,
and D = d/dx. Then the domain of D is D_D = 𝒞¹(ℝ), the space of all real functions
having a continuous derivative, and the set (linear subspace) of all constants of D is
Z_D = {f ∈ X : f is a constant function}. Since Z_D is a 1-dimensional linear space
over ℝ, we shall use the identification Z_D ≡ ℝ. Thus the initial operators F in this
example are projections of X onto ℝ.
In the next section we define and analyze, in the sense of algebraic analysis,
polynomials corresponding to a given family of right invertible operators,
generalizing the usual polynomials of several variables.
1.2 D-polynomials
Right invertible operators are considered to be algebraic counterparts of deriva-
tives, and their right inverses together with initial operators correspond to the
idea of integration. Some examples covered by algebraic analysis are the usual
differential and integral calculus and generalized differential calculus in rings.
One can associate the concept of D-polynomials with a fixed right invertible
operator D defined in a linear space X. However, the D-polynomials and definite
integrals associated with a single right invertible operator D constitute the
algebraic counterparts of mathematical analysis for functions of one variable.
So there is a need to extend algebraic analysis to functions of many variables.
To begin in this direction we replace a single operator D by a fixed family 𝒟 of
right invertible operators and study the corresponding 𝒟-polynomials.
Preliminaries
Let X be a linear space over a field 𝒦 and L(X) the family of all linear
mappings

D : U → V

for U, V linear subspaces of X. We shall use the notation dom(D) = U,
codom(D) = V and Im D = {Du : u ∈ U} for the domain, codomain and image of
D, respectively. We denote

ℕ = {1, 2, 3, . . .} and ℕ_0 = {0, 1, 2, 3, . . .}.    (1.11)

Whenever D_1 = · · · = D_m = D ∈ L(X), we write D^m = D_1 · · · D_m for m ∈ ℕ, and
D⁰ = I = id_{dom(D)}. By the space of constants for D ∈ L(X) we mean the family

Z(D) = ker D.    (1.12)

For any D ∈ L(X) and m ∈ ℕ, we adopt the notation

Z_0(D) = ker D ∖ {0} and Z_m(D) = ker D^{m+1} ∖ ker D^m.    (1.13)

Obviously, for any D ∈ L(X) we have

Z_i(D) ∩ Z_j(D) = ∅ whenever i ̸= j.    (1.14)
Proposition 1.3 Let D ∈ L(X), m ∈ ℕ_0 and Z_i(D) ̸= ∅ for i = 0, 1, . . . , m. Then
any elements u_i ∈ Z_i(D), i = 0, . . . , m, are linearly independent.
Proof: Consider a linear combination u = λ_0 u_0 + · · · + λ_m u_m and suppose that
u = 0 for some coefficients λ_0, . . . , λ_m ∈ 𝒦. Hence we obtain the sequence of
equations

D^k u = λ_k D^k u_k + · · · + λ_m D^k u_m = 0 for k = 0, . . . , m.

Step by step, from these equations we compute λ_m = 0, . . . , λ_0 = 0.
Let us define

R(X) = {D ∈ L(X) : codom(D) = Im D},    (1.15)

i.e. each element D ∈ R(X) is considered to be a surjective mapping (onto its
codomain). Thus R(X) consists of all right invertible elements. Indeed, it is
enough to know one right inverse in order to determine all right inverses and all
initial operators. Note that a superposition of a finite number of right invertible
operators is again a right invertible operator.
Definition 1.4 An operator R ∈ L(X) is said to be a right inverse of D ∈ R(X) if
dom(R) = Im(D) and DR = I ≡ id_{Im(D)}. By ℛ_D we denote the family of all right
inverses of D. In fact, ℛ_D is a nonempty family, since for each y ∈ Im(D) we can
select an element x ∈ D^{−1}(y) and define R ∈ ℛ_D by

R : y ↦ x.
Definition 1.5 Any element F ∈ L(X) such that dom(F) = dom(D), Im(F) = Z(D)
and F² = F is said to be an initial operator induced by D ∈ R(X). We say that an
initial operator F corresponds to a right inverse R ∈ ℛ_D whenever FR = 0, or
equivalently

F = I − RD.    (1.16)

Initial operators play a very important role in the calculus of right invertible
operators. If two initial operators commute with each other, then they are equal.
The family of all initial operators induced by D will be denoted by ℱ_D.
The above formula characterizes initial operators by means of right inverses,
whereas the formula

R = R′ − FR′,    (1.17)

which is independent of R′, characterizes right inverses by means of initial
operators. Both families ℛ_D and ℱ_D are fully characterized by the formulas

ℛ_D = {R + FA : dom A = Im D, A ∈ L(X)},    (1.18)

ℱ_D = {F(I − AD) : dom A = Im D, A ∈ L(X)},    (1.19)

where R ∈ ℛ_D and F ∈ ℱ_D are fixed arbitrarily. Let us give some examples to
illustrate the above concepts.
Example 1.6 Let X = ℝ^ℝ (the linear space of all real functions) and let D ∈ R(X) be
the usual derivative, i.e. Dx(t) = x′(t), with dom(D) the subset of X consisting of all
differentiable functions. Then for an arbitrarily fixed a ∈ ℝ, the formula
Rx(t) = ∫_a^t x(s) ds defines a right inverse R ∈ ℛ_D, and the initial operator F ∈ ℱ_D
corresponding to R is given by Fx(t) = x(a).
Example 1.7 Let X = ℝ^ℕ (the linear space of all sequences) and let D ∈ R(X) be the
difference operator, i.e. (Dx)_n = x_{n+1} − x_n for n ∈ ℕ. A right inverse R ∈ ℛ_D is
defined by the formulas (Rx)_1 = 0 and (Rx)_{n+1} = ∑_{i=1}^n x_i, while (Fx)_n = x_1
defines the initial operator F ∈ ℱ_D corresponding to R. We note that nontrivial initial
operators exist only for operators which are right invertible but not invertible. The
family of all such operators is

R_+(X) = {D ∈ R(X) : dim Z(D) > 0}.    (1.20)
Proposition 1.8 Let D ∈ R(X) and R ∈ ℛ_D. Then R is not a nilpotent operator.

Proof: Suppose that R^n ̸= 0 and R^{n+1} = 0 for some n ∈ ℕ. Then
0 ̸= R^n = IR^n = DRR^n = DR^{n+1} = 0, a contradiction.
CHAPTER 2
Interpolation problems
It is well known that the classical interpolation problems were born very early,
starting with the works of Newton and Lagrange. But we should mention that
the construction of general interpolation problems, algorithms for finding
their solutions, and theories related to interpolation are, in general, still
being researched and developed by mathematicians. The general interpolation
problem induced by right invertible operators with initial operators was first
introduced and considered by D. Przeworska-Rolewicz in 1988. As mentioned
before, the initial operators of right invertible operators play a huge role in
dealing with interpolation problems. Moreover, we can say that interpolation
problems play a very important role in establishing polynomials satisfying
systems of special conditions.
2.1 Property (c)
Here we give a necessary and sufficient condition for the determinant induced
by a system of initial operators with the property c(R) to be different from zero.
This will also lead us to a necessary and sufficient condition for the general
interpolation problem to have a unique solution.
Definition 2.1 Let D ∈ R(X). An initial operator F_0 for D has the property c(R)
for an R ∈ ℛ_D if there exist scalars c_k such that

F_0 R^k z = (c_k / k!) z for all z ∈ ker D, k ∈ ℕ,    (2.1)

and c_k = 0 for all k ∈ ℕ if F_0 = F, where F is the initial operator for D
corresponding to R. We then write F_0 ∈ c(R). A set ℱ_D^0 ⊂ ℱ_D has the property
(c) if for every F_0 ∈ ℱ_D^0 there exists an R ∈ ℛ_D such that F_0 ∈ c(R). We put
c_0 = 1, since F_0 z = z for all z ∈ ker D.
Definition 2.2 Let D ∈ R(X). The set ℱ_D of all initial operators has the property
(c) if and only if dim ker D = 1.
Clearly, if the system {F_0, . . . , F_{N−1}} ⊂ ℱ_D has the property c(R) for an R ∈ ℛ_D
with constants d_ik, i.e.

F_i R^k z = (d_ik / k!) z for i = 0, . . . , N − 1, k ∈ ℕ_0,    (2.2)

and F_0, . . . , F_{N−1} are linearly independent, then one may consider the determinant

V_N = det(d_ik)_{i,k=0,...,N−1}.    (2.3)
The following question was stated by D. Przeworska-Rolewicz:
Is the determinant V_N different from zero for any system {F_0, . . . , F_{N−1}} of
linearly independent initial operators having the property c(R)?
The answer is indeed positive for the case N = 1 (by definition, d_00 = 1). But,
in general, the answer for the case N ≥ 2 is negative (N. V. Mau; see [9],
Chapter 6).
Naturally, the following question arises: does there exist a subspace X_0 ⊂ X
such that the matrix (d_ik), i, k = 0, . . . , N − 1, has non-zero determinant if and
only if the restrictions of the initial operators F_0, . . . , F_{N−1} to X_0 are linearly
independent?
We will show that the answer is positive, via the following theorem.
Theorem 2.3 Put

P_N(R) = lin {R^k z : z ∈ ker D, k = 0, . . . , N − 1},    (2.4)

or, written out more explicitly,

P_N(R) = ker D + R(ker D) + · · · + R^{N−1}(ker D).

Suppose F_0, . . . , F_{N−1} ∈ ℱ_D have the property (c). Then a necessary and
sufficient condition for V_N ̸= 0, where V_N is defined by (2.3), is that
F_0, . . . , F_{N−1} are linearly independent on P_N(R).
The proof is based on the following lemma.

Lemma 2.4 Suppose F_0, . . . , F_{N−1} ∈ ℱ_D have the property c(R) for an R ∈ ℛ_D.
Put

F′_i = (F_i, F_i R, . . . , F_i R^{N−1}) for i = 0, . . . , N − 1,    (2.5)

d_i = (d_{i0}, d_{i1}, . . . , d_{i,N−1}) for i = 0, . . . , N − 1,    (2.6)

where the d_ik are defined by (2.2). Then the vectors F′_0, . . . , F′_{N−1} are linearly
independent on ker D if and only if the vectors d_0, d_1, . . . , d_{N−1} are linearly
independent.

Proof: [cf. [9], Chapter 6]
Corollary 2.5 Let D ∈ R(X), R ∈ ℛ_D and F_0, . . . , F_{N−1} ∈ c(R). Then V_N defined
by (2.3) is non-zero if and only if the operators F_0 R^k, F_1 R^k, . . . , F_{N−1} R^k are
linearly independent on ker D for each k (0 ≤ k ≤ N − 1).

Proof: [cf. [9], Chapter 6]
Proof of Theorem 2.3. Suppose that V_N ̸= 0. Then, by the Corollary, the vectors
F′_0, F′_1, . . . , F′_{N−1} of the form (2.5) are linearly independent on ker D. This
means that the operators F_0 R^j, . . . , F_{N−1} R^j are linearly independent on ker D
for each j ∈ {0, 1, . . . , N − 1}, i.e. F_0, . . . , F_{N−1} are linearly independent on
the set

ker D + R(ker D) + · · · + R^{N−1}(ker D) = P_N(R).

Conversely, suppose F_0, . . . , F_{N−1} ∈ c(R) are linearly independent on P_N(R). By
the Corollary, we just need to show that the system of vector operators

F′_i = (F_i, F_i R, . . . , F_i R^{N−1}) (i = 0, 1, . . . , N − 1)

is linearly independent on ker D. Suppose that

∑_{i=0}^{N−1} β_i F_i R^j z = 0 for all z ∈ ker D and j = 0, . . . , N − 1,

where β_i ∈ ℂ. Because j ∈ {0, . . . , N − 1} is arbitrary, for any scalars
α_0, . . . , α_{N−1} we get

∑_{i=0}^{N−1} β_i F_i (∑_{j=0}^{N−1} α_j R^j z) = 0,

i.e.

∑_{i=0}^{N−1} β_i F_i x = 0 for every x ∈ P_N(R).

By the assumption of linear independence on P_N(R), we obtain
β_0 = · · · = β_{N−1} = 0, and the proof is complete.
Lemma 2.6 Let D ∈ R(X) with dim ker D = 1, let R ∈ ℛ_D, and let F ∈ ℱ_D be the
initial operator corresponding to R. Then F_1 RX = ker D for every F_1 ∈ ℱ_D with
F_1 ̸= F.

Proof: Since dim ker D = 1, we have F_1 R^k z = c_k z for all z ∈ ker D, where
c_k ∈ 𝒦 (k = 1, 2, . . .). If c_k ̸= 0 for some k ∈ ℕ, we conclude that
F_1 R^k (ker D) = ker D. On the other hand,
F_1 R^k (ker D) ⊂ F_1 R^k X ⊂ F_1 RX ⊂ F_1 X = ker D. Thus F_1 RX = ker D.
Every initial operator F of a right invertible operator D possesses the
(c)-property if and only if dim ker D = 1. In particular, if D is a right invertible
operator in a linear space X with dim ker D ≥ 2, then there exists a class of
initial operators of D which do not possess the (c)-property. (At the beginning
of the next chapter we give some conditions related to possessing the
generalized c(R)-property.)
Definition 2.7 Let D ∈ R(X) and R ∈ ℛ_D. We say that the operator F ∈ ℱ_D
possesses the c(R)-property if for every k ∈ ℕ there exists c_k ∈ 𝒦 such that
FR^k z = c_k z for all z ∈ ker D, where R⁰ = I.
Definition 2.8 Let D ∈ R(X), R ∈ ℛ_D, and let F_i ∈ ℱ_D, i = 1, 2, . . . , n. The system
of initial operators {F_i}_{i=1,...,n} is said to possess the generalized c(R)-property
if there are nontrivial subspaces Z_1, Z_2, . . . , Z_p of ker D such that the following
conditions hold:

1. ker D = ⊕_{ν=1}^p Z_ν.

2. For any i = 1, 2, . . . , n and j ∈ ℕ there exist c_{ijν} ∈ 𝒦, ν = 1, 2, . . . , p,
such that

F_i R^j z = c_{ijν} z for all z ∈ Z_ν.

From the above definition it follows that if every initial operator
F_1, F_2, . . . , F_n possesses the c(R)-property, then the system of initial operators
{F_i}_{i=1,...,n} possesses the generalized c(R)-property [see the next chapter].
Example 2.9 Put X = C(ℝ), D = d²/dt², and let R be the corresponding double
integral, (Rx)(t) = ∫_0^t ∫_0^s x(u) du ds. Then dim ker D = 2 and e_1 = 1, e_2 = t are
basis vectors of ker D. Consider the operators F_k given by

(F_k x)(t) = x(0) + t x′(0) + ½(1 + t) x″(k) + ½(1 − t) x″(−k), k = 1, 2.

Clearly F_k ∈ ℱ_D, ker D = lin{e_1} ⊕ lin{e_2}, and

F_k R^j e_1 = (k^{2j−2} / (2j − 2)!) e_1,
F_k R^j e_2 = (k^{2j−1} / (2j − 1)!) e_2,

where k = 1, 2 and j ∈ ℕ. Thus the two initial operators F_1, F_2 possess the
generalized c(R)-property, but they do not possess the c(R)-property.
Note: The next chapter includes definitions and theorems (and their applica-
tions) for some classical interpolation problems. Let us first review polynomial
interpolation.
2.2 Polynomial Interpolation Problems
Briefly, the interpolation problem is nothing but a problem of solving a system
of linear equations, and we define it as follows:
Given k + 1 distinct points {x_j}_{0≤j≤k} in ℝ and a function f, find a polynomial
of degree less than or equal to k that agrees with f at these points.
The first question we face is whether the problem is well-posed (in particular,
does a solution exist and is it unique?). We usually use the Lagrange polynomials
to demonstrate existence. The Lagrange polynomials for {x_j}_{0≤j≤k} are defined
by

L_j(x) = ∏_{0≤i≤k, i̸=j} (x − x_i) / (x_j − x_i), 0 ≤ j ≤ k.    (2.7)
Since

L_j(x_i) = δ_{i,j}, 0 ≤ i, j ≤ k,    (2.8)

where δ_{i,j} is the Kronecker delta (δ_{i,j} = 1 if i = j and δ_{i,j} = 0 otherwise),

P(x) = ∑_{j=0}^k f(x_j) L_j(x)    (2.9)

is a polynomial of degree at most k that satisfies P(x_j) = f(x_j) for all 0 ≤ j ≤ k.
As for uniqueness, assume that Q(x) is another interpolating polynomial of degree
at most k. Then D = P − Q is a polynomial of degree at most k that vanishes at
the k + 1 distinct points {x_j}_{0≤j≤k}. This is possible only if D ≡ 0, i.e. if P and
Q are the same polynomial.
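The existence argument above is constructive and translates directly into code. A minimal sketch of (2.7) and (2.9); the sample nodes and values are chosen for illustration (they sample f(x) = x² + x + 1).

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange form (2.9): P(x) = sum_j ys[j] * L_j(x)."""
    total = 0.0
    for j, xj in enumerate(xs):
        # L_j(x) = prod over i != j of (x - x_i) / (x_j - x_i), as in (2.7)
        lj = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                lj *= (x - xi) / (xj - xi)
        total += ys[j] * lj
    return total

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]
# P agrees with f at every node: L_j(x_i) = delta_ij makes each term drop out exactly
for xj, yj in zip(xs, ys):
    assert lagrange_interpolate(xs, ys, xj) == yj
assert abs(lagrange_interpolate(xs, ys, 3.0) - 13.0) < 1e-9
```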
By (2.7), we obtain a closed-form formula for (2.9). But it has two drawbacks:

1. Because of the large number of divisions and multiplications, evaluating
the expression in (2.9) is inefficient.

2. If we denote by P_k the interpolating polynomial of f on the set of points
{x_j}_{0≤j≤k}, then at an intermediate point ξ it is impossible to reuse P_k(ξ)
in order to compute P_{k+1}(ξ); we need to start the computation from the
beginning.
Furthermore, we have an alternative form of the polynomial, called the Newton
form:

P(x) = ∑_{j=0}^k a_j ∏_{i=0}^{j−1} (x − x_i).    (2.10)

The coefficients in this representation are the so-called divided differences of f,

a_j = f[x_0, . . . , x_j], 0 ≤ j ≤ k,    (2.11)

where the divided differences of a function f are defined recursively by

f[x_0, . . . , x_j] = (f[x_1, . . . , x_j] − f[x_0, . . . , x_{j−1}]) / (x_j − x_0),    (2.12)

f[x_i] = f(x_i).    (2.13)
This form of the interpolating polynomial is much more practical than the
Lagrange form. Taylor expansion uses values of a function and its derivatives at
a single point to produce a polynomial approximation of f. If f(x) is (n + 1)
times differentiable, then

f(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + · · · + (f^{(n)}(a)/n!)(x − a)^n + ℛ_n,    (2.14)

where the remainder (error) term ℛ_n of the Taylor series is

ℛ_n = ∫_a^x f^{(n+1)}(t)(x − t)^n / n! dt = f^{(n+1)}(ξ)(x − a)^{n+1} / (n + 1)!, a ≤ ξ ≤ x.    (2.15)

The polynomial

T(x) = f(a) + f′(a)(x − a) + f″(a)(x − a)²/2! + · · · + f^{(n)}(a)(x − a)^n/n!    (2.16)

is called the n-th Taylor polynomial (expansion) of f around a.
Hermite interpolation is a method of approximating data points by a polyno-
mial function. This type of polynomial is also derived from the calculation of
divided differences. Newton interpolation uses the n values f(x_0), . . . , f(x_{n−1}),
whereas Hermite interpolation matches the n(m + 1) data

(x_0, y_0), (x_1, y_1), · · · , (x_{n−1}, y_{n−1})
(x_0, y′_0), (x_1, y′_1), · · · , (x_{n−1}, y′_{n−1})
⋮
(x_0, y_0^{(m)}), (x_1, y_1^{(m)}), · · · , (x_{n−1}, y_{n−1}^{(m)})

and yields a polynomial of degree at most n(m + 1) − 1.
CHAPTER 3
Some Applications in Analysis
3.1 The criterion for initial operators to possess
the (generalized) c(R)-property
Let D ∈ R(X) and let 1 < dim ker D = q < +∞. Denote by E = {e_m}_{m=1,...,q} a
system of basis vectors of ker D. Suppose that F ∈ ℱ_D is the initial operator of
D corresponding to an R ∈ ℛ_D. Note that FR^i ∈ L_0(X) for every i ∈ ℕ. By
restricting the domain to ker D, one can consider FR^i as an operator belonging
to L_0(ker D). So we write

T_i = FR^i |_{ker D}, i ∈ ℕ.    (3.1)

For every i ∈ ℕ assume that T_i e_j = ∑_{m=1}^q c_{ijm} e_m, where c_{ijm} ∈ 𝒦,
j = 1, 2, . . . , q, i ∈ ℕ. Put

C_i = [c_{ijm}]_{j,m=1,...,q},

i.e. the C_i are square matrices of order q. Then we say that the operators T_i are
represented by the matrices C_i under the system of basis vectors E = {e_m}_{m=1,...,q},
and we write T_i(E) = C_i(E).
Theorem 3.1 The operator F ∈ ℱ_D possesses the c(R)-property if and only if
C_i = α_i I, where α_i ∈ 𝒦 and I is the identity matrix on ker D.

In other words, the operator F ∈ ℱ_D possesses the c(R)-property if and only if
there is a system of basis vectors of ker D such that the matrices C_i are all
simultaneously diagonal.

Proof. Suppose that F ∈ ℱ_D possesses the c(R)-property. From Definition 2.7 it
follows that for any i ∈ ℕ there exists c_i ∈ 𝒦 such that

T_i e_j = c_i e_j, j = 1, 2, . . . , q.

Then

C_i = [δ_jm c_i]_{j,m=1,...,q},

where δ_jm is the Kronecker symbol. Thus the C_i are diagonal matrices.
Conversely, suppose that for every i ∈ ℕ the C_i defined by (3.1) are of the form
C_i = α_i I. We have

T_i e_j = C_i e_j = α_i e_j, j = 1, 2, . . . , q,

for all i ∈ ℕ. It means that F possesses the c(R)-property.
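The first form of the criterion in Theorem 3.1 (each C_i equals α_i I) is a purely mechanical matrix check. A minimal sketch, with toy 2×2 representing matrices made up for illustration; note that a family that is diagonal but not scalar mirrors Example 2.9, where the generalized c(R)-property holds without the c(R)-property.

```python
def is_scalar_matrix(C, tol=0.0):
    """True iff C = alpha * I for some scalar alpha (the criterion for one T_i)."""
    q = len(C)
    alpha = C[0][0]
    for j in range(q):
        for m in range(q):
            expected = alpha if j == m else 0
            if abs(C[j][m] - expected) > tol:
                return False
    return True

def has_cR_property(matrices):
    """F possesses the c(R)-property iff every representing matrix C_i is scalar."""
    return all(is_scalar_matrix(C) for C in matrices)

scalar_family = [[[2, 0], [0, 2]], [[5, 0], [0, 5]]]    # C_i = alpha_i * I
diagonal_only = [[[2, 0], [0, 3]], [[5, 0], [0, 5]]]    # diagonal but not scalar
assert has_cR_property(scalar_family)
assert not has_cR_property(diagonal_only)
```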
Now we deal with a system of initial operators. Let {F_i}_{i=1,...,n} be a system of
initial operators of D corresponding to R ∈ ℛ_D. Write

T_ij = F_i R^j |_{ker D}.

Suppose that

T_ij(e_m) = ∑_{k=1}^q c_{ijmk} e_k, i = 1, 2, . . . , n, m = 1, 2, . . . , q, j ∈ ℕ.    (3.2)

For every i = 1, 2, . . . , n and j ∈ ℕ we set

C_ij = [c_{ijmk}]_{m,k=1,...,q},    (3.3)

i.e. the C_ij are square matrices of order q. We say that the operators T_ij are
represented by the matrices C_ij under the system of basis vectors E = {e_m}_{m=1,...,q}.
Set

T_ij(E) = C_ij(E).    (3.4)
We have the following theorem:

Theorem 3.2 The system {F_i}_{i=1,...,n} possesses the generalized c(R)-property if
and only if there exists an invertible matrix S such that for every i = 1, 2, . . . , n
and j ∈ ℕ the matrices S^{−1} C_ij S are diagonal. In other words, {F_i}_{i=1,...,n}
possesses the generalized c(R)-property if and only if there exists a system of
basis vectors E* = {e*_m}_{m=1,...,q} of ker D such that all operators T_ij are
represented by diagonal matrices under the system E*.
Proof. See [17] for both necessity and sufficiency.
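The condition of Theorem 3.2 can be probed numerically. A common generic test (a sketch only, with hypothetical names; it can give a false negative in the non-generic case where the random linear combination has repeated eigenvalues) diagonalizes one random combination of the matrices and checks the resulting change of basis $S$ against all of them:

```python
import numpy as np

def simultaneously_diagonalizable(mats, tol=1e-9):
    """Generic check: is there an invertible S with S^{-1} C S diagonal
    for every C in mats?  Diagonalize a random linear combination and
    test its eigenvector matrix against all the matrices."""
    rng = np.random.default_rng(1)
    A = sum(rng.standard_normal() * C for C in mats)   # generic combination
    _, S = np.linalg.eig(A)                            # columns: eigenvectors
    for C in mats:
        M = np.linalg.solve(S, C @ S)                  # M = S^{-1} C S
        if not np.allclose(M, np.diag(np.diag(M)), atol=tol):
            return False
    return True

# Two matrices diagonalized by the same change of basis P: condition holds.
D1, D2 = np.diag([1.0, 2.0, 3.0]), np.diag([4.0, 5.0, 6.0])
P = np.array([[1.0, 1, 0], [0, 1, 1], [1, 0, 1]])
Pinv = np.linalg.inv(P)
print(simultaneously_diagonalizable([P @ D1 @ Pinv, P @ D2 @ Pinv]))  # True

# A non-diagonalizable (nilpotent) partner: condition fails.
Nil = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0.0]])
print(simultaneously_diagonalizable([D1, Nil]))                       # False
```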
Note that if the system of initial operators $\{F_i\}_{i=\overline{1,n}}$ possesses the generalized $c(R)$-property with respect to the subspaces $Z_1, Z_2, \dots, Z_p$, then this system also possesses the generalized $c(R)$-property with respect to the subspaces $\{Z_{vs}\}_{v=\overline{1,p},\, s=\overline{1,p_v}}$, where $Z_{vs} \subset Z_v$ and $p_v = \dim Z_v$. Hence the following problem arises. Let the system of initial operators $\{F_i\}_{i=\overline{1,n}}$ possess the generalized $c(R)$-property with respect to the subspaces $Z_1, Z_2, \dots, Z_p$.
Consider the following general interpolation problem. Given $n$ finite sets $I_i$ of non-negative integers not greater than $N - 1$, denote by $r_i$ the cardinality of the set $I_i$: $|I_i| = r_i$ for $i = 1, \dots, n$, and let $N = r_1 + \cdots + r_n$. We are looking for a $D$-polynomial $u$ of degree $N - 1$ satisfying the conditions
$$F_i D^k u = u_{ik} \quad (k \in I_i, \ i = 1, \dots, n), \tag{3.5}$$
for $n$ given different initial operators $F_1, \dots, F_n \in \mathcal{F}_D$, where the $u_{ik} \in \ker D$ are given, $u = z_0 + R z_1 + \cdots + R^{N-1} z_{N-1}$ for an $R \in \mathcal{R}_D$, and $z_0, z_1, \dots, z_{N-1} \in \ker D$ are to be determined.
Suppose that there exist scalars $d_{ik}$ satisfying
$$F_i R^k z = \frac{d_{ik}}{k!} z \quad \text{for all } z \in \ker D \ (k \in I_i, \ i = 1, \dots, n), \tag{3.6}$$
i.e. $F_1, \dots, F_n \in c(R)$. We assume that
$$I_i = \{k_{ij} : j = 1, \dots, r_i\}, \quad 0 \le k_{i1} < \cdots < k_{i r_i} \quad (i = 1, \dots, n), \tag{3.7}$$
and we can rewrite the condition (3.5) as
$$F_i D^{k_{ij}} u = u_{i k_{ij}} \quad (i = 1, \dots, n, \ j = 1, \dots, r_i).$$
We have
$$F_i D^{k_{ij}} u = \sum_{m=0}^{N-1} F_i D^{k_{ij}} R^m z_m = \sum_{m=0}^{k_{ij}-1} F_i D^{k_{ij}-m} z_m + \sum_{m=k_{ij}}^{N-1} F_i R^{m-k_{ij}} z_m,$$
where the first sum vanishes because $z_m \in \ker D$, and by (3.6)
$$\sum_{m=k_{ij}}^{N-1} F_i R^{m-k_{ij}} z_m = \sum_{m=k_{ij}}^{N-1} \frac{d_{i,m-k_{ij}}}{(m-k_{ij})!} z_m \quad (i = 1, \dots, n, \ j = 1, \dots, r_i).$$
Theorem 3.3 A necessary and sufficient condition for $\det \hat{G} \ne 0$, where $\hat{G}$ is the $(N \times N)$ matrix of the system, is that all the operators $F_{i k_{ij}}$ defined by $F_{i k_{ij}} = F_i D^{k_{ij}}$ $(i = 1, \dots, n, \ j = 1, \dots, r_i)$ are linearly independent on $P_N(R)$ defined by (2.4).
Next we formulate the main result for the general interpolation problem for a right invertible operator.
Theorem 3.4 The general interpolation problem has a unique solution for any $u_{i k_{ij}} \in \ker D$ $(i = 1, \dots, n, \ j = 1, \dots, r_i)$ if and only if the system of operators $\{F_i D^{k_{ij}}\}_{i=\overline{1,n};\, j=\overline{1,r_i}}$ is linearly independent on $P_N(R)$.
Proof: By the assumptions, for every pair $(i, j)$ $(i = 1, \dots, n; \ j = 1, \dots, r_i)$ of indices, we have
$$u_{i k_{ij}} = F_i D^{k_{ij}} u = \sum_{m=0}^{N-1} F_i D^{k_{ij}} R^m z_m = \sum_{m=k_{ij}}^{N-1} \frac{d_{i,m-k_{ij}}}{(m-k_{ij})!} z_m.$$
We obtain a system of $N$ equations with $N$ unknowns:
$$\sum_{m=k_{ij}}^{N-1} \frac{d_{i,m-k_{ij}}}{(m-k_{ij})!} z_m = u_{i k_{ij}} \quad (i = 1, \dots, n; \ j = 1, \dots, r_i), \tag{3.8}$$
with matrix $\hat{G}$. So our conclusion follows from the previous theorem.
Theorem 3.5 If $V_N = \det \hat{G} \ne 0$, then the unique solution of the general interpolation problem is given by the formula
$$u = U_N(u_0, \dots, u_{N-1}), \tag{3.9}$$
where
$$u_{r_1 + \cdots + r_{i-1} + j} = u_{i k_{ij}} \quad (i = 1, \dots, n; \ j = 1, \dots, r_i), \tag{3.10}$$
$$U_N(u_0, \dots, u_{N-1}) = \sum_{j=0}^{N-1} V_{Nj}(R) u_j,$$
$$V_{Nj}(t) = V_N^{-1} \sum_{k=0}^{N-1} (-1)^{k+j} V_{Njk} t^k,$$
and $V_{Njk}$ is the minor determinant obtained by cancelling row number $j$ and column number $k$ of $\hat{G}$.
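Coefficient by coefficient, the formula of Theorem 3.5 is the cofactor (Cramer) expansion of $\hat{G}^{-1}$ applied to the right-hand sides. A quick numerical sanity check of this reading, assuming a generic invertible matrix in place of $\hat{G}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
G = rng.standard_normal((N, N))          # stand-in for the matrix G-hat
u = rng.standard_normal(N)               # right-hand sides u_0, ..., u_{N-1}
V_N = np.linalg.det(G)

def minor(A, j, k):
    """Minor determinant: delete row j and column k, then take det."""
    return np.linalg.det(np.delete(np.delete(A, j, axis=0), k, axis=1))

# Coefficient of R^k in u = sum_j V_Nj(R) u_j, per Theorem 3.5:
#   z_k = V_N^{-1} * sum_j (-1)^(k+j) V_Njk u_j   (cofactor form of G^{-1})
z_minors = np.array([
    sum((-1) ** (k + j) * minor(G, j, k) * u[j] for j in range(N)) / V_N
    for k in range(N)
])
z_direct = np.linalg.solve(G, u)         # the same system solved directly
print(np.allclose(z_minors, z_direct))   # True
```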
As applications of the above theorems, we deal with some classical interpolation problems for right invertible operators.
3.2 Hermite interpolation problem
If $I_i = \{0, 1, \dots, r_i - 1\}$, we have the following interpolation problem:
Find a $D$-polynomial $u$ of degree $N - 1$ which, for $n \le N$ given different initial operators $F_1, \dots, F_n$, admits given values together with those of $D^k u$ up to order $r_i - 1$, i.e. find a solution of:
$$F_i D^j u = u_{ij} \quad (i = 1, \dots, n; \ j = 0, \dots, r_i - 1), \tag{3.11}$$
where $r_1 + \cdots + r_n = N$, the $u_{ij} \in \ker D$ are given, $u = z_0 + R z_1 + \cdots + R^{N-1} z_{N-1}$ for $R \in \mathcal{R}_D$, and $z_0, \dots, z_{N-1}$ are to be determined.
Next we have an important theorem:
Theorem 3.6 Suppose that $D \in R(X)$ and $F_1, \dots, F_n \in c(R)$ for an $R \in \mathcal{R}_D$. Then the Hermite interpolation problem has a unique solution if and only if the system of operators $\{F_i D^j\}_{i=\overline{1,n};\, j=\overline{0,r_i-1}}$ is linearly independent on $P_N(R)$. If this condition is satisfied, then the unique solution is
$$u = \sum_{j=0}^{N-1} V_{Nj}(R) u_j, \tag{3.12}$$
where
$$V_{Nj}(R) = V_N^{-1} \sum_{k=0}^{N-1} (-1)^{k+j} V_{Njk} R^k \quad (j = 0, \dots, N - 1),$$
$V_N = \det \hat{G}$, $V_{Njk}$ $(j, k = 0, \dots, N - 1)$ is the minor determinant obtained by cancelling row number $j$ and column number $k$ in $\hat{G}$, and the elements $u_0, \dots, u_{N-1} \in \ker D$ are defined by:
$$u_\mu = u_{1\mu} \quad \text{for } \mu = 0, 1, \dots, r_1 - 1,$$
$$u_{r_1 + \mu} = u_{2\mu} \quad \text{for } \mu = 0, 1, \dots, r_2 - 1,$$
$$u_{r_1 + r_2 + \mu} = u_{3\mu} \quad \text{for } \mu = 0, 1, \dots, r_3 - 1,$$
$$\dots$$
$$u_{r_1 + r_2 + \cdots + r_{n-1} + \mu} = u_{n\mu} \quad \text{for } \mu = 0, 1, \dots, r_n - 1.$$
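In the classical setting $D = d/dt$, $(F_{h_i} x)(t) = x(h_i)$, the Hermite conditions read $u^{(j)}(h_i) = u_{ij}$ and the entry of $\hat{G}$ in the row for $(i, j)$ and column $m$ is $h_i^{m-j}/(m-j)!$ (and $0$ for $m < j$). The following numerical sketch, with hypothetical nodes and data, recovers a known polynomial from its Hermite data:

```python
import numpy as np
from math import factorial

# Classical model: D = d/dt, R = integration from 0, (F_h x)(t) = x(h).
# Hermite data: nodes h_i with multiplicities r_i (hypothetical values).
nodes, mults = [0.0, 1.0], [2, 2]            # N = r_1 + r_2 = 4
N = sum(mults)

# Row for the condition u^{(j)}(h_i): the coefficient of z_m is
# F_h D^j R^m = h^(m-j) / (m-j)!  for m >= j, and 0 otherwise.
G = np.zeros((N, N))
r = 0
for h, ri in zip(nodes, mults):
    for j in range(ri):
        for m in range(j, N):
            G[r, m] = h ** (m - j) / factorial(m - j)
        r += 1

# Interpolate p(t) = t^3 (degree N - 1 = 3) from its Hermite data.
p = np.polynomial.Polynomial([0, 0, 0, 1])
u_data = np.concatenate([[p.deriv(j)(h) for j in range(ri)]
                         for h, ri in zip(nodes, mults)])
z = np.linalg.solve(G, u_data)               # z_m, with u = sum z_m t^m / m!
coeffs = z / np.array([factorial(m) for m in range(N)])
print(np.allclose(coeffs, [0, 0, 0, 1]))     # True: t^3 is recovered
```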
3.3 Lagrange interpolation problem
If $I_i = \{0\}$ for $i = 1, \dots, n$, then we get the following interpolation problem:
Find a $D$-polynomial of degree $n - 1$ (here $N = n$) which, for given different initial operators $F_1, \dots, F_n$, admits given values $F_i u = u_i$, where the $u_i \in \ker D$ are given and $u = z_0 + R z_1 + \cdots + R^{n-1} z_{n-1}$ is to be determined.
Theorem 3.7 A necessary and sufficient condition for the Lagrange interpolation problem to have a unique solution is that the system $\{F_1, \dots, F_n\}$ is linearly independent on $P_N(R)$. If this condition is satisfied, then the unique solution is
$$u = \sum_{j=0}^{n-1} V_{nj}(R) u_j, \tag{3.13}$$
where
$$V_{nj}(t) = V_n^{-1} \sum_{k=0}^{n-1} (-1)^{k+j} k! \, V_{njk} t^k \quad (j = 0, \dots, n - 1),$$
and $V_{njk}$ $(j, k = 0, \dots, n - 1)$ is the minor determinant obtained by cancelling row number $j$ and column number $k$ in $V_n$.
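In the classical model (see Example 3.9 below), $F_{h_i} R^m z = (h_i^m/m!)\, z$, so the Lagrange system has the scaled Vandermonde matrix $[h_i^m/m!]$, invertible exactly when the nodes $h_i$ are distinct. A minimal numerical sketch with hypothetical nodes and values:

```python
import numpy as np
from math import factorial

# Lagrange case I_i = {0} in the classical model: F_{h_i} u = u(h_i).
# The system sum_m (h_i^m / m!) z_m = u_i has the scaled Vandermonde
# matrix [h_i^m / m!], which is invertible iff the nodes are distinct.
h = np.array([0.0, 1.0, 2.0])                      # distinct nodes h_i
u_vals = np.array([1.0, 3.0, 7.0])                 # prescribed values u_i
n = len(h)

G = np.array([[h_i ** m / factorial(m) for m in range(n)] for h_i in h])
z = np.linalg.solve(G, u_vals)                     # z_m in ker D (scalars)
coeffs = z / np.array([factorial(m) for m in range(n)])  # u(t) = sum c_m t^m

# The resulting D-polynomial interpolates the data.
u_poly = np.polynomial.Polynomial(coeffs)
print(np.allclose(u_poly(h), u_vals))              # True
```

Here the data $1, 3, 7$ at $0, 1, 2$ come from $u(t) = 1 + t + t^2$, which the solve recovers.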
3.4 Newton interpolation problem
If $I_i = \{i\}$ for $i = 0, \dots, n - 1$, then we get the following interpolation problem:
Find a $D$-polynomial $u$ of degree $n - 1$ which for $n$ given different initial operators $F_0, \dots, F_{n-1}$ satisfies
$$F_m D^m u = u_m \quad (m = 0, \dots, n - 1), \tag{3.14}$$
where $u_0, \dots, u_{n-1} \in \ker D$ are given.
Theorem 3.8 Suppose that $D \in R(X)$ and $F_0, \dots, F_{n-1} \in c(R)$ for an $R \in \mathcal{R}_D$:
$$F_i R^k z = \frac{d_{ik}}{k!} z \quad \text{for } i = 0, \dots, n - 1, \ k \in \mathbb{N}.$$
If
$$V_n = \det(d_{ik})_{i,k=\overline{0,n-1}} \ne 0, \tag{3.15}$$
then the Newton interpolation problem has a unique solution for any $u_0, \dots, u_{n-1} \in \ker D$, given by
$$u = \sum_{j=0}^{n-1} V_{nj}(R) u_j, \tag{3.16}$$
$$V_{nj}(t) = V_n^{-1} \sum_{k=0}^{n-1} (-1)^{k+j} k! \, V_{njk} t^k \quad (j = 0, \dots, n - 1),$$
where $V_{njk}$ $(j, k = 0, \dots, n - 1)$ is the minor determinant obtained by cancelling row number $j$ and column number $k$ in $V_n$.
Example 3.9 Let $X = \mathcal{C}(\mathbb{R})$, $D = \frac{d}{dt}$, $(Rx)(t) = \int_0^t x(s)\, ds$, $(Fx)(t) = x(0)$.
Write $(F_h x)(t) = x(h)$, $x \in X$. Then $F_h R^k c = \frac{h^k}{k!} c$ for $c, h \in \mathbb{R}$, $k \in \mathbb{N}$. Hence $d_{hk} = h^k$.
It is known that for the classical Lagrange, Hermite and Newton interpolation problems with initial operators $F_{h_1}, \dots, F_{h_n}$ the corresponding determinants do not vanish.
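The identity $F_h R^k c = (h^k/k!)\, c$ of Example 3.9 can be verified directly by iterating the integration operator on polynomial coefficient lists (a minimal sketch of the model; the helper names are ours, not the thesis's):

```python
from math import factorial

def R(x):
    """Right inverse of D = d/dt in Example 3.9: (Rx)(t) = integral_0^t x(s) ds,
    computed exactly on polynomial coefficient lists c_0 + c_1 t + ..."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(x)]

def F_h(x, h):
    """(F_h x)(t) = x(h): evaluate the polynomial at the node h."""
    return sum(c * h ** k for k, c in enumerate(x))

c, h, k = 3.0, 2.0, 4
x = [c]                       # the constant function c (an element of ker D)
for _ in range(k):
    x = R(x)                  # apply R k times: R^k c = c t^k / k!
print(F_h(x, h), c * h ** k / factorial(k))   # both equal 2.0
```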