
MINISTRY OF EDUCATION AND TRAINING

VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

GRADUATE UNIVERSITY OF SCIENCE AND TECHNOLOGY
............***............

NGUYEN DUONG NGUYEN

NEWTON-KANTOROVICH ITERATIVE
REGULARIZATION AND THE PROXIMAL
POINT METHODS FOR NONLINEAR
ILL-POSED EQUATIONS INVOLVING
MONOTONE OPERATORS

Major: Applied Mathematics
Code: 9 46 01 12

SUMMARY OF MATHEMATICS DOCTORAL THESIS

Hanoi - 2018


This thesis is completed at: Graduate University of Science and
Technology - Vietnam Academy of Science and Technology

Supervisor 1: Prof. Dr. Nguyen Buong


Supervisor 2: Assoc. Prof. Dr. Do Van Luu

First referee: . . . . . .

Second referee: . . . . . .

Third referee: . . . . . .

The thesis is to be presented to the Defense Committee of the Graduate University of Science and Technology - Vietnam Academy of Science
and Technology on . . . . . . . . . . . . 2018, at . . . . . . . . . . . . o’clock . . . . . . . . . . . .

The thesis can be found at:
- Library of Graduate University of Science and Technology
- Vietnam National Library


Introduction

Many issues in science, technology, economics and ecology, such as image processing, computerized tomography, seismic tomography in engineering geophysics, acoustic sounding in wave approximation and problems of linear programming, lead to solving problems of the following operator-equation type (A. Bakushinsky and A. Goncharsky, 1994; F. Natterer, 2001; F. Natterer and F. Wübbeling, 2001):

$A(x) = f,$    (0.1)

where A is an operator (mapping) from a metric space E into a metric space E and f ∈ E. However, among these problems there exists a class of problems whose solutions are unstable with respect to the initial data, i.e., a small change in the data can lead to a very large difference in the solution. Such problems are said to be ill-posed. Therefore, methods for solving ill-posed problems are required such that the smaller the error of the data is, the closer the approximate solution is to the exact solution of the original problem. If E is a Banach space with the norm $\|\cdot\|$, then in some cases of the mapping A the problem (0.1) can be regularized by minimizing Tikhonov's functional:

$F_\alpha^\delta(x) = \|A(x) - f_\delta\|^2 + \alpha\|x - x^+\|^2,$    (0.2)

with a suitably selected regularization parameter $\alpha = \alpha(\delta) > 0$, where $f_\delta$ is an approximation of f satisfying $\|f_\delta - f\| \le \delta \to 0$ and $x^+$ is an element selected in E to help us find a desired solution of (0.1). If A is a nonlinear mapping, then the functional $F_\alpha^\delta(x)$ is generally not convex. Therefore, results obtained for minimizing a convex functional cannot be applied to find the minimizer of $F_\alpha^\delta(x)$. Thus, to solve the problem (0.1) when A is a monotone nonlinear mapping, a new type of Tikhonov regularization method was proposed, called the Browder-Tikhonov regularization method. In 1975, Ya.I. Alber constructed the Browder-Tikhonov regularization method to solve the problem (0.1) when A is a monotone nonlinear mapping as follows:

$A(x) + \alpha J^s(x - x^+) = f_\delta.$    (0.3)

We see that, in the case where E is not a Hilbert space, $J^s$ is a nonlinear mapping, and therefore (0.3) is a nonlinear problem even if A is a linear mapping. This class of problems is difficult to solve in practice. In addition, some information about the exact solution, such as smoothness, may not be retained in the regularized solution, because the domain of the mapping $J^s$ is the whole space, so we cannot know where in E the regularized solution lies. Thus, in 1991, Ng. Buong replaced the mapping $J^s$ by a linear and strongly monotone mapping B to obtain the following method:

$A(x) + \alpha B(x - x^+) = f_\delta.$    (0.4)

In the case E ≡ H is a Hilbert space, the method (0.3) has its simplest form with s = 2. Then the method (0.3) becomes:

$A(x) + \alpha(x - x^+) = f_\delta.$    (0.5)

In 2006, Ya.I. Alber and I.P. Ryazantseva established the convergence of the method (0.5) in the case where A is an accretive mapping in a Banach space E, under the condition that the normalized duality mapping J of E is sequentially weakly continuous. Unfortunately, the class of infinite-dimensional Banach spaces whose normalized duality mapping is sequentially weakly continuous is too small (only the spaces $l_p$). In 2013, Ng. Buong and Ng.T.H. Phuong proved the convergence of the method (0.5) without requiring the sequential weak continuity of the normalized duality mapping J. However, if A is a nonlinear mapping, then (0.3), (0.4) and (0.5) are nonlinear problems. For that reason, another stable method for solving the problem (0.1), called the Newton-Kantorovich iterative regularization method, has been studied. This method was proposed by A.B. Bakushinskii in 1976 to solve the variational inequality problem involving monotone nonlinear mappings. It is a regularization method built on a well-known method of numerical analysis, the Newton-Kantorovich method. In 1987, based on A.B. Bakushinskii's method, to find the solution of the problem (0.1) in the case where A is a monotone mapping from a Banach space E into its dual space E*, I.P. Ryazantseva proposed the Newton-Kantorovich iterative regularization method:

$A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n J^s(z_{n+1}) = f_{\delta_n}.$    (0.6)

However, since the method (0.6) uses the duality mapping $J^s$ as the regularization component, it has the same limitations as the Browder-Tikhonov regularization method (0.3). In the case where A is an accretive mapping on a Banach space E, to find the solution of the problem (0.1), also based on A.B. Bakushinskii's method, Ng. Buong and V.Q. Hung studied in 2005 the convergence of the Newton-Kantorovich iterative regularization method:

$A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n(z_{n+1} - x^+) = f_\delta,$    (0.7)

under the conditions

$\|A(x) - A(x^*) - J^* A'(x^*)^* J(x - x^*)\| \le \tau\|A(x) - A(x^*)\|, \quad \forall x \in E$    (0.8)

and

$A'(x^*)v = x^+ - x^*,$    (0.9)

where $\tau > 0$, $x^*$ is a solution of the problem (0.1), $A'(x^*)$ is the Fréchet derivative of the mapping A at $x^*$, $J^*$ is the normalized duality mapping of $E^*$ and v is some element of E. We see that conditions (0.8) and (0.9) use the Fréchet derivative of the mapping A at the unknown solution $x^*$, so they are very strict. In 2007, A.B. Bakushinskii and A. Smirnova proved the convergence of the method (0.7) to the solution of the problem (0.1) when A is a monotone mapping from a Hilbert space H into H (in Hilbert spaces, the accretivity concept coincides with the monotonicity concept) under the condition

$\|A'(x)\| \le 1, \quad \|A'(x) - A'(y)\| \le L\|x - y\|, \quad \forall x, y \in H, \ L > 0.$    (0.10)
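To illustrate the structure of (0.7), the following minimal sketch (an illustration added here, not taken from the thesis; the operator A, the noise level and the choice of $\alpha_n$ are assumptions made only for the example) shows the iteration in the finite-dimensional Hilbert space R^m, where every step reduces to a single linear solve:

    import numpy as np

    # Minimal sketch of (0.7) in R^m: each step solves
    # (A'(z_n) + alpha_n I) z_{n+1} = f_delta - A(z_n) + A'(z_n) z_n + alpha_n x^+.
    m = 5
    M = np.diag(np.arange(1.0, m + 1))                  # positive definite part
    A = lambda x: M @ x + np.tanh(x)                    # monotone nonlinear mapping
    Aprime = lambda x: M + np.diag(1.0 - np.tanh(x)**2) # Frechet derivative A'(x)

    x_true = np.linspace(0.0, 1.0, m)
    delta = 1e-3
    f_delta = A(x_true) + delta * np.random.default_rng(0).standard_normal(m)

    x_plus = np.zeros(m)                                # a priori guess x^+
    z = x_plus.copy()
    for n in range(30):
        alpha = 1.0 / (n + 1)                           # regularization parameters alpha_n
        Ap = Aprime(z)
        rhs = f_delta - A(z) + Ap @ z + alpha * x_plus
        z = np.linalg.solve(Ap + alpha * np.eye(m), rhs)

    print(np.linalg.norm(z - x_true))                   # error of the regularized iterate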

The first part of this thesis presents new results, obtained by us, on the Newton-Kantorovich iterative regularization method for nonlinear equations involving monotone-type operators (monotone operators and accretive operators) in Banach spaces, which overcome the limitations of the results mentioned above.



Next, we consider the problem: find an element $p^* \in H$ such that

$0 \in A(p^*),$    (0.11)

where H is a Hilbert space and $A : H \to 2^H$ is a set-valued maximal monotone mapping. One of the first methods for finding a solution of the problem (0.11) is the proximal point method, introduced by B. Martinet in 1970 to find the minimum of a convex functional and generalized by R.T. Rockafellar in 1976 as follows:

$x^{k+1} = J_k x^k + e^k, \quad k \ge 1,$    (0.12)

where $J_k = (I + r_k A)^{-1}$ is called the resolvent of A with the parameter $r_k > 0$, $e^k$ is the error vector and I is the identity mapping in H. Since A is a maximal monotone mapping, $J_k$ is a single-valued mapping (F. Wang and H. Cui, 2015). Thus, the prominent advantage of the proximal point method is that it converts the set-valued problem into a single-valued one. R.T. Rockafellar proved that the method (0.12) converges weakly to a zero of the mapping A under the hypotheses that the zero set of the mapping A is nonempty, $\sum_{k=1}^{\infty}\|e^k\| < \infty$ and $r_k \ge \varepsilon > 0$ for all $k \ge 1$.
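As a toy illustration of (0.12) (our own example, not from the thesis), for $A = \partial f$ with $f(x) = \|x\|_1$ on $H = \mathbb{R}^m$ the resolvent $(I + r_k A)^{-1}$ is componentwise soft-thresholding, and the inexact iteration can be sketched as follows:

    import numpy as np

    def resolvent_l1(x, r):
        # (I + r * subdiff(||.||_1))^{-1}(x): componentwise soft-thresholding
        return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(5)                 # starting point x^1
    for k in range(1, 200):
        r_k = 1.0                              # r_k >= eps > 0 (Rockafellar's condition)
        e_k = rng.standard_normal(5) / k**2    # summable errors: sum ||e^k|| < infinity
        x = resolvent_l1(x, r_k) + e_k         # x^{k+1} = J_k x^k + e^k

    print(x)   # approaches the unique zero of A, the minimizer 0 of ||.||_1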
In 1991, O. Güler pointed out that the proximal point method achieves only weak convergence, not strong convergence, in infinite-dimensional spaces. In order to obtain strong convergence, some modifications of the proximal point method for finding a zero of a maximal monotone mapping in Hilbert space (O.A. Boikanyo and G. Morosanu, 2010, 2012; S. Kamimura and W. Takahashi, 2000; N. Lehdili and A. Moudafi, 1996; G. Marino and H.K. Xu, 2004; Ch.A. Tian and Y. Song, 2013; F. Wang and H. Cui, 2015; H.K. Xu, 2006; Y. Yao and M.A. Noor, 2008), as well as of an accretive mapping in Banach space (L.C. Ceng et al., 2008; S. Kamimura and W. Takahashi, 2000; X. Qin and Y. Su, 2007; Y. Song, 2009), were investigated. The strong convergence of these modifications is given under conditions implying that the parameter sequence of the resolvent of the mapping A is nonsummable, i.e. $\sum_{k=1}^{\infty} r_k = +\infty$. Thus, one question arises: is there a modification of the proximal point method whose strong convergence is obtained under the condition that the parameter sequence of the resolvent is summable, i.e. $\sum_{k=1}^{\infty} r_k < +\infty$? In order to answer this question, the second part of the thesis introduces new modifications of the proximal point method for finding a zero of a maximal monotone mapping in Hilbert space, in which the strong convergence of the methods is obtained under the assumption that the parameter sequence of the resolvent is summable.
The results of this thesis are:
1) Propose and prove the strong convergence of a new modification of the Newton-Kantorovich iterative regularization method (0.6) for solving the problem (0.1) when A is a monotone mapping from a Banach space E into its dual space E*, which overcomes the drawbacks of the method (0.6).
2) Propose and prove the strong convergence of the Newton-Kantorovich iterative regularization method (0.7) for finding the solution of the problem (0.1) in the case where A is an accretive mapping on a Banach space E, removing conditions (0.8), (0.9), (0.10) and without requiring the sequential weak continuity of the normalized duality mapping J.
3) Introduce two new modifications of the proximal point method for finding a zero of a maximal monotone mapping in Hilbert space, in which the strong convergence of these methods is proved under the assumption that the parameter sequence of the resolvent is summable.
Apart from the introduction, the conclusion and the references, the thesis is composed of three chapters. Chapter 1 is preliminary; it presents a number of concepts and properties of Banach spaces, the concept of an ill-posed problem and the regularization method. This chapter also presents the Newton-Kantorovich method and some modifications of the proximal point method for finding a zero of a maximal monotone mapping in Hilbert space. Chapter 2 presents the Newton-Kantorovich iterative regularization methods for solving nonlinear ill-posed equations involving monotone-type operators in Banach spaces, including the methods themselves and theorems on their convergence. A numerical example illustrating the obtained results is given at the end of the chapter. Chapter 3 presents the modifications of the proximal point method that we obtain for finding a zero of a maximal monotone mapping in Hilbert space, including the methods and their convergence results. A numerical example is given at the end of this chapter to illustrate the obtained results.


Chapter 1

Preliminary knowledge

This chapter presents the background knowledge needed for the presentation of the main research results of the thesis in the following chapters.

1.1. Banach space and related issues

1.1.1. Some properties of Banach spaces

This section presents some concepts and properties of Banach spaces.
1.1.2. The ill-posed problem and the regularization method

• This section recalls the concepts of an ill-posed problem and of a regularization method.
• Consider the problem of finding a solution of the equation

$A(x) = f,$    (1.1)

where A is a mapping from a Banach space E into a Banach space E. If (1.1) is an ill-posed problem, then a solution method for (1.1) is required such that, as $\delta \to 0$, the approximate solution gets closer to the exact solution of (1.1). As presented in the Introduction, in the case where A is a monotone mapping from a Banach space E into its dual space E*, the problem (1.1) can be solved by the Browder-Tikhonov type regularization method (0.3) or (0.4).
In the case where A is an accretive mapping on a Banach space E, one of the widely used methods for solving the problem (1.1) is the Browder-Tikhonov type regularization method (0.5). Ng. Buong and Ng.T.H. Phuong (2013) proved the following result on the strong convergence of the method (0.5):
Theorem 1.17. Let E be a real, reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm and let A be an m-accretive mapping in E. Then, for each $\alpha > 0$ and $f_\delta \in E$, the equation (0.5) has a unique solution $x_\alpha^\delta$. Moreover, if $\delta/\alpha \to 0$ as $\alpha \to 0$, then the sequence $\{x_\alpha^\delta\}$ converges strongly to $x^* \in E$, which is the unique solution of the following variational inequality:

$x^* \in S^*: \ \langle x^* - x^+, j(x^* - y)\rangle \le 0, \quad \forall y \in S^*,$    (1.2)

where $S^*$ is the solution set of (1.1) and $S^*$ is nonempty.
We see that Theorem 1.17 gives the strong convergence of the regularized solution sequence $\{x_\alpha^\delta\}$ generated by the Browder-Tikhonov regularization method (0.5) to the solution $x^*$ of the problem (1.1) without requiring the sequential weak continuity of the normalized duality mapping J. This result is a significant improvement compared with the result of Ya.I. Alber and I.P. Ryazantseva (2006) (see the Introduction).
Since A is a nonlinear mapping, (0.3), (0.4) and (0.5) are nonlinear problems; in Chapter 2 we will therefore present another regularization method, called the Newton-Kantorovich iterative regularization method. This is a regularization method built on a well-known method of numerical analysis, the Newton-Kantorovich method, which is presented in Section 1.2.
1.2. The Newton-Kantorovich method

This section presents the Newton-Kantorovich method and the convergence theorem for this method.
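For orientation, the Newton-Kantorovich method linearizes a nonlinear equation F(x) = 0 at the current iterate and solves the resulting linear problem; a minimal finite-dimensional sketch (the example system below is an assumption chosen only for illustration) is:

    import numpy as np

    def newton_kantorovich(F, dF, x0, tol=1e-12, max_iter=50):
        # Solve F(x) = 0 by the Newton-Kantorovich iteration
        # x_{n+1} = x_n - F'(x_n)^{-1} F(x_n).
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            step = np.linalg.solve(dF(x), F(x))
            x = x - step
            if np.linalg.norm(step) < tol:
                break
        return x

    # Illustrative 2x2 system: x^2 + y^2 = 4, x*y = 1.
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
    dF = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
    print(newton_kantorovich(F, dF, [2.0, 0.3]))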
1.3. The proximal point method and some modifications

In this section, we consider the problem: find an element $p^* \in H$ such that

$0 \in A(p^*),$    (1.3)

where H is a Hilbert space and $A : H \to 2^H$ is a maximal monotone mapping. Denote by $J_k = (I + r_k A)^{-1}$ the resolvent of A with the parameter $r_k > 0$, where I is the identity mapping in H.
1.3.1. The proximal point method

This section presents the proximal point method investigated by R.T. Rockafellar (1976) for finding a solution of the problem (1.3), and the result of O. Güler (1991) showing that this method achieves only weak convergence, not strong convergence, in infinite-dimensional spaces.



1.3.2. Some modifications of the proximal point method

This section presents some strongly convergent modifications of the proximal point method for finding a solution of the problem (1.3), including the results of N. Lehdili and A. Moudafi (1996), H.K. Xu (2006), O.A. Boikanyo and G. Morosanu (2010; 2012), Ch.A. Tian and Y. Song (2013), S. Kamimura and W. Takahashi (2000), G. Marino and H.K. Xu (2004), Y. Yao and M.A. Noor (2008) and F. Wang and H. Cui (2015).
Comment 1.6. The strong convergence of the modifications of the proximal point method mentioned above uses one of the following conditions:
(C0) there exists a constant $\varepsilon > 0$ such that $r_k \ge \varepsilon$ for every $k \ge 1$;
(C0') $\liminf_{k\to\infty} r_k > 0$;
(C0'') $r_k \in (0, \infty)$ for every $k \ge 1$ and $\lim_{k\to\infty} r_k = \infty$.
These conditions imply that the parameter sequence $\{r_k\}$ of the resolvent is nonsummable, i.e. $\sum_{k=1}^{\infty} r_k = +\infty$. In Chapter 3, we introduce two new modifications of the proximal point method whose strong convergence is obtained under a condition on the parameter sequence of the resolvent that is completely different from the results we know. Specifically, we use the condition that the parameter sequence of the resolvent is summable, i.e. $\sum_{k=1}^{\infty} r_k < +\infty$.


Chapter 2

Newton-Kantorovich iterative regularization method for nonlinear equations involving monotone type operators

This chapter presents the Newton-Kantorovich iterative regularization methods for finding a solution of nonlinear equations involving monotone-type mappings. The results of this chapter are based on the works [2], [3] and [4] in the list of published works.

2.1. Newton-Kantorovich iterative regularization for nonlinear equations involving monotone operators in Banach spaces

Consider the nonlinear operator equation

$A(x) = f, \quad f \in E^*,$    (2.1)

where A is a monotone mapping from a Banach space E into its dual space E*, with D(A) = E. Assume that the solution set of (2.1), denoted by S, is nonempty, and that instead of f we only know its approximation $f_\delta$ satisfying

$\|f_\delta - f\| \le \delta \to 0.$    (2.2)

If A is not strongly monotone or uniformly monotone, then the equation (2.1) is, in general, an ill-posed problem. Since (0.3) and (0.4) are nonlinear problems when A is a nonlinear mapping, to solve (2.1) we consider in this section another regularization method, called the Newton-Kantorovich iterative regularization method. This regularization method was proposed by A.B. Bakushinskii (1976), based on the Newton-Kantorovich method, to find the solution of the following variational inequality problem in a Hilbert space H: find an element $x^* \in Q \subseteq H$ such that

$\langle A(x^*), x^* - w\rangle \le 0, \quad \forall w \in Q,$    (2.3)

where $A : H \to H$ is a monotone mapping and Q is a closed convex set in H. A.B. Bakushinskii introduced the following iterative method to solve the problem (2.3):

$z_0 \in H, \quad \langle A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n z_{n+1}, z_{n+1} - w\rangle \le 0, \quad \forall w \in Q.$    (2.4)

Based on the method (2.4), to find the solution of the equation (2.1) when A is a monotone mapping from a Hilbert space H into H, A.B. Bakushinskii and A. Smirnova (2007) proved the strong convergence of the Newton-Kantorovich type iterative regularization method

$z_0 = x^+ \in H, \quad A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n(z_{n+1} - x^+) = f_\delta,$    (2.5)

using the generalized discrepancy principle

$\|A(z_N) - f_\delta\|^2 \le \tau\delta < \|A(z_n) - f_\delta\|^2, \quad 0 \le n < N = N(\delta),$    (2.6)

and the condition

$\|A'(x)\| \le 1, \quad \|A'(x) - A'(y)\| \le L\|x - y\|, \quad \forall x, y \in H.$    (2.7)

Comment 2.1. The advantage of the method (2.5) is its linearity. This method is an important tool for solving the problem (2.1) in the case where A is a monotone mapping in Hilbert space. However, the condition (2.7) is fairly strict and should be overcome so that the method (2.5) can be applied to a wider class of mappings.
When E is a Banach space, to solve the equation (2.1) in the case where, instead of f, we only know its approximation $f_{\delta_n} \in E^*$ satisfying (2.2) with δ replaced by $\delta_n$, I.P. Ryazantseva (1987, 2006) also developed the method (2.4) and proposed the iteration

$z_0 \in E, \quad A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n J^s(z_{n+1}) = f_{\delta_n}.$    (2.8)

The convergence of the method (2.8) was established by I.P. Ryazantseva under the assumptions that E is a Banach space having the ES property, that the dual space E* is strictly convex, and that the mapping A satisfies the condition

$\|A''(x)\| \le \varphi(\|x\|), \quad \forall x \in E,$    (2.9)

where $\varphi(t)$ is a nonnegative and nondecreasing function.
Comment 2.2. We see that $l_p$ and $L_p(\Omega)$ ($1 < p < +\infty$) are Banach spaces having the ES property and a strictly convex dual space. However, since the method (2.8) uses the duality mapping $J^s$ as the regularization component, it has the same disadvantages as the Browder-Tikhonov regularization method (0.3) mentioned above.
To overcome these drawbacks, in [3] we propose the following new Newton-Kantorovich iterative regularization method:

$z_0 \in E, \quad A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n B(z_{n+1} - x^+) = f_{\delta_n},$    (2.10)

where B is a linear and strongly monotone mapping.
Firstly, to find the solution of the equation (2.1) in the case without perturbation of f, we have the following iterative method:

$z_0 \in E, \quad A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n B(z_{n+1} - x^+) = f.$    (2.11)

The convergence of the method (2.11) is given by the following theorem:
Theorem 2.4. Let E be a real, reflexive Banach space, let B be a linear, $m_B$-strongly monotone mapping with D(B) = E, R(B) = E*, and let A be a monotone, L-Lipschitz continuous and twice Fréchet differentiable mapping on E satisfying condition (2.9). Assume that the sequence $\{\alpha_n\}$ and the initial point $z_0$ in (2.11) satisfy the following conditions:
a) $\{\alpha_n\}$ is a monotone decreasing sequence with $0 < \alpha_n < 1$ and there exists $\sigma > 0$ such that $\alpha_{n+1} \ge \sigma\alpha_n$ for all n = 0, 1, ...;
b)

$\dfrac{\varphi_0\|z_0 - x_0\|}{2 m_B \sigma \alpha_0} \le q < 1, \quad \varphi_0 = \varphi(d + \gamma),$    (2.12)

$d \ge \max\{\|B(x^+ - x^*)\|/m_B + \|x^*\|, \ L\|B(x^+ - x^*)\|/m_B^2\},$

where a positive number γ is found from the estimate $2 m_B \sigma \alpha_0/\varphi_0 \le \gamma$, $x^*$ is the unique solution of the variational inequality

$x^* \in S, \quad \langle B(x^+ - x^*), x^* - y\rangle \ge 0, \quad \forall y \in S,$    (2.13)

and $x_0$ is the solution of the following equation with n = 0:

$A(x) + \alpha_n B(x - x^+) = f.$

c)

$\dfrac{\alpha_n - \alpha_{n+1}}{\alpha_n^3} \le c(q - q^2), \quad c = \dfrac{2 m_B \sigma^3}{\varphi_0 d}.$

Then $z_n \to x^*$, where $z_n$ is defined by (2.11).
Now, we have the following result on the convergence of (2.10):
Theorem 2.5. Let E, A and B be as in Theorem 2.4 and let $f_{\delta_n}$ be elements in E* satisfying (2.2) with δ replaced by $\delta_n$. Assume that the sequence $\{\alpha_n\}$, the real number d and the initial point $z_0$ in (2.10) satisfy conditions a), b) of Theorem 2.4 and
c)

$\dfrac{\alpha_n - \alpha_{n+1}}{\alpha_n^3} \le c_1(q - q^2), \quad c_1 = \dfrac{m_B \sigma^3}{\varphi_0 d}, \qquad \dfrac{\delta_n}{\alpha_n^2} \le c_2(q - q^2), \quad c_2 = \dfrac{m_B^2 \sigma^2}{\varphi_0}.$    (2.14)

Then $z_n \to x^*$, where $z_n$ is defined by (2.10).
Comment 2.3. We see that (2.10) and (2.11) are linear problems. The introduction of these methods overcomes the "nonlinearity" of previous methods for finding a solution of nonlinear ill-posed equations involving monotone mappings in Banach spaces. Concerning strong convergence, the method (2.8) applies only to Banach spaces E having the ES property and a strictly convex dual space E*, while the methods (2.10) and (2.11) can be used in any real, reflexive Banach space. However, the methods (2.10) and (2.11) require A to be Lipschitz continuous.
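To make the structure of (2.10)-(2.11) concrete, the following is a small finite-dimensional sketch of ours (E = R^m; the operator A, the mapping B and the parameter schedules are illustrative assumptions, not taken from the thesis); as in (2.5), each step is a single linear solve, now preconditioned by B:

    import numpy as np

    m = 6
    M = np.diag(np.arange(1.0, m + 1))
    A = lambda x: M @ x + np.tanh(x)                    # monotone, Lipschitz, twice differentiable
    Aprime = lambda x: M + np.diag(1.0 - np.tanh(x)**2)

    # Linear, strongly monotone B (here a symmetric positive definite matrix).
    B = np.eye(m) + 0.5 * np.diag(np.ones(m - 1), 1) + 0.5 * np.diag(np.ones(m - 1), -1)

    x_true = np.linspace(0.2, 1.0, m)
    f = A(x_true)
    x_plus = np.zeros(m)

    z = x_plus.copy()
    for n in range(40):
        alpha_n = 0.9 ** n                              # decreasing regularization parameters
        delta_n = 1.0 / (1 + n) ** 2                    # data-error level for f_{delta_n}
        f_delta = f + delta_n * np.ones(m)
        # (2.10): A(z_n) + A'(z_n)(z_{n+1} - z_n) + alpha_n B (z_{n+1} - x^+) = f_{delta_n}
        Ap = Aprime(z)
        rhs = f_delta - A(z) + Ap @ z + alpha_n * (B @ x_plus)
        z = np.linalg.solve(Ap + alpha_n * B, rhs)

    print(np.linalg.norm(z - x_true))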
2.2. Newton-Kantorovich iterative regularization for nonlinear equations involving accretive operators in Banach spaces

Consider the problem of finding a solution of the nonlinear equation

$A(x) = f, \quad f \in E,$    (2.15)

where A is an accretive mapping on a Banach space E. Assume that the solution set of (2.15), denoted by S*, is nonempty and that, instead of f, we only know its approximation $f_\delta$ satisfying (2.2).
If A does not have an additional property such as strong accretivity or uniform accretivity, then the problem (2.15) is, in general, an ill-posed problem.



One of the widely used methods for solving the problem (2.15) is the Browder-Tikhonov regularization method (0.5). However, if A is a nonlinear mapping, then (0.5) is a nonlinear problem. To overcome this drawback, Ng. Buong and V.Q. Hung (2005) investigated the convergence of the following Newton-Kantorovich type iterative regularization method for finding the solution of the problem (2.15), in the case where instead of f we only know its approximation $f_{\delta_n} \in E$ satisfying the condition (2.2) with δ replaced by $\delta_n$:

$z_0 \in E, \quad A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n(z_{n+1} - x^+) = f_{\delta_n}.$    (2.16)

The strong convergence of the method (2.16) was proved by Ng. Buong and V.Q. Hung under the hypotheses that E and its dual space E* are uniformly convex, that E possesses the approximation property, and that the mapping A satisfies the conditions

$\|A(x) - A(x^*) - J^* A'(x^*)^* J(x - x^*)\| \le \tau\|A(x) - A(x^*)\|, \quad \forall x \in E$    (2.17)

and

$A'(x^*)v = x^+ - x^*,$    (2.18)

where τ is some positive constant, $x^* \in S^*$ is uniquely determined by (2.17), $J^*$ is the normalized duality mapping of E* and v is some element of E.
Comment 2.4. We see that (2.16) has the advantage of being a linear problem. However, the strong convergence of this Newton-Kantorovich type iterative regularization method requires conditions (2.17) and (2.18). These conditions are relatively strict because they use the Fréchet derivative of the mapping A at the unknown solution $x^*$.
Recently, in [2], in order to solve the equation (2.15) we proved the convergence of the following Newton-Kantorovich type iterative regularization method:

$z_0 \in E, \quad A(z_n) + A'(z_n)(z_{n+1} - z_n) + \alpha_n(z_{n+1} - x^+) = f_\delta,$    (2.19)

without using conditions (2.7), (2.17) and (2.18). Our results are proved based on Theorem 1.17. Therefore, the strong convergence of the method (2.19) also does not use the assumption that the normalized duality mapping J is sequentially weakly continuous.



Firstly, consider the Newton-Kantorovich type iterative regularization method in the case without perturbation of f:

$x_0 \in E, \quad A(x_n) + A'(x_n)(x_{n+1} - x_n) + \alpha_n(x_{n+1} - x^+) = f.$    (2.20)

Theorem 2.7. Let E be a real, reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm and let A be an m-accretive and twice Fréchet differentiable mapping on E satisfying condition (2.9). Assume that the sequence $\{\alpha_n\}$, the real number d and the initial point $x_0$ satisfy the following conditions:
a) $\{\alpha_n\}$ is a monotone decreasing sequence with $0 < \alpha_n < 1$ and there exists $\sigma > 0$ such that $\alpha_{n+1} \ge \sigma\alpha_n$ for all n = 0, 1, ...;
b)

$\dfrac{\varphi_0\|x_0 - x_{\alpha_0}\|}{2\sigma\alpha_0} \le q < 1, \quad \varphi_0 = \varphi(d + \gamma),$

where a positive number γ is found from the estimate

$2\sigma\alpha_0/\varphi_0 \le \gamma, \quad d \ge 2\|x^* - x^+\| + \|x^+\|,$    (2.21)

$x^* \in S^*$ is the unique solution of the variational inequality (1.2) and $x_{\alpha_0}$ is the solution of the following equation with n = 0:

$A(x) + \alpha_n(x - x^+) = f.$    (2.22)

c)

$\dfrac{\alpha_n - \alpha_{n+1}}{\alpha_n^2} \le \dfrac{q - q^2}{c_1}, \quad c_1 = \dfrac{\varphi_0 d}{2\sigma^2}.$

Then $x_n \to x^*$, where $x_n$ is defined by (2.20).
Theorem 2.8. Let E and A be as in Theorem 2.7, and let the mapping A additionally be L-Lipschitz continuous. Suppose the sequence $\{\alpha_n\}$ satisfies condition a) of Theorem 2.7. Let the number $\tau > 1$ in (2.6) be chosen such that

$\dfrac{\tilde{\varphi}\|z_0 - x_{\alpha_0}\|}{2\sigma\alpha_0} \le q, \quad 0 < q < 1 - \dfrac{3dL}{\tilde{\tau}\sigma},$    (2.23)

$\tilde{\varphi} = \varphi_0 + 2L^2/\tilde{\tau}, \quad \tilde{\tau} = (\sqrt{\tau} - 1)^2,$

where d is defined as in Theorem 2.7, a positive number γ is taken from the estimate (2.21) with $\varphi_0$ replaced by $\tilde{\varphi}$, and

$\dfrac{(\alpha_n - \alpha_{n+1})d}{\alpha_n^2\tilde{\tau}} + \dfrac{2L\sigma}{\tilde{\varphi}\tilde{\tau}} \le q.$    (2.24)

Then,
1. For n = 0, 1, ..., N(δ),

$\dfrac{\tilde{\varphi}\|z_n - x_{\alpha_n}\|}{2\sigma\alpha_n} \le q,$    (2.25)

where $z_n$ is a solution of (2.19) and N(δ) is chosen by (2.6).
2. $\lim_{\delta\to 0}\|z_{N(\delta)} - y\| = 0$, where $y \in S^*$. If $N(\delta) \to \infty$ as $\delta \to 0$, then $y = x^*$, where $x^* \in S^*$ is the unique solution of the variational inequality (1.2).
Comment 2.5. Besides eliminating conditions (2.7), (2.17), (2.18) and the sequential weak continuity of the normalized duality mapping J, the hypothesis on the Banach space E in our results is also weaker than the one in Ng. Buong and V.Q. Hung's result (2005). Specifically, in Theorem 2.7 and Theorem 2.8 we only need to assume that E is reflexive and strictly convex with a uniformly Gâteaux differentiable norm, instead of E and its dual space E* being uniformly convex and E possessing the approximation property, as in Ng. Buong and V.Q. Hung's result. However, the strong convergence of the Newton-Kantorovich type iterative regularization method (2.19) requires the additional condition that the mapping A is Lipschitz continuous.
2.3. Numerical example of the finite-dimensional approximation for the Newton-Kantorovich iterative regularization method

To solve the equation (2.1), we can use the Browder-Tikhonov type regularization method (0.4) or the Newton-Kantorovich type iterative regularization method (2.10). However, in order to use (0.4) and (2.10) for solving practical problems on a computer, the first task is to approximate (0.4) and (2.10) by corresponding equations in finite-dimensional spaces. Ng. Buong (1996; 2001) introduced the following finite-dimensional approximation method for the solution $x_\alpha^\delta$ of (0.4):

$A_n(x) + \alpha B_n(x - x^+) = f_n^\delta,$    (2.26)


where $A_n = P_n^* A P_n$, $B_n = P_n^* B P_n$, $f_n^\delta = P_n^* f_\delta$, and $P_n$ is a linear projection from E onto a subspace $E_n$ of E satisfying $E_n \subset E_{n+1}$ for all n and $\|P_n\| \le c$, where c is a constant and $P_n^*$ is the conjugate mapping of $P_n$. We have the following result on the convergence of the solution sequence $\{x_{\alpha n}^\delta\}$ of (2.26) to the solution $x_\alpha^\delta$ of (0.4):
Theorem 2.9. (Ng. Buong, 2001) Assume that $P_n^* B P_n x \to Bx$ for all $x \in D(B)$. Then, for every $\alpha > 0$ and $f_\delta \in E^*$, the solution sequence $\{x_{\alpha n}^\delta\}$ of (2.26) converges strongly to the solution $x_\alpha^\delta$ of (0.4) if and only if $P_n x \to x$ as $n \to \infty$ for every $x \in E$.
Applying (2.26) to (2.10) with $\alpha_n = \alpha$ a fixed number, we obtain

$A_n(z_n) + A_n'(z_n)(z_{n+1} - z_n) + \alpha B_n(z_{n+1} - x^+) = f_n^{\delta_n}.$    (2.27)

Let k(t, s) be a real, continuous, nondegenerate and nonnegative function of two variables on the square [a, b] × [a, b] such that there exists a constant q, $q \ne 2$, $1 < q < \infty$, with

$\int_a^b\!\int_a^b |k(t, s)|^q\, ds\, dt < +\infty.$    (2.28)

Then the mapping A defined by

$(Ax)(t) = \int_a^b k(t, s)x(s)\, ds, \quad x(s) \in L_p[a, b],$    (2.29)

is a continuous mapping from the space $L_p[a, b]$ into the space $L_q[a, b]$, where $\frac{1}{p} + \frac{1}{q} = 1$ (S. Banach, 1932). Since k(t, s) is continuous and nonnegative on [a, b] × [a, b], A is a monotone mapping on $L_p[a, b]$.
Hereafter, we present the application of the method (2.27) to solve the following Hammerstein-type integral equation:

$(Ax)(t) = \int_a^b k(t, s)x(s)\, ds = f(t),$    (2.30)

where $f(t) \in L_q[a, b]$. Assume that the solution x(t) of (2.30) is twice Fréchet differentiable and satisfies the boundary condition x(a) = x(b) = 0. We take $Bx(t) = x(t) - x''(t)$, where $x(t) \in D(B)$, the closure in the metric of $W_q^2[a, b]$ of the set of all functions in $C^2[a, b]$ satisfying x(a) = x(b) = 0. Let $\{t_0^n = a < t_1^n < \cdots < t_n^n = b\}$ be a uniform partition of [a, b]. We approximate E by the sequence of linear subspaces $E_n = L\{\psi_1, \psi_2, \ldots, \psi_n\}$, where

$\psi_i(t) = \begin{cases} 1, & t \in (t_{i-1}^n, t_i^n], \\ 0, & t \notin (t_{i-1}^n, t_i^n]. \end{cases}$



We select the projection

$P_n x(t) = \sum_{i=1}^{n} x(t_i^n)\psi_i(t).$    (2.31)


Consider the equation (2.30) with a = 0, b = 1 and k(t, s) = |t − s|. Clearly,

$\int_0^1\!\int_0^1 |k(t, s)|^{3/2}\, ds\, dt = \int_0^1\!\int_0^1 |t - s|^{3/2}\, ds\, dt < +\infty.$    (2.32)

So we take q = 3/2 and p = 3. With the exact solution $x^*(s) = s(1 - s)$, we have $f(t) = -(1/6)t^4 + (1/3)t^3 - (1/6)t + 1/12$; a short verification of this expression is given below. The calculated results, with $x^+(t) = 2.22$ and $f_{\delta_n} = f + \delta_n$, where $\delta_n = 1/(1 + n)^2$, are reported in the following tables.
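For completeness, the expression for f is verified by integrating the kernel against the exact solution:

$$
\begin{aligned}
f(t) &= \int_0^1 |t-s|\,s(1-s)\,ds
      = \int_0^t (t-s)\,s(1-s)\,ds + \int_t^1 (s-t)\,s(1-s)\,ds\\
     &= \Bigl(\tfrac{t^3}{6} - \tfrac{t^4}{12}\Bigr)
      + \Bigl(\tfrac{1}{12} - \tfrac{t}{6} + \tfrac{t^3}{6} - \tfrac{t^4}{12}\Bigr)
      = -\tfrac{1}{6}t^4 + \tfrac{1}{3}t^3 - \tfrac{1}{6}t + \tfrac{1}{12}.
\end{aligned}
$$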
Table 2.1. Calculated results with α = 0.5

    n      ‖z_{n+1} − x*‖_3         n      ‖z_{n+1} − x*‖_3
    4      0.2689666069            64      0.0424663883
    8      0.1620043546           128      0.0298464819
   16      0.1003942097           256      0.0230577881
   32      0.0640826159          1024      0.0203963532

Table 2.2. Calculated results with α = 0.1

    n      ‖z_{n+1} − x*‖_3         n      ‖z_{n+1} − x*‖_3
    4      0.2600031372            64      0.0388129413
    8      0.1534801504           128      0.0269295563
   16      0.0936525099           256      0.0204623013
   32      0.0591546836          1024      0.0176684288

Table 2.3. Calculated results with α = 0.01

    n      ‖z_{n+1} − x*‖_3         n      ‖z_{n+1} − x*‖_3
    4      0.1948813288            64      0.0295640389
    8      0.1176798737           128      0.0196621910
   16      0.0739066898           256      0.0138863679
   32      0.0461148835          1024      0.0099425015
Observing the above results, we see that the method (2.27) gives quite good convergence to the solution of the equation (2.30). In particular, as α gets smaller and tends to 0, $z_{n+1}$ gets closer to the exact solution $x^*$.
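To indicate how such tables can be produced, the sketch below discretizes (2.30) on a uniform grid and runs the linearized step of (2.27) with a fixed α. It is a simplified illustration of ours (rectangle-rule quadrature instead of the projection (2.31), B discretized by a standard second-difference matrix, a discrete L_3 error), not the exact program behind Tables 2.1-2.3:

    import numpy as np

    def run(alpha, n=128, n_iter=1024):
        # Uniform grid and rectangle-rule discretization of (Ax)(t) = int_0^1 |t - s| x(s) ds.
        h = 1.0 / n
        t = (np.arange(n) + 0.5) * h                 # midpoints of the cells (t_{i-1}, t_i]
        K = np.abs(t[:, None] - t[None, :]) * h      # matrix of the discretized integral operator

        # B = I - d^2/dt^2 with zero boundary values, via a second-difference matrix.
        D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
              + np.diag(np.ones(n - 1), -1)) / h**2
        B = np.eye(n) - D2

        x_true = t * (1.0 - t)
        f = -(1.0/6)*t**4 + (1.0/3)*t**3 - (1.0/6)*t + 1.0/12
        x_plus = 2.22 * np.ones(n)

        z = x_plus.copy()
        for k in range(n_iter):
            delta_k = 1.0 / (1 + k)**2               # perturbed data f_{delta_n} = f + delta_n
            f_delta = f + delta_k
            # One step of (2.27); A is linear here, so A'(z_n) = K and the step reduces to
            # (K + alpha*B) z_{n+1} = f_{delta_n} + alpha*B x^+.
            z = np.linalg.solve(K + alpha * B, f_delta + alpha * (B @ x_plus))
        # Discrete analogue of the L_3-norm error reported in Tables 2.1-2.3.
        return (np.sum(np.abs(z - x_true)**3) * h) ** (1.0/3)

    for alpha in (0.5, 0.1, 0.01):
        print(alpha, run(alpha))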


Chapter 3

Iterative method for finding a zero of a monotone mapping in Hilbert space

This chapter presents the modifications of the proximal point method that we obtain for finding a zero of a maximal monotone mapping in Hilbert space. The content of this chapter is based on the works [1] and [4] in the list of published works.
3.1. The problem of finding a zero of a maximal monotone mapping

• This section introduces the problem: find an element $p^* \in H$ such that

$0 \in A(p^*),$    (3.1)

where H is a Hilbert space and $A : H \to 2^H$ is a maximal monotone mapping.
• One of the first methods for finding a solution of the problem (3.1) is the proximal point method (0.12). However, the proximal point method (0.12) only converges weakly, not strongly, in infinite-dimensional spaces. In order to achieve strong convergence, some modifications of the proximal point method were proposed (see Section 1.3.2). As noted in Comment 1.6 of Section 1.3.2, the strong convergence of these modifications is proved under conditions implying that the parameter sequence of the resolvent of the mapping A is nonsummable, i.e. $\sum_{k=1}^{\infty} r_k = +\infty$.
• To find $p^* \in H$ that solves the variational inequality problem

$p^* \in C: \ \langle F p^*, p^* - p\rangle \le 0, \quad \forall p \in C,$    (3.2)

where C = Zer A is the set of zeros of the mapping A and F is an L-Lipschitz continuous and η-strongly monotone mapping on H, S. Wang (2012) proposed the iterative method

$x^{k+1} = J_k[(I - t_k F)x^k + e^k], \quad k \ge 1,$    (3.3)

where $J_k$ is the resolvent of A and $e^k$ is the error vector.
The author proved the convergence of the method (3.3) under the condition (C0') $\liminf_{k\to\infty} r_k > 0$ mentioned in Section 1.3.2. This condition implies that the parameter sequence $\{r_k\}$ of the resolvent of the mapping A is nonsummable.
• To answer the question raised in the Introduction, in the next section we introduce two new modifications of the proximal point method for finding a zero of the maximal monotone mapping A in Hilbert space H, with strong convergence obtained under the condition that the parameter sequence of the resolvent is summable, i.e. $\sum_{k=1}^{\infty} r_k < +\infty$. The new modifications are particular cases of an extension of the method (3.3) for finding the solution of the variational inequality problem (3.2).
3.2. Modifications of the proximal point method with a summable parameter sequence of the resolvent

In [1], we introduce two new modifications of the proximal point method, corresponding to the sequences $\{x^k\}$ and $\{z^k\}$ defined by

$x^{k+1} = J^k(t_k u + (1 - t_k)x^k + e^k), \quad k \ge 1,$    (3.4)

and

$z^{k+1} = t_k u + (1 - t_k)J^k z^k + e^k, \quad k \ge 1,$    (3.5)

where $J^k = J_1 J_2 \cdots J_k$ is the product of the k resolvents $J_i = (I + r_i A)^{-1}$, $i = 1, 2, \ldots, k$, of the mapping A. First of all, we propose the iterative method

$x^{k+1} = J^k[(I - t_k F)x^k + e^k], \quad k \ge 1,$    (3.6)

to find a solution $p^* \in H$ of the variational inequality problem (3.2), where C = Zer A and $F : H \to H$ is an η-strongly monotone and γ-strictly pseudocontractive mapping. Then, from (3.6), by selecting the mapping F appropriately, we obtain the methods (3.4) and (3.5). Denote $|Ax| = \inf\{\|y\| : y \in Ax\}$, $x \in D(A)$, and let $A^0$ be the mapping defined by $A^0 x = \{y \in Ax : \|y\| = |Ax|\}$, $x \in D(A)$. Since A is a maximal monotone mapping, $A^0$ is a single-valued mapping (Ya.I. Alber and I.P. Ryazantseva, 2006).
Theorem 3.2. Let A be a maximal monotone mapping in a Hilbert space H such that D(A) = H, C := Zer A ≠ ∅, and let the mapping $A^0$ be bounded. Let F be an η-strongly monotone and γ-strictly pseudocontractive mapping with η + γ > 1. Assume that $t_k$, $r_i$ and $e^k$ satisfy conditions (C1), (C5') and
(C0''') $r_i > 0$ for all $i \ge 1$ and $\sum_{i=1}^{\infty} r_i < +\infty$.
Then the sequence $\{x^k\}$ defined by the method (3.6) converges strongly to the element $p^*$ as $k \to \infty$, where $p^*$ is the unique solution of (3.2).
Remark 3.1. This remark shows how to transform (3.6) and how to select the mapping F so as to obtain the methods (3.4) and (3.5). Indeed, in (3.6), setting $z^k = (I - t_k F)x^k + e^k$ and then relabeling $t_k := t_{k+1}$ and $e^k := e^{k+1}$, we obtain

$z^{k+1} = (I - t_k F)J^k z^k + e^k.$    (3.7)

Next, take F = I − f, where f = aI + (1 − a)u, with a a fixed number in (0, 1) and u a fixed element of H. With the mapping F selected as above, (3.6) and (3.7) respectively become

$x^{k+1} = J^k(t_k(1 - a)u + (1 - t_k(1 - a))x^k + e^k),$    (3.8)

$z^{k+1} = t_k(1 - a)u + (1 - t_k(1 - a))J^k z^k + e^k.$    (3.9)

Then, denoting $t_k := (1 - a)t_k$ in (3.8) and (3.9), we obtain the methods (3.4) and (3.5), respectively.
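For verification, the computation behind this choice of F is:

$$(I - t_k F)x = x - t_k\bigl(x - ax - (1-a)u\bigr) = \bigl(1 - t_k(1-a)\bigr)x + t_k(1-a)u.$$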
Remark 3.2. The condition that the parameter sequence of the resolvent is summable, i.e. condition (C0'''), implies that $\lim_{k\to\infty} r_k = 0$. The result in this section thus suggests studying the strong convergence of modifications of the proximal point method under the condition that the parameter sequence of the resolvent satisfies $\lim_{k\to\infty} r_k = 0$.
3.3. Numerical example

Consider the following convex optimization problem: find an element $p^* \in \mathbb{R}^2$ such that

$f(p^*) = \inf_{x \in \mathbb{R}^2} f(x).$    (3.10)

We know that if f(x) is a lower-semicontinuous, proper and convex functional, then its subdifferential ∂f is a maximal monotone mapping, and the problem (3.10) is equivalent to the problem of finding a zero of ∂f (H.H. Bauschke and P.L. Combettes, 2017; R.T. Rockafellar, 1966). Hereafter, we apply the methods (3.4) and (3.5) to find a solution of the problem (3.10) with the function f(x) given by

$f(x) = \begin{cases} 0, & \text{if } x_2 \le 1, \\ x_2 - 1, & \text{if } x_2 > 1. \end{cases}$    (3.11)

For r > 0, we have

$(I + r\,\partial f)^{-1}(x) = \begin{cases} (x_1, x_2), & \text{if } x_2 \le 1, \\ (x_1, x_2/(1 + r)), & \text{if } x_2 > 1. \end{cases}$    (3.12)

Take a = 1/2 and u = (0; 2). Then the solution of the problem (3.10) that satisfies the variational inequality (3.2) with A = ∂f is $p^* = (0; 1)$. Using $t_k = 1/(k + 1)$, $r_i = 1/(i(i + 1))$ and $e^k = (0; 0)$, we obtain the following result tables:
a) The case of the initial point (2.0; 6.0):

Table 3.1. Calculated results when applying the method (3.4) (computation time: 2.745 seconds)

    k      x_1^{k+1}       x_2^{k+1}          k      x_1^{k+1}       x_2^{k+1}
    1      1.5000000000    3.3333333333    2000      0.0504468881    0.9999996953
   10      0.6727523804    0.9982716570    5000      0.0319113937    0.9999999357
   20      0.4895426850    0.9997219609    8000      0.0252293542    0.9999999775
   50      0.3152358030    0.9998464349   10000      0.0225661730    0.9999999803
  100      0.2242781046    0.9999270505   12000      0.0206002179    0.9999999883
  500      0.1007993740    0.9999960565   15000      0.0184255869    0.9999999946
 1000      0.0713204022    0.9999986543   20000      0.0159571926    0.9999999954

Table 3.2. Calculated results when applying the method (3.5) (computation time: 2.730 seconds)

    k      z_1^{k+1}       z_2^{k+1}          k      z_1^{k+1}       z_2^{k+1}
    1      1.5000000000    3.5000000000    2000      0.0504468881    1.0002497795
   10      0.6727523804    1.0367234670    5000      0.0319113937    1.0000999281
   20      0.4895426850    1.0225868918    8000      0.0252293542    1.0000624662
   50      0.3152358030    1.0092381210   10000      0.0225661730    1.0000499833
  100      0.2242781046    1.0048024836   12000      0.0206002179    1.0000416476
  500      0.1007993740    1.0009925330   15000      0.0184255869    1.0000333306
 1000      0.0713204022    1.0004981392   20000      0.0159571926    1.0000249980

b) The case of the initial point (10; 20):

Table 3.3. Calculated results when applying the method (3.4) (computation time: 2.699 seconds)

    k      x_1^{k+1}       x_2^{k+1}          k      x_1^{k+1}       x_2^{k+1}
    1      7.5000000000   10.3333333333    2000      0.2522344403    0.9999996174
   10      3.3637619019    0.9837468002    5000      0.1595569687    0.9999999232
   20      2.4477134250    0.9999068009    8000      0.1261467712    0.9999999725
   50      1.5761790149    0.9997667170   10000      0.1128308650    0.9999999996
  100      1.1213905229    0.9998979723   12000      0.1030010894    0.9999999861
  500      0.5039968702    0.9999947597   15000      0.0921279346    0.9999999932
 1000      0.3566020110    0.9999983305   20000      0.0797859628    0.9999999946

Table 3.4. Calculated results when applying the method (3.5) (computation time: 2.683 seconds)

    k      z_1^{k+1}       z_2^{k+1}          k      z_1^{k+1}       z_2^{k+1}
    1      7.5000000000   10.5000000000    2000      0.2522344403    1.0002497442
   10      3.3637619019    1.0343656695    5000      0.1595569687    1.0000999239
   20      2.4477134250    1.0224448516    8000      0.1261467712    1.0000624636
   50      1.5761790149    1.0091958993   10000      0.1128308650    1.0000499805
  100      1.1213905229    1.0047946859   12000      0.1030010894    1.0000416627
  500      0.5039968702    1.0009920670   15000      0.0921279346    1.0000333302
 1000      0.3566020110    1.0004979605   20000      0.0797859628    1.0000249977

Observing the above results, we see that the methods (3.4) and (3.5) give good convergence to the solution of the problem (3.10).
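For reference, a minimal sketch of ours (an illustration, not the program used for Tables 3.1-3.4) of how (3.4) and (3.5) can be implemented with the resolvent (3.12) and the parameters above is:

    import numpy as np

    def resolvent(x, r):
        # (I + r * subdiff(f))^{-1} from (3.12): identity if x_2 <= 1,
        # otherwise the second coordinate is divided by 1 + r.
        x1, x2 = x
        return np.array([x1, x2 if x2 <= 1.0 else x2 / (1.0 + r)])

    def J_product(x, k, r):
        # J^k = J_1 J_2 ... J_k: apply J_k first, then J_{k-1}, ..., J_1 last.
        for i in range(k, 0, -1):
            x = resolvent(x, r(i))
        return x

    u = np.array([0.0, 2.0])
    t = lambda k: 1.0 / (k + 1)
    r = lambda i: 1.0 / (i * (i + 1))         # summable resolvent parameters

    x = np.array([2.0, 6.0])                  # iterate of method (3.4)
    z = np.array([2.0, 6.0])                  # iterate of method (3.5)
    for k in range(1, 2001):
        x = J_product(t(k) * u + (1.0 - t(k)) * x, k, r)   # x^{k+1} = J^k(t_k u + (1 - t_k) x^k)
        z = t(k) * u + (1.0 - t(k)) * J_product(z, k, r)   # z^{k+1} = t_k u + (1 - t_k) J^k z^k
    print(x, z)   # by Theorem 3.2, both sequences converge to p* = (0, 1)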




GENERAL CONCLUSION

The thesis achieves the following results:
- Proposing and proving theorems on the strong convergence of a Newton-Kantorovich type iterative regularization method for finding a solution of nonlinear equations involving monotone mappings in Banach spaces.
- Proposing and proving theorems on the strong convergence of a Newton-Kantorovich type iterative regularization method for finding a solution of nonlinear equations involving accretive mappings in Banach spaces.
- Proposing and proving a theorem on the strong convergence of new modifications of the proximal point method for finding a zero of a maximal monotone mapping in Hilbert space, with a different approach to the condition on the parameter sequence of the resolvent: the convergence of previous modifications was given under the assumption that the parameter sequence of the resolvent is nonsummable, while the strong convergence of these new modifications is proved under the assumption that the parameter sequence of the resolvent is summable.
Recommendations for future research:
• Continue the study of the finite-dimensional approximation, with the regularization parameter sequence {α_n}, and of convergence-rate estimates for the Newton-Kantorovich iterative regularization methods given in Chapter 2 for solving equations involving monotone-type operators.
• Develop the Newton-Kantorovich iterative regularization methods proposed in Chapter 2 into regularization methods for solving systems of equations involving monotone-type operators.
• Estimate the convergence rate of the iterative methods for finding a zero of a maximal monotone mapping in Hilbert space given in Chapter 3.
• Propose and investigate the convergence of new iterative methods for finding zeros of monotone-type mappings in Hilbert spaces and Banach spaces.

