
Numer Algor (2011) 58:379–398
DOI 10.1007/s11075-011-9460-y
ORIGINAL PAPER

Parallel regularized Newton method for nonlinear
ill-posed equations
Pham Ky Anh · Cao Van Chung

Received: 7 September 2010 / Accepted: 21 March 2011 /
Published online: 9 April 2011
© Springer Science+Business Media, LLC 2011

Abstract We introduce a regularized Newton method coupled with the parallel splitting-up technique for solving nonlinear ill-posed equations with smooth
monotone operators. We analyze the convergence of the proposed method and
carry out numerical experiments for nonlinear integral equations.
Keywords Monotone operator · Regularized Newton method ·
Parallel splitting-up technique
Mathematics Subject Classifications (2010) 47J06 · 47J25 · 65J15 ·
65J20 · 65Y05

1 Introduction
The Kaczmarz method for a system of linear algebraic equations was invented more than 70 years ago. It has proved to be quite efficient in many
applications ranging from computer tomography to digital signal processing.
Recently, there has been a great interest in the so-called Kaczmarz methods for solving ill-posed problems, namely, Landweber–Kaczmarz method,

P. K. Anh (B) · C. V. Chung
Department of Mathematics, Vietnam National University, 334 Nguyen Trai,
Thanh Xuan, Hanoi, Vietnam





Newton–Kaczmarz method and the steepest-descent-Kaczmarz method. The
main idea of Kaczmarz methods is to split the initial ill-posed problem into
a finite number of subproblems and to perform a cyclic iteration over the
subproblems. These methods not only reduce the overall computational effort
but also impose fewer constraints on the nonlinearity of the operators. Besides,
experiments show that in some cases Kaczmarz methods perform better than
standard iterative methods. For an extensive literature on Kaczmarz methods,
we refer to [1–6] and the references therein.
Clearly, Kaczmarz methods are inherently sequential algorithms. When the
number of equations N is large, Kaczmarz-like methods are costly on a
single processor. In this paper we introduce a parallel regularized Newton
method, which may be regarded as a counterpart of the regularizing Newton–
Kaczmarz method. Our idea consists of splitting the given ill-posed problem
into subproblems and performing synchronously one iteration of the regularized
Newton method on each subproblem. The next approximation is determined as a
convex combination of the iterates obtained for the subproblems. The benefit of
our approach is clear: based on parallel computation, we can reduce the overall
computational effort under widely used conditions on smooth monotone
operators (cf. [7–10]).
Let us consider a nonlinear operator equation

\[ A(x) := F(x) - f = 0, \tag{1.1} \]

where F : H → H is a twice locally Fréchet differentiable monotone operator,
f ∈ H is given, and H is a real Hilbert space. We assume that the solution set
C ⊂ H of (1.1) is nonempty; hence C is a closed convex subset of H
(see, e.g., [7, 11]).
Several problems arising in quantum mechanics, in Wiener-type filtering
theory, or in physics where dissipation of energy occurs can be reduced to
equations involving monotone operators (see [7–10]).
If F is not strongly or uniformly monotone, problem (1.1) is in general
ill-posed. In that case a process known as Lavrentiev regularization, which
consists of solving the singularly perturbed operator equation

\[ A(x) + \alpha_n x = 0 \qquad (\alpha_n > 0,\ n = 0, 1, 2, \dots), \tag{1.2} \]

is recommended (see [7, 11–13]).
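To make the effect of (1.2) concrete, the following small sketch (our illustration, not taken from the paper; the rank-deficient matrix B is a hypothetical stand-in for a non-strongly-monotone operator) applies Lavrentiev regularization to a consistent but ill-posed linear system and checks that the regularized solutions approach the minimal-norm solution as the regularization parameter decreases:

```python
import numpy as np

# Hypothetical ill-posed linear model problem (not from the paper):
# B is symmetric positive semidefinite and rank-deficient, so B x = f
# has infinitely many solutions; pinv gives the minimal-norm one.
rng = np.random.default_rng(0)
C = rng.standard_normal((3, 5))
B = C.T @ C                          # 5x5 PSD matrix of rank 3
f = B @ rng.standard_normal(5)       # consistent right-hand side
x_dag = np.linalg.pinv(B) @ f        # minimal-norm solution x^dagger

errors = []
for alpha in [1e-1, 1e-2, 1e-3, 1e-4]:
    # Lavrentiev regularization: B x + alpha x = f is well-posed,
    # since B + alpha*I is positive definite.
    x_alpha = np.linalg.solve(B + alpha * np.eye(5), f)
    errors.append(np.linalg.norm(x_alpha - x_dag))
```

As α decreases, the error to the minimal-norm solution shrinks, which is the behaviour Lemma 2.1 below formalizes in the nonlinear setting.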
In [14] we proposed parallel iterative regularization methods for solving
(1.1), combining the regularization and parallel splitting-up techniques. In this
paper we use the parallel splitting-up technique (see [15]) and the Newton
method to solve (1.2). Throughout this paper we assume that for all x ∈ H,
\[ F(x) - f = \sum_{i=1}^{N} F_i(x) - \sum_{i=1}^{N} f_i, \]

and put A_i(x) := F_i(x) − f_i (i = 1, …, N). Further,
we suppose that all F_i : H → H are twice locally Fréchet differentiable and



monotone operators. Then, knowing the n-th approximation z_n, we determine
the next approximation z_{n+1} as follows:

\[ \Big(A_i'(z_n) + \big(\tfrac{\alpha_n}{N} + \gamma_n\big) I\Big)(z_n^i - z_n) = -\Big(A_i(z_n) + \frac{\alpha_n}{N}\, z_n\Big), \qquad i = 1, 2, \dots, N, \tag{1.3} \]

\[ z_{n+1} = \frac{1}{N} \sum_{i=1}^{N} z_n^i, \qquad n = 0, 1, 2, \dots, \tag{1.4} \]

where α_n and γ_n are positive numbers. Due to the nonnegativity of the
derivative A_i'(z_n) (see [7]), all the linear regularized equations (1.3) are
well-posed. Moreover, being independent from each other, they can be solved
stably and synchronously by parallel processors.
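In a finite-dimensional discretization, one sweep of (1.3)–(1.4) is N independent linear solves followed by averaging. The sketch below is a minimal illustration of this structure, not the authors' implementation; the affine monotone operators A_i(x) = B_i(x − x_true) and the parameter schedules are hypothetical choices made for the demo:

```python
import numpy as np

def parallel_newton_step(z, ops, jacs, alpha, gamma):
    """One iteration of (1.3)-(1.4): for each subproblem i, solve
        (A_i'(z) + (alpha/N + gamma) I)(z_i - z) = -(A_i(z) + (alpha/N) z),
    then average the subproblem iterates z_i."""
    N, d = len(ops), z.size
    z_new = np.zeros_like(z)
    for A, J in zip(ops, jacs):
        M = J(z) + (alpha / N + gamma) * np.eye(d)
        h = np.linalg.solve(M, -(A(z) + (alpha / N) * z))
        z_new += (z + h) / N
    return z_new

# Hypothetical test problem with a known common solution x_true.
rng = np.random.default_rng(1)
d, N = 4, 3
x_true = rng.standard_normal(d)
Bs = []
for _ in range(N):
    C = rng.standard_normal((d, d))
    Bs.append(C.T @ C + np.eye(d))            # SPD -> monotone operator
ops = [lambda x, B=B: B @ (x - x_true) for B in Bs]
jacs = [lambda x, B=B: B for B in Bs]

z = np.zeros(d)                               # z_0 = 0
for n in range(300):
    alpha = 0.1 * (1 + n) ** -0.125           # alpha_n -> 0
    gamma = 2.0 * (1 + n) ** 0.5              # gamma_n -> +infinity
    z = parallel_newton_step(z, ops, jacs, alpha, gamma)
```

Because the N linear systems are mutually independent, the loop over subproblems can be distributed across processors without changing the iterates.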
The article is outlined as follows. In Section 2 we provide a convergence
analysis of the proposed method in both exact data and noisy data cases. In the
final Section 3 we verify all the assumptions required and perform numerical
experiments for a model problem.

2 Convergence analysis
We begin with some notions and auxiliary results.
Lemma 2.1 If {α_n} is a sequence of positive numbers converging to zero, then
for each n ∈ ℕ the regularized equation (1.2) has a unique solution x*_n, and
x*_n → x† := argmin_{x ∈ C} ‖x‖ as n → +∞. Moreover, the following estimates hold:

i.  ‖x*_n‖ ≤ ‖x†‖;
ii. ‖x*_{n+1} − x*_n‖ ≤ (|α_{n+1} − α_n|/α_n) ‖x†‖ for all n.

The proof of this lemma can be found in [7, 11].
We recall that an operator A : H → H is called c⁻¹-inverse-strongly
monotone if

\[ \langle A(x) - A(y),\, x - y \rangle \ \ge\ \frac{1}{c}\, \| A(x) - A(y) \|^2 \qquad \forall x, y \in H, \]

where c is some positive constant (see, e.g., [16]).
Obviously, every inverse-strongly monotone operator is monotone, but not
necessarily strongly monotone.
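A simple concrete instance: for a linear map A(x) = Bx with B symmetric positive semidefinite, the definition holds with c = λ_max(B), since ⟨Bh, h⟩ ≥ ‖Bh‖²/λ_max(B) in the eigenbasis of B. The snippet below (an illustration with a hypothetical random matrix, not from the paper) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
C = rng.standard_normal((4, 6))
B = C.T @ C                                  # symmetric positive semidefinite
c = np.linalg.eigvalsh(B).max()              # c = largest eigenvalue of B

# Check <A(x)-A(y), x-y> >= (1/c) ||A(x)-A(y)||^2 for A(x) = B x.
for _ in range(100):
    x, y = rng.standard_normal(6), rng.standard_normal(6)
    d = x - y
    lhs = (B @ d) @ d
    rhs = np.linalg.norm(B @ d) ** 2 / c
    assert lhs >= rhs - 1e-9                 # holds up to round-off
```

Note that such a B need not be strongly monotone: it may be singular, in which case ⟨Bh, h⟩ vanishes on its kernel.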
The following lemma shows that a system of inverse-strongly monotone
operator equations may be reduced to a single equation involving the sum of
these operators.



Lemma 2.2 Suppose A_i, i = 1, 2, …, N, are c_i⁻¹-inverse-strongly monotone
operators. If the system of equations

\[ A_i(x) = 0, \qquad i = 1, 2, \dots, N, \tag{2.1} \]

is consistent, then it is equivalent to the operator equation

\[ A(x) := \sum_{i=1}^{N} A_i(x) = 0. \tag{2.2} \]

Proof Obviously, any common solution of (2.1) is a solution of (2.2). Conversely,
let x* be a solution of (2.2) and y be a common solution of (2.1). Then

\[
0 = \Big\langle \sum_{i=1}^{N} A_i(x^*),\, x^* - y \Big\rangle
= \sum_{i=1}^{N} \langle A_i(x^*) - A_i(y),\, x^* - y \rangle
\ \ge\ \sum_{i=1}^{N} c_i^{-1} \| A_i(x^*) - A_i(y) \|^2 \ \ge\ 0.
\]

Thus A_i(x*) = A_i(x*) − A_i(y) = 0, hence x* is a solution of system (2.1).
In what follows, we denote by B[a, r] the closed ball centered at a ∈ H with
radius r. The following result on the convergence rate of the Lavrentiev
regularization method is needed for our further study.
Lemma 2.3 Let x† ≠ 0 be the minimal-norm solution of (1.1) and assume A_i(x),
i = 1, …, N, are twice continuously Fréchet differentiable operators in B[0, r]
with radius r > ‖x†‖. Moreover, let ‖A_i''(x)‖ ≤ L_i, i = 1, 2, …, N, for all
x ∈ B[0, r]. Then the following statements hold:

(a) If A_i, i = 1, 2, …, N, are monotone and there exists an element u ∈ H such that

\[ x^\dagger = A'(x^\dagger) u = \sum_{i=1}^{N} A_i'(x^\dagger) u \quad \text{and} \quad \sum_{i=1}^{N} L_i \|u\| < 2, \tag{2.3} \]

then the regularized solution x*_n of (1.2) converges to x† at the rate

\[ \| x_n^* - x^\dagger \| = O(\alpha_n). \tag{2.4} \]

(b) If A_i, i = 1, 2, …, N, are c_i⁻¹-inverse-strongly monotone, system (2.1) is
consistent, and there exist w_i, i = 1, 2, …, N, such that

\[ x^\dagger = \sum_{i=1}^{N} A_i'^{\,*}(x^\dagger)\, w_i \quad \text{and} \quad \sum_{i=1}^{N} L_i \|w_i\| < 1, \tag{2.5} \]

then the following convergence rate holds:

\[ \| x_n^* - x^\dagger \| = O(\alpha_n^{1/4}). \tag{2.6} \]


Proof The first estimate of Lemma 2.3 can be found in [11, 13]. For the second
estimate we can proceed as in [17]. Indeed, Lemma 2.2 ensures that x† is a
common solution of equations (2.1). Further, using ‖x*_n‖ ≤ ‖x†‖ and A_i(x†) = 0,
we have

\[
\frac{1}{2}\|x_n^* - x^\dagger\|^2
= \frac{1}{2}\big\{\|x_n^*\|^2 - \|x^\dagger\|^2\big\} - \langle x^\dagger,\, x_n^* - x^\dagger\rangle
\le -\langle x^\dagger,\, x_n^* - x^\dagger\rangle
= -\sum_{i=1}^{N}\langle A_i'^{\,*}(x^\dagger) w_i,\, x_n^* - x^\dagger\rangle
\]
\[
= \sum_{i=1}^{N}\langle A_i(x_n^*) - A_i(x^\dagger) - A_i'(x^\dagger)(x_n^* - x^\dagger),\, w_i\rangle
- \sum_{i=1}^{N}\langle A_i(x_n^*),\, w_i\rangle
\]
\[
= \sum_{i=1}^{N}\int_0^1\!\!\int_0^1 t\,\big\langle A_i''\big(x^\dagger + st(x_n^* - x^\dagger)\big)(x_n^* - x^\dagger)^2,\, w_i\big\rangle\, ds\, dt
- \sum_{i=1}^{N}\langle A_i(x_n^*),\, w_i\rangle.
\]

Thus, we obtain the estimate

\[
\frac{1}{2}\|x_n^* - x^\dagger\|^2
\ \le\ \frac{1}{2}\Big(\sum_{i=1}^{N} L_i\|w_i\|\Big)\|x_n^* - x^\dagger\|^2
+ \sum_{i=1}^{N}\|A_i(x_n^*)\|\,\|w_i\|. \tag{2.7}
\]

Using the inverse-strong monotonicity of A_i we get

\[
\|A_i(x_n^*)\|^2 = \|A_i(x_n^*) - A_i(x^\dagger)\|^2
\le c_i\langle A_i(x_n^*) - A_i(x^\dagger),\, x_n^* - x^\dagger\rangle
\]
\[
= c_i\Big\{\sum_{j=1}^{N}\langle A_j(x_n^*) - A_j(x^\dagger),\, x_n^* - x^\dagger\rangle
- \sum_{j\ne i}\langle A_j(x_n^*) - A_j(x^\dagger),\, x_n^* - x^\dagger\rangle\Big\}
\]
\[
\le -c_i\alpha_n\langle x_n^*,\, x_n^* - x^\dagger\rangle \le 2\alpha_n c_i\|x^\dagger\|^2.
\]

From the last inequality we find ‖A_i(x*_n)‖ ≤ √(2α_n c_i) ‖x†‖. Taking this
estimate into account, from (2.7) we get

\[
\|x_n^* - x^\dagger\|^2 \ \le\ 2\Big(1 - \sum_{i=1}^{N} L_i\|w_i\|\Big)^{-1}\sqrt{2\alpha_n}\,\|x^\dagger\|\sum_{i=1}^{N}\sqrt{c_i}\,\|w_i\|,
\]

which implies (2.6).

Before stating and proving the convergence theorems we observe that
A_i(x*_n) + (α_n/N) x*_n → A_i(x†) as n → +∞; hence we can assume
‖A_i(x*_n) + (α_n/N) x*_n‖ ≤ C_A for all n = 0, 1, … and i = 1, 2, …, N.
Furthermore, we suppose that the operators A_i, i = 1, 2, …, N, are twice
continuously Fréchet differentiable and ‖A_i''(x)‖ ≤ φ for all x ∈ B[0, r],
where r = M√D_*, and M > 1, D_* ≥ max{C_A², ‖x†‖²} are appropriately chosen
constants. Now we have the following convergence result.



Theorem 2.1 Let {α_n}, {γ_n} be two sequences of positive numbers, α_n ↘ 0,
γ_n ↗ +∞ as n → +∞, such that the following conditions hold for all n ∈ ℕ and
for some constants c_1 ∈ (0, 1), c_2 > 0:

\[
\gamma_n \alpha_n^4 \ge \gamma_0 \alpha_0^4; \qquad
\frac{\gamma_n^3 (\alpha_n - \alpha_{n+1})^2}{\alpha_n^5} \le \frac{c_1 \gamma_0^3}{\alpha_0^3}; \qquad
\gamma_n (\gamma_{n+1} - \gamma_n) \le c_2 \gamma_0^2. \tag{2.8}
\]

Moreover, the initial values α_0, γ_0 and M satisfy the relations

\[
D_* \le \frac{l \alpha_0}{\sqrt{\gamma_0}} < (M-1)^2 D_*; \qquad N \le 4 \gamma_0 \alpha_0^2; \qquad l > 0, \tag{2.9}
\]

where

\[
l := \left( \frac{2\gamma_0 (1 - c_1)}{(2 + c_2)(2N\gamma_0 + \alpha_0)} \cdot \frac{2\gamma_0 \alpha_0^2}{4Nc_1\gamma_0^3 + (4c_1 + 2\sqrt{c_1} + c_2)\,\alpha_0\gamma_0^2 + 2\alpha_0} \right)^2 \frac{2}{\varphi^2}.
\]

Then, starting from z_0 = 0, the approximations z_n of the parallel regularized
Newton method (1.3)–(1.4) converge to the minimal-norm solution x† of (1.1).
For the sake of clarity, we divide the proof of Theorem 2.1 into several steps.

Firstly, we establish a recurrence estimate for the distance between zn defined
by (1.3), (1.4) and the regularized solution x∗n of (1.2).
Lemma 2.4 The distances ‖e_n‖ := ‖z_n − x*_n‖ satisfy the following inequality:

\[
\|e_{n+1}\|^2 \le \frac{1+\varepsilon_n}{N(1+2\varepsilon_n)}
\left[ \frac{1}{2\sqrt{\gamma_n}} \Big( \sum_{i=1}^{N} \|A_i''(\xi_n^i)\|^2 \|e_n\|^4 + N \|e_n\|^2 \Big) + \frac{N C_A^2}{\gamma_n^2} \right]
+ \frac{N(1+2\varepsilon_n)(\alpha_{n+1}-\alpha_n)^2}{\varepsilon_n \alpha_n^2} \|x^\dagger\|^2, \tag{2.10}
\]

where ε_n := α_n/(2Nγ_n).

Proof Inserting x = z_n, h = −e_n := x*_n − z_n, y = e_n^i := z_n^i − x*_n in the
Taylor formula

\[ \langle A(x+h),\, y \rangle = \Big\langle A(x) + A'(x)h + \frac{A''(x + \theta h)\, h^2}{2!},\, y \Big\rangle, \]

where x, y, h ∈ H and θ := θ(y) ∈ (0, 1), we get

\[
\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n^i - x_n^* \Big\rangle
= \langle A_i(z_n),\, z_n^i - x_n^* \rangle + \langle A_i'(z_n)(x_n^* - z_n),\, z_n^i - x_n^* \rangle
\]
\[
+ \frac{1}{2}\langle A_i''(\xi_n^i)(x_n^* - z_n)^2,\, z_n^i - x_n^* \rangle
+ \frac{\alpha_n}{N}\langle x_n^*,\, z_n^i - x_n^* \rangle. \tag{2.11}
\]



Here ξ_n^i = θ_n^i(x*_n − z_n) + z_n, and θ_n^i ∈ (0, 1) depends on A_i, z_n^i, x*_n.
On the other hand, from (1.2) it follows that

\[
\sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n^i - x_n^* \Big\rangle
= \sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n^i - z_n \Big\rangle
+ \sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n - x_n^* \Big\rangle
\]
\[
= \sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n^i - z_n \Big\rangle, \tag{2.12}
\]

since Σ_{i=1}^N (A_i(x*_n) + (α_n/N)x*_n) = A(x*_n) + α_n x*_n = 0.

Combining (2.11) with (2.12) we find

\[
\sum_{i=1}^{N}\langle A_i(z_n),\, z_n^i - x_n^* \rangle
+ \sum_{i=1}^{N}\langle A_i'(z_n)(x_n^* - z_n),\, z_n^i - x_n^* \rangle
+ \frac{1}{2}\sum_{i=1}^{N}\langle A_i''(\xi_n^i)(x_n^* - z_n)^2,\, z_n^i - x_n^* \rangle
\]
\[
+ \frac{\alpha_n}{N}\sum_{i=1}^{N}\langle x_n^*,\, z_n^i - x_n^* \rangle
- \sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n^i - z_n \Big\rangle = 0. \tag{2.13}
\]

Multiplying both sides of (1.3) by z_n^i − x*_n and summing over i from 1 to N,
we have

\[
\sum_{i=1}^{N}\langle A_i(z_n),\, z_n^i - x_n^* \rangle
+ \sum_{i=1}^{N}\langle A_i'(z_n)(z_n^i - z_n),\, z_n^i - x_n^* \rangle
\]
\[
+ \Big(\frac{\alpha_n}{N} + \gamma_n\Big)\sum_{i=1}^{N}\langle z_n^i - z_n,\, z_n^i - x_n^* \rangle
+ \frac{\alpha_n}{N}\sum_{i=1}^{N}\langle z_n,\, z_n^i - x_n^* \rangle = 0. \tag{2.14}
\]

Subtracting (2.13) from (2.14), after a short computation we get

\[
\frac{1}{\gamma_n}\sum_{i=1}^{N}\langle A_i'(z_n)(z_n^i - x_n^*),\, z_n^i - x_n^* \rangle
+ \sum_{i=1}^{N}\langle z_n^i - z_n,\, z_n^i - x_n^* \rangle
+ \frac{\alpha_n}{N\gamma_n}\sum_{i=1}^{N}\big[\langle z_n^i - z_n,\, z_n^i - x_n^* \rangle + \langle z_n - x_n^*,\, z_n^i - x_n^* \rangle\big]
\]
\[
= \frac{1}{2\gamma_n}\sum_{i=1}^{N}\langle A_i''(\xi_n^i)(x_n^* - z_n)^2,\, z_n^i - x_n^* \rangle
- \frac{1}{\gamma_n}\sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n^i - z_n \Big\rangle. \tag{2.15}
\]



Since A_i is monotone, A_i'(x) is a linear positive operator for all x. Therefore
the first term on the left-hand side of (2.15) is nonnegative, and from (2.15)
we get

\[
\sum_{i=1}^{N}\langle z_n^i - z_n,\, z_n^i - x_n^* \rangle
+ \frac{\alpha_n}{N\gamma_n}\sum_{i=1}^{N}\|z_n^i - x_n^*\|^2
\le \frac{1}{2\gamma_n}\sum_{i=1}^{N}\langle A_i''(\xi_n^i)(x_n^* - z_n)^2,\, z_n^i - x_n^* \rangle
- \frac{1}{\gamma_n}\sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N} x_n^*,\, z_n^i - z_n \Big\rangle.
\]

Using the above notations, we can rewrite the last inequality as

\[
\sum_{i=1}^{N}\langle e_n^i - e_n,\, e_n^i\rangle + \frac{\alpha_n}{N\gamma_n}\sum_{i=1}^{N}\|e_n^i\|^2
\le \frac{1}{2\gamma_n}\sum_{i=1}^{N}\langle A_i''(\xi_n^i)(-e_n)^2,\, e_n^i\rangle
- \frac{1}{\gamma_n}\sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N}x_n^*,\, e_n^i - e_n\Big\rangle.
\]

Since 2⟨e_n^i − e_n, e_n^i⟩ = ‖e_n^i‖² − ‖e_n‖² + ‖e_n^i − e_n‖², the last
inequality is equivalent to

\[
\sum_{i=1}^{N}\|e_n^i\|^2 - N\|e_n\|^2 + \sum_{i=1}^{N}\|e_n^i - e_n\|^2 + \frac{2\alpha_n}{N\gamma_n}\sum_{i=1}^{N}\|e_n^i\|^2
\le \frac{1}{\gamma_n}\sum_{i=1}^{N}\langle A_i''(\xi_n^i)(-e_n)^2,\, e_n^i\rangle
- \frac{2}{\gamma_n}\sum_{i=1}^{N}\Big\langle A_i(x_n^*) + \frac{\alpha_n}{N}x_n^*,\, e_n^i - e_n\Big\rangle
\]
\[
\le \frac{1}{\gamma_n}\sum_{i=1}^{N}\langle A_i''(\xi_n^i)(-e_n)^2,\, e_n^i\rangle
+ \frac{1}{\gamma_n^2}\sum_{i=1}^{N}\Big\|A_i(x_n^*) + \frac{\alpha_n}{N}x_n^*\Big\|^2
+ \sum_{i=1}^{N}\|e_n^i - e_n\|^2
\le \frac{1}{\gamma_n}\sum_{i=1}^{N}\langle A_i''(\xi_n^i)(-e_n)^2,\, e_n^i\rangle
+ \frac{NC_A^2}{\gamma_n^2} + \sum_{i=1}^{N}\|e_n^i - e_n\|^2.
\]

Here we used Young's inequality; applying it once more,

\[
\frac{1}{\gamma_n}\langle A_i''(\xi_n^i)(-e_n)^2,\, e_n^i\rangle
\le \frac{1}{2\sqrt{\gamma_n}}\|A_i''(\xi_n^i)\|^2\|e_n\|^4 + \frac{1}{2\sqrt{\gamma_n^3}}\|e_n^i\|^2.
\]

Thus,

\[
\Big(1 + \frac{2\alpha_n}{N\gamma_n} - \frac{1}{2\sqrt{\gamma_n^3}}\Big)\sum_{i=1}^{N}\|e_n^i\|^2
\le \frac{1}{2\sqrt{\gamma_n}}\Big(\sum_{i=1}^{N}\|A_i''(\xi_n^i)\|^2\|e_n\|^4 + N\|e_n\|^2\Big)
+ \frac{NC_A^2}{\gamma_n^2}.
\]

Since, by (2.8) and (2.9), 2α_n√γ_n ≥ N for all n, we have

\[
\frac{\alpha_n}{N\gamma_n} - \frac{1}{2\sqrt{\gamma_n^3}} = \frac{2\alpha_n\sqrt{\gamma_n} - N}{2N\gamma_n\sqrt{\gamma_n}} \ge 0.
\]

Hence, from the last inequality, we find

\[
\Big(1 + \frac{\alpha_n}{N\gamma_n}\Big)\sum_{i=1}^{N}\|e_n^i\|^2
\le \frac{1}{2\sqrt{\gamma_n}}\Big(\sum_{i=1}^{N}\|A_i''(\xi_n^i)\|^2\|e_n\|^4 + N\|e_n\|^2\Big)
+ \frac{NC_A^2}{\gamma_n^2}. \tag{2.16}
\]



From (1.4) we get ‖e_{n+1}‖ = ‖z_{n+1} − x*_{n+1}‖ ≤ ‖z_{n+1} − x*_n‖ + ‖x*_{n+1} − x*_n‖.
Using the estimate ‖x*_{n+1} − x*_n‖ ≤ (|α_{n+1} − α_n|/α_n)‖x†‖ of Lemma 2.1,
we have

\[
\|e_{n+1}\| \le \frac{1}{N}\sum_{i=1}^{N}\|e_n^i\| + \frac{|\alpha_{n+1}-\alpha_n|}{\alpha_n}\|x^\dagger\|
\le \frac{1}{\sqrt{N}}\Big(\sum_{i=1}^{N}\|e_n^i\|^2\Big)^{1/2} + \frac{|\alpha_{n+1}-\alpha_n|}{\alpha_n}\|x^\dagger\|.
\]

Applying the inequality (a + b)² ≤ (1 + ε_n)(a² + b²/ε_n) to the last relation
with ε_n := α_n/(2Nγ_n) > 0 and using (2.16), we come to the desired estimate (2.10).

Now, to complete the proof of Theorem 2.1, we use estimate (2.10) to show that
‖e_n‖ → 0; hence ‖z_n − x†‖ ≤ ‖e_n‖ + ‖x*_n − x†‖ → 0 as n → ∞.
Proof of Theorem 2.1 Setting v_k := ‖e_k‖²/λ_k and λ_k := α_k/√γ_k, we can
rewrite (2.10) as

\[
v_{n+1} \le \frac{1+\varepsilon_n}{N(1+2\varepsilon_n)}
\left[ \frac{\lambda_n^2 v_n^2}{2\lambda_{n+1}\sqrt{\gamma_n}}\sum_{i=1}^{N}\|A_i''(\xi_n^i)\|^2
+ \frac{N\lambda_n}{\lambda_{n+1}}\, v_n + \frac{N C_A^2}{\lambda_{n+1}\gamma_n^2} \right]
+ \frac{N(1+2\varepsilon_n)(\alpha_{n+1}-\alpha_n)^2}{\varepsilon_n \alpha_n^2 \lambda_{n+1}}\|x^\dagger\|^2. \tag{2.17}
\]

By our assumptions, we have v_0 ≤ l and ‖ξ_0^i‖ ≤ ‖x†‖ ≤ M√D_*. Assume by
induction that v_n ≤ l and ‖ξ_n^i‖ ≤ M√D_* for some n ≥ 0; we will show that
v_{n+1} ≤ l and ‖ξ_{n+1}^i‖ < M√D_*.

From (2.17) and the estimates ‖A_i''(ξ_n^i)‖ ≤ φ, v_n ≤ l, we get

\[
v_{n+1} - l \le \frac{1+\varepsilon_n}{1+2\varepsilon_n}
\left[\frac{\lambda_n^2\varphi^2}{2\lambda_{n+1}\sqrt{\gamma_n}}\,l^2
+ \Big(\frac{\lambda_n}{\lambda_{n+1}} - \frac{1+2\varepsilon_n}{1+\varepsilon_n}\Big)l
+ \frac{C_A^2}{\lambda_{n+1}\gamma_n^2}
+ \frac{(1+2\varepsilon_n)(\alpha_{n+1}-\alpha_n)^2}{\varepsilon_n\alpha_n^2\lambda_{n+1}}\|x^\dagger\|^2\right].
\]

Taking into account the relation max{C_A², ‖x†‖²} ≤ D_* ≤ lα_0/√γ_0, from the
last inequality we find

\[
v_{n+1} - l \le \frac{1+\varepsilon_n}{(1+2\varepsilon_n)\lambda_{n+1}}
\left[\frac{\lambda_n^2\varphi^2}{2\sqrt{\gamma_n}}\,l^2
- \Big(\frac{\varepsilon_n\lambda_{n+1}}{1+\varepsilon_n} - (\lambda_n - \lambda_{n+1})\Big)l
+ \frac{l\alpha_0}{\sqrt{\gamma_0}\,\gamma_n^2}
+ \frac{l\alpha_0(1+2\varepsilon_n)(\alpha_{n+1}-\alpha_n)^2}{\sqrt{\gamma_0}\,\varepsilon_n\alpha_n^2}\right].
\]





We thus obtain

\[
v_{n+1} - l \le \frac{(1+\varepsilon_n)\,\alpha_n^2\, l}{(1+2\varepsilon_n)\,\lambda_{n+1}\,\gamma_n^{3/2}}
\left[\frac{\varphi\sqrt{2l}}{2}
\left(\frac{\sqrt{\gamma_n}(\alpha_n\gamma_{n+1}-\alpha_{n+1}\gamma_n)}{\alpha_n^2\gamma_{n+1}}
+ \frac{\alpha_0}{\sqrt{\gamma_0}}\cdot\frac{1}{\sqrt{\gamma_n}\,\alpha_n^2}
+ \frac{2\alpha_0(N\gamma_n+\alpha_n)\gamma_n^{3/2}(\alpha_n-\alpha_{n+1})^2}{\sqrt{\gamma_0}\,\alpha_n^5}\right)
- \frac{\alpha_{n+1}\gamma_n}{\alpha_n\gamma_{n+1}}\cdot\frac{1}{2N+\alpha_n/\gamma_n}\right]. \tag{2.18}
\]

Besides, a straightforward calculation yields

\[
\frac{\sqrt{\gamma_n}\,(\alpha_n\gamma_{n+1} - \alpha_{n+1}\gamma_n)}{\alpha_n^2\,\gamma_{n+1}}
\le \frac{\sqrt{\gamma_n}\,(\alpha_n - \alpha_{n+1})}{\alpha_n^2} + \frac{\gamma_{n+1} - \gamma_n}{2\sqrt{\gamma_n}\,\alpha_n}.
\]

From the assumptions γ_n³(α_n − α_{n+1})²/α_n⁵ ≤ c_1γ_0³/α_0³ and
γ_n(γ_{n+1} − γ_n) ≤ c_2γ_0², it follows that

\[
\frac{\alpha_{n+1}}{\alpha_n} \ge 1 - \sqrt{\frac{c_1\,\alpha_n^3\,\gamma_0^3}{\alpha_0^3\,\gamma_n^3}}
\qquad\text{and}\qquad
\frac{\gamma_{n+1}}{\gamma_n} \le 1 + \frac{c_2\,\gamma_0^2}{\gamma_n^2}.
\]

Hence, by (2.18) and the last relations, estimating each term of the bracket in
(2.18) by means of (2.8), we get

\[
v_{n+1} - l \le \frac{(1+\varepsilon_n)\,\alpha_n^2\, l}{(1+2\varepsilon_n)\,\lambda_{n+1}\,\gamma_n^{3/2}}
\left[\frac{\varphi\sqrt{2l}}{2}
\left(\frac{c_1\gamma_0^3\,\alpha_n}{\alpha_0^3\,\gamma_n}
+ \frac{c_2\gamma_0^2}{2\gamma_n\,\alpha_n}
+ \frac{\alpha_0}{\sqrt{\gamma_0}\,\sqrt{\gamma_n}\,\alpha_n^2}
+ \frac{2Nc_1\gamma_0^5}{\alpha_0^2\sqrt{\gamma_0}\,\gamma_n^2}
+ \frac{2c_1\alpha_n\gamma_0^5}{\alpha_0^3\,\gamma_n^3}\right)
- \Big(2N + \frac{\alpha_n}{\gamma_n}\Big)^{-1}\right]
\left(1 + \frac{c_2\gamma_0^2}{2\gamma_n^2}\right)^{-1}
\left(1 - \frac{c_1\alpha_n^3\gamma_0^3}{\alpha_0^3\gamma_n^3}\right)^{-1}.
\]


Since α_n ↘ 0, γ_n ↗ +∞ as n → +∞ and γ_nα_n⁴ ≥ γ_0α_0⁴ for all n, we have

\[
v_{n+1} - l \le \frac{(1+\varepsilon_n)\,\alpha_n^2\, l}{(1+2\varepsilon_n)\,\lambda_{n+1}\,\gamma_n^{3/2}}
\left[\frac{\varphi\sqrt{2l}}{2}\cdot\frac{4Nc_1\gamma_0^3 + (4c_1 + 2\sqrt{c_1} + c_2)\,\alpha_0\gamma_0^2 + 2\alpha_0}{2\gamma_0\,\alpha_0^2}
- \frac{2\gamma_0(1-c_1)}{(2+c_2)(2N\gamma_0 + \alpha_0)}\right] = 0. \tag{2.19}
\]

On the other hand, we have ξ_n^i = θ_n^i(x*_n − z_n) + z_n = x*_n + (1 − θ_n^i)(z_n − x*_n).
From the assumption lα_n/√γ_n ≤ (M − 1)²D_* we find

\[
\|\xi_{n+1}^i\| = \|x_{n+1}^* + (1-\theta_{n+1}^i)(z_{n+1} - x_{n+1}^*)\|
\le \|x^\dagger\| + (1-\theta_{n+1}^i)\|e_{n+1}\|
\le \|x^\dagger\| + \frac{\sqrt{\alpha_{n+1}}}{\sqrt[4]{\gamma_{n+1}}}\sqrt{l}
\le \sqrt{D_*} + (M-1)\sqrt{D_*} = M\sqrt{D_*}.
\]

Thus, by induction, we have v_n ≤ l and ‖ξ_n^i‖ ≤ M√D_* for all n. Therefore
‖z_n − x*_n‖² ≤ l α_n/√γ_n → 0 as n → ∞. Finally, Lemma 2.1 ensures that
x*_n → x†, hence z_n → x† as n → ∞.

Remark 2.1 The sequences α_n := α_0(1 + n)^{-p}, 0 < p ≤ 1/8, and
γ_n := γ_0(1 + n)^{1/2} satisfy all the conditions (2.8) for any constants
c_1 ≥ 1/64 and c_2 ≥ 1/2. If we choose c_1 = 1/64 and c_2 = 1/2,
γ_0 := max{5, (12φ²D_*)²}, α_0 := 5Nγ_0 and M ≥ 3, then condition (2.9) holds.
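The conditions (2.8) for these sequences are easy to verify numerically. The script below (an illustrative check; the values α₀ = 1 and γ₀ = 1 are arbitrary choices for the test, not values used in the paper) confirms the three inequalities of (2.8) for c₁ = 1/64, c₂ = 1/2 and p = 1/8:

```python
import numpy as np

alpha0, gamma0, p = 1.0, 1.0, 1.0 / 8.0       # hypothetical values for the check
c1, c2 = 1.0 / 64.0, 1.0 / 2.0

n = np.arange(0, 10_000)
alpha = alpha0 * (1.0 + n) ** -p              # alpha_n from Remark 2.1
gamma = gamma0 * (1.0 + n) ** 0.5             # gamma_n from Remark 2.1

# gamma_n * alpha_n^4 >= gamma_0 * alpha_0^4  (equality when p = 1/8)
assert np.all(gamma * alpha**4 >= gamma0 * alpha0**4 * (1 - 1e-12))

# gamma_n^3 (alpha_n - alpha_{n+1})^2 / alpha_n^5 <= c1 * gamma_0^3 / alpha_0^3
lhs = gamma[:-1] ** 3 * (alpha[:-1] - alpha[1:]) ** 2 / alpha[:-1] ** 5
assert np.all(lhs <= c1 * gamma0**3 / alpha0**3)

# gamma_n (gamma_{n+1} - gamma_n) <= c2 * gamma_0^2
assert np.all(gamma[:-1] * (gamma[1:] - gamma[:-1]) <= c2 * gamma0**2)
```

The second inequality is in fact strict and decays like (1 + n)^{-1/8}, which is why any c₁ ≥ 1/64 works.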

Remark 2.2 It can be shown that the above theorem remains valid if the
boundedness of the second derivative F''(x) is replaced with the Lipschitz
continuity of F'(x) in B[0, r]. On the other hand, as observed in [5], even the
Lipschitz continuity of F'(x) does not imply the "tangential cone condition"
frequently used in the literature, so our hypothesis is not as strong as it
may seem.

Now we assume that instead of the exact data F_i and f_i we only have noisy
data {F_{n,i}} and {f_{n,i}}. Suppose that the operators F_{n,i} : H → H possess
the same properties as F_i, i.e., they are twice continuously Fréchet
differentiable monotone operators. Moreover, let

\[
\|f_{n,i} - f_i\| \le \delta_n \quad (\delta_n > 0,\ i = 1, 2, \dots, N,\ n \ge 0),
\]
\[
\|F_{n,i}(x) - F_i(x)\| \le h_n\, g(\|x\|) \quad (h_n > 0,\ i = 1, 2, \dots, N,\ n \ge 0), \tag{2.20}
\]

where g(t) is a continuous, positive and nondecreasing function on (0, +∞).
Let F_n(x) = Σ_{i=1}^N F_{n,i}(x) and f_n = Σ_{i=1}^N f_{n,i}, and denote
A_{n,i}(x) := F_{n,i}(x) − f_{n,i} (i = 1, …, N). Since
‖A_{n,i}(x*_n) − A_i(x†)‖ ≤ h_n g(‖x†‖) + δ_n + ‖A_i(x*_n) − A_i(x†)‖, in the
case h_n, δ_n, α_n → 0 as n → ∞, we can assume that
‖A_{n,i}(x*_n) + (α_n/N) x*_n‖ ≤ C_A.



We suppose, as before, that there exists M > 1 such that for all
x ∈ B[0, M√D_*] the second-order derivatives satisfy ‖A_{n,i}''(x)‖ ≤ φ, where
the constant D_* satisfies D_* ≥ max{C_A², ‖x†‖², (δ_0 + h_0 g(‖x†‖))²}.

Now we consider the parallel regularized Newton method in the noisy data case:

\[
\Big(A_{n,i}'(z_n) + \big(\tfrac{\alpha_n}{N} + \gamma_n\big) I\Big)(z_n^i - z_n)
= -\Big(A_{n,i}(z_n) + \frac{\alpha_n}{N}\, z_n\Big), \qquad i = 1, \dots, N, \tag{2.21}
\]

\[
z_{n+1} = \frac{1}{N}\sum_{i=1}^{N} z_n^i, \qquad n = 0, 1, \dots \tag{2.22}
\]

Theorem 2.2 Let {α_n}, {γ_n} be two sequences of positive numbers with α_n ↘ 0,
γ_n ↗ +∞ as n → +∞. Assume that conditions (2.8) are satisfied and that the
following additional condition holds:

\[
\frac{h_n\, g(\|x^\dagger\|) + \delta_n}{h_0\, g(\|x^\dagger\|) + \delta_0} \le \frac{1}{\gamma_n}. \tag{2.23}
\]

Besides, suppose all the conditions (2.9) are satisfied with respect to

\[
l := \left(\frac{2\gamma_0(1-c_1)}{(2+c_2)(2N\gamma_0+\alpha_0)}
\cdot \frac{2\gamma_0\alpha_0^2}{4Nc_1\gamma_0^3 + (4c_1 + 2\sqrt{c_1} + c_2)\,\alpha_0\gamma_0^2 + 6\alpha_0}\right)^2 \frac{2}{\varphi^2}.
\]

Let z_0 = 0; then the n-th iterate z_n of (2.21)–(2.22) converges to the
minimal-norm solution x† of (1.1).
Proof From (1.2) and the properties of A_{n,i} we have

\[
\sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ z_n^i - x_n^*\Big\rangle
= \sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ z_n^i - z_n\Big\rangle
+ \sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ z_n - x_n^*\Big\rangle
\]
\[
= \sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ z_n^i - z_n\Big\rangle
+ \sum_{i=1}^{N}\big\langle A_{n,i}(x_n^*) - A_i(x_n^*),\ z_n - x_n^*\big\rangle
+ \Big\langle A(x_n^*) + \alpha_n x_n^*,\ z_n - x_n^*\Big\rangle
\]
\[
= \sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ z_n^i - z_n\Big\rangle
+ \big\langle (F_n(x_n^*) - F(x_n^*)) - (f_n - f),\ z_n - x_n^*\big\rangle,
\]

since A(x*_n) + α_n x*_n = 0 by (1.2).



Similarly to the proof of Theorem 2.1, using the Taylor formula for A_{n,i}(z_n)
at x*_n, and multiplying both sides of (2.21) by z_n^i − x*_n, we find

\[
\frac{1}{\gamma_n}\sum_{i=1}^{N}\langle A_{n,i}'(z_n)(z_n^i - x_n^*),\ z_n^i - x_n^*\rangle
+ \sum_{i=1}^{N}\langle z_n^i - z_n,\ z_n^i - x_n^*\rangle
+ \frac{\alpha_n}{N\gamma_n}\sum_{i=1}^{N}\|z_n^i - x_n^*\|^2
\]
\[
= \frac{1}{2\gamma_n}\sum_{i=1}^{N}\langle A_{n,i}''(\xi_n^i)(x_n^* - z_n)^2,\ z_n^i - x_n^*\rangle
- \frac{1}{\gamma_n}\sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ z_n^i - z_n\Big\rangle
\]
\[
- \frac{1}{\gamma_n}\big\langle (F_n(x_n^*) - F(x_n^*)) - (f_n - f),\ z_n - x_n^*\big\rangle. \tag{2.24}
\]

Putting e_n^i = z_n^i − x*_n and e_n = z_n − x*_n, and using the monotonicity of
A_{n,i}, from (2.24) we get

\[
\sum_{i=1}^{N}\langle e_n^i - e_n,\ e_n^i\rangle + \frac{\alpha_n}{N\gamma_n}\sum_{i=1}^{N}\|e_n^i\|^2
\le \frac{1}{2\gamma_n}\sum_{i=1}^{N}\langle A_{n,i}''(\xi_n^i)(-e_n)^2,\ e_n^i\rangle
- \frac{1}{\gamma_n}\sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ e_n^i - e_n\Big\rangle
\]
\[
- \frac{1}{\gamma_n}\big\langle (F_n(x_n^*) - F(x_n^*)) - (f_n - f),\ e_n\big\rangle.
\]

Taking into account (2.20) and the relation
2⟨e_n^i − e_n, e_n^i⟩ = ‖e_n^i‖² − ‖e_n‖² + ‖e_n^i − e_n‖², we can rewrite the
last inequality as

\[
\sum_{i=1}^{N}\|e_n^i\|^2 - N\|e_n\|^2 + \sum_{i=1}^{N}\|e_n^i - e_n\|^2 + \frac{2\alpha_n}{N\gamma_n}\sum_{i=1}^{N}\|e_n^i\|^2
\le \frac{1}{\gamma_n}\sum_{i=1}^{N}\langle A_{n,i}''(\xi_n^i)(-e_n)^2,\ e_n^i\rangle
- \frac{2}{\gamma_n}\sum_{i=1}^{N}\Big\langle A_{n,i}(x_n^*) + \frac{\alpha_n}{N}x_n^*,\ e_n^i - e_n\Big\rangle
\]
\[
- \frac{2}{\gamma_n}\big\langle (F_n(x_n^*) - F(x_n^*)) - (f_n - f),\ e_n\big\rangle
\le \frac{1}{2\sqrt{\gamma_n}}\sum_{i=1}^{N}\|A_{n,i}''(\xi_n^i)\|^2\|e_n\|^4
+ \frac{1}{2\sqrt{\gamma_n^3}}\sum_{i=1}^{N}\|e_n^i\|^2
+ \sum_{i=1}^{N}\|e_n^i - e_n\|^2
\]
\[
+ \frac{NC_A^2}{\gamma_n^2}
+ N\big(h_n g(\|x^\dagger\|) + \delta_n\big)^2 + \frac{\|e_n\|^2}{\gamma_n^2},
\]

where we applied Young's inequality to each term on the right-hand side.


Using (2.8) and (2.9) we get, as before,
α_n/(Nγ_n) − 1/(2√(γ_n³)) = (2α_n√γ_n − N)/(2Nγ_n√γ_n) ≥ 0. Thus, from the last
inequality, we find

\[
\Big(1 + \frac{\alpha_n}{N\gamma_n}\Big)\sum_{i=1}^{N}\|e_n^i\|^2
\le \frac{1}{2\sqrt{\gamma_n}}\Big[\sum_{i=1}^{N}\|A_{n,i}''(\xi_n^i)\|^2\|e_n\|^4
+ N\Big(1 + \frac{1}{\gamma_n^2}\Big)\|e_n\|^2\Big]
+ \frac{NC_A^2}{\gamma_n^2} + N\big(h_n g(\|x^\dagger\|) + \delta_n\big)^2. \tag{2.25}
\]

Taking into account the inequalities
‖e_{n+1}‖ ≤ (1/√N)(Σ_{i=1}^N ‖e_n^i‖²)^{1/2} + (|α_{n+1} − α_n|/α_n)‖x†‖ and
(a + b)² ≤ (1 + ε_n)(a² + b²/ε_n), where ε_n := α_n/(2Nγ_n) > 0, from (2.25)
we get

\[
\|e_{n+1}\|^2 \le \frac{1+\varepsilon_n}{N(1+2\varepsilon_n)}
\left[\frac{1}{2\sqrt{\gamma_n}}\Big(\sum_{i=1}^{N}\|A_{n,i}''(\xi_n^i)\|^2\|e_n\|^4
+ N\Big(1+\frac{1}{\gamma_n^2}\Big)\|e_n\|^2\Big)
+ \frac{NC_A^2}{\gamma_n^2} + N\big(h_n g(\|x^\dagger\|)+\delta_n\big)^2\right]
\]
\[
+ \frac{N(1+2\varepsilon_n)(\alpha_{n+1}-\alpha_n)^2}{\varepsilon_n\alpha_n^2}\|x^\dagger\|^2. \tag{2.26}
\]

With the notations v_k := ‖e_k‖²/λ_k and λ_k := α_k/√γ_k, (2.26) becomes

\[
v_{n+1} \le \frac{1+\varepsilon_n}{N(1+2\varepsilon_n)}
\left[\frac{\lambda_n^2 v_n^2}{2\lambda_{n+1}\sqrt{\gamma_n}}\sum_{i=1}^{N}\|A_{n,i}''(\xi_n^i)\|^2
+ \frac{N\lambda_n}{\lambda_{n+1}}\Big(1+\frac{1}{\gamma_n^2}\Big)v_n
+ \frac{NC_A^2}{\lambda_{n+1}\gamma_n^2}
+ \frac{N\big(h_n g(\|x^\dagger\|)+\delta_n\big)^2}{\lambda_{n+1}}\right]
\]
\[
+ \frac{N(1+2\varepsilon_n)(\alpha_{n+1}-\alpha_n)^2}{\varepsilon_n\alpha_n^2\lambda_{n+1}}\|x^\dagger\|^2. \tag{2.27}
\]



By the assumptions, we have v_0 ≤ l and ‖ξ_0^i‖ ≤ ‖x†‖ < M√D_*. Assume that
v_n ≤ l and ‖ξ_n^i‖ ≤ M√D_* for some n ≥ 0; then from (2.27) and the estimates
‖A_{n,i}''(ξ_n^i)‖ ≤ φ, one gets

\[
v_{n+1} - l \le \frac{1+\varepsilon_n}{1+2\varepsilon_n}
\left[\frac{\lambda_n^2\varphi^2}{2\lambda_{n+1}\sqrt{\gamma_n}}\,l^2
+ \Big(\frac{\lambda_n}{\lambda_{n+1}}\Big(1+\frac{1}{\gamma_n^2}\Big) - \frac{1+2\varepsilon_n}{1+\varepsilon_n}\Big)l
+ \frac{C_A^2}{\lambda_{n+1}\gamma_n^2}
+ \frac{\big(h_n g(\|x^\dagger\|)+\delta_n\big)^2}{\lambda_{n+1}}
+ \frac{(1+2\varepsilon_n)(\alpha_{n+1}-\alpha_n)^2}{\varepsilon_n\alpha_n^2\lambda_{n+1}}\|x^\dagger\|^2\right].
\]



Taking into account the relations h_n g(‖x†‖) + δ_n ≤ (h_0 g(‖x†‖) + δ_0)/γ_n
and D_* ≤ lα_0/√γ_0, and acting similarly as in Theorem 2.1, we come to

\[
v_{n+1} - l \le \frac{(1+\varepsilon_n)\,\alpha_n^2\, l}{(1+2\varepsilon_n)\,\lambda_{n+1}\,\gamma_n^{3/2}}
\left[\frac{\varphi\sqrt{2l}}{2}
\left(\frac{\sqrt{\gamma_n}(\alpha_n\gamma_{n+1}-\alpha_{n+1}\gamma_n)}{\alpha_n^2\gamma_{n+1}}
+ \frac{\alpha_0}{\sqrt{\gamma_0}}\cdot\frac{2}{\sqrt{\gamma_n}\,\alpha_n^2}
+ \frac{2\alpha_0(N\gamma_n+\alpha_n)\gamma_n^{3/2}(\alpha_n-\alpha_{n+1})^2}{\sqrt{\gamma_0}\,\alpha_n^5}\right)
- \frac{\alpha_{n+1}\gamma_n}{\alpha_n\gamma_{n+1}}\cdot\frac{1}{2N+\alpha_n/\gamma_n}\right]. \tag{2.28}
\]

Performing similar computations as in the proof of Theorem 2.1, and using the
relations

\[
\frac{\alpha_{n+1}}{\alpha_n} \ge 1 - \sqrt{\frac{c_1\,\alpha_n^3\,\gamma_0^3}{\alpha_0^3\,\gamma_n^3}}
\qquad\text{and}\qquad
\frac{\gamma_{n+1}}{\gamma_n} \le 1 + \frac{c_2\,\gamma_0^2}{2\gamma_n^2},
\]

we obtain, by our assumptions,

\[
v_{n+1} - l \le \frac{(1+\varepsilon_n)\,\alpha_n^2\, l}{(1+2\varepsilon_n)\,\lambda_{n+1}\,\gamma_n^{3/2}}
\left[\frac{\varphi\sqrt{2l}}{2}\cdot\frac{4Nc_1\gamma_0^3 + (4c_1 + 2\sqrt{c_1} + c_2)\,\alpha_0\gamma_0^2 + 6\alpha_0}{2\gamma_0\,\alpha_0^2}
- \frac{2\gamma_0(1-c_1)}{(2+c_2)(2N\gamma_0+\alpha_0)}\right] = 0. \tag{2.29}
\]

On the other hand, from (2.9) we get
‖ξ_{n+1}^i‖ ≤ ‖x†‖ + (1 − θ_n^i)(√α_{n+1}/⁴√γ_{n+1})√l < M√D_*. Hence, by
induction, we have v_n ≤ l and ‖ξ_n^i‖ < M√D_* for all n. Therefore
‖z_n − x*_n‖² ≤ l α_n/√γ_n → 0 as n → ∞, hence z_n → x† as n → ∞.



Remark 2.3 The sequences α_n := α_0(1 + n)^{-p}, 0 < p ≤ 1/8, and
γ_n := γ_0(1 + n)^{1/2}, together with the constants c_1 = 1/64, c_2 = 1/2,
γ_0 := max{5, (10φ²D_*)²}, α_0 := 7.5Nγ_0 and M ≥ 3, satisfy all the conditions
of Theorem 2.2.

Theorem 2.1 (respectively, Theorem 2.2) together with Lemma 2.3 leads to the
following result.

Corollary 2.1 Let all the conditions of Theorem 2.1 (respectively, Theorem 2.2)
be satisfied.

i.  If condition (2.3) holds, then ‖z_n − x†‖ = O(α_n).
ii. If A_i (i = 1, …, N) are c_i⁻¹-inverse-strongly monotone, system (2.1) is
    consistent, and condition (2.5) holds, then ‖z_n − x†‖ = O(α_n^{1/4}).

3 Numerical experiments

In this section we illustrate the proposed method on some nonlinear integral
equations (cf. [7–10]). For the sake of simplicity, we restrict ourselves to
the case when the initial operator is decomposed into a sum of two smooth
monotone operators acting on the real Hilbert space H = L²[0, 1], i.e.,

\[ A(x) = A_1(x) + A_2(x) = 0. \tag{3.1} \]

Starting from z_0 = 0 and knowing the n-th approximation, we compute the
corrections h_n^1, h_n^2 and the next approximation as

\[
\Big(A_{n,j}'(z_n) + \big(\tfrac{\alpha_n}{2} + \gamma_n\big) I\Big) h_n^j
= -\Big(A_{n,j}(z_n) + \frac{\alpha_n}{2}\, z_n\Big), \qquad j = 1, 2,
\]

\[
z_{n+1} = z_n + \frac{h_n^1 + h_n^2}{2}, \qquad n = 0, 1, 2, \dots
\]
3.1 An experiment with twice differentiable monotone operators
Let us consider (3.1) with A_j := I − F_j and

\[ [F_j(u)](t) := \int_0^1 K_j(t, s)\, f_j(u(s))\, ds + g_j(t), \qquad j = 1, 2, \tag{3.2} \]

where K_1(t, s) = ts³, K_2(t, s) = 1/3 + ts + (t+s)/2, f_1(x) = 3e^{−x²},
f_2(x) = 6(sin x + cos x)/√97, and g_j(t) are functions chosen later. A direct
computation shows that F_j(u)



Table 1 Experiment with exact data for 10,000 early iterations

nmax           1,000     2,500     5,000     7,500     10,000
RTOL (%)       0.08205   0.07239   0.06639   0.06317   0.06088
RAT (×10⁻⁴)    3.89151   3.85040   3.85027   3.85026   3.85027

are nonexpansive, hence the operators A_j := I − F_j are monotone. Further,
A_j are continuously differentiable, and

\[ [A_1'(u)h](t) = h(t) + 6\int_0^1 t s^3\, u(s)\, e^{-u^2(s)}\, h(s)\, ds; \tag{3.3} \]

\[ [A_2'(u)h](t) = h(t) - \frac{6}{\sqrt{97}}\int_0^1 \Big(\frac{1}{3} + ts + \frac{t+s}{2}\Big)\big(\cos u(s) - \sin u(s)\big)\, h(s)\, ds. \tag{3.4} \]

Moreover, we have ‖[A_j'(u) − A_j'(v)]h‖ ≤ 2‖u − v‖‖h‖ (j = 1, 2). According
to Remark 2.2, instead of an upper bound φ of ‖A_j''‖, one can set φ = 2.
All the integrals in (3.1), (3.3) and (3.4) are computed using the trapezoidal
rule with stepsize τ = 10⁻³. In the following experiments, let the exact
solution be x*(t) = χ, where χ > 0 is a constant chosen later, and

\[ g_j(t) := \chi - \int_0^1 K_j(t, s)\, f_j(\chi)\, ds, \qquad j = 1, 2. \]

All computations were carried out on an IBM 1350 cluster with eight computing
nodes and a total of 51.2 GFlops. Each node contains two dual-core Intel Xeon
3.2 GHz processors and 2 GB RAM. We use the following notations:

nmax — total number of iterations;
TOL — tolerance ‖z_n − x*‖;
RTOL = TOL/‖x*‖ — relative tolerance (in percent);
RAT = TOL/α_n — ratio of the tolerance to α_n.

In the first experiment, we use χ = 1 and choose α_n, γ_n as in Remark 2.1.
Table 1 shows the results for 10,000 early steps, when α_n is still large.
From Table 2 we see that when n is sufficiently large (nmax = O(10⁸)), i.e.,
α_n is sufficiently small, the tolerance ‖x_n − x*‖ is as small as O(α_n).

Table 2 Experiment with exact data for small α_n

α_n            0.06517     0.05750     0.05000     0.04955     0.04500
RTOL (%)       0.0025092   0.0022142   0.0019251   0.0019078   0.0017326
RAT (×10⁻⁴)    3.85027     3.85027     3.85025     3.85027     3.85026

In the next experiment, we consider the noisy case with χ = 0.5 and α_n, γ_n
(n > 0) chosen as in Remark 2.3. Let F_{n,i}(x) = F_i(x) + (χρ_n(t)/(2γ_n)) x and
f_{n,i} = f_i + χρ_n(t)/(2γ_n), where ρ_n(t), t ∈ [0, 1], are normally distributed



Table 3 Experiment with noisy data for the first 10,000 iterations

nmax           500       1,000     2,500     5,000     10,000
TOL (×10⁻³)    1.31759   1.05234   0.92303   0.84637   0.77619
RTOL (%)       0.26352   0.21047   0.18461   0.16927   0.15524
RAT (×10⁻⁴)    5.73019   4.99178   4.90947   4.90949   4.90947

Table 4 Experiment with inexact data for small α_n

α_n            0.0750      0.0675      0.0625      0.0575      0.0550
TOL (×10⁻⁵)    3.682103    3.313886    3.068419    2.822951    2.700209
RTOL (%)       0.0073642   0.0066278   0.0061368   0.0056459   0.0054004
RAT (×10⁻⁴)    4.90947     4.90946     4.90947     4.90948     4.90947

Table 5 Experiment with exact data for 1,000 early iterations

nmax           100      250      500      750      1,000
RTOL (%)       0.5713   0.4920   0.4381   0.4142   0.3871
RAT (×10⁻²)    1.9108   1.9097   1.9102   1.9008   1.8990

Table 6 Results with exact data for small α_n

α_n            0.08891   0.06682   0.04981   0.03749   0.03512
RTOL (%)       0.16801   0.12542   0.09468   0.07129   0.06679
RAT (×10⁻²)    1.90011   1.98996   1.90083   1.90051   1.90076

Table 7 Experiment with inexact data for the first 10,000 steps

nmax           500      1,000    2,500    5,000    10,000
TOL (×10⁻³)    7.759    4.431    3.253    2.966    2.734
RTOL (%)       1.5518   0.8862   0.6506   0.5932   0.5468
RAT (×10⁻²)    3.6801   2.1305   1.7310   1.7309   1.7310



random numbers with zero mean. Table 3 shows the results for the first 10,000
iterations. Table 4 shows the behaviour of the tolerances with respect to
sufficiently small α_n (nmax = O(10⁸)).

3.2 An experiment with differentiable monotone operators

In the second example, we consider (3.1) with [A_j(x)](t) = [F_j(x)](t) − f_j(t),
j = 1, 2, and

\[ [F_j(x)](t) := \int_0^1 e^{-j|t-s|}\, x(s)\, ds + [\arctan(j x(t))]^3, \qquad j = 1, 2. \tag{3.5} \]

Originating from Wiener-type filtering theory, nonlinear equations involving
the operators F_j(x) have been intensively studied by Hoang and Ramm (see
[8–10]). For the monotonicity, the smoothness of A_j, and the ill-posedness of
(3.1), please refer to [8–10]. Furthermore, we have

\[ [F_j'(x)h](t) := \int_0^1 e^{-j|t-s|}\, h(s)\, ds + \frac{3 j\,[\arctan(j x(t))]^2}{1 + (j x(t))^2}\, h(t), \qquad j = 1, 2. \tag{3.6} \]

Let the exact solution be x*(t) = χ as in the previous subsection; hence
‖x*‖ = χ and

\[ f_j(t) = \chi \int_0^1 e^{-j|t-s|}\, ds + [\arctan(j\chi)]^3, \qquad j = 1, 2. \]

All the integrals in (3.5) and (3.6) are computed using the trapezoidal rule
with stepsize τ = 10⁻³. Note that in this example F_j''(x) is not Lipschitz
continuous, and we choose α_n, γ_n (n > 0) as in Remark 2.1, while γ_0, α_0 are
chosen experimentally.

In the first experiment, we set χ = 1, α_0 := 0.5 and γ_0 := 4. Table 5 shows
the results for 1,000 early steps, when α_n is still large. Table 6 shows that
when n is sufficiently large, for instance nmax = O(10⁶), the tolerance
‖x_n − x*‖ is as small as O(α_n).

Finally, we consider the noisy case with χ = 0.5 and α_n, γ_n (n > 0) chosen
as in Remark 2.3, but with α_0 = 0.5 and γ_0 = 4 as in the above experiment.
The noisy data are chosen as in the last experiment of Section 3.1. The
results for the first 10,000 iterations are shown in Table 7.
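A small-scale version of this experiment can be reproduced directly. The sketch below is an independent reimplementation under simplifying assumptions (a coarse grid of m = 41 nodes and only 300 iterations, instead of the τ = 10⁻³ grid and long runs behind the tables), so its tolerance is only indicative; it discretizes (3.5)–(3.6) with the trapezoidal rule and runs the parallel regularized Newton method with N = 2, χ = 1, z₀ = 0, α₀ = 0.5, γ₀ = 4:

```python
import numpy as np

m = 41                                           # coarse grid on [0, 1]
t = np.linspace(0.0, 1.0, m)
w = np.full(m, t[1] - t[0]); w[[0, -1]] *= 0.5   # trapezoidal weights

chi = 1.0
K = [np.exp(-j * np.abs(t[:, None] - t[None, :])) for j in (1, 2)]

def F(j, x):                                     # discretized (3.5)
    return K[j - 1] @ (w * x) + np.arctan(j * x) ** 3

f = [F(j, np.full(m, chi)) for j in (1, 2)]      # data so that x* = chi exactly

def A(j, x):
    return F(j, x) - f[j - 1]

def Jac(j, x):                                   # discretized (3.6)
    diag = 3.0 * j * np.arctan(j * x) ** 2 / (1.0 + (j * x) ** 2)
    return K[j - 1] * w[None, :] + np.diag(diag)

N = 2
z = np.zeros(m)                                  # z_0 = 0
for n in range(300):
    alpha = 0.5 * (1 + n) ** -0.125              # alpha_n, Remark 2.1 with p = 1/8
    gamma = 4.0 * (1 + n) ** 0.5                 # gamma_n, Remark 2.1
    corr = np.zeros(m)
    for j in (1, 2):                             # independent subproblems (1.3)
        M = Jac(j, z) + (alpha / N + gamma) * np.eye(m)
        corr += np.linalg.solve(M, -(A(j, z) + (alpha / N) * z))
    z = z + corr / N                             # averaging step (1.4)

rtol = np.linalg.norm(z - chi) / np.linalg.norm(np.full(m, chi))
```

On this coarse grid the constant function χ solves the discretized system exactly, so the remaining relative error reflects only the regularization bias and the finite iteration count.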

4 Conclusion

Kaczmarz methods, being sequential algorithms for solving systems of linear
and nonlinear equations, have many successful applications to real-life
problems. In this paper we develop a quite different idea, which may be
regarded as a counterpart of the regularizing Newton–Kaczmarz method. Instead
of performing a cyclic iteration over the subproblems, we perform synchronously
one iteration of the regularized Newton method on each subproblem and take a
convex combination of the results as the next approximation. The convergence
analysis in both the noise-free and noisy data cases has been given, and
numerical experiments have been performed.
Acknowledgements The authors would like to express their special thanks to the referees,
whose careful reading and many constructive comments led to a considerable improvement of the
paper. This work was supported by the Vietnam National Foundation for Science and Technology
Development.

References
1. Burger, M., Kaltenbacher, B.: Regularizing Newton–Kaczmarz methods for nonlinear ill-posed problems. SIAM J. Numer. Anal. 44, 153–182 (2006)
2. De Cezaro, A., Haltmeier, M., Leitão, A., Scherzer, O.: On steepest-descent-Kaczmarz
method for regularizing systems of nonlinear ill-posed equations. Appl. Math. Comput. 202,
596–607 (2008)
3. Haltmeier, M., Leitão, A., Scherzer, O.: Kaczmarz methods for regularizing nonlinear ill-posed
equations, I. Convergence analysis. Inverse Probl. Imaging 1(2), 289–298 (2007)
4. Haltmeier, M., Kowar, R., Leitão, A., Scherzer, O.: Kaczmarz methods for regularizing nonlinear ill-posed equations, II. Applications. Inverse Probl. Imaging 1(3), 507–523 (2007)
5. Kaltenbacher, B., Neubauer, A., Scherzer, O.: Iterative Regularization Methods for Nonlinear
Ill-Posed Problems. Walter de Gruyter, Berlin (2008)
6. Kowar, R., Scherzer, O.: Convergence analysis of a Landweber–Kaczmarz method for solving nonlinear ill-posed problems. In: Ill-Posed and Inverse Problems (Book Series), vol. 23,
pp. 69–90 (2002)
7. Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer,
New York (2006)
8. Hoang, N.S., Ramm, A.G.: An iterative scheme for solving nonlinear equations with monotone
operators. BIT Numer. Math. 48(4), 725–741 (2008)
9. Hoang, N.S., Ramm, A.G.: Dynamical systems gradient methods for solving nonlinear equations with monotone operators. Acta Appl. Math. 106(3), 473–499 (2009)
10. Hoang, N.S., Ramm, A.G.: Dynamical systems method for solving nonlinear equations with monotone operators. Math. Comput. 79(269), 239–258 (2010)
11. Bakusinskii, A.B., Goncharskii, A.V.: Ill-Posed Problems: Numerical Methods and Applications. Moscow Univ. Press (1989) (in Russian)

12. Janno, J.: Lavrent’ev regularization of ill-posed problems containing nonlinear near-tomonotone operators with application to autoconvolution equation. Inverse Probl. 16, 133–148
(2000)
13. Tautenhahn, U.: On the method of Lavrentiev regularization for nonlinear ill-posed problems.
Inverse Probl. 18, 191–207 (2002)
14. Anh, P.K., Chung, C.V.: Parallel iterative regularization methods for solving systems of illposed equations. Appl. Math. Comput. 212, 542–550 (2009)
15. Lü, T., Neittaanmäki, P., Tai, X.-C.: A parallel splitting up method for partial differential
equations and its application to Navier–Stokes equations. RAIRO Math. Model. Numer. Anal.
26(6), 673–708 (1992)
16. Liu, F., Nashed, M.Z.: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal. 6, 313–344 (1998)
17. Hein, T.: Convergence rates for multi-parameter regularization in Banach spaces. Int. J. Pure
Appl. Math. 43(4), 593–614 (2008)


