
J. Numer. Anal. Approx. Theory, vol. 45 (2016) no. 2, pp. 109–127 ictp.acad.ro/jnaat

AN IMPROVED SEMILOCAL CONVERGENCE ANALYSIS FOR THE MIDPOINT METHOD

IOANNIS K. ARGYROS and SANJAY K. KHATTRI

Abstract. We expand the applicability of the midpoint method for approximating a locally unique solution of nonlinear equations in a Banach space setting. Our majorizing sequences are finer than the known ones in the scientific literature [1, 3, 4, 10–16, 24–26, 28] and the convergence criteria can be weaker. Finally, numerical work is reported that compares favorably to the existing approaches in the literature [6, 8–16, 24–26, 28].

MSC 2010. 65B05, 65G99, 65J15, 65N30, 65N35, 65H10, 47H17, 49M15.

Keywords. Midpoint method, semilocal convergence, majorization sequence, Banach space, Fréchet derivative.

1. INTRODUCTION

In this study, we are concerned with the problem of approximating a locally unique solution $x^\star$ of the equation

(1.1)  $F(x) = 0$,

where $F$ is a twice Fréchet differentiable operator defined on a convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Numerous problems in science and engineering can be reduced to solving the above equation [23, 32]. Consequently, solving these equations is an important scientific field of research. In many situations, finding a closed form solution for the nonlinear equation (1.1) is not possible. Therefore, iterative solution techniques are employed for solving these equations.

The study of the convergence of iterative methods is usually divided into two categories: semilocal and local convergence analysis. The semilocal convergence analysis is based upon the information around an initial point to give criteria ensuring the convergence of the iterative procedure, while the local convergence analysis is based on the information around a solution to find estimates of the radii of convergence balls.

Department of Mathematical Sciences, Cameron University, Lawton, Oklahoma 73505-6377, USA, e-mail: [email protected].

Department of Engineering, Stord Haugesund University College, Norway, e-mail: [email protected].


In this paper, we study the semilocal convergence of the midpoint method defined as

(1.2)  $y_n = x_n - F'(x_n)^{-1}F(x_n)$,
       $x_{n+1} = x_n - F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}F(x_n)$, for each $n = 0, 1, 2, \ldots$,

where $x_0 \in D$ is an initial point. Here, $F'(x)$ denotes the first Fréchet derivative of the operator $F$ [23, 32]. It is well known that the midpoint method is cubically convergent and it has a long history [see 27–32]. Let $U(w, R)$ and $\overline{U}(w, R)$ stand, respectively, for the open and closed balls in $X$ with center $w$ and radius $R > 0$. Let the space of bounded linear operators from $X$ into $Y$ be denoted by $L(X, Y)$. The following set of $(C)$ conditions has been used:

(C1) There exists $x_0 \in D$ such that $F'(x_0)^{-1} \in L(Y, X)$.

(C2) $\|F'(x_0)^{-1}F(x_0)\| \le \eta$.

(C3) $\|F'(x_0)^{-1}F''(x)\| \le L$ for all $x \in D$.

(C4) $\|F'(x_0)^{-1}(F''(x) - F''(y))\| \le M\|x - y\|$ for all $x, y \in D$.

The following sufficient convergence criteria have been given in connection with the $(C)$ conditions:

(1.3)  $\eta \le \dfrac{4M + L^2 - L\sqrt{L^2 + 2M}}{3M\left(L + \sqrt{L^2 + 2M}\right)}$  [1, 3, 4, 23–26]

or

(1.4)  $\eta \le \dfrac{1}{2K}$  [12, 14],

where

$K = L\sqrt{1 + \dfrac{7M}{6L^2}}$.
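For reference, the right-hand sides of (1.3) and (1.4) are easy to evaluate numerically. The following is a minimal Python sketch of ours (not part of the paper; the function names are illustrative):

import math

def rhs_13(L, M):
    # Right-hand side of criterion (1.3).
    s = math.sqrt(L**2 + 2*M)
    return (4*M + L**2 - L*s) / (3*M*(L + s))

def rhs_14(L, M):
    # Right-hand side of criterion (1.4), with K = L*sqrt(1 + 7M/(6L^2)).
    K = L * math.sqrt(1 + 7*M / (6*L**2))
    return 1 / (2*K)

# A criterion holds when eta <= rhs_13(L, M) (respectively eta <= rhs_14(L, M)).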

However, simple numerical examples can be used to show that criteria (1.3) and (1.4) are not satisfied but the midpoint method (1.2) still converges to the solution $x^\star$. As an example, let $X = Y = \mathbb{R}$, $x_0 = 1$ and $D = [\zeta, 2-\zeta]$ for $\zeta \in (0, 1)$. Define the function $F$ on $D$ by

(1.5)  $F(x) = x^5 - \zeta$.

Then, using the conditions $(C)$, we get

$\eta = \dfrac{1-\zeta}{5}$, $\qquad L = 4(2-\zeta)^3$, $\qquad M = 12(2-\zeta)^2$.

Figure 1.1 plots the criteria (1.3) and (1.4) for the problem (1.5). The curve defined by the right-hand side of inequality (1.3) intersects the line $\eta$ (see Figure 1.1) at $\zeta \approx 0.73$, while the curve defined by the right-hand side of criterion (1.4) intersects the $\eta$ line at $\zeta \approx 0.72$. We notice in Figure 1.1 that for $\zeta < 0.72$ the criteria (1.3) and (1.4) are not satisfied.

[Figure 1.1: the right-hand sides of (1.3) and (1.4), namely $\frac{4M+L^2-L\sqrt{L^2+2M}}{3M(L+\sqrt{L^2+2M})}$ and $\frac{1}{2K}$, together with $\eta$, plotted against $\zeta \in (0, 1)$.]

Fig. 1.1. Convergence criteria (1.3) and (1.4) for (1.5).

However, one may see that the method (1.2) is convergent. For additional examples, see Section 4.
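As a quick numerical illustration (a sketch of ours, not taken from the paper; the value $\zeta = 0.6$ is an arbitrary choice below the $\approx 0.72$ threshold), the midpoint iteration (1.2) applied to (1.5) converges rapidly:

# Midpoint method (1.2) applied to F(x) = x^5 - zeta from (1.5).
# zeta = 0.6 is an illustrative value for which (1.3) and (1.4) fail.
zeta = 0.6
F = lambda x: x**5 - zeta
dF = lambda x: 5.0 * x**4            # F'(x)

x = 1.0                              # x0 = 1
for n in range(8):
    y = x - F(x) / dF(x)             # y_n = x_n - F'(x_n)^{-1} F(x_n)
    x = x - F(x) / dF((x + y) / 2)   # x_{n+1} = x_n - F'((x_n + y_n)/2)^{-1} F(x_n)
    print(n, x, abs(F(x)))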

In our work we expand the applicability of the midpoint method (1.2), in cases where (1.3) or (1.4) is not satisfied, using the $(C)$ conditions together with the following center-Lipschitz condition:

(C5) $\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \le L_0\|x - x_0\|$ for all $x \in D$.

We shall refer to (C1)–(C5) as the $(H)$ conditions.

As can be inferred from the studies [1–28], several techniques are usually employed for analyzing the convergence of iterative methods. Among these, the most popular technique is based on the concept of majorizing sequences.

In the studies that lead to the convergence conditions (1.3) and (1.4), the computation of the upper bounds on $\|F'(x_n)^{-1}F'(x_0)\|$ was based on (C3) and the estimate

(1.6)  $\|F'(x_n)^{-1}F'(x_0)\| \le \dfrac{1}{1 - L\|x_n - x_0\|}$.

Instead of (C3) we use the more precise and less expensive condition (C5), which leads to

(1.7)  $\|F'(x_n)^{-1}F'(x_0)\| \le \dfrac{1}{1 - L_0\|x_n - x_0\|}$.

Note that

(1.8)  $L_0 \le L$

holds in general and $L/L_0$ can be arbitrarily large [22, 23]. This change in the study of the semilocal convergence of the midpoint method leads to tighter


error estimates on the distances $\|y_n - x_n\|$, $\|x_{n+1} - y_n\|$, $\|y_n - x^\star\|$, $\|x_n - x^\star\|$ and weaker convergence criteria.

The rest of the paper is organized as follows. Section 2 develops results on majorizing sequences for the midpoint method (1.2), whereas in Section 3 we present the semilocal convergence of the midpoint method. Numerical examples are given in the concluding Section 4.

2. MAJORIZING SEQUENCES

In this section, we study the convergence of scalar sequences that are majorizing for the midpoint method (1.2). Let the positive constants $L_0 > 0$, $L > 0$, $M \ge 0$ and $\eta > 0$ be given. It is convenient for us to define functions $\gamma$, $a$, $\alpha$, $h_i$, $i = 1, 2, 3$, by

(2.1)  $\gamma(t) = \dfrac{Lt}{2\left[1 - \frac{L_0}{2}t\right]}$, $\qquad \gamma = \gamma(\eta)$,

(2.2)  $a(t) = \dfrac{1}{24}\left[12L\gamma(t)^2 + 12L\gamma(t) + 7M\eta\right]$, $\qquad a = a(\eta)$,

(2.3)  $\alpha(t) = \dfrac{a(t)\,t}{1 - \frac{L_0}{2}\left(1 + \gamma(t)\right)t}$, $\qquad \alpha = \alpha(\eta)$,

(2.4)  $h_1(t) = a(t)\,t + \dfrac{L_0}{2}\left(1 + \gamma(t)\right)t - 1$,

(2.5)  $h_2(t) = \dfrac{L}{2}\alpha(t)\,t + \gamma(t)\dfrac{L_0}{2}\left[2\left(1 + \gamma(t)\right) + \alpha(t)\right]t - \gamma(t)$

and

(2.6)  $h_3(t) = a(t)\,t + L_0\left(1 + \gamma(t)\right)\left(1 + \alpha(t)\right)t - 1$.

We denote the minimal positive zeros of the functions $h_1$, $h_2$ and $h_3$ by $\eta_1$, $\eta_2$ and $\eta_3$, respectively. Note that $\alpha(t)$ is well defined on $(0, \eta_1)$ by the choice of $\eta_1$. Let us set

(2.7)  $\eta_0 = \min\{\eta_1, \eta_2, \eta_3\}$.

Then, for all $t \in (0, \eta_0)$ we have

(2.8)  $\alpha(t) \in (0, 1)$,

(2.9)  $h_1(t) < 0$,

(2.10)  $h_2(t) \le 0$

and

(2.11)  $h_3(t) \le 0$.
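To make the quantities above concrete, the following Python sketch (our own illustration, not part of the paper; it takes the definitions (2.1)–(2.6) exactly as written above, and the grid scan plus bisection are implementation choices) locates the minimal positive zeros $\eta_1$, $\eta_2$, $\eta_3$ and returns $\eta_0$ of (2.7):

def eta_zeros(L0, L, M, eta, n_grid=20000):
    # Scalar functions from (2.1)-(2.3).
    gamma = lambda t: L * t / (2.0 * (1.0 - (L0 / 2.0) * t))
    a     = lambda t: (12*L*gamma(t)**2 + 12*L*gamma(t) + 7*M*eta) / 24.0
    alpha = lambda t: a(t) * t / (1.0 - (L0 / 2.0) * (1.0 + gamma(t)) * t)
    # Functions h_1, h_2, h_3 from (2.4)-(2.6).
    h1 = lambda t: a(t)*t + (L0/2.0)*(1.0 + gamma(t))*t - 1.0
    h2 = lambda t: (L/2.0)*alpha(t)*t + gamma(t)*(L0/2.0)*(2.0*(1.0 + gamma(t)) + alpha(t))*t - gamma(t)
    h3 = lambda t: a(t)*t + L0*(1.0 + gamma(t))*(1.0 + alpha(t))*t - 1.0

    def min_positive_zero(h, t_max):
        # Scan for the first sign change on (0, t_max), then refine by bisection.
        step = t_max / n_grid
        lo, f_lo = step, h(step)
        for i in range(2, n_grid):
            hi, f_hi = i * step, h(i * step)
            if f_lo * f_hi <= 0.0:
                for _ in range(100):
                    mid = 0.5 * (lo + hi)
                    if h(lo) * h(mid) <= 0.0:
                        hi = mid
                    else:
                        lo = mid
                return 0.5 * (lo + hi)
            lo, f_lo = hi, f_hi
        return float("inf")          # no zero found on the scanned interval

    t_max = 0.999 * 2.0 / L0         # stay left of the pole of gamma at t = 2/L0
    etas = [min_positive_zero(h, t_max) for h in (h1, h2, h3)]
    return etas, min(etas)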

We can show the following result on the convergence of majorizing sequences for the Midpoint method.


Lemma 2.1. Let the positive constants $L_0 > 0$, $L > 0$, $M \ge 0$ and $\eta > 0$ be given. Suppose that

(2.12)  $\eta \le \eta_0$ if $\eta_0 \ne \eta_1$, and $\eta < \eta_0$ if $\eta_0 = \eta_1$.

Then, the scalar sequence $\{t_n\}$ generated by

(2.13)  $t_0 = 0$, $\quad s_0 = \eta$,
$t_{n+1} = s_n + \dfrac{L(s_n - t_n)^2}{2\left[1 - \frac{L_0}{2}(s_n + t_n)\right]}$,
$s_{n+1} = t_{n+1} + \dfrac{12L(t_{n+1} - s_n)^2 + \dfrac{6L^2(s_n - t_n)^3}{1 - \frac{L_0}{2}(t_n + s_n)} + 7M(s_n - t_n)^3}{24\left(1 - L_0 t_{n+1}\right)}$

is increasing, bounded from above by

(2.14)  $t^{\star\star} = \dfrac{1 + \gamma}{1 - \alpha}\,\eta$

and converges to its unique least upper bound $t^\star$, which satisfies

(2.15)  $0 \le t^\star \le t^{\star\star}$.

Moreover, the following estimates hold for each $n = 0, 1, 2, \ldots$

(2.16)  $0 < t_{n+1} - s_n \le \gamma(s_n - t_n) \le \gamma\alpha^n\eta$

and

(2.17)  $0 < s_{n+1} - t_{n+1} \le \alpha(s_n - t_n) \le \alpha^{n+1}\eta$.

Proof. We use mathematical induction to prove (2.16) and (2.17). Estimates (2.16) and (2.17) hold for $n = 0$ by (2.1)–(2.3) and (2.13), since

(2.18)
$s_1 - t_1 = \dfrac{12L(t_1 - s_0)^2 + \dfrac{6L^2(s_0 - t_0)^3}{1 - \frac{L_0}{2}(t_0 + s_0)} + 7M(s_0 - t_0)^3}{24\left(1 - L_0 t_1\right)}$
$\le \dfrac{12L\gamma^2 + 12L\gamma + 7M(s_0 - t_0)}{24\left[1 - \frac{L_0}{2}(1 + \gamma)\eta\right]}\,(s_0 - t_0)^2$
$\le \dfrac{a}{1 - \frac{L_0}{2}(1 + \gamma)(s_0 - t_0)}\,(s_0 - t_0) \le \alpha(s_0 - t_0) = \alpha\eta$.

Let us assume that (2.16) and (2.17) hold for all $k \le n$. Then, we have
$t_{k+1} - s_k \le \gamma(s_k - t_k) \le \gamma\alpha^k\eta$,
$s_{k+1} - t_{k+1} \le \alpha(s_k - t_k) \le \alpha^{k+1}\eta$,


$t_{k+1} \le s_k + \gamma\alpha^k\eta \le t_k + \alpha^k\eta + \gamma\alpha^k\eta$
$\le t_{k-1} + \alpha^{k-1}\eta + \alpha^k\eta + \gamma\alpha^{k-1}\eta + \gamma\alpha^k\eta$
$\le \cdots \le t_2 + (\alpha^2\eta + \alpha^3\eta + \cdots + \alpha^k\eta) + (\gamma\alpha^2\eta + \cdots + \gamma\alpha^k\eta)$
$\le s_1 + \gamma\alpha\eta + (\alpha^2\eta + \alpha^3\eta + \cdots + \alpha^k\eta) + (\gamma\alpha^2\eta + \cdots + \gamma\alpha^k\eta)$
$\le t_1 + \alpha\eta + \gamma\alpha\eta + (\alpha^2\eta + \alpha^3\eta + \cdots + \alpha^k\eta) + (\gamma\alpha^2\eta + \cdots + \gamma\alpha^k\eta)$
$\le \eta + \gamma\eta + \alpha\eta + \gamma\alpha\eta + (\alpha^2\eta + \cdots + \alpha^k\eta) + (\gamma\alpha^2\eta + \cdots + \gamma\alpha^k\eta)$

(2.19)  $\le \dfrac{1 - \alpha^{k+1}}{1 - \alpha}(1 + \gamma)\eta < \dfrac{1 + \gamma}{1 - \alpha}\eta = t^{\star\star}$,

and

$s_{k+1} \le t_{k+1} + \alpha^{k+1}\eta \le \left[\dfrac{1 - \alpha^{k+1}}{1 - \alpha}(1 + \gamma) + \alpha^{k+1}\right]\eta < \dfrac{1 + \gamma}{1 - \alpha}\eta = t^{\star\star}$.

Evidently, estimates (2.16) and (2.17) are true provided that

(2.20)  $\dfrac{L(s_k - t_k)}{2\left[1 - \frac{L_0}{2}(s_k + t_k)\right]} \le \gamma$

and

(2.21)  $\dfrac{a(s_k - t_k)}{1 - L_0 t_{k+1}} \le \alpha$.

Inequality (2.20) can be written as

(2.22)  $\dfrac{L}{2}\alpha^k\eta + \gamma\dfrac{L_0}{2}\left[2\dfrac{1 - \alpha^k}{1 - \alpha}(1 + \gamma) + \alpha^k\right]\eta - \gamma \le 0$.

Estimate (2.22) motivates us to define recurrent functions $f_k$ on $[0, 1)$ for each $k = 1, 2, \ldots$ by

(2.23)  $f_k(t) = \dfrac{L}{2}t^k\eta + \gamma\dfrac{L_0}{2}\left[2\dfrac{1 - t^k}{1 - t}(1 + \gamma) + t^k\right]\eta - \gamma$.

We need a relationship between two consecutive functions $f_k$. We have by (2.23)

(2.24)
$f_{k+1}(t) = f_k(t) + \dfrac{L}{2}t^{k+1}\eta - \dfrac{L}{2}t^k\eta + \gamma\dfrac{L_0}{2}\left[2(1 + \gamma)(t^k - t^{k-1}) + t^{k+1} - t^k\right]\eta$
$= f_k(t) + (t - 1)\left[\dfrac{L}{2}t + \gamma L_0(1 + \gamma) + \dfrac{\gamma\alpha L_0}{2}\right]t^{k-1}\eta$.

It follows from (2.24) that

(2.25)  $f_{k+1}(t) \le f_k(t) \le \cdots \le f_1(t)$.

In view of (2.25), estimate (2.22) holds if

(2.26)  $f_1(\alpha) \le 0$,

which is true by the choice of $\eta_2$. Similarly, estimate (2.21) can be written as

(2.27)  $a\alpha^k\eta + \alpha L_0(1 + \gamma)\dfrac{1 - \alpha^{k+1}}{1 - \alpha}\eta - \alpha \le 0$.


Define recurrent functions $g_k$ on $[0, 1)$ for each $k = 1, 2, \ldots$ by

(2.28)  $g_k(t) = a\,t^{k-1}\eta + L_0(1 + \gamma)\dfrac{1 - t^{k+1}}{1 - t}\eta - 1$.

Then, using (2.28) we get

(2.29)  $g_{k+1}(t) = g_k(t) + (t - 1)\left[a + L_0(1 + t)t\right]t^{k-1}\eta$.

Hence, it follows from (2.29) that

(2.30)  $g_{k+1}(t) \le g_k(t) \le \cdots \le g_1(t)$.

In view of (2.30), instead of (2.27), we can show that

(2.31)  $g_1(\alpha) \le 0$,

which is true by the choice of $\eta_3$. The induction for (2.16) and (2.17) is complete. Hence, the sequence $\{t_n\}$ is increasing, bounded from above by $t^{\star\star}$, and as such it converges to its unique least upper bound $t^\star$. The proof of the Lemma is complete.

We have the following useful and obvious extension of Lemma 2.1.

Lemma 2.2. Suppose there exists $N \ge 0$ such that

(2.32)  $t_0 < s_0 < t_1 < \cdots < t_N < s_N < t_{N+1} < \dfrac{1}{L_0}$

and

(2.33)  $s_N - t_N \le \eta_0$ if $\eta_0 \ne \eta_1$, and $s_N - t_N < \eta_0$ if $\eta_0 = \eta_1$.

Then, the conclusions of Lemma 2.1 hold for the sequence $\{t_n\}$. Moreover, the following estimates hold for each $n = 0, 1, 2, 3, \ldots$

(2.34)  $0 < t_{N+1+n} - s_{N+n} \le \gamma_N(s_{N+n} - t_{N+n})$

and

(2.35)  $0 < s_{N+1+n} - t_{N+1+n} \le \alpha_N(s_{N+n} - t_{N+n})$,

where $\gamma_N = \gamma(s_N - t_N)$, $\alpha_N = \alpha(s_N - t_N)$ and $t^{\star\star}_N = \dfrac{1 + \gamma_N}{1 - \alpha_N}(s_N - t_N)$.

Remark 2.3.

(1) Note that for $N = 0$, Lemma 2.2 reduces to Lemma 2.1 with $\alpha_0 = \alpha$ and $\gamma_0 = \gamma$.


(2) Let us define sequences $\{r_n\}$ and $\{v_n\}$ by

(2.36)
$r_0 = 0$, $\quad q_0 = \eta$, $\quad r_1 = q_0 + \dfrac{K_0(q_0 - r_0)^2}{2\left(1 - L_3\eta/2\right)}$,
$q_1 = r_1 + \dfrac{12L_1(r_1 - q_0)^2 + \dfrac{6L_2 L_2'(q_0 - r_0)^3}{1 - L_0 r_0/2} + 7M_0(q_0 - r_0)^3}{24\left(1 - L_3 r_1\right)}$,
$r_{n+1} = q_n + \dfrac{L(q_n - r_n)^2}{2\left[1 - \frac{L_0}{2}(q_n + r_n)\right]}$,
$q_{n+1} = r_{n+1} + \dfrac{12L(r_{n+1} - q_n)^2 + \dfrac{6L^2(q_n - r_n)^3}{1 - \frac{L_0}{2}(r_n + q_n)} + 7M(q_n - r_n)^3}{24\left(1 - L_0 r_{n+1}\right)}$  $(n \ge 1)$,

for some $L_0$, $L_1$, $L_2$, $L_2'$, $L_3$, $K_0$, $M_0$ such that

(2.37)  $L_1 \le L$, $\quad L_2 \le L$, $\quad L_2' \le L$, $\quad K_0 \le L$, $\quad L_3 \le L_0$ and $M_0 \le M$,

and

(2.38)
$v_0 = 0$, $\quad u_0 = \eta$, $\quad v_{n+1} = u_n + \dfrac{L(u_n - v_n)^2}{2\left[1 - \frac{L_0}{2}(v_n + u_n)\right]}$,
$u_{n+1} = v_{n+1} + \dfrac{12L(v_{n+1} - u_n)^2 + \dfrac{6L^2(u_n - v_n)^3}{1 - \frac{L_0}{2}(v_n + u_n)} + 7M(u_n - v_n)^3}{24\left(1 - L v_{n+1}\right)}$.

In view of (1.8), (2.13), (2.36) and (2.37)–(2.38), a simple inductive argument shows that

(2.39)  $r_n \le t_n \le v_n$,

(2.40)  $q_n \le s_n \le u_n$,

(2.41)  $r_{n+1} - q_n \le t_{n+1} - s_n \le v_{n+1} - u_n$,

(2.42)  $q_{n+1} - r_{n+1} \le s_{n+1} - t_{n+1} \le u_{n+1} - v_{n+1}$

and

(2.43)  $r^\star = \lim_{n\to\infty} r_n \le t^\star \le v^\star = \lim_{n\to\infty} v_n$.

Moreover, (2.39)–(2.42) hold as strict inequalities for $n \ge 1$ if (1.8) and (2.37) hold as strict inequalities. The sequence $\{v_n\}$ was shown to be majorizing for the midpoint method (1.2) provided that (1.3) or (1.4) holds [1, 3, 4, 10–16, 24–26, 28]. We shall prove in the next section that the tighter sequences $\{r_n\}$ and $\{t_n\}$ are also majorizing for the midpoint method (1.2). Then, certainly these majorizing sequences also converge under (1.3) or (1.4). However, these sequences converge under the new convergence criteria given in Lemma 2.1, which can be weaker than (1.3) or (1.4) (see Section 4). In the next section, we shall provide the connection of $L_0$, $L_1$, $L_2$, $L_2'$, $L_3$, $K_0$, $M_0$ to equation (1.1) and the midpoint method (1.2), so that the estimates (2.37) are satisfied.
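To compare the sequences numerically (as is done in Table 4.2 below), one can generate $\{t_n, s_n\}$ from (2.13) and $\{v_n, u_n\}$ from (2.38) directly. The sketch below is our own illustration of the recursions as written above; the sequence $\{r_n\}$ of (2.36) is generated analogously once $K_0$, $L_1$, $L_2$, $L_2'$, $L_3$, $M_0$ are available:

def sequence_t(L0, L, M, eta, n_steps=10):
    # Majorizing sequence {t_n, s_n} of (2.13).
    t, s = 0.0, eta
    pairs = [(t, s)]
    for _ in range(n_steps):
        t_new = s + L * (s - t)**2 / (2.0 * (1.0 - (L0 / 2.0) * (s + t)))
        num = (12.0 * L * (t_new - s)**2
               + 6.0 * L**2 * (s - t)**3 / (1.0 - (L0 / 2.0) * (t + s))
               + 7.0 * M * (s - t)**3)
        s_new = t_new + num / (24.0 * (1.0 - L0 * t_new))
        t, s = t_new, s_new
        pairs.append((t, s))
    return pairs

def sequence_v(L0, L, M, eta, n_steps=10):
    # Coarser sequence {v_n, u_n} of (2.38): same recursion, but with
    # 1 - L*v_{n+1} in the last denominator instead of 1 - L0*t_{n+1}.
    v, u = 0.0, eta
    pairs = [(v, u)]
    for _ in range(n_steps):
        v_new = u + L * (u - v)**2 / (2.0 * (1.0 - (L0 / 2.0) * (u + v)))
        num = (12.0 * L * (v_new - u)**2
               + 6.0 * L**2 * (u - v)**3 / (1.0 - (L0 / 2.0) * (v + u))
               + 7.0 * M * (u - v)**3)
        u_new = v_new + num / (24.0 * (1.0 - L * v_new))
        v, u = v_new, u_new
        pairs.append((v, u))
    return pairs

The increments $t_{n+1} - t_n$ and $v_{n+1} - v_n$ obtained this way are the quantities compared in Table 4.2.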

3. SEMI-LOCAL CONVERGENCE OF THE MIDPOINT METHOD

We need the following Ostrowski-type representation for the midpoint method (1.2).

Lemma 3.1. Suppose that the midpoint method (1.2) is well defined for each $n = 0, 1, 2, \ldots$. Then, the following identities are true for each $n = 0, 1, 2, \ldots$

(3.1)
$F(x_{n+1}) = \displaystyle\int_0^1 F''\big(y_n + \theta(x_{n+1} - y_n)\big)(1 - \theta)\,d\theta\,(x_{n+1} - y_n)^2$
$\quad + \dfrac{1}{4}\displaystyle\int_0^1 F''\!\left(\dfrac{x_n + y_n}{2} + \dfrac{\theta}{2}(y_n - x_n)\right)d\theta\,(y_n - x_n)\,F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}\displaystyle\int_0^1 F''\!\left(x_n + \dfrac{\theta}{2}(y_n - x_n)\right)d\theta\,(y_n - x_n)^2$
$\quad + \displaystyle\int_0^1\left[F''\big(x_n + \theta(y_n - x_n)\big)(1 - \theta) - \dfrac{1}{2}F''\!\left(x_n + \dfrac{\theta}{2}(y_n - x_n)\right)\right]d\theta\,(y_n - x_n)^2$

and

$x_{n+1} - y_n = -\dfrac{1}{2}F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}\displaystyle\int_0^1 F''\!\left(x_n + \dfrac{\theta}{2}(y_n - x_n)\right)d\theta\,(y_n - x_n)^2$.

Proof. The proof of (3.1) can be found in [1–4]. Using (1.2), we get in turn that

$x_{n+1} - y_n = F'(x_n)^{-1}F(x_n) - F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}F(x_n)$
$= \left[F'(x_n)^{-1} - F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}\right]F(x_n)$
$= F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}\left[F'\!\left(\dfrac{x_n + y_n}{2}\right)F'(x_n)^{-1} - I\right]F(x_n)$
$= F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}\left[F'\!\left(\dfrac{x_n + y_n}{2}\right) - F'(x_n)\right]F'(x_n)^{-1}F(x_n)$
$= F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}\displaystyle\int_0^1 F''\!\left(x_n + \theta\left(\dfrac{x_n + y_n}{2} - x_n\right)\right)d\theta\left(\dfrac{x_n + y_n}{2} - x_n\right)\big[-(y_n - x_n)\big]$
$= -\dfrac{1}{2}F'\!\left(\dfrac{x_n + y_n}{2}\right)^{-1}\displaystyle\int_0^1 F''\!\left(x_n + \dfrac{\theta}{2}(y_n - x_n)\right)d\theta\,(y_n - x_n)^2$.

The proof of the Lemma is complete.

We can show the main semi-local convergence result for the midpoint method (1.2) under the $(H)$ conditions.


Theorem 3.2. Suppose that the $(H)$ conditions and those of Lemma 2.1 hold. Moreover, suppose that

(3.2)  $\overline{U}(x_0, t^\star) \subseteq D$.

Then, the sequence $\{x_n\}$ generated by the midpoint method (1.2) is well defined, remains in $\overline{U}(x_0, t^\star)$ for all $n \ge 0$ and converges to a solution $x^\star \in \overline{U}(x_0, t^\star)$ of the equation $F(x) = 0$. Moreover, the following estimates hold:

(3.3)  $\|y_n - x_n\| \le s_n - t_n$,

(3.4)  $\|x_{n+1} - y_n\| \le t_{n+1} - s_n$,

(3.5)  $\|x_n - x^\star\| \le t^\star - t_n$

and

(3.6)  $\|y_n - x^\star\| \le t^\star - s_n$.

Furthermore, if there exists $R \ge t^\star$ such that

(3.7)  $U(x_0, R) \subseteq D$

and

(3.8)  $\dfrac{L_0}{2}\left(t^\star + R\right) < 1$,

then the solution $x^\star$ is unique in $U(x_0, R)$.

Proof. We shall prove that (3.3) and (3.4) hold using mathematical induction. Using (1.2), (C2) and (2.13), we get that $\|y_0 - x_0\| = \|F'(x_0)^{-1}F(x_0)\| \le \eta = s_0 - t_0 \le t^\star$. That is, (3.3) holds for $n = 0$ and $y_0 \in U(x_0, t^\star)$. We have by (C5) and the choice of $\eta_1$ that

(3.9)  $\left\|F'(x_0)^{-1}\left[F'\!\left(\tfrac{x_0 + y_0}{2}\right) - F'(x_0)\right]\right\| \le \dfrac{L_0}{2}\|y_0 - x_0\| \le \dfrac{L_0}{2}(s_0 - t_0) = \dfrac{L_0}{2}\eta < 1$.

It then follows from (3.9) and the Banach lemma on invertible operators [23, 32] that

(3.10)  $F'\!\left(\tfrac{x_0 + y_0}{2}\right)^{-1} \in L(Y, X)$, $\qquad \left\|F'\!\left(\tfrac{x_0 + y_0}{2}\right)^{-1}F'(x_0)\right\| \le \dfrac{1}{1 - \frac{L_0}{2}\eta}$.

Using (1.2), (2.13), Lemma 3.1 and (3.10) we obtain

(3.11)  $\|x_1 - y_0\| \le \dfrac{L\|y_0 - x_0\|^2}{2\left(1 - \frac{L_0}{2}\eta\right)} \le \dfrac{L(s_0 - t_0)^2}{2\left(1 - \frac{L_0}{2}\eta\right)}$

and

(3.12)  $\|x_1 - y_0\| \le t_1 - s_0 \le \gamma(s_0 - t_0)$.


Hence, (3.4) holds for $n = 0$. We also get that $\|x_1 - x_0\| \le \|x_1 - y_0\| + \|y_0 - x_0\| \le t_1 - s_0 + s_0 - t_0 = t_1 \le t^\star$, which implies $x_1 \in U(x_0, t^\star)$. Let us assume that (3.3), (3.4), $y_k \in U(x_0, t^\star)$ and $x_{k+1} \in U(x_0, t^\star)$ hold for all $k \le n$. It follows from the proof of Lemma 2.1 and (C5) that

(3.13)  $\left\|F'(x_0)^{-1}\left[F'\!\left(\tfrac{x_k + y_k}{2}\right) - F'(x_0)\right]\right\| \le \dfrac{L_0}{2}\left(\|x_k - x_0\| + \|y_k - x_0\|\right) \le \dfrac{L_0}{2}(t_k + s_k) < 1$

and

(3.14)  $\left\|F'(x_0)^{-1}\left[F'(x_{k+1}) - F'(x_0)\right]\right\| \le L_0\|x_{k+1} - x_0\| \le L_0 t_{k+1} < 1$.

It then follows from (3.13) and (3.14) that

$F'\!\left(\tfrac{x_k + y_k}{2}\right)^{-1} \in L(Y, X)$, $\qquad F'(x_{k+1})^{-1} \in L(Y, X)$,

(3.15)  $\left\|F'\!\left(\tfrac{x_k + y_k}{2}\right)^{-1}F'(x_0)\right\| \le \dfrac{1}{1 - \frac{L_0}{2}(t_k + s_k)}$,

(3.16)  $\left\|F'(x_{k+1})^{-1}F'(x_0)\right\| \le \dfrac{1}{1 - L_0 t_{k+1}}$.

Then, we have by (1.2), (C3), Lemma 3.1, (2.13), (3.15) and the induction hypothesis that

(3.17)  $\|x_{k+1} - y_k\| \le \dfrac{L\|y_k - x_k\|^2}{2\left[1 - \frac{L_0}{2}(s_k + t_k)\right]} \le \dfrac{L(s_k - t_k)^2}{2\left[1 - \frac{L_0}{2}(s_k + t_k)\right]} = t_{k+1} - s_k$,

which shows (3.4). Moreover, using (1.2), (C3), (C4), (2.13) and Lemma 3.1, we obtain in turn

(3.18)
$\displaystyle\int_0^1 \left\|F'(x_0)^{-1}\left[F''\big(x_k + \theta(y_k - x_k)\big)(1 - \theta) - \tfrac{1}{2}F''\!\left(x_k + \tfrac{\theta}{2}(y_k - x_k)\right)\right]\right\| d\theta$
$\displaystyle\le \int_0^1 \left\|F'(x_0)^{-1}\left[F''\big(x_k + \theta(y_k - x_k)\big) - F''(x_k)\right]\right\|(1 - \theta)\,d\theta$
$\displaystyle\quad + \tfrac{1}{2}\int_0^1 \left\|F'(x_0)^{-1}\left[F''(x_k) - F''\!\left(x_k + \tfrac{\theta}{2}(y_k - x_k)\right)\right]\right\| d\theta$
$\displaystyle\le M\int_0^1 \theta(1 - \theta)\,d\theta\,\|y_k - x_k\| + \dfrac{M}{4}\int_0^1 \theta\,d\theta\,\|y_k - x_k\|$
$= \dfrac{7M}{24}\|y_k - x_k\|$.


Thus,

(3.19)
$\left\|F'(x_0)^{-1}F(x_{k+1})\right\| \le \dfrac{L}{2}\|x_{k+1} - y_k\|^2 + \dfrac{L^2}{4}\cdot\dfrac{1}{1 - \frac{L_0}{2}(s_k + t_k)}\|y_k - x_k\|^3 + \dfrac{7M}{24}\|y_k - x_k\|^3$
$\le \dfrac{L}{2}(t_{k+1} - s_k)^2 + \dfrac{L^2(s_k - t_k)^3}{4\left[1 - \frac{L_0}{2}(s_k + t_k)\right]} + \dfrac{7M}{24}(s_k - t_k)^3$

and

(3.20)
$\|y_{k+1} - x_{k+1}\| \le \left\|F'(x_{k+1})^{-1}F'(x_0)\right\|\,\left\|F'(x_0)^{-1}F(x_{k+1})\right\|$
$\le \dfrac{\dfrac{L}{2}(t_{k+1} - s_k)^2 + \dfrac{L^2(s_k - t_k)^3}{4\left[1 - \frac{L_0}{2}(t_k + s_k)\right]} + \dfrac{7M}{24}(s_k - t_k)^3}{1 - L_0 t_{k+1}} = s_{k+1} - t_{k+1}$,

which shows (3.3). We also have that

$\|y_{k+1} - x_0\| \le \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - x_0\| \le s_{k+1} - t_{k+1} + t_{k+1} - t_0 = s_{k+1} \le t^\star$

and

$\|x_{k+2} - x_0\| \le \|x_{k+2} - y_{k+1}\| + \|y_{k+1} - x_0\| \le t_{k+2} - s_{k+1} + s_{k+1} - t_0 = t_{k+2} \le t^\star$.

Hence, $y_{k+1}$ and $x_{k+2}$ belong to $U(x_0, t^\star)$. It follows from (3.3), (3.4) and Lemma 2.1 that the sequence $\{x_n\}$ is complete in a Banach space $X$ and as such it converges to some $x^\star \in \overline{U}(x_0, t^\star)$ (since $\overline{U}(x_0, t^\star)$ is a closed set). By letting $k \to \infty$ in (3.19) we obtain $F(x^\star) = 0$. Estimates (3.5) and (3.6) follow from (3.3) by using standard majorization techniques [23, 32]. Finally, we show the uniqueness part. Let $y^\star \in U(x_0, R)$ be a solution of the equation $F(x) = 0$. Let $Q = \int_0^1 F'\big(x^\star + \theta(y^\star - x^\star)\big)\,d\theta$. Using (C5), (3.7) and (3.8), we get that

(3.21)
$\left\|F'(x_0)^{-1}\left[Q - F'(x_0)\right]\right\| \le \displaystyle\int_0^1 \left\|F'(x_0)^{-1}\left[F'\big(x^\star + \theta(y^\star - x^\star)\big) - F'(x_0)\right]\right\| d\theta$
$\le L_0\displaystyle\int_0^1 \left[(1 - \theta)\|x^\star - x_0\| + \theta\|y^\star - x_0\|\right] d\theta \le \dfrac{L_0}{2}\left(t^\star + R\right) < 1$.

It follows from (3.21) and the Banach lemma on invertible operators [23, 32] that $Q^{-1} \in L(Y, X)$. Then, using the identity

$0 = F(y^\star) - F(x^\star) = Q(y^\star - x^\star)$,

we deduce that $x^\star = y^\star$. The proof of the Theorem is complete.

Remark 3.3.

(1) The limit point $t^\star$ can be replaced by $t^{\star\star}$ (given in closed form by (2.14)) in Theorem 3.2.


(2) The conclusions of Theorem 3.2 hold if hypotheses of Lemma 2.1 are replaced by those of Lemma 2.2.

(3) It follows from the $(H)$ conditions that there exist constants $K_0$, $L_1$, $L_2$, $L_2'$, $L_3$, $M_0$ satisfying

(3.22)  $\left\|F'(x_0)^{-1}F''\!\left(x_0 + \tfrac{\theta}{2}(y_0 - x_0)\right)\right\| \le K_0$,

(3.23)  $\left\|F'(x_0)^{-1}F''\big(y_0 + \theta(x_1 - y_0)\big)\right\| \le L_1$,

(3.24)  $\left\|F'(x_0)^{-1}F''\!\left(x_0 + \tfrac{\theta}{2}(y_0 - x_0)\right)\right\| \le L_2$,

(3.25)  $\left\|F'(x_0)^{-1}F''\!\left(\tfrac{x_0 + y_0}{2} + \tfrac{\theta}{2}(y_0 - x_0)\right)\right\| \le L_2'$,

(3.26)  $\left\|F'(x_0)^{-1}\left[F'\!\left(\tfrac{x_0 + y_0}{2}\right) - F'(x_0)\right]\right\| \le \dfrac{L_3}{2}\|y_0 - x_0\|$,

(3.27)  $\left\|F'(x_0)^{-1}\left[F''\big(x_0 + \bar\theta(y_0 - x_0)\big) - F''(x_0)\right]\right\| \le M_0\,\bar\theta\,\|y_0 - x_0\|$, $\qquad \bar\theta = \theta$ or $\theta/2$,

for all $\theta \in [0, 1]$, where $y_0 = x_0 - F'(x_0)^{-1}F(x_0)$ and $x_1 = x_0 - F'\!\left(\tfrac{x_0 + y_0}{2}\right)^{-1}F(x_0)$. Estimates (3.22)–(3.27) are not additional to the $(H)$ conditions, since in practice the verification of (C2)–(C5) requires the computation of $K_0$, $L_1$, $L_2$, $L_2'$, $L_3$ and $M_0$. Note that finding these constants only involves computations at the initial data. Moreover, these constants satisfy (2.37). Furthermore, according to the proof of Theorem 3.2, $\{r_n\}$ is a majorizing sequence for $\{x_n\}$ which is finer than $\{t_n\}$ and $\{v_n\}$ (see also (2.39)–(2.43) and the tables in the next section).

4. NUMERICAL EXAMPLES

Example 4.1. Let $X = Y = \mathbb{R}$ be equipped with the max-norm, $x_0 = 1$, $D = [\psi, 2-\psi]$. Let us define $F$ on $D$ by

(4.1)  $F(x) = x^m - \psi$.

Here, $\psi \in (0, 1)$. Through some algebraic manipulations, for the conditions $(H)$, we obtain

$\eta = \dfrac{1-\psi}{m}$, $\qquad L = (m-1)(2-\psi)^{m-2}$, $\qquad L_0 = \dfrac{(2-\psi)^{m-1} - 1}{1-\psi}$

and $M = (m-1)(m-2)(2-\psi)^{m-3}$.

Furthermore, we see that for $m = 8$ and $\psi = 0.79$ the criteria (1.3) and (1.4) yield

$0.026 \le 0.021$ and $0.026 \le 0.020$,

respectively. Thus we observe that the criteria (1.3) and (1.4) are not satisfied. Even though the criteria (1.3) and (1.4) fall short, the midpoint method, starting at $x_0 = 1$, converges for $m = 8$ and $\psi = 0.79$, as reported in Table 4.1.
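The constants and both criteria can be checked directly. A minimal Python sketch of ours, using the formulas for $\eta$, $L$, $L_0$, $M$ given in this example:

import math

m, psi = 8, 0.79
eta = (1 - psi) / m
L   = (m - 1) * (2 - psi)**(m - 2)
L0  = ((2 - psi)**(m - 1) - 1) / (1 - psi)
M   = (m - 1) * (m - 2) * (2 - psi)**(m - 3)

# Right-hand sides of the criteria (1.3) and (1.4).
s = math.sqrt(L**2 + 2*M)
rhs13 = (4*M + L**2 - L*s) / (3*M*(L + s))
K = L * math.sqrt(1 + 7*M / (6*L**2))
rhs14 = 1 / (2*K)

print(eta, rhs13, rhs14)   # eta exceeds both right-hand sides here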

Moreover, from equations (2.4)–(2.6) we obtain

$\eta_1 = 0.038$, $\qquad \eta_2 = 0.028$, $\qquad \eta_3 = 0.027$.


n    $\|x_{n+1} - x_n\|$     $\|F(x_n)\|$            $x_n$
0    2.879×10^{-02}          2.100×10^{-01}          1.000×10^{+00}
1    2.419×10^{-04}          1.576×10^{-03}          9.712×10^{-01}
2    1.576×10^{-10}          1.026×10^{-09}          9.710×10^{-01}
3    4.357×10^{-29}          2.836×10^{-28}          9.710×10^{-01}
4    9.209×10^{-85}          5.994×10^{-84}          9.710×10^{-01}
5    8.697×10^{-252}         5.661×10^{-251}         9.710×10^{-01}
6    7.327×10^{-753}         4.769×10^{-752}         9.710×10^{-01}
7    0.000×10^{+00}          1.198×10^{-2023}        9.710×10^{-01}

Table 4.1. Midpoint method applied to (4.1).

From (2.7), we get $\eta_0 = \eta_3 = 0.027$. We notice that the assumption (2.12) of Lemma 2.1 holds, that is, $\eta = 0.026 < \eta_0 = 0.027$. From (3.22)–(3.26), we obtain

$K_0 = 7$, $\quad L_1 = 7\left(\dfrac{7+\psi}{8}\right)^6$, $\quad L_2 = 7$, $\quad L_3 = 6\,\dfrac{\left(\frac{2}{3} + \frac{1}{3}\psi - \left(-\frac{1}{3} + \frac{1}{3}\psi\right)^2\right)^2 - 1}{\left\|-\frac{1}{3} + \frac{1}{3}\psi - \left(-\frac{1}{3} + \frac{1}{3}\psi\right)^2\right\|}$, $\quad M_0 = 7$, $\quad L_2' = 6\,\dfrac{5+\psi}{3}$.

We can verify that the conditions (2.37) are fulfilled. Additionally, for the sequences $\{t_n\}$ (given by (2.13)), $\{r_n\}$ (given by (2.36)) and $\{v_n\}$ (given by (2.38)), we produce Table 4.2. In Table 4.2, we observe that the sequence $\{r_n\}$ provides tighter error bounds than the sequence $\{t_n\}$. The convergence of the sequence $\{v_n\}$ is not expected, since (1.3) or (1.4) are not satisfied. Note also that $\{v_n\}$ was essentially used as a majorizing sequence for the midpoint method in [1, 3, 4, 10–16, 24–26, 28].

n    $t_{n+1} - t_n$         $r_{n+1} - r_n$         $v_{n+1} - v_n$
0    3.084×10^{-02}          2.909×10^{-02}          3.542×10^{-02}
1    6.199×10^{-03}          6.515×10^{-04}          2.818×10^{-02}
2    1.022×10^{-04}          7.473×10^{-08}          −4.567×10^{-03}
3    5.692×10^{-10}          1.162×10^{-19}          6.400×10^{-04}
4    9.876×10^{-26}          4.366×10^{-55}          −4.798×10^{-07}
5    5.158×10^{-73}          2.317×10^{-161}         2.204×10^{-16}
6    7.350×10^{-215}         3.463×10^{-480}         −2.134×10^{-44}
7    2.126×10^{-640}         1.156×10^{-1436}        1.937×10^{-128}
8    5.147×10^{-1917}        0.000×10^{+00}          −1.450×10^{-380}
9    0.000×10^{+00}          0.000×10^{+00}          6.081×10^{-1137}

Table 4.2. Sequences $\{t_n\}$, $\{r_n\}$ and $\{v_n\}$.
