
Rev. Anal. Numér. Théor. Approx., vol. 43 (2014) no. 1, pp. 33–44, ictp.acad.ro/jnaat

CONVERGENCE ANALYSIS FOR THE TWO-STEP NEWTON METHOD OF ORDER FOUR

IOANNIS K. ARGYROS and SANJAY K. KHATTRI

Abstract. We provide a tighter convergence analysis than before for the two-step Newton method of order four using recurrent functions. Numerical examples are also provided in this study.

MSC 2000. 65H10; 65G99; 65J15; 47H17; 49M15.

Keywords. Two-step Newton method, Newton's method, Banach space, Kantorovich hypothesis, majorizing sequence, Lipschitz/center-Lipschitz condition.

1. INTRODUCTION

In this study, we are concerned with the problem of approximating a locally unique solution $x^\star$ of the equation

$$F(x) = 0, \tag{1.1}$$

where $F$ is a Fréchet-differentiable operator defined on a convex subset $\mathcal{D}$ of a Banach space $\mathcal{X}$ with values in a Banach space $\mathcal{Y}$.

Many problems in computational mathematics can be brought into the form (1.1). The solutions of these equations are rarely found in closed form, and therefore most solution methods for these equations are iterative. Newton's method
$$x_{n+1} = x_n - F'(x_n)^{-1}F(x_n) \quad (n \ge 0), \quad (x_0 \in \mathcal{D}), \tag{1.2}$$
is undoubtedly the most popular method for generating a sequence $\{x_n\}$ converging quadratically to $x^\star$ [5, 13, 15]. The two-step Newton method (TSNM)
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n) \quad (n \ge 0), \quad (x_0 \in \mathcal{D}),\\ x_{n+1} &= y_n - F'(y_n)^{-1}F(y_n), \end{aligned} \tag{1.3}$$

generates a sequence $\{x_n\}$ converging to $x^\star$ with order four [5, 9].
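For illustration only, here is a minimal numerical sketch of (TSNM), assuming the operator and its Fréchet derivative are supplied as Python callables returning a NumPy vector and a Jacobian matrix; the function names and tolerance are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def tsnm(F, dF, x0, tol=1e-12, max_iter=20):
    """Two-step Newton method (1.3): a Newton correction from x_n to y_n,
    followed by a second Newton correction from y_n to x_{n+1}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = x - np.linalg.solve(dF(x), F(x))        # y_n = x_n - F'(x_n)^{-1} F(x_n)
        x_next = y - np.linalg.solve(dF(y), F(y))   # x_{n+1} = y_n - F'(y_n)^{-1} F(y_n)
        if np.linalg.norm(x_next - x, ord=np.inf) < tol:
            return x_next
        x = x_next
    return x
```

With the operator of the numerical example in §3, for instance, `F` would return $(\xi_1^3 - p,\ \xi_2^3 - p)^T$ and `dF` its diagonal Jacobian.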

Department of Mathematical Sciences, Cameron University, Lawton, Oklahoma 73505-6377, USA, e-mail: [email protected].

Department of Engineering, Stord Haugesund University College, Norway, e-mail: [email protected].

The following conditions have been used to show the semilocal convergence of Newton's method (1.2) and, consequently, the semilocal convergence of (TSNM) [5, 13, 15, 17] (CK):

$$F'(x_0)^{-1} \in L(\mathcal{Y}, \mathcal{X}) \quad \text{for some } x_0 \in \mathcal{D};$$
$$\|F'(x_0)^{-1}F(x_0)\| \le \eta;$$
$$\|F'(x_0)^{-1}(F'(x) - F'(y))\| \le L\|x - y\| \quad \text{for all } x, y \in \mathcal{D};$$
$$h_K = L\eta \le \tfrac{1}{2}; \tag{1.4}$$
and
$$U(x_0, \lambda) = \{x \in \mathcal{X} : \|x - x_0\| \le \lambda\} \subseteq \mathcal{D}, \quad \text{for specified } \lambda \ge 0.$$

Note that (1.4) is the Kantorovich sufficient convergence hypothesis for Newton's method (1.2), famous for its simplicity and clarity. A current survey on Newton-type methods can be found in [5] and the references therein (see also [1–4] and [6–17]). We have shown in [5] the quadratic convergence of Newton's method (1.2) using the set of conditions (CAH):

$$F'(x_0)^{-1} \in L(\mathcal{Y}, \mathcal{X}) \quad \text{for some } x_0 \in \mathcal{D};$$
$$\|F'(x_0)^{-1}F(x_0)\| \le \eta;$$
$$\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \le L_0\|x - x_0\| \quad \text{for all } x \in \mathcal{D};$$
$$\|F'(x_0)^{-1}(F'(x) - F'(y))\| \le L\|x - y\| \quad \text{for all } x, y \in \mathcal{D};$$
$$h_{AH} = \bar{L}\eta \le \tfrac{1}{2}; \tag{1.5}$$
and
$$U(x_0, \lambda_0) \subseteq \mathcal{D}, \quad \text{for some specified } \lambda_0 \ge 0,$$
where
$$\bar{L} = \tfrac{1}{8}\left(L + 4L_0 + \sqrt{L^2 + 8L_0 L}\right). \tag{1.6}$$

Note that
$$L_0 \le L \tag{1.7}$$
holds in general, and $L/L_0$ can be arbitrarily large [4, 5]. Moreover, the center-Lipschitz condition with constant $L_0$ is not an additional hypothesis, since $L_0$ is a special case of $L$. Furthermore, we have by (1.4)–(1.7) that
$$h_K \le \tfrac{1}{2} \implies h_{AH} \le \tfrac{1}{2}, \tag{1.8}$$
but not necessarily vice versa, unless $L_0 = L$. The error analysis under (1.5) is also tighter than under (1.4). Hence, the applicability of Newton's method (1.2) has been extended.
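As a small numerical illustration of (1.6)–(1.8), the sketch below evaluates $\bar{L}$, $h_K$ and $h_{AH}$ for the constants of the numerical example in §3; the script itself is only an illustrative aid and not part of the original analysis.

```python
from math import sqrt

# Constants taken from the numerical example in Section 3 (p = 0.7).
L0, L, eta = 2.3, 2.6, 0.1

L_bar = (L + 4 * L0 + sqrt(L ** 2 + 8 * L0 * L)) / 8   # (1.6)
h_K = L * eta                                          # quantity in (1.4)
h_AH = L_bar * eta                                     # quantity in (1.5)

# Since L0 <= L implies L_bar <= L, h_K <= 1/2 always forces h_AH <= 1/2, as in (1.8).
print(h_K, h_AH)   # approximately 0.26 and 0.2399
```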

In this study, we provide sufficient convergence conditions for (TSNM) corresponding to (1.4). The paper is organized as follows: §2 contains the semilocal convergence analysis for (TSNM), whereas the numerical examples are given in §3.

2. SEMILOCAL CONVERGENCE ANALYSIS FOR (TSNM)

We need the following result on the majorizing sequence for (TSNM).

Lemma 2.1. Let $L_0 > 0$, $L > 0$ and $\eta \ge 0$ be given constants. Assume there exist parameters $\alpha$ and $\varphi$ such that
$$\frac{L\eta}{2(1 - L_0\eta)} \le \alpha, \tag{2.1}$$
$$\frac{L_1\eta}{2(1 - L_2\eta)} \le \varphi \le \varphi_0 \tag{2.2}$$
and
$$\eta \le \eta_0, \tag{2.3}$$
where
$$L_1 = \alpha^2 L, \qquad L_2 = (1+\alpha)L_0, \tag{2.4}$$
$$\varphi_1 = \frac{4L_0\alpha}{2(L_0 + L_2)\alpha - L + \sqrt{[2(L_0 + L_2)\alpha - L]^2 + 8L_0\alpha L}}, \tag{2.5}$$
$$\varphi_2 = \frac{2L_1}{L_1 + \sqrt{L_1^2 + 8L_1 L_2}}, \qquad \varphi_3 = 2\alpha\left[1 - (L_0 + L_2)\eta\right], \tag{2.6}$$
$$\varphi_0 = \min\{\varphi_1, \varphi_2, \varphi_3\}, \tag{2.7}$$
$$\eta_1 = \frac{2}{L_1 + 2L_2(1+\varphi)}, \qquad \eta_2 = \frac{1}{L_0 + L_2}, \tag{2.8}$$
$$\eta_0 = \min\{\eta_1, \eta_2\}. \tag{2.9}$$

Then, the sequences $\{s_n\}$, $\{t_n\}$ generated by
$$t_0 = 0, \quad s_0 = \eta, \quad t_{n+1} = s_n + \frac{L(s_n - t_n)^2}{2(1 - L_0 s_n)}, \quad s_{n+1} = t_{n+1} + \frac{L(t_{n+1} - s_n)^2}{2(1 - L_0 t_{n+1})}, \tag{2.10}$$
are non-decreasing, bounded from above by
$$t^{\star\star} = \frac{1+\alpha}{1-\varphi}\,\eta, \tag{2.11}$$
and converge to their common least upper bound $t^\star \in [0, t^{\star\star}]$. Moreover, the following estimates hold:
$$0 \le t_{n+1} - s_n \le \alpha(s_n - t_n) \tag{2.12}$$
and
$$0 \le s_{n+1} - t_{n+1} \le \varphi(s_n - t_n). \tag{2.13}$$
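To make the recursion (2.10) concrete, the following sketch generates $\{t_n\}$ and $\{s_n\}$; the function name and the plain-Python layout are illustrative assumptions.

```python
def majorizing_sequences(L0, L, eta, n_terms=6):
    """Generate (t_n, s_n) from (2.10): t_0 = 0, s_0 = eta,
    t_{n+1} = s_n + L (s_n - t_n)^2 / (2 (1 - L0 s_n)),
    s_{n+1} = t_{n+1} + L (t_{n+1} - s_n)^2 / (2 (1 - L0 t_{n+1}))."""
    t, s = 0.0, eta
    pairs = [(t, s)]
    for _ in range(n_terms):
        t = s + L * (s - t) ** 2 / (2.0 * (1.0 - L0 * s))   # t_{n+1}
        s = t + L * (t - s) ** 2 / (2.0 * (1.0 - L0 * t))   # s_{n+1}
        pairs.append((t, s))
    return pairs
```

With the constants of the numerical example in §3 ($L_0 = 2.3$, $L = 2.6$, $\eta = 0.1$, $\alpha = 0.17$, $\varphi = 0.0052$), every generated $t_n$ and $s_n$ stays below $t^{\star\star} = (1+\alpha)\eta/(1-\varphi) \approx 0.1176$, in agreement with (2.11).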

Proof. We shall show, using induction on $k$, that
$$0 \le \frac{L(s_k - t_k)}{2(1 - L_0 s_k)} \le \alpha \tag{2.14}$$


and
$$0 \le \frac{L_1(s_k - t_k)}{2(1 - L_0 t_{k+1})} \le \varphi. \tag{2.15}$$

Note that estimates (2.12) and (2.13) will then follow from (2.14) and (2.15), respectively. Estimates (2.14) and (2.15) hold for $k = 0$ by the left-hand side hypotheses in (2.1) and (2.2), respectively. It follows from (2.10), (2.14) and (2.15) that estimates (2.12) and (2.13) hold for $n = 0$. Let us assume that estimates (2.14) and (2.15) hold for all $k \le n$. It then follows that estimates (2.12) and (2.13) hold for all $k \le n$. We then have:

$$0 \le s_k - t_k \le \varphi(s_{k-1} - t_{k-1}) \le \varphi^2(s_{k-2} - t_{k-2}) \le \cdots \le \varphi^k\eta, \tag{2.16}$$
$$0 \le t_{k+1} - s_k \le \alpha(s_k - t_k) \le \alpha\varphi^k\eta, \tag{2.17}$$
and
$$\begin{aligned} t_{k+1} &\le s_k + \alpha\varphi^k\eta \le t_k + \varphi^k\eta + \alpha\varphi^k\eta\\ &\le s_{k-1} + \alpha\varphi^{k-1}\eta + \alpha\varphi^k\eta + \varphi^k\eta\\ &\le t_{k-1} + \varphi^{k-1}\eta + \alpha\varphi^{k-1}\eta + \alpha\varphi^k\eta + \varphi^k\eta\\ &= t_{k-1} + (\varphi^{k-1} + \varphi^k)\eta + \alpha(\varphi^{k-1} + \varphi^k)\eta \le \cdots\\ &\le s_0 + (\varphi\eta + \cdots + \varphi^k\eta) + \alpha(\eta + \varphi\eta + \cdots + \varphi^k\eta)\\ &= (1+\alpha)(1 + \varphi + \cdots + \varphi^k)\eta \le t^{\star\star}. \end{aligned} \tag{2.18}$$

In view of (2.16) and (2.18), estimate (2.14) certainly holds if

$$0 \le \frac{L\varphi^k\eta}{2\left[1 - L_2(1+\varphi+\cdots+\varphi^{k-1})\eta - L_0\varphi^{k-1}\eta\right]} \le \alpha, \tag{2.19}$$
or
$$L\varphi^k\eta + 2\alpha L_2(1+\varphi+\cdots+\varphi^{k-1})\eta + 2L_0\alpha\varphi^{k-1}\eta - 2\alpha \le 0. \tag{2.20}$$
Estimate (2.20) motivates us to introduce the functions $f_k$ on $[0,1)$ by
$$f_k(t) = L\eta t^k + 2\alpha L_2(1+t+\cdots+t^{k-1})\eta + 2L_0\alpha t^{k-1}\eta - 2\alpha. \tag{2.21}$$

We need a relationship between two consecutive functions $f_k$:
$$\begin{aligned} f_{k+1}(t) &= Lt^{k+1}\eta + 2L_0\alpha t^k\eta + 2\alpha L_2(1+t+\cdots+t^k)\eta - 2\alpha\\ &\quad - Lt^k\eta - 2\alpha L_2(1+t+\cdots+t^{k-1})\eta - 2L_0\alpha t^{k-1}\eta + 2\alpha + f_k(t)\\ &= f_k(t) + Lt^{k+1}\eta - Lt^k\eta + 2\alpha L_2 t^k\eta + 2L_0\alpha t^k\eta - 2L_0\alpha t^{k-1}\eta\\ &= f_k(t) + g(t)t^{k-1}\eta, \end{aligned} \tag{2.22}$$
where
$$g(t) = Lt^2 + [2\alpha(L_2 + L_0) - L]t - 2L_0\alpha. \tag{2.23}$$


Using (2.21), we see that (2.20) holds if
$$f_k(\varphi) \le 0 \tag{2.24}$$
or
$$f_1(\varphi) \le 0, \tag{2.25}$$
since $g(\varphi) \le 0$ and
$$f_{k+1}(\varphi) = f_k(\varphi) + g(\varphi)\varphi^{k-1}\eta \le f_k(\varphi), \tag{2.26}$$
where $\varphi$ is chosen as in the right-hand side inequality of (2.2). But (2.25) also holds by (2.2). Moreover, define the function $f$ on $[0,1)$ by
$$f(t) = \lim_{k\to\infty} f_k(t). \tag{2.27}$$

Then, we have by (2.24) that
$$f(\varphi) \le 0.$$
Hence, (2.12) and (2.14) hold for all $k$. Similarly, (2.15) holds if
$$L_1\varphi^k\eta \le 2\varphi\left[1 - L_2(1+\varphi+\cdots+\varphi^k)\eta\right] \tag{2.28}$$
or
$$L_1\varphi^k\eta + 2\varphi L_2(1+\varphi+\cdots+\varphi^k)\eta - 2\varphi \le 0. \tag{2.29}$$
As in (2.21), we define the functions $h_k$ on $[0,1)$ by
$$h_k(t) = L_1 t^k\eta + 2tL_2(1+t+\cdots+t^k)\eta - 2\varphi. \tag{2.30}$$

We need a relationship between two consecutive functions $h_k$:
$$\begin{aligned} h_{k+1}(t) &= L_1 t^{k+1}\eta + 2tL_2(1+t+\cdots+t^{k+1})\eta - 2\varphi\\ &\quad - L_1 t^k\eta - 2tL_2(1+t+\cdots+t^k)\eta + 2\varphi + h_k(t)\\ &= h_k(t) + L_1 t^{k+1}\eta - L_1 t^k\eta + 2L_2 t^{k+2}\eta\\ &= h_k(t) + g_1(t)t^k\eta, \end{aligned} \tag{2.31}$$
where
$$g_1(t) = 2L_2 t^2 + L_1 t - L_1. \tag{2.32}$$
In view of (2.30), estimate (2.29) holds if
$$h_k(\varphi) \le 0 \quad \text{or} \quad h_1(\varphi) \le 0, \tag{2.33}$$
since $g_1(\varphi) \le 0$ and
$$h_{k+1}(\varphi) = h_k(\varphi) + g_1(\varphi)\varphi^k\eta \le h_k(\varphi), \tag{2.34}$$

where $\varphi$ is chosen as in the right-hand side of (2.2). Note now that (2.33) holds by (2.3). Furthermore, define the function $h$ on $[0,1)$ by
$$h(t) = \lim_{k\to\infty} h_k(t). \tag{2.35}$$
We then have
$$h(\varphi) \le 0. \tag{2.36}$$


That completes the induction for (2.13) and (2.15). Finally, in view of (2.12), (2.13) and (2.18), the sequences $\{t_n\}$, $\{s_n\}$ converge to $t^\star$. That completes the proof of the Lemma.

We need an Ostrowski-type relationship between iterates {xn} and {yn} [5, 14].

Lemma 2.2. Let us assume that the iterates $\{x_n\}$ and $\{y_n\}$ in (TSNM) are well defined for all $n \ge 0$. Then, the following identities hold:

$$F(x_{n+1}) = \int_0^1 \left[F'(y_n + \theta(x_{n+1} - y_n)) - F'(y_n)\right](x_{n+1} - y_n)\,d\theta \tag{2.37}$$
and
$$F(y_n) = \int_0^1 \left[F'(x_n + \theta(y_n - x_n)) - F'(x_n)\right](y_n - x_n)\,d\theta. \tag{2.38}$$

Proof. Identity (2.37) follows from Taylor's theorem and the second sub-step in (TSNM), whereas (2.38) follows from Taylor's theorem and the first sub-step in (TSNM). That completes the proof of the Lemma.
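As a hedged one-dimensional sanity check of identity (2.38), the residual can be evaluated with a simple midpoint quadrature; the scalar test function and the number of quadrature nodes below are illustrative choices, not data from the paper.

```python
import numpy as np

F  = lambda x: x ** 3 - 0.7        # illustrative scalar test function
dF = lambda x: 3 * x ** 2

xn = 1.0
yn = xn - F(xn) / dF(xn)           # first sub-step of (TSNM)

# Right-hand side of (2.38), approximated by the midpoint rule on [0, 1].
N = 10000
theta = (np.arange(N) + 0.5) / N
integrand = (dF(xn + theta * (yn - xn)) - dF(xn)) * (yn - xn)
rhs = integrand.mean()

print(F(yn), rhs)                  # both are approximately 0.029
```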

We can show the following semilocal convergence result for (TSNM).

Theorem 2.3. Let $F: \mathcal{D} \subset \mathcal{X} \to \mathcal{Y}$ be a Fréchet-differentiable operator. Assume there exist $x_0 \in \mathcal{D}$, $L_0 > 0$, $L > 0$ and $\eta \ge 0$ such that, for all $x, y \in \mathcal{D}$:
$$F'(x_0)^{-1} \in L(\mathcal{Y}, \mathcal{X}), \tag{2.39}$$
$$\|F'(x_0)^{-1}F(x_0)\| \le \eta, \tag{2.40}$$
$$\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \le L_0\|x - x_0\|, \tag{2.41}$$
$$\|F'(x_0)^{-1}(F'(x) - F'(y))\| \le L\|x - y\|, \tag{2.42}$$
$$U(x_0, t^\star) \subseteq \mathcal{D}, \tag{2.43}$$

and that the hypotheses of Lemma 2.1 hold, where $t^\star$ is given in Lemma 2.1. Then, the sequences $\{x_n\}$ and $\{y_n\}$ generated by (TSNM) are well defined, remain in $U(x_0, t^\star)$ for all $n \ge 0$ and converge to a solution $x^\star \in U(x_0, t^\star)$ of the equation $F(x) = 0$. Moreover, the following estimates hold:

$$\|y_n - x_n\| \le s_n - t_n, \tag{2.44}$$
$$\|x_{n+1} - y_n\| \le t_{n+1} - s_n, \tag{2.45}$$
$$\|x_{n+1} - x_n\| \le t_{n+1} - t_n, \tag{2.46}$$
$$\|y_{n+1} - y_n\| \le s_{n+1} - s_n, \tag{2.47}$$
$$\|x_n - x^\star\| \le t^\star - t_n, \tag{2.48}$$
$$\|y_n - x^\star\| \le t^\star - s_n. \tag{2.49}$$

Furthermore, if there exists $R \ge t^\star$ such that
$$U(x_0, R) \subseteq \mathcal{D} \tag{2.50}$$
and
$$L_0(t^\star + R) < 2, \tag{2.51}$$
then $x^\star$ is the only solution of $F(x) = 0$ in $U(x_0, R)$.

Proof. We shall show, using induction on $k$, that (TSNM) is well defined, the iterates remain in $U(x_0, t^\star)$ for all $n \ge 0$, and estimates (2.44) and (2.45) hold for all $n \ge 0$. Iterate $y_0$ is well defined by the first equation in (TSNM) for $n = 0$ and (2.39). We also have, by (2.10) and (2.40),
$$\|y_0 - x_0\| = \|F'(x_0)^{-1}F(x_0)\| \le \eta = s_0 = s_0 - t_0 \le t^\star.$$
That is, (2.44) holds for $n = 0$ and $y_0 \in U(x_0, t^\star)$. Using (TSNM) for $n = 0$, we see that $x_1$ is well defined. Let $w \in U(x_0, t^\star)$. Then, we have by Lemma 2.1 and (2.41):

$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le L_0\|w - x_0\| \le L_0 t^\star < 1. \tag{2.52}$$
It follows from (2.52) and the Banach lemma on invertible operators [5, 13, 15] that $F'(w)^{-1}$ exists and
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - L_0\|w - x_0\|}. \tag{2.53}$$
In particular, for $x_1 \in U(x_0, t^\star)$, we have
$$\|F'(x_1)^{-1}F'(x_0)\| \le \frac{1}{1 - L_0\|x_1 - x_0\|} \le \frac{1}{1 - L_0(t_1 - t_0)} = \frac{1}{1 - L_0 t_1}. \tag{2.54}$$

Moreover, in view of (2.38) for $n = 0$, (TSNM), (2.10) and (2.40)–(2.42), we get
$$\begin{aligned} \|x_1 - y_0\| &= \left\|\left[F'(y_0)^{-1}F'(x_0)\right]\int_0^1 F'(x_0)^{-1}\left[F'(x_0 + \theta(y_0 - x_0)) - F'(x_0)\right]d\theta\,(y_0 - x_0)\right\|\\ &\le \frac{L_0}{1 - L_0\|y_0 - x_0\|}\int_0^1\theta\,d\theta\,\|y_0 - x_0\|^2 = \frac{L_0\|y_0 - x_0\|^2}{2(1 - L_0\|y_0 - x_0\|)}\\ &\le \frac{L_0(s_0 - t_0)^2}{2(1 - L_0 s_0)} \le t_1 - s_0, \end{aligned} \tag{2.55}$$
which shows (2.45) for $n = 0$. We also have
$$\|x_1 - x_0\| \le \|x_1 - y_0\| + \|y_0 - x_0\| \le t_1 - s_0 + s_0 - t_0 = t_1 - t_0 \le t^\star,$$
which implies that (2.46) holds for $n = 0$ and $x_1 \in U(x_0, t^\star)$.


Using (TSNM), (2.10), (2.37) (for $n = 0$) and (2.54), we get
$$\begin{aligned} \|y_1 - x_1\| &= \left\|\left[F'(x_1)^{-1}F'(x_0)\right]\left[F'(x_0)^{-1}F(x_1)\right]\right\|\\ &\le \|F'(x_1)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_1)\|\\ &\le \frac{1}{1 - L_0 t_1}\int_0^1\left\|F'(x_0)^{-1}\left[F'(y_0 + \theta(x_1 - y_0)) - F'(y_0)\right]\right\|d\theta\,\|x_1 - y_0\|\\ &\le \frac{L}{1 - L_0 t_1}\int_0^1\theta\,d\theta\,\|x_1 - y_0\|\,\|x_1 - y_0\|\\ &\le \frac{L}{1 - L_0 t_1}\,\frac{1}{2}(t_1 - s_0)(t_1 - s_0) = s_1 - t_1, \end{aligned}$$
which implies (2.44) for $n = 1$. We then have
$$\|y_1 - y_0\| \le \|y_1 - x_1\| + \|x_1 - y_0\| \le s_1 - t_1 + t_1 - s_0 = s_1 - s_0,$$
$$\|y_1 - x_0\| \le \|y_1 - y_0\| + \|y_0 - x_0\| \le s_1 - s_0 + s_0 - t_0 = s_1 \le t^\star,$$
which imply (2.47) for $n = 0$ and $y_1 \in U(x_0, t^\star)$. Let us now assume that (2.44)–(2.47) hold and $y_n, x_n \in U(x_0, t^\star)$ for all $n \le k$. Using (TSNM), (2.10), (2.37), (2.38), (2.42), (2.53) and the induction hypotheses, we have in turn:

$$\|x_{k+1} - x_0\| \le \|x_{k+1} - x_k\| + \|x_k - x_{k-1}\| + \cdots + \|x_1 - x_0\| \le t_{k+1} - t_k + t_k - t_{k-1} + \cdots + t_1 - t_0 = t_{k+1} \le t^\star, \tag{2.56}$$
$$\|y_k - x_0\| \le \|y_k - x_k\| + \|x_k - x_0\| \le s_k - t_k + t_k - t_0 = s_k \le t^\star, \tag{2.57}$$

$$\begin{aligned} \|y_{k+1} - x_{k+1}\| &= \left\|F'(x_{k+1})^{-1}F'(x_0)\,F'(x_0)^{-1}F(x_{k+1})\right\|\\ &\le \|F'(x_{k+1})^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_{k+1})\|\\ &\le \frac{1}{1 - L_0\|x_{k+1} - x_0\|}\int_0^1\left\|F'(x_0)^{-1}\left[F'(y_k + \theta(x_{k+1} - y_k)) - F'(y_k)\right]\right\|d\theta\,\|x_{k+1} - y_k\|\\ &\le \frac{L}{1 - L_0 t_{k+1}}\int_0^1\theta\,d\theta\,\|x_{k+1} - y_k\|^2\\ &\le \frac{L}{1 - L_0 t_{k+1}}\,\frac{1}{2}(t_{k+1} - s_k)^2 = s_{k+1} - t_{k+1}, \end{aligned} \tag{2.58}$$

$$\begin{aligned} \|x_{k+2} - y_{k+1}\| &= \left\|F'(y_{k+1})^{-1}F'(x_0)\,F'(x_0)^{-1}F(y_{k+1})\right\|\\ &\le \frac{1}{1 - L_0 s_{k+1}}\int_0^1\left\|F'(x_0)^{-1}\left[F'(x_{k+1} + \theta(y_{k+1} - x_{k+1})) - F'(x_{k+1})\right]\right\|d\theta\,\|y_{k+1} - x_{k+1}\|\\ &\le \frac{L}{1 - L_0 s_{k+1}}\int_0^1\theta\,d\theta\,\|y_{k+1} - x_{k+1}\|^2\\ &\le \frac{L(s_{k+1} - t_{k+1})^2}{2(1 - L_0 s_{k+1})} = t_{k+2} - s_{k+1}, \end{aligned} \tag{2.59}$$

$$\|y_{k+2} - y_{k+1}\| \le \|y_{k+2} - x_{k+2}\| + \|x_{k+2} - y_{k+1}\| \le s_{k+2} - t_{k+2} + t_{k+2} - s_{k+1} = s_{k+2} - s_{k+1}, \tag{2.60}$$
$$\|x_{k+2} - x_{k+1}\| \le \|x_{k+2} - y_{k+1}\| + \|y_{k+1} - x_{k+1}\| \le t_{k+2} - s_{k+1} + s_{k+1} - t_{k+1} = t_{k+2} - t_{k+1}, \tag{2.61}$$

which show that (2.44)–(2.47) hold for all $n \ge 0$. Estimates (2.48) and (2.49) follow from (2.46) and (2.47), respectively, by using the standard majorization technique [5, 13, 15]. It follows from Lemma 2.1 and (2.44)–(2.48) that $\{x_n\}$ is a Cauchy sequence in the Banach space $\mathcal{X}$, and as such it converges to some $x^\star \in U(x_0, t^\star)$ (since $U(x_0, t^\star)$ is a closed set). Moreover, we have by (2.58)
$$\|F'(x_0)^{-1}F(x_{k+1})\| \le \tfrac{L}{2}\|x_{k+1} - y_k\|^2 \to 0, \quad \text{as } k \to \infty. \tag{2.62}$$

That is, $F(x^\star) = 0$. Finally, to show uniqueness, let $y^\star \in U(x_0, R)$ be a solution of the equation $F(x) = 0$. Let us define the linear operator $M$ by
$$M = \int_0^1 F'(y^\star + \theta(x^\star - y^\star))\,d\theta. \tag{2.63}$$

Then, using (2.41), (2.50) and (2.51), we get in turn
$$\begin{aligned} \left\|F'(x_0)^{-1}(M - F'(x_0))\right\| &\le L_0\int_0^1\|y^\star + \theta(x^\star - y^\star) - x_0\|\,d\theta\\ &\le L_0\int_0^1\left[(1-\theta)\|y^\star - x_0\| + \theta\|x^\star - x_0\|\right]d\theta\\ &\le \tfrac{L_0}{2}(R + t^\star) < 1. \end{aligned} \tag{2.64}$$

It follows from (2.64) and the Banach lemma on invertible operators that $M^{-1}$ exists. Then, in view of the identity
$$0 = F(x^\star) - F(y^\star) = M(x^\star - y^\star), \tag{2.65}$$
we conclude that $x^\star = y^\star$. That completes the proof of the Theorem.

Remark 2.4. 1) The limit point $t^\star$ can be replaced by $t^{\star\star}$, given in closed form by (2.11), in hypothesis (2.43) and estimate (2.48).

2) The verification of conditions (2.1)–(2.3) requires only simple algebra (see also Example 3.1).


3) If $L_0 = L$, then the scalar sequences $\{s_n\}$, $\{t_n\}$ given by (2.10) reduce essentially to the ones used in [9]. In particular, we have in this case
$$t_0 = 0, \quad s_0 = \eta, \quad t_{n+1} = s_n + \frac{L(s_n - t_n)^2}{2(1 - Ls_n)}, \quad s_{n+1} = t_{n+1} + \frac{L(t_{n+1} - s_n)^2}{2(1 - Lt_{n+1})}. \tag{2.66}$$

If $L_0 < L$, iteration (2.10) is tighter than (2.66). Moreover, in view of the proof of Theorem 2.3, we note that the sequence
$$t_0 = 0, \quad s_0 = \eta, \quad t_{n+1} = s_n + \frac{L_\star(s_n - t_n)^2}{2(1 - L_0 s_n)}, \quad s_{n+1} = t_{n+1} + \frac{L_\star(t_{n+1} - s_n)^2}{2(1 - L_0 t_{n+1})}, \tag{2.67}$$
is also majorizing for (TSNM), where
$$L_\star = \begin{cases} L_0, & \text{if } n = 0,\\ L, & \text{if } n > 0.\end{cases}$$
In the case $L_0 < L$, (2.67) is an even tighter majorizing sequence than (2.10). Furthermore, $L$ and $L_1$ can be replaced by $L_0$ and $L_1^\star = \alpha^2 L_0$ on the left-hand sides of (2.1) and (2.2), respectively.

4) If $\alpha = 0$ and we define $L_1 = L$, then it is simple algebra to show that the conditions of Lemma 2.1 reduce to (1.5). Moreover, if $L_0 = L$, these conditions reduce to (1.4). That is, we have Newton's method (1.2), and iteration (2.10) reduces to
$$t_0 = 0, \quad t_1 = \eta, \quad t_{n+2} = t_{n+1} + \frac{L(t_{n+1} - t_n)^2}{2(1 - L_0 t_{n+1})}. \tag{2.68}$$
In the case of Newton's method with $L_0 = L$, we have the well-known Kantorovich majorizing sequence
$$\nu_0 = 0, \quad \nu_1 = \eta, \quad \nu_{n+2} = \nu_{n+1} + \frac{L(\nu_{n+1} - \nu_n)^2}{2(1 - L\nu_{n+1})}. \tag{2.69}$$
Note that if $L_0 < L$, then $\{t_n\}$ is a tighter majorizing sequence than $\{\nu_n\}$ for Newton's method [5, 13, 15].

3. NUMERICAL EXAMPLES

Let $\mathcal{X} = \mathcal{Y} = \mathbb{R}^2$ be equipped with the max-norm, $x_0 = (1,1)^T$, $\mathcal{D} = U(x_0, 1-p)$, $p \in [0,1)$, and define $F$ on $\mathcal{D}$ by
$$F(x) = \left(\xi_1^3 - p,\; \xi_2^3 - p\right)^T, \quad x = (\xi_1, \xi_2)^T. \tag{3.1}$$
Using (2.40)–(2.42), we get
$$\eta = \frac{1-p}{3}, \quad L_0 = 3 - p \quad \text{and} \quad L = 2(2-p) > L_0.$$
Let $p = 0.7$. Then, we get
$$\eta = 0.1, \quad L_0 = 2.3 \quad \text{and} \quad L = 2.6.$$


The Newton–Kantorovich hypothesis (1.4) is satisfied for this choice of $p$, since
$$h_K = L\eta = \tfrac{2}{3}(1-p)(2-p) = 0.26 \le \tfrac{1}{2},$$
although it fails for all $p \in [0, 1/2)$.

Using Lemma 2.1, for $\alpha = 0.17$ and $\varphi = 0.0052$, we get
$$L_1 = 0.07514, \quad L_2 = 2.691,$$
$$\varphi_1 = 0.756703694, \quad \varphi_2 = 0.111383518, \quad \varphi_3 = 0.666923077, \quad \varphi_0 = \varphi_2,$$
$$\eta_1 = 0.364622409, \quad \eta_2 = 0.200360649, \quad \eta_0 = \eta_2,$$
$$\frac{L\eta}{2(1 - L_0\eta)} = 0.168831169 \quad \text{and} \quad \frac{L_1\eta}{2(1 - L_2\eta)} = 0.005140238.$$

Hence, the hypotheses of Lemma 2.1 are satisfied. Moreover, we have by (2.11) that
$$t^{\star\star} = 0.11761158 < 1 - p = 0.3.$$
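The quantities quoted above can be re-derived with a few lines of arithmetic; the short script below (whose layout and variable names are illustrative) recomputes the ones with closed forms in (2.4), (2.6), (2.8), (2.11) and the left-hand sides of (2.1)–(2.2).

```python
from math import sqrt

p = 0.7
eta, L0, L = (1 - p) / 3, 3 - p, 2 * (2 - p)        # 0.1, 2.3, 2.6
alpha, phi = 0.17, 0.0052

L1, L2 = alpha ** 2 * L, (1 + alpha) * L0            # (2.4): 0.07514, 2.691
phi2 = 2 * L1 / (L1 + sqrt(L1 ** 2 + 8 * L1 * L2))   # (2.6): ~0.111384
eta1 = 2 / (L1 + 2 * L2 * (1 + phi))                 # (2.8): ~0.364622
eta2 = 1 / (L0 + L2)                                 # (2.8): ~0.200361

lhs_21 = L * eta / (2 * (1 - L0 * eta))              # left side of (2.1): ~0.168831
lhs_22 = L1 * eta / (2 * (1 - L2 * eta))             # left side of (2.2): ~0.005140
t2star = (1 + alpha) * eta / (1 - phi)               # (2.11): ~0.117612

print(lhs_21 <= alpha, lhs_22 <= phi, eta <= min(eta1, eta2), t2star < 1 - p)
# True True True True
```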

Furthermore, using (2.50) and (2.51) (with $t^\star$ replaced by $t^{\star\star}$), we get
$$t^{\star\star} < R < \tfrac{2}{L_0} - t^{\star\star} = 0.751953637.$$
So, we can choose $R = 0.3$. Hence, the hypotheses of Theorem 2.3 hold, and (TSNM) converges to
$$x^\star = \left(\sqrt[3]{0.7}, \sqrt[3]{0.7}\right)^T = (0.887904002, 0.887904002)^T.$$
We compare (2.10) with (2.66) and (2.67).

n    (2.10): s_n−t_n   t_{n+1}−s_n     (2.66): s_n−t_n   t_{n+1}−s_n     (2.67): s_n−t_n   t_{n+1}−s_n
0    1.00·10^-01       1.69·10^-02     1.00·10^-01       1.76·10^-02     1.00·10^-01       1.49·10^-02
1    5.07·10^-04       4.57·10^-07     5.78·10^-04       6.27·10^-07     3.49·10^-04       2.15·10^-07
2    3.73·10^-13       2.47·10^-25     5.37·10^-13       1.02·10^-24     8.19·10^-14       1.19·10^-26
3    1.09·10^-49       2.11·10^-98     1.94·10^-48       1.09·10^-96     2.49·10^-52       1.09·10^-103
4    7.91·10^-196      1.11·10^-390    9.44·10^-191      1.67·10^-380    2.11·10^-206      7.88·10^-412
5    2.21·10^-780      8.70·10^-1560   5.24·10^-760      5.15·10^-1519   1.09·10^-822      2.13·10^-1644

Table 1. Comparison among (2.10), (2.66) and (2.67).

As expected from the theoretical results, iteration (2.10) is faster than (2.66).
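The gap columns of Table 1 can be regenerated with the sketch below, which tracks the differences $s_n - t_n$ and $t_{n+1} - s_n$ directly from the recursions (so that no cancellation occurs); the parameterization covering (2.10), (2.66) and (2.67) through three constants is an illustrative device of this sketch, not notation from the paper.

```python
def gaps(L_num0, L_num, L_den, eta, rows=4):
    """Return (s_n - t_n, t_{n+1} - s_n) for n = 0, ..., rows-1, where the
    numerator constant is L_num0 at n = 0 and L_num afterwards, and L_den
    is the constant in the denominators 2(1 - L_den s_n), 2(1 - L_den t_{n+1})."""
    t, s, d = 0.0, eta, eta          # d = s_0 - t_0 = eta
    out = []
    for n in range(rows):
        Ln = L_num0 if n == 0 else L_num
        e = Ln * d ** 2 / (2 * (1 - L_den * s))       # t_{n+1} - s_n
        t = s + e
        d_next = Ln * e ** 2 / (2 * (1 - L_den * t))  # s_{n+1} - t_{n+1}
        s = t + d_next
        out.append((d, e))
        d = d_next
    return out

L0, L, eta = 2.3, 2.6, 0.1
print(gaps(L, L, L0, eta))    # (2.10): (1.00e-01, 1.69e-02), (5.07e-04, 4.57e-07), ...
print(gaps(L, L, L,  eta))    # (2.66): (1.00e-01, 1.76e-02), (5.78e-04, 6.27e-07), ...
print(gaps(L0, L, L0, eta))   # (2.67): (1.00e-01, 1.49e-02), (3.49e-04, 2.15e-07), ...
```

In double precision the gaps underflow to zero after the first few rows, so the last rows of Table 1 would require extended-precision arithmetic; the first rows reproduce the tabulated values.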

REFERENCES

[1] S. Amat, S. Busquier and J. M. Gutiérrez, On the local convergence of secant-type methods, Int. J. Comput. Math., 81 (2004), no. 9, pp. 1153–1161.

[2] J. Appell, E. De Pascale, N. A. Evkhuta and P. P. Zabrejko, On the two-step Newton method for the solution of nonlinear operator equations, Math. Nachr., 172 (1995), pp. 5–14.

[3] I. K. Argyros, On a multistep Newton method in Banach spaces and the Pták error estimates, Adv. Nonlinear Var. Inequal., 6 (2003), no. 2, pp. 121–135.

[4] I. K. Argyros, A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach space, J. Math. Anal. Appl., 298 (2004), no. 2, pp. 374–397.

[5] I. K. Argyros, J. Y. Cho and S. Hilout, Numerical Methods for Equations and its Applications, CRC Press, Taylor & Francis Group, New York, 2012.

[6] R. P. Brent, Algorithms for Minimization without Derivatives, Prentice Hall, Englewood Cliffs, New Jersey, 1973.

[7] E. Cătinaş, On some iterative methods for solving nonlinear equations, Rev. Anal. Numér. Théor. Approx., 23 (1994), no. 1, pp. 47–53.

[8] J. A. Ezquerro and M. A. Hernández, Multipoint super-Halley type approximation algorithms in Banach spaces, Numer. Funct. Anal. Optim., 21 (2000), no. 7-8, pp. 845–858.

[9] J. A. Ezquerro, M. A. Hernández and M. A. Salanova, A Newton-like method for solving some boundary value problems, Numer. Funct. Anal. Optim., 23 (2002), no. 7-8, pp. 791–805.

[10] J. A. Ezquerro, M. A. Hernández and M. A. Salanova, A discretization scheme for some conservative problems, J. Comput. Appl. Math., 115 (2000), no. 1-2, pp. 181–192.

[11] M. A. Hernández, M. J. Rubio and J. A. Ezquerro, Secant-like methods for solving nonlinear integral equations of the Hammerstein type, J. Comput. Appl. Math., 115 (2000), no. 1-2, pp. 245–254.

[12] M. A. Hernández and M. J. Rubio, Semilocal convergence of the secant method under mild convergence conditions of differentiability, Comput. Math. Appl., 44 (2002), no. 3-4, pp. 277–285.

[13] L. V. Kantorovich and G. P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982.

[14] A. M. Ostrowski, Solutions of Equations in Euclidean and Banach Spaces, A Series of Monographs and Textbooks, Academic Press, New York, 1973.

[15] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

[16] I. Păvăloiu, A convergence theorem concerning the method of chord, Rev. Anal. Numér. Théor. Approx., 21 (1992), no. 1, pp. 59–65.

[17] F. A. Potra and V. Pták, Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman Advanced Publishing Program, Boston, 1984.

Received by the editors: September 12, 2012.
