J. Numer. Anal. Approx. Theory, vol. 46 (2017) no. 2, pp. 162–180 ictp.acad.ro/jnaat

ACCELERATING THE CONVERGENCE OF NEWTON-TYPE ITERATIONS

T. ZHANLAV§, O. CHULUUNBAATAR§,† and V. ULZIIBAYAR§,‡

Abstract. In this paper, we present a new accelerating procedure in order to speed up the convergence of Newton-type methods. In particular, we derive iterations with a high and optimal order of convergence. This technique can be applied to any iteration with a low order of convergence. As expected, the convergence of the proposed methods is remarkably fast. The effectiveness of this technique is illustrated by numerical experiments.

MSC 2010. 65H05.

Keywords. Newton-type iterations, accelerating procedure, convergence order, efficiency index.

1. INTRODUCTION

As is known, monotone approximations to the solutions of nonlinear equations in $\mathbb{R}$ are interesting not only from the theoretical, but also from the practical viewpoint. In particular, two-sided approximations can be efficiently used as a posteriori estimations of the error in approximating the desired solution, which means that one can control the error at each iteration step. In the last decade, many authors have developed new monotone iterative methods [9, 18, 19]. The main advantage of monotone iterations is that they do not require good initial approximations, contrary to what occurs in other iteration methods, such as secant-like methods, Newton's methods and others [4]. On the other hand, accelerating the convergence of iterative methods is also of interest from both theoretical and computational viewpoints [1–3, 5, 10–14, 16]. For example, in [4] a family of predictor-corrector iterative methods was constructed from the simplified secant method and a family of secant-like methods; the authors analyzed the initial conditions on the starting point in order to improve the semilocal convergence of the method. In general, it is desirable to choose the starting point from the convergence domain [15, 16, 19].

§Institute of Mathematics, National University of Mongolia, Mongolia, e-mail: [email protected].

†Joint Institute for Nuclear Research, Dubna, 141980 Moscow region, Russia, e-mail: [email protected].

‡School of Applied Sciences, Mongolian University of Science and Technology, Mongolia, e-mail: [email protected].


In recent years, many iterative methods for solving nonlinear equations have been developed [1–3, 5, 10–14, 16] to improve the local order of convergence of methods such as Newton's, Ostrowski's and Potra-Ptak's methods.

The most efficient methods studied in the literature are the optimal eighth-order methods with four function evaluations per iteration; see [1–3, 10, 12] and the references therein. The methods developed in [1, 7, 12] are based on optimal Ostrowski's or King's method and use arbitrary real parameters and weight functions. The methods proposed in [2, 3, 10] are obtained by composing an iterative method proposed by Chun and Potra-Ptak's method with Newton's method.

In this paper we propose a new accelerating procedure for Newton-type methods. By virtue of this procedure, we obtain higher-order, in particular optimal-order, methods. The optimal choice of the parameter allows us to improve the convergence speed, and may also be helpful in extending the domain of convergence.

The paper is organized as follows. Section 2 describes monotone and two-sided approximations. In Section 3, we present the accelerating procedure and establish the convergence order of the newly proposed methods. Section 4 is devoted to finding an optimal parameter in the proposed iterations. Finally, Section 5 presents various numerical examples which confirm the theoretical results, and a numerical comparison with other existing optimal-order methods.

2. STATEMENT OF THE PROBLEMS

Let $a,b\in\mathbb{R}$, $a<b$, $f:[a,b]\to\mathbb{R}$, and consider the following nonlinear equation:
$$f(x)=0. \tag{2.1}$$
Assume that $f\in C^4[a,b]$, $f'(x)\neq0$ for $x\in(a,b)$, and that Eq. (2.1) has a unique root $x^*\in(a,b)$. In [18, 19] the following iterations were proposed:
$$x_{2n+1}=x_{2n}-\tau_n\frac{f(x_{2n})}{f'(x_{2n})}, \tag{2.2}$$
$$x_{2n+2}=x_{2n+1}-\frac{f(x_{2n+1})}{f'(x_{2n+1})},\quad n=0,1,\dots \tag{2.3}$$

In [19] it is shown that the iterations (2.2) and (2.3) are monotonically convergent under the conditions
$$0<\tau_n\le1,\qquad a_n=\frac{M_2|f(x_n)|}{\big(f'(x_n)\big)^2}<\tfrac12,\qquad M_2=\sup_{x\in U_r(x^*)}|f''(x)|, \tag{2.4}$$
and under the assumption H: $f'(x)\neq0$ and $f''(x)$ preserves its sign in the small neighborhood $U_r(x^*)=\{x:|f(x)|<r\}$. However, the iterations (2.2) and (2.3) are not equipped with a suitable choice of the parameter $\tau_n$. In [18] it was shown that the iterations (2.2) and (2.3) have a two-sided approximation behavior

(3)

under the conditions
$$\tau_n\in I_{2n}=\Big[\tfrac{1-\sqrt{1-2a_{2n}}}{a_{2n}},\ \tfrac{-1+\sqrt{1+4a_{2n}}}{a_{2n}}\Big]\subseteq(1,2),\qquad a_{2n}<\tfrac49. \tag{2.5}$$

It is also proved that the convergence rate of these iterations is at least 2, and that the order of convergence increases up to 4 as $\tau_n\to1$. From this it is clear that accelerating the convergence of the iterations (2.2) and (2.3) is important, especially at the early stage of the iterations.
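To make the two-sided behavior concrete, the following sketch (our own illustration, not taken from the paper: $f(x)=x^2-2$ with root $\sqrt2$, and a fixed damping $\tau_n\equiv1.5\in(1,2)$ standing in for a value from the interval $I_{2n}$ of (2.5)) alternates the damped step (2.2) with the Newton step (2.3):

```python
# Illustrative sketch of the two-sided iterations (2.2)-(2.3).
# f(x) = x^2 - 2 (root sqrt(2)); the fixed tau = 1.5 in (1, 2) stands in
# for a value chosen from the interval I_{2n} of (2.5).
def two_sided(f, df, x0, tau=1.5, steps=6):
    xs = [x0]
    x = x0
    for _ in range(steps):
        x = x - tau * f(x) / df(x)   # damped (overshooting) step (2.2)
        xs.append(x)
        x = x - f(x) / df(x)         # Newton step (2.3)
        xs.append(x)
    return xs

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
xs = two_sided(f, df, x0=1.5)
print(xs[-1])  # close to sqrt(2) = 1.41421356...
```

On this example the early iterates land alternately below and above $\sqrt2$, illustrating the two-sided approximation; a full implementation would of course select $\tau_n$ from $I_{2n}$ at every step.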

3. MONOTONE AND TWO-SIDED CONVERGENCE OF ITERATIONS AND THEIR ACCELERATION

If $\tau_n\equiv1$, then the iterations (2.2) and (2.3) become Newton's method
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)},\quad n=0,1,\dots \tag{3.1}$$

According to [19], the iteration (3.1) is monotonically convergent under condition (2.4) and assumption H.

Let $\theta_n=\frac{f(x_{n+1})}{f(x_n)}$. Then the Taylor expansion of $f(x_{n+1})$ gives
$$0<\theta_n\le\tfrac{a_n}{2}<\tfrac14. \tag{3.2}$$

Now we proceed to accelerate the convergence of monotone iteration (3.1).

To this end, we use two known approximations $x_n$, $x_{n+m}$ satisfying either $x_n<x_{n+m}<x^*$ or $x^*<x_{n+m}<x_n$, and consider
$$y_n=x_n+t(x_{n+m}-x_n),\quad t>1. \tag{3.3}$$
From (3.3) it is clear that $y_n$ belongs to the interval connecting $x_n$ and $x_{n+m}$ under the condition $0\le t\le1$; hence, the extrapolation approach corresponds to the case $t>1$. Our aim is to find the optimal value $t=t_{\mathrm{opt}}$ in (3.3) such that the new approximation $y_n$ given by (3.3) is situated closer to $x^*$ than both $x_n$ and $x_{n+m}$. We use the Taylor expansion of the smooth function $f\in C^{k+1}[a,b]$:
$$f(y_n)=f(x_n)+f'(x_n)t(x_{n+m}-x_n)+\dots+\frac{f^{(k)}(x_n)}{k!}t^k(x_{n+m}-x_n)^k+O\big((x_{n+m}-x_n)^{k+1}\big). \tag{3.4}$$
Neglecting the small term $O\big((x_{n+m}-x_n)^{k+1}\big)$ in (3.4), we have
$$f(y_n)\approx f(x_n)+f'(x_n)t(x_{n+m}-x_n)+\dots+\frac{f^{(k)}(x_n)}{k!}t^k(x_{n+m}-x_n)^k\equiv P_k(t). \tag{3.5}$$

From (3.5) it is clear that

$$f(x_n)=P_k(0). \tag{3.6}$$

We also require that

$$f(x_{n+m})=P_k(1). \tag{3.7}$$


From (3.7) we find $\frac{f^{(k)}(x_n)}{k!}(x_{n+m}-x_n)^k$ and, substituting it into (3.5), we get $P_k(t)$. From this we find $t>1$ such that
$$f(y_n)\approx P_k(t)=0. \tag{3.8}$$

Geometrically, (3.5) means that the graph of $f(x)$ in the vicinity of the root $x^*$ is replaced by the curve $P_k(t)$ passing through the given points $(x_n,f(x_n))$ and $(x_{n+m},f(x_{n+m}))$. Thus, Eq. (3.5) for $k=0,1,2$ and $k=3$ gives us

$$P_0(t)=f(x_n)\approx f(y_n),\qquad y_n=x_n, \tag{3.9}$$
$$P_1(t)=f(x_n)+\big(f(x_{n+m})-f(x_n)\big)t, \tag{3.10}$$
$$P_2(t)=f(x_n)+f'(x_n)(x_{n+m}-x_n)t+\big[f(x_{n+m})-f(x_n)-f'(x_n)(x_{n+m}-x_n)\big]t^2, \tag{3.11}$$
$$P_3(t)=f(x_n)+f'(x_n)(x_{n+m}-x_n)t+\frac{f''(x_n)}{2}(x_{n+m}-x_n)^2t^2+\Big[f(x_{n+m})-f(x_n)-f'(x_n)(x_{n+m}-x_n)-\frac{f''(x_n)}{2}(x_{n+m}-x_n)^2\Big]t^3, \tag{3.12}$$
respectively. Thus, the parameter $t$ in (3.3) is calculated as a root greater than 1 of Eq. (3.8). In particular, for $k=1$ we have
$$t_{\mathrm{opt}}=\frac{f(x_n)}{f(x_n)-f(x_{n+m})}>1. \tag{3.13}$$
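For $k=1$, the whole accelerating step consists of evaluating (3.13) and then (3.3). A minimal sketch, with the illustrative choices $f(x)=x^2-2$ and two successive Newton iterates playing the roles of $x_n$ and $x_{n+m}$:

```python
# Extrapolation step (3.3) with k = 1: t_opt from (3.13) applied to two
# Newton iterates lying on the same side of the root.
# f(x) = x^2 - 2 is an illustrative choice.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

x0 = 2.0
x1 = x0 - f(x0) / df(x0)          # Newton iterate, plays the role of x_n
x2 = x1 - f(x1) / df(x1)          # next Newton iterate, x_{n+m} with m = 1

t_opt = f(x1) / (f(x1) - f(x2))   # formula (3.13); here t_opt > 1
y = x1 + t_opt * (x2 - x1)        # accelerated approximation (3.3)

print(t_opt, abs(f(y)), abs(f(x2)))
```

Since the Newton iterates approach $\sqrt2$ monotonically from above for this convex $f$, $t_{\mathrm{opt}}>1$ and the extrapolated point $y$ has a much smaller residual than $x_{n+m}$.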

Since $P_k(0)P_k(1)=f(x_n)f(x_{n+m})>0$ for $k\ge1$, Eq. (3.8) may have at least one root satisfying the condition $t>1$. From (3.2) it follows that

$$|f(x_{n+1})|<\tfrac14|f(x_n)|. \tag{3.14}$$

Therefore, it is desirable to choose $n$ and $m$ such that
$$|f(x_{n+m})|<\big(\tfrac14\big)^m|f(x_n)|<0.1. \tag{3.15}$$
This inequality can be written in terms of $P_k(t)$ as
$$|P_k(1)|\le\big(\tfrac14\big)^m|P_k(0)|<0.1. \tag{3.16}$$

On the other hand, from (3.5) we see that $P_k'(1)\neq0$ under the assumption H, i.e., $t=1$ is not a critical point of $P_k(t)$. Thus, $P_k(t)$ is decreasing around $t=1$. Therefore, there exists $t_{\mathrm{opt}}>1$ such that $P_k(t_{\mathrm{opt}})=0$.

Lemma 3.1. Assume that $f\in C^4[a,b]$, the assumption H is satisfied, and
$$|x^*-x_n|=\varepsilon_n<1. \tag{3.17}$$
Then the following holds:
$$t_{\mathrm{opt}}-1=O(\varepsilon_n). \tag{3.18}$$

Proof. First of all, let us note that the inequality (3.17) implies
$$|f(x_n)|=O(\varepsilon_n), \tag{3.19}$$


which follows from the expansion
$$0=f(x^*)=f(x_n)+f'(\xi)(x^*-x_n).$$
Of course, $|x^*-x_{n+m}|<\varepsilon_n$ and $|x_{n+m}-x_n|<\varepsilon_n$ under (3.17).

We also use an analogous expansion for $P_k(t)$:
$$0=P_k(t_{\mathrm{opt}})=P_k(1)+P_k'(\eta)(t_{\mathrm{opt}}-1),\qquad \eta\in(1,t_{\mathrm{opt}}). \tag{3.20}$$
Since $P_k(t)$ is decreasing around $t=1$, we have $P_k'(\eta)\neq0$.

Hence, from (3.19), (3.20) and (3.15), and since $P_k(1)=f(x_{n+m})$ by (3.7), we conclude that
$$t_{\mathrm{opt}}-1=-\frac{f(x_{n+m})}{P_k'(\eta)}=O(\varepsilon_n). \tag{3.21}$$
The Lemma is proved. □

Note that in [16] the following iterations were proposed:
$$x_{2n+1}=x_{2n}-\tau_n\frac{f(x_{2n})}{f'(x_{2n})}, \tag{3.22}$$
$$x_{2n+2}=x_{2n+1}-\frac{x_{2n+1}-x_{2n}}{f(x_{2n+1})-f(x_{2n})}f(x_{2n+1}),\quad n=0,1,\dots, \tag{3.23}$$
which have a third order of convergence when $\tau_n=1$ or when $\tau_n$ tends to 1. It is easy to show that the iteration (3.23) coincides fully with our acceleration procedure (3.3) and (3.13) with $m=1$ and $k=1$. Therefore, one can expect a higher acceleration when $k=2,3$ for Newton's method.

How can the convergence rate of iteration (3.1) be accelerated? The answer to this question is given by the following theorem.

Theorem 3.2. Assume that $f\in C^{k+2}$ and that condition (3.17) is satisfied. Then for $y_n$ with $t_{\mathrm{opt}}$ we have
$$\frac{|x^*-y_n|}{|x^*-x_n|^{k+2}}=O(1), \tag{3.24}$$
where $O$ is the Landau symbol.

Proof. Let
$$x^*=x_n+t(x_{n+m}-x_n),\quad t\ge1,\qquad y_n=x_n+t_{\mathrm{opt}}(x_{n+m}-x_n). \tag{3.25}$$

We use the Taylor expansions of $f\in C^{k+2}$:
$$0=f(x^*)=\sum_{p=0}^{k}\frac{f^{(p)}(x_n)}{p!}\,t^p(x_{n+m}-x_n)^p+\frac{f^{(k+1)}(\eta_n)}{(k+1)!}\,t^{k+1}(x_{n+m}-x_n)^{k+1} \tag{3.26}$$
and
$$f(x_{n+m})-\sum_{p=0}^{k-1}\frac{f^{(p)}(x_n)}{p!}(x_{n+m}-x_n)^p=\frac{f^{(k)}(x_n)}{k!}(x_{n+m}-x_n)^k+\frac{f^{(k+1)}(\xi_n)}{(k+1)!}(x_{n+m}-x_n)^{k+1}, \tag{3.27}$$


and
$$0=P_k(t_{\mathrm{opt}})=\sum_{p=0}^{k-1}\frac{f^{(p)}(x_n)}{p!}\,t_{\mathrm{opt}}^{\,p}(x_{n+m}-x_n)^p+\Big[f(x_{n+m})-\sum_{p=0}^{k-1}\frac{f^{(p)}(x_n)}{p!}(x_{n+m}-x_n)^p\Big]t_{\mathrm{opt}}^{\,k}, \tag{3.28}$$
where $\eta_n\in(x_n,x^*)$ and $\xi_n\in(x_n,x_{n+m})$. Using (3.27) in (3.28) and subtracting (3.28) from (3.26), we get
$$\Big[f'(x_n)+\frac{f''(x_n)}{2}(x_{n+m}-x_n)(t+t_{\mathrm{opt}})+\frac{f'''(x_n)}{6}(x_{n+m}-x_n)^2\big(t^2+tt_{\mathrm{opt}}+t_{\mathrm{opt}}^2\big)+\dots+\frac{f^{(k)}(x_n)}{k!}\big(t^{k-1}+t^{k-2}t_{\mathrm{opt}}+\dots+t_{\mathrm{opt}}^{\,k-1}\big)(x_{n+m}-x_n)^{k-1}\Big](t-t_{\mathrm{opt}})=-\frac{(x_{n+m}-x_n)^k}{(k+1)!}\Big[f^{(k+1)}(\eta_n)t^{k+1}-f^{(k+1)}(\xi_n)t_{\mathrm{opt}}^{\,k}\Big]. \tag{3.29}$$
Since $f'(x_n)\neq0$, from the last expression we deduce that
$$t-t_{\mathrm{opt}}=O(\varepsilon_n^k). \tag{3.30}$$

It is possible to derive a more precise estimate than (3.30). Indeed, using (3.30) and $f\in C^{k+2}$, we evaluate
$$A_n=f^{(k+1)}(\eta_n)t^{k+1}-f^{(k+1)}(\xi_n)t_{\mathrm{opt}}^{\,k}=f^{(k+1)}(\xi_n)\big(t^{k+1}-t_{\mathrm{opt}}^{\,k}\big)+f^{(k+2)}(\zeta_n)(\eta_n-\xi_n)t^{k+1}. \tag{3.31}$$

By definition we have
$$|\eta_n-\xi_n|\le|x^*-x_n|=\varepsilon_n. \tag{3.32}$$

Using (3.18) and (3.30), we have
$$t^{k+1}-t_{\mathrm{opt}}^{\,k}=\big(t_{\mathrm{opt}}+O(\varepsilon_n^k)\big)^{k+1}-t_{\mathrm{opt}}^{\,k}=t_{\mathrm{opt}}^{\,k}\Big[t_{\mathrm{opt}}\big(1+O(\varepsilon_n^k)\big)^{k+1}-1\Big]=t_{\mathrm{opt}}^{\,k}\big[t_{\mathrm{opt}}+O(\varepsilon_n^k)-1\big]=O(\varepsilon_n). \tag{3.33}$$

Then $A_n=O(\varepsilon_n)$, and thereby from (3.29) we get
$$t-t_{\mathrm{opt}}=O(\varepsilon_n^{k+1}). \tag{3.34}$$
Hence, from (3.25) and (3.34) we find that
$$x^*-y_n=(t-t_{\mathrm{opt}})(x_{n+m}-x_n)=O(\varepsilon_n^{k+2}),$$
which proves (3.24). □

The sequence $\{y_n\}$ given by formula (3.3) can be considered as a new iteration, for which we have the following:


Theorem 3.3. Assume that $f\in C^{k+1}$ and that the convergence order of iteration (3.1) equals 2, i.e., the following holds:
$$|x^*-x_n|\le Mq^{2^n}|x^*-x_0|,\quad 0<q<1,\quad M=\mathrm{const}. \tag{3.35}$$
If Eq. (3.8) has at least one root $t_{\mathrm{opt}}$ greater than 1, then the convergence order of the new iteration (3.3) is the same as that of (3.1), and we have
$$|x^*-y_n|\le M_1q_1^{2^n}|x^*-y_0|,\quad 0<q_1=q^{k+2}<1,\quad M_1=\mathrm{const}. \tag{3.36}$$

Proof. By virtue of (3.35), condition (3.17) is satisfied for large $n$. Then, by Theorem 3.2, relation (3.24) holds. Using (3.35) in (3.24), we get
$$|x^*-y_n|\le C\,q^{2^n(k+2)}|x^*-x_0|^{k+2}=C\big(q^{k+2}\big)^{2^n}|x^*-x_0|^{k+2}=C\,q_1^{2^n}|x^*-x_0|^{k+2}\le M_1q_1^{2^n}|x^*-y_0|,\quad q_1=q^{k+2}<q<1. \tag{3.37}$$
The proof is completed. □

Theorem 3.3 shows that the convergence order of iteration (3.3) is the same as that of iteration (3.1). However, the speed of convergence depends on the factors $q$ and $q_1$ in (3.35) and (3.36), respectively. Since $q_1=q^{k+2}<q$ for $k=1,2,3$, one can expect a more rapid convergence of iteration (3.3); of course, the highest acceleration is attained at $k=3$. This accelerating procedure is useful especially at the beginning of the iterations, under condition (3.17). From Theorem 3.3 it is clear that the sequence $\{y_n\}$ given by (3.3) together with (3.1) can be considered as a new iteration process with a smaller convergence factor than (3.1). The acceleration is achieved without additional calculations, so the iteration (3.3) possesses a high computational efficiency. However, although the sequence $x_n$ is monotone, the new iteration (3.3) may not be monotone. For instance, when $k=1$ it is easy to show that
$$f(y_n)=\frac{f''(\xi_n)}{2}(x_{n+m}-x_n)^2. \tag{3.38}$$

From this it is clear that

$$f(y_n)>0\quad\text{if}\quad f''(x)>0, \tag{3.39}$$
and
$$f(y_n)<0\quad\text{if}\quad f''(x)<0. \tag{3.40}$$

Suppose now that two successive approximations $x_n$ and $x_{n+1}$ are known, for which
$$f(x_n)f(x_{n+1})<0 \tag{3.41}$$
holds. We consider
$$y_n=x_n+t(x_{n+1}-x_n),\quad 0\le t\le1. \tag{3.42}$$


The acceleration technique is the same as in the previous case, with $m=1$. In this case, according to (3.41), we have $P_k(0)P_k(1)=f(x_n)f(x_{n+1})<0$ for $k=1,2,3$. Hence, Eq. (3.8) has a root $t_{\mathrm{opt}}\in(0,1)$. Obviously, the new approximation
$$y_n=x_n+t_{\mathrm{opt}}(x_{n+1}-x_n) \tag{3.43}$$
is situated closer to $x^*$ than both $x_n$ and $x_{n+1}$, and Theorem 3.2 holds true in this case, too. This indicates that two-sided approximations are useful not only for estimating roots, but also for finding them approximately with higher accuracy. Of course, the acceleration procedure (3.3) can be continued further with $x_{n+m}:=y_n$, $x_n:=x_{n+m}$, taking $t>1$ if $y_n$ and $x_{n+m}$ are located on the same side of $x^*$, and $t\in(0,1)$ if they are located on two sides of the root. Note that the accelerating procedure (3.3) is applicable not only to the iteration (3.1), but to any iteration, in particular to the following iterations (A), (B), (C) and (D).
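In the bracketing case the same formula (3.13) lands in $(0,1)$ and (3.43) becomes a regula-falsi-type (secant) step inside the bracket. A minimal sketch with the illustrative bracket $[1,2]$ for $f(x)=x^2-2$:

```python
# When f(x_n) f(x_{n+1}) < 0, the root t_opt of P_1(t) = 0 lies in (0, 1)
# and (3.43) is a secant step inside the bracket.
# f(x) = x^2 - 2 and the bracket [1, 2] are illustrative choices.
f = lambda x: x * x - 2.0
xn, xn1 = 1.0, 2.0                 # f(1) = -1 < 0 < f(2) = 2: root bracketed

t_opt = f(xn) / (f(xn) - f(xn1))   # (3.13); here t_opt = 1/3, inside (0, 1)
y = xn + t_opt * (xn1 - xn)        # step (3.43)

root = 2.0 ** 0.5
print(t_opt, abs(y - root))
```

The new point $y$ is closer to $\sqrt2$ than either endpoint of the bracket, as Theorem 3.2 predicts.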

Now we consider the accelerated iteration
$$y_n=x_n-\frac{f(x_n)}{f'(x_n)},\qquad x_{n+1}=x_n+t_{\mathrm{opt}}(y_n-x_n),\quad n=0,1,\dots \tag{A}$$
The iteration (A) is a damped Newton's method [17, 20] with the optimal parameter $\tau_n=t_{\mathrm{opt}}$. The first step $y_n$ is used for finding the optimal parameter.

Theorem 3.4. Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (A) with optimal $t_{\mathrm{opt}}$ is $d=k+2$, $k=1,2,3$, depending on the smoothness of $f$.

Proof. If we compare (A) with (3.1) and (3.3), then $x_{n+m}:=y_n$ and $y_n:=x_{n+1}$. Therefore, the expression (3.24) of Theorem 3.2 takes the form
$$\frac{|x^*-x_{n+1}|}{|x^*-x_n|^{k+2}}=O(1)\iff|x^*-x_{n+1}|\le M|x^*-x_n|^{k+2}, \tag{3.44}$$
which completes the proof of Theorem 3.4. □
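A runnable sketch of iteration (A) for $k=2$ follows. With $x_{n+m}:=y_n$ (a Newton step) we have $f'(x_n)(y_n-x_n)=-f(x_n)$, so $P_2(t)=0$ in (3.11) reduces, after dividing by $f(x_n)$, to $\theta_nt^2-t+1=0$ with $\theta_n=f(y_n)/f(x_n)$; this reduction and the choice of the root tending to 1 are our own reading of the procedure, and $f(x)=x^3-2x-5$ is only a test equation:

```python
import math

# Sketch of iteration (A) with k = 2. P_2(t) = 0 reduces to
# theta*t^2 - t + 1 = 0, theta = f(y)/f(x); we take the root that
# tends to 1 as theta -> 0 (our own simplification).
def iterate_A(f, df, x, steps=3):
    for _ in range(steps):
        fx = f(x)
        if fx == 0.0:
            return x
        y = x - fx / df(x)
        theta = f(y) / fx
        disc = 1.0 - 4.0 * theta
        # fall back to a plain Newton step (t = 1) if no real root exists
        t_opt = (1.0 - math.sqrt(disc)) / (2.0 * theta) if disc >= 0.0 and theta != 0.0 else 1.0
        x = x + t_opt * (y - x)
    return x

f = lambda x: x**3 - 2.0 * x - 5.0   # illustrative test equation
df = lambda x: 3.0 * x**2 - 2.0
x = iterate_A(f, df, x=2.2)
print(abs(f(x)))  # tiny residual: order d = k + 2 = 4 per Theorem 3.4
```

Note that the same quadratic reappears as Eq. (3.57) in the construction of method (D) below.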

Now let us consider another three-step iteration

yn=xnff(x0(xnn)), zn=ynff(y0(xnn)), xn+1=yn+t(znyn), n= 0,1, . . . (B)

Note that ift≡1 in (B), then it leads to

(B0) yn=xnff(x0(xnn)), xn+1=ynff0(y(xnn)), n= 0,1, . . .

The iteration (B0) is a particular case of scheme (40) given in [16] withσ= 0 end τ = 1 and has a third order of convergence. Therefore, the iteration (B) can be considered as improvement of iteration (B0).

Theorem 3.5. Assume that the assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (B) with optimal $t_{\mathrm{opt}}$ equals $d=2k+3$.


Proof. If we compare (B) with (3.3), then $x_n:=y_n$, $x_{n+m}:=z_n$ and $y_n:=x_{n+1}$. Then from (3.29) and (3.19) we get
$$t-t_{\mathrm{opt}}=MA_n(z_n-y_n)^k\approx O(\varepsilon_n^{2k+1}), \tag{3.45}$$
where
$$x^*=y_n+t(z_n-y_n),\qquad x_{n+1}=y_n+t_{\mathrm{opt}}(z_n-y_n). \tag{3.46}$$
From this and from (3.45) we obtain
$$x^*-x_{n+1}=(t-t_{\mathrm{opt}})(z_n-y_n)\approx O(\varepsilon_n^{2k+1})\,O(\varepsilon_n^2)=O(\varepsilon_n^{2k+3}), \tag{3.47}$$
i.e., we have
$$|x^*-x_{n+1}|\le M_1|x^*-x_n|^{2k+3},$$
which means that the convergence order of iteration (B) is equal to $d=2k+3$, $k=1,2,3$. □

From Theorem 3.5 we see that the convergence order of iteration (B0) can be increased by two or four units at the expense of only two additional evaluations of the function. Thus, the order of convergence and the computational efficiency of the method are greatly improved.

In [5], Algorithm 2 was constructed:
$$z_n=x_n-\frac{f(x_n)}{f'(x_n)},\qquad x_{n+1}=z_n-H(x_n,y_n)\frac{f(z_n)}{f'(x_n)},$$
and it was proved that its order of convergence equals 5, 6 or 7, depending on a suitable choice of the two-variable function $H(x_n,y_n)$. For comparison purposes we can rewrite iteration (B) as
$$y_n=x_n-\frac{f(x_n)}{f'(x_n)},\qquad x_{n+1}=y_n-t\frac{f(y_n)}{f'(x_n)}.$$
We see that these two methods differ from one another only in the chosen factors $t$ and $H(x_n,y_n)$.
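A similar sketch of iteration (B) with $k=1$: the inner pair $(y_n,z_n)$ feeds formula (3.13), and the final point is $x_{n+1}=y_n+t_{\mathrm{opt}}(z_n-y_n)$. Again $f(x)=x^3-2x-5$ is only an illustrative test equation:

```python
# Sketch of iteration (B) with k = 1: t_opt from (3.13) applied
# to the pair (y_n, z_n). f(x) = x^3 - 2x - 5 is illustrative.
def iterate_B(f, df, x, steps=3):
    for _ in range(steps):
        fx = f(x)
        if fx == 0.0:
            return x
        dfx = df(x)
        y = x - fx / dfx
        fy = f(y)
        z = y - fy / dfx
        fz = f(z)
        t_opt = fy / (fy - fz) if fy != fz else 1.0  # (3.13) on (y, z)
        x = y + t_opt * (z - y)
    return x

f = lambda x: x**3 - 2.0 * x - 5.0
df = lambda x: 3.0 * x**2 - 2.0
x = iterate_B(f, df, x=2.2)
print(abs(f(x)))  # tiny residual: order d = 2k + 3 = 5 per Theorem 3.5
```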

Now we consider the following iteration:

(C) yn=xnff0(x(xnn)), zn=ynff0(y(ynn)), xn+1 =yn+t(znyn), n= 0,1, . . . The iteration (C) can be considered as improvement of iteration

(C0) yn=xnff(x0(xnn)), xn+1 =ynff0(y(ynn)), n= 0,1, . . . , since ift≡1 in (C), then it leads to (C0).

In [16], it was proven that the convergence order of (C0) is four.

Theorem 3.6. The assumptions of Theorem 3.2 are fulfilled. Then the convergence order of iteration (C) with optimal topt equal to d = 2(k+ 2), k= 1,2,3.


Proof. If we compare (C) with (3.3), then $x_n:=y_n$, $x_{n+m}:=z_n$ and $y_n:=x_{n+1}$. Therefore, the expression (3.24) reads as
$$\frac{|x^*-x_{n+1}|}{|x^*-y_n|^{k+2}}=O(1)\iff|x^*-x_{n+1}|\le M|x^*-y_n|^{k+2}. \tag{3.48}$$
From (C) we find that
$$x^*-y_n=x^*-x_n+\frac{f(x_n)}{f'(x_n)}. \tag{3.49}$$
Substituting here the expansion of $f(x^*)$,
$$f(x^*)=f(x_n)+f'(x_n)(x^*-x_n)+\frac{f''(\xi_n)}{2}(x^*-x_n)^2, \tag{3.50}$$
we have
$$|x^*-y_n|\le\frac{|f''(\xi_n)|}{2|f'(x_n)|}|x^*-x_n|^2. \tag{3.51}$$
Using the last estimate in (3.48), we obtain
$$|x^*-x_{n+1}|\le M_1|x^*-x_n|^{2(k+2)},$$
which means that the convergence order of iteration (C) equals $d=2(k+2)$, $k=1,2,3$. □

Note that the iterations (A), (B) and (C) can be rewritten as a damped Newton's method [20]
$$x_{n+1}=x_n-\tau_n\frac{f(x_n)}{f'(x_n)}, \tag{3.52}$$
with
$$\tau_n=t_{\mathrm{opt}}, \tag{3.53}$$
$$\tau_n=1+t_{\mathrm{opt}}\frac{f(y_n)}{f(x_n)}, \tag{3.54}$$
$$\tau_n=1+t_{\mathrm{opt}}\frac{f(y_n)}{f(x_n)}\frac{f'(x_n)}{f'(y_n)}, \tag{3.55}$$
respectively. The unified representation (3.52) of these different iterations shows that the choice of the damping parameter $\tau_n$ in (3.52) essentially affects the convergence order. Of course, the parameter $\tau_n$ in (3.52) is defined in different ways, but in all cases $\tau_n\to1$ as $n\to\infty$.

The speed of convergence of the sequence $\{\tau_n\}$ to unity is different for each iteration method. In [17] the following conjecture was proposed:
$$|1-\tau_n|\le Mq^{\rho^n},\quad 0<q<1. \tag{3.56}$$

Now we consider the following three-point iterative method:
$$y_n=x_n-\frac{f(x_n)}{f'(x_n)},\qquad z_n=x_n+\bar t\,(y_n-x_n),\qquad x_{n+1}=y_n+t(z_n-y_n),\quad n=0,1,\dots, \tag{D}$$
where $\bar t$ and $t$ in (D) are parameters to be determined. We can formulate the following theorem.


Theorem 3.7. Assume that $f\in C^4(a,b)$, an initial approximation $x_0$ is sufficiently close to the zero $x^*\in(a,b)$, the parameter $\bar t$ is chosen as a root of the equation
$$\theta_n\bar t^{\,2}-\bar t+1=0,\qquad \theta_n=\frac{f(y_n)}{f(x_n)}, \tag{3.57}$$
and $t$ is a root of the equation
$$\Psi(t,\alpha)=\alpha\Psi_1(t)+(1-\alpha)\Psi_2(t)=0, \tag{3.58}$$
where
$$\Psi_1(t)=at^2-\big[a+f(y_n)-f(z_n)\big]t+f(y_n),\qquad a=-\frac{1-\bar t}{\bar t^{\,2}}\big[2f(z_n)+(1-\bar t)^2f(x_n)\big], \tag{3.59}$$
and
$$\Psi_2(t)=\big[(1-\bar t)(2-\bar t)f(x_n)-(2-3\bar t)f(z_n)\big]t+(1-\bar t)\big[2f(z_n)-(2-\bar t)f(x_n)\big]. \tag{3.60}$$
Then the three-point method (D) is of eighth order of convergence.

Proof. Using $z_n-y_n=(1-\bar t)\frac{f(x_n)}{f'(x_n)}$ in the Taylor expansion
$$f(x_{n+1})=f(y_n)+f'(y_n)t(z_n-y_n)+\frac{f''(y_n)}{2}t^2(z_n-y_n)^2+O\big((z_n-y_n)^3\big),$$
we get
$$f(x_{n+1})=f(y_n)+t(1-\bar t)\frac{f'(y_n)}{f'(x_n)}f(x_n)+\frac{f''(y_n)}{2}t^2(1-\bar t)^2\frac{f^2(x_n)}{f'(x_n)^2}+O\big(f^6(x_n)\big). \tag{3.61}$$
Analogously, the Taylor expansion of $f(x_{n+1})$ at the point $x=z_n$ gives
$$f(x_{n+1})=f(z_n)-(1-t)(1-\bar t)\frac{f'(z_n)}{f'(x_n)}f(x_n)+\frac{f''(z_n)}{2}(1-t)^2(1-\bar t)^2\frac{f^2(x_n)}{f'(x_n)^2}+O\big((1-t)^3f^6(x_n)\big). \tag{3.62}$$
Since $f''(z_n)=f''(y_n)+O\big(f(x_n)\big)$, this can be written as
$$f(x_{n+1})=f(z_n)-(1-t)(1-\bar t)\frac{f'(z_n)}{f'(x_n)}f(x_n)+\frac{f''(y_n)}{2}(1-t)^2(1-\bar t)^2\frac{f^2(x_n)}{f'(x_n)^2}+O\big((1-t)^3f^6(x_n)\big). \tag{3.63}$$
Using $f'(z_n)=f'(y_n)+f''(y_n)(z_n-y_n)+O\big((z_n-y_n)^2\big)$ in the last expansion, we have
$$f(x_{n+1})=f(z_n)-(1-t)(1-\bar t)\frac{f'(y_n)}{f'(x_n)}f(x_n)-\frac{f''(y_n)f^2(x_n)}{2f'(x_n)^2}(1-t^2)(1-\bar t)^2+O\big(f^8(x_n)\big). \tag{3.64}$$


From (3.61) and (3.64) one can eliminate the term with $\frac{f'(y_n)}{f'(x_n)}f(x_n)$. As a result, we have
$$f(x_{n+1})=tf(z_n)+(1-t)f(y_n)-\frac{f''(y_n)f^2(x_n)}{2f'(x_n)^2}(1-\bar t)^2t(1-t)+O\big(f^8(x_n)\big). \tag{3.65}$$
Note that in deriving (3.65) we keep in mind that
$$1-t=O\big(f^2(x_n)\big). \tag{3.66}$$

Further, using the Taylor expansion of $f\in C^4$ at the point $y_n$, we obtain
$$f''(y_n)=\frac{2f'(x_n)^2}{f^2(x_n)\,\bar t(1-\bar t)}\big[(1-\bar t)f(x_n)+\bar tf(y_n)-f(z_n)\big]-\frac{f'''(y_n)}{3}\frac{f(x_n)}{f'(x_n)}(2-\bar t)+O\big(f^2(x_n)\big). \tag{3.67}$$
The same technique gives us
$$f''(y_n)=\frac{2\big[f(z_n)-(1-\bar t)f(x_n)\big]}{\bar t^{\,2}f^2(x_n)}f'(x_n)^2-\frac{f'''(y_n)}{3}\frac{f(x_n)}{f'(x_n)}(3-\bar t)+O\big(f^2(x_n)\big). \tag{3.68}$$

From (3.67) and (3.68) one can eliminate the term with $f'''(y_n)$. As a result, we obtain
$$\frac{f''(y_n)f^2(x_n)}{2f'(x_n)^2}=\frac{1}{\bar t^{\,2}(1-\bar t)}\Big\{(3-\bar t)\bar t\big[(1-\bar t)f(x_n)+\bar tf(y_n)-f(z_n)\big]-(2-\bar t)(1-\bar t)\big[f(z_n)-(1-\bar t)f(x_n)\big]\Big\}+O\big(f^4(x_n)\big). \tag{3.69}$$

Substituting (3.69) into (3.65), we obtain
$$f(x_{n+1})=\Psi_1(t)+O\big(f^8(x_n)\big), \tag{3.70}$$
where
$$\Psi_1(t)=at^2-\big[a+f(y_n)-f(z_n)\big]t+f(y_n),\qquad a=-\frac{1-\bar t}{\bar t^{\,2}}\big[2f(z_n)+(1-\bar t)^2f(x_n)\big]. \tag{3.71}$$
On the other hand, by virtue of (D) we have
$$x_{n+1}-z_n=-(1-\bar t)(1-t)\frac{f(x_n)}{f'(x_n)}. \tag{3.72}$$
If we take (3.57) and (3.66) into account in (3.72), we deduce that
$$x_{n+1}-z_n=O\big(f^4(x_n)\big).$$
Then from (3.62) we get
$$f(x_{n+1})=f(z_n)-(1-\bar t)(1-t)\frac{f'(z_n)}{f'(x_n)}f(x_n)+O\big(f^8(x_n)\big). \tag{3.73}$$


Now we approximate $f'(z_n)$ by the method of undetermined coefficients:
$$f'(z_n)=a_nf(x_n)+b_nf(y_n)+c_nf(z_n)+d_nf'(x_n)+O\big(f^4(x_n)\big). \tag{3.74}$$
This can be done by means of the Taylor expansion of $f\in C^4(a,b)$ at the point $z_n$, and we obtain the following linear system of equations:
$$a_n+b_n+c_n=0,$$
$$a_n(x_n-z_n)+b_n(y_n-z_n)+d_n=1,$$
$$a_n(x_n-z_n)^2+b_n(y_n-z_n)^2+2d_n(x_n-z_n)=0,$$
$$a_n(x_n-z_n)^3+b_n(y_n-z_n)^3+3d_n(x_n-z_n)^2=0,$$
which has the unique solution
$$a_n=\frac{\beta_n(2\beta_n-3\omega_n)}{\omega_n(\beta_n-\omega_n)^2},\qquad b_n=\frac{\omega_n^2}{\beta_n(\beta_n-\omega_n)^2},\qquad c_n=-\frac{2\beta_n+\omega_n}{\beta_n\omega_n},\qquad d_n=-\frac{\beta_n}{\beta_n-\omega_n}, \tag{3.75}$$
where
$$\omega_n=x_n-z_n=\bar t\,\frac{f(x_n)}{f'(x_n)},\qquad \beta_n=y_n-z_n=(\bar t-1)\frac{f(x_n)}{f'(x_n)}.$$

Substituting (3.74) with the coefficients defined by (3.75) into (3.73), we get
$$f(x_{n+1})=\Psi_2(t)+O\big(f^8(x_n)\big), \tag{3.76}$$
where
$$\Psi_2(t)=\big[(1-\bar t)(2-\bar t)f(x_n)-(2-3\bar t)f(z_n)\big]t+(1-\bar t)\big[2f(z_n)-(2-\bar t)f(x_n)\big]. \tag{3.77}$$
The linear combination of (3.70) and (3.76) gives
$$f(x_{n+1})=\alpha\Psi_1(t)+(1-\alpha)\Psi_2(t)+O\big(f^8(x_n)\big).$$
Clearly, if we choose $t$ as a root of the quadratic equation
$$\Psi(t,\alpha)=\alpha\Psi_1(t)+(1-\alpha)\Psi_2(t)=0, \tag{3.78}$$
then we have
$$f(x_{n+1})=O\big(f^8(x_n)\big),$$
which completes the proof. □

Remark 3.8. Since $t$ is a root of Eq. (3.78), it depends on the parameter $\alpha$, i.e., $t=t(\alpha)$. Therefore, (D) is a one-parameter family of iterations. It is easy to show that
$$\Psi_2(\hat t)=0,\quad \hat t\to1,\qquad \Psi_1(\breve t)=0,\quad \breve t\to1.$$
Taking this into account and passing to the limit $t\to1$ in Eq. (3.78), we get
$$\Psi(t,\alpha)\xrightarrow{\,t\to1\,}\alpha\Psi_1(1)+(1-\alpha)\Psi_2(1)\approx0.$$
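The first stage of method (D), the computation of $\bar t$ from (3.57), can be sketched as follows; since $\theta_n\to0$ near the root, the relevant root of the quadratic is the one tending to 1 (the value $\theta=0.05$ below is purely illustrative):

```python
import math

# First stage of method (D): solve (3.57), theta*t^2 - t + 1 = 0,
# taking the root that tends to 1 as theta_n = f(y_n)/f(x_n) -> 0.
# theta = 0.05 is an illustrative value.
def t_bar(theta):
    if theta == 0.0:
        return 1.0                 # limiting Newton case
    disc = 1.0 - 4.0 * theta       # real roots require theta <= 1/4
    return (1.0 - math.sqrt(disc)) / (2.0 * theta)

theta = 0.05
tb = t_bar(theta)
print(tb, abs(theta * tb**2 - tb + 1.0))  # residual of (3.57) ~ 0
```

For $\theta_n>\tfrac14$ the quadratic has no real root, so the formula is only meaningful close to the root, where $\theta_n$ is small.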
