
J. Numer. Anal. Approx. Theory, vol. 50 (2021) no. 1, pp. 73–93 ictp.acad.ro/jnaat

A TWO-POINT EIGHTH-ORDER METHOD BASED ON THE WEIGHT FUNCTION FOR SOLVING NONLINEAR EQUATIONS

VALI TORKASHVAND

Abstract. In this work, we design a family of with-memory methods with eighth-order convergence using the weight function technique. The proposed methods have three parameters. Three self-accelerating parameters are calculated in each iterative step, employing only information from the current and previous iterations. Numerical experiments are carried out to demonstrate the convergence and efficiency of the proposed iterative methods.

MSC 2020. 65H05, 65B99.

Keywords. Method with memory; Accelerator parameter; Weight function;

Newton’s interpolatory polynomial; Order of convergence.

1. INTRODUCTION

Nonlinearity is of interest to physicists and mathematicians, since most physical systems are inherently nonlinear. One of the most important problems in computational mathematics is solving nonlinear equations. For example, nonlinear optimization aims to find a minimum or maximum of a given nonlinear function. Nonlinear equations are difficult to solve in general.

The best way to solve these equations is to use iterative methods. One of the classical methods for solving a nonlinear equation is Newton's method, which has convergence order 2. The Secant method can be regarded as the oldest with-memory method studied so far. It is obtained by approximating the derivative in Newton's method by the finite divided difference f'(x_k) ≈ (f(x_{k−1}) − f(x_k))/(x_{k−1} − x_k). The method is given as

(1) x_{k+1} = x_k − (x_{k−1} − x_k)/(f(x_{k−1}) − f(x_k)) · f(x_k).
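As an illustration, the secant iteration (1) can be sketched in a few lines; the test function f(x) = x^3 − 2 and the starting points below are arbitrary examples, not taken from the paper.

```python
# Secant method (1): x_{k+1} = x_k - (x_{k-1} - x_k)/(f(x_{k-1}) - f(x_k)) * f(x_k).
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f0 == f1:
            break  # divided difference degenerates (or both iterates converged)
        x2 = x1 - (x0 - x1) / (f0 - f1) * f1
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**3 - 2.0, 1.0, 2.0)  # approximates 2**(1/3)
```

Note that only one new function evaluation is needed per step, which is what gives the method the efficiency index recalled below.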

In the continuation of this work, we will first need the efficiency index (EI) of an iterative method, defined by Ostrowski [22]:

(2) EI = r^{1/θ_f},

Young Researchers and Elite Club, Shahr-e-Qods Branch, Islamic Azad University, e- mail: [email protected].

Department of Mathematics, Farhangian University, Tehran, Iran.


where r is the Q-order of convergence and θ_f is the number of function evaluations per iteration. The efficiency index of the Secant method is 1.61803. Traub [31] proposed the following with-memory method (TM):

(3) γ_k = −1/f[x_k, x_{k−1}], k = 1, 2, 3, …,
    w_k = x_k + γ_k f(x_k),
    x_{k+1} = x_k − f(x_k)/f[x_k, w_k], k = 0, 1, 2, …,

with the order of convergence 2.41421. Subsequently, Neta [20] proposed a three-step with-memory method (NM) which has the order of convergence 10.81525:

(4) each of the three steps w_k, z_k and x_{k+1} is a Newton-type step corrected by a factor built from f(x_k)^2 and the values f(w_{k−1}), f(z_{k−1}) (respectively f(w_k), f(z_{k−1}), and f(w_k), f(z_k)); see [20] for the exact formulas.

He used inverse interpolation. By raising the convergence order from 8 to 10.81, Neta improved the order by 35%; likewise, Traub's increase from 2 to 2.41 is an improvement of 20.71%. Bassiri et al. [2] increased the convergence order of a two-step method from 4 to 7.22, an improvement of 80%.
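Traub's scheme (3) is simple to realize in code: the self-accelerating parameter γ_k is recomputed at each step from the previous iterate. The following is a minimal sketch, with an assumed test function and starting points (illustrative choices, not from the paper).

```python
# Traub's with-memory method (TM), eq. (3):
#   gamma_k = -1 / f[x_k, x_{k-1}],  w_k = x_k + gamma_k f(x_k),
#   x_{k+1} = x_k - f(x_k) / f[x_k, w_k].
def traub_with_memory(f, x0, x1, tol=1e-12, max_iter=50):
    xk_prev, xk = x0, x1
    for _ in range(max_iter):
        fxk = f(xk)
        if fxk == 0.0 or fxk == f(xk_prev):
            return xk                                 # converged or degenerate
        gamma = -(xk - xk_prev) / (fxk - f(xk_prev))  # -1 / f[x_k, x_{k-1}]
        wk = xk + gamma * fxk
        if wk == xk:
            return xk                                 # step below float resolution
        dd = (fxk - f(wk)) / (xk - wk)                # f[x_k, w_k]
        x_next = xk - fxk / dd
        if abs(x_next - xk) < tol:
            return x_next
        xk_prev, xk = xk, x_next
    return xk

root = traub_with_memory(lambda x: x**3 - 2.0, 1.0, 1.5)
```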

The rest of this paper is organized as follows. Section 2 is devoted to modifications of the two-step method proposed by Bassiri et al. [2]. Further acceleration of the convergence speed is attained in Section 3: the self-accelerating parameters are calculated by Newton interpolating polynomials, and the corresponding Q-order of convergence [8] is increased from 4 to 7.53113, 7.94449, 7.99315 and 7.99915 ≈ 8.

Numerical examples are given in Section 4 to confirm theoretical results.

Finally, Section 5 is devoted to the main conclusions of this work.

2. WITHOUT MEMORY METHODS

Bassiri et al. proposed the following optimal iterative without-memory method [2]:

(5) w_k = x_k + γ f(x_k),
    y_k = x_k − f(x_k)/(f[x_k, w_k] + β f(w_k)), k = 0, 1, …,
    s_k = f(y_k)/f(x_k),
    x_{k+1} = y_k − H(s_k) f(y_k)/(f[w_k, y_k] + β f(w_k) + λ(y_k − x_k)(y_k − w_k)), k = 0, 1, … .

This method achieves convergence order 4 when the weight function satisfies the conditions

(6) H(0) = H'(0) = H''(0) = 1.
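One step of the without-memory family (5) is straightforward to code once a weight function satisfying (6) is chosen; below H(s) = e^s is used, which satisfies H(0) = H'(0) = H''(0) = 1. The parameter values and the test function are arbitrary illustrative choices, not prescribed by the paper.

```python
import math

# One step of method (5) with weight function H(s) = exp(s), which satisfies (6).
def bassiri_step(f, xk, gamma=0.01, beta=0.01, lam=0.01, H=math.exp):
    fxk = f(xk)
    if abs(fxk) < 1e-12:               # already at the root (up to round-off)
        return xk
    wk = xk + gamma * fxk
    dd_xw = (fxk - f(wk)) / (xk - wk)  # f[x_k, w_k]
    yk = xk - fxk / (dd_xw + beta * f(wk))
    sk = f(yk) / fxk
    dd_wy = (f(wk) - f(yk)) / (wk - yk)  # f[w_k, y_k]
    denom = dd_wy + beta * f(wk) + lam * (yk - xk) * (yk - wk)
    return yk - H(sk) * f(yk) / denom

f = lambda t: t**3 - 2.0
x = 1.5
for _ in range(4):   # fourth order: a few steps reach machine accuracy
    x = bassiri_step(f, x)
```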


And its error expression is

(7) e_{k+1} = (1 + γf'(α))^2 (β + c_2)(2λ + f'(α)β^2(1 + γf'(α)) + f'(α)c_2(2β(3 + γf'(α)) + (5 + γf'(α))c_2) − 2f'(α)c_3)(−2f'(α))^{−1} e_k^4 + O(e_k^5),

where c_k = f^{(k)}(α)/(k! f'(α)) for k = 2, 3, … . If the weight function is not used, the error equation of method (5) becomes

(8) e_{k+1} = (1 + γf'(α))^2 (β + c_2)^2 e_k^3 + O(e_k^4).

In this case, the optimality of method (5) is lost. To maintain optimality, the order must remain four, so that the method is an optimal without-memory method according to the Kung–Traub conjecture [12]. One way to achieve this order of convergence is by using the weight function; refer to [3, 5, 6, 10, 13, 14, 15, 25, 26] for further study. Some concrete weight functions that satisfy the conditions are

(9) H_1(s) = 1 + s + s^2/2, H_2(s) = e^s, H_3(s) = (2 + s)/(2 − s), H_4(s) = 1/(1 − s − s^2/2).

It should be noted here that for the weight functions H_1(s), H_2(s) and H_3(s) the error equation is (7), but for the weight function H_4(s) the error equation is as follows:

(10) e_{k+1} = (1 + γf'(α))^2 (β + c_2)(f'(α)β^2(1 + γf'(α)) − 2λ + f'(α)c_2(2β(−1 + γf'(α)) + (−3 + γf'(α))c_2) + 2f'(α)c_3)(−2f'(α))^{−1} e_k^4 + O(e_k^5).

It is also necessary to note that any weight function satisfying only the conditions H(0) = 1 and H'(0) = 1 still yields convergence order 4. In addition, the weight function H_4(s) does not satisfy all of the conditions (6), since H_4''(0) = 3.
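The conditions (6) for the weight functions in (9) can be checked mechanically; the small sympy script below (an illustrative check, not part of the original derivation) confirms that H_1, H_2, H_3 satisfy H(0) = H'(0) = H''(0) = 1 while H_4 has H_4''(0) = 3.

```python
import sympy as sp

s = sp.symbols('s')
weights = {
    'H1': 1 + s + s**2 / 2,
    'H2': sp.exp(s),
    'H3': (2 + s) / (2 - s),
    'H4': 1 / (1 - s - s**2 / 2),
}
# [H(0), H'(0), H''(0)] for each weight function in (9)
conds = {name: [sp.diff(H, s, n).subs(s, 0) for n in range(3)]
         for name, H in weights.items()}
print(conds)  # H1, H2, H3 -> [1, 1, 1];  H4 -> [1, 1, 3]
```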

Error relation (10) plays the key role in our study of the convergence acceleration. For method (5), we have the following result.

Theorem 1. For a sufficiently good initial approximation x_0 of a simple zero α of the function f, the family of two-point methods (5) attains order at least four if the weight function H satisfies conditions (6). The error relation for the family (5) is then given by (10).

Proof. By using Taylor's expansion of f(x) about α and taking into account that f(α) = 0, we obtain

(11) f(x_k) = f'(α)(e_k + c_2 e_k^2 + c_3 e_k^3 + c_4 e_k^4 + O(e_k^5)).

Then, computing e_{k,w} = w_k − α from w_k = x_k + γf(x_k), we attain

(12) e_{k,w} = (1 + γf'(α))e_k + γf'(α)c_2 e_k^2 + γf'(α)c_3 e_k^3 + γf'(α)c_4 e_k^4 + O(e_k^5),

and

(13) f(w_k) = f'(α)(1 + γf'(α))e_k + f'(α)(1 + γf'(α)(3 + γf'(α)))c_2 e_k^2 + f'(α)(2γf'(α)(1 + γf'(α))c_2^2 + (γf'(α) + (1 + γf'(α))^3)c_3)e_k^3 + f'(α)(c_4 + γf'(α)(γf'(α)c_2^3 + (1 + γf'(α))(5 + 3γf'(α))c_2 c_3 + (5 + γf'(α)(6 + γf'(α)(4 + γf'(α))))c_4))e_k^4 + O(e_k^5).

Now, by Eqs. (11) and (13), we get

(14) f[x_k, w_k] = f'(α) + f'(α)(2 + γf'(α))c_2 e_k + f'(α)(γf'(α)c_2^2 + (3 + γf'(α)(3 + γf'(α)))c_3)e_k^2 + f'(α)(2γf'(α)(2 + γf'(α))c_2 c_3 + (4 + γf'(α)(6 + γf'(α)(4 + γf'(α))))c_4)e_k^3 + f'(α)(5c_5 + γf'(α)(γf'(α)c_2^2 c_3 + (3 + 2γf'(α))c_3^2 + (7 + γf'(α)(8 + 3γf'(α)))c_2 c_4 + (10 + γf'(α)(10 + γf'(α)(5 + γf'(α))))c_5))e_k^4 + O(e_k^5).

Furthermore, we have

(15) f(x_k)/(f[x_k, w_k] + βf(w_k)) = e_k − (1 + γf'(α))(β + c_2)e_k^2 + (β^2(1 + γf'(α))^2 + (2 + γf'(α)(2 + γf'(α)))c_2(β + c_2) − 2c_3 − γf'(α)(3 + γf'(α))c_3)e_k^3 + (−β^3(1 + γf'(α))^3 − β(5 + γf'(α)(7 + γf'(α)(4 + γf'(α))))c_2^2 − (4 + γf'(α)(5 + γf'(α)(3 + γf'(α))))c_2^3 + β(4 + γf'(α)(7 + γf'(α)(5 + γf'(α))))c_3 + c_2(−β^2(1 + γf'(α))(3 + γf'(α)(2 + γf'(α))) + (7 + γf'(α)(10 + γf'(α)(7 + 2γf'(α))))c_3) − (1 + γf'(α))(3 + γf'(α)(3 + γf'(α)))c_4)e_k^4 + O(e_k^5).

Using the second step of (5) and e_{k,y} = y_k − α, we get

(16) y_k = α + (1 + γf'(α))(β + c_2)e_k^2 + (−β^2(1 + γf'(α))^2 − (2 + γf'(α)(2 + γf'(α)))c_2(β + c_2) + 2c_3 + γf'(α)(3 + γf'(α))c_3)e_k^3 + (β^3(1 + γf'(α))^3 + β(5 + γf'(α)(7 + γf'(α)(4 + γf'(α))))c_2^2 + (4 + γf'(α)(5 + γf'(α)(3 + γf'(α))))c_2^3 − β(4 + γf'(α)(7 + γf'(α)(5 + γf'(α))))c_3 + c_2(β^2(1 + γf'(α))(3 + γf'(α)(2 + γf'(α))) − (7 + γf'(α)(10 + γf'(α)(7 + 2γf'(α))))c_3) + (1 + γf'(α))(3 + γf'(α)(3 + γf'(α)))c_4)e_k^4 + O(e_k^5).

For f(y_k), we also have

(17) f(y_k) = f'(α)(1 + γf'(α))(β + c_2)e_k^2 − f'(α)(β^2(1 + γf'(α))^2 + (2 + γf'(α)(2 + γf'(α)))c_2(β + c_2) − 2c_3 − γf'(α)(3 + γf'(α))c_3)e_k^3 + f'(α)(β^3(1 + γf'(α))^3 + β(7 + γf'(α)(11 + γf'(α)(6 + γf'(α))))c_2^2 + (5 + γf'(α)(7 + γf'(α)(4 + γf'(α))))c_2^3 − β(4 + γf'(α)(7 + γf'(α)(5 + γf'(α))))c_3 + c_2(β^2(1 + γf'(α))(4 + γf'(α)(3 + γf'(α))) − (7 + γf'(α)(10 + γf'(α)(7 + 2γf'(α))))c_3) + (1 + γf'(α))(3 + γf'(α)(3 + γf'(α)))c_4)e_k^4 + O(e_k^5).

Additionally, by using relations (12), (13), (16) and (17), we obtain

(18) f(y_k)/(f[y_k, w_k] + βf(w_k) + λ(y_k − w_k)(y_k − x_k)) = (1 + γf'(α))(β + c_2)e_k^2 + (−2β^2(1 + γf'(α))^2 − β(4 + 3γf'(α)(2 + γf'(α)))c_2 − (3 + 2γf'(α)(2 + γf'(α)))c_2^2 + (1 + γf'(α))(2 + γf'(α))c_3)e_k^3 + (1/f'(α))(−βf'(α)(11 + γf'(α)(19 + γf'(α)(14 + 5γf'(α))))c_2^2 − f'(α)(7 + γf'(α)(11 + γf'(α)(8 + 3γf'(α))))c_2^3 + βf'(α)(7 + 3γf'(α)(5 + γf'(α)(4 + γf'(α))))c_3 + c_2((1 + γf'(α))(β^2 f'(α)(8 + γf'(α)(9 + 5γf'(α))) − (1 + γf'(α))λ) − 2f'(α)(5 + γf'(α)(9 + γf'(α)(7 + 2γf'(α))))c_3) + (1 + γf'(α))(β(1 + γf'(α))(3β^2 f'(α)(1 + γf'(α)) − λ) + f'(α)(3 + γf'(α)(3 + γf'(α)))c_4))e_k^4 + O(e_k^5).

Dividing relations (17) by (11) gives us

(19) s_k = f(y_k)/f(x_k) = (1 + γf'(α))(β + c_2)e_k + (−β^2(1 + γf'(α))^2 − (3 + γf'(α)(3 + γf'(α)))c_2(β + c_2) + 2c_3 + γf'(α)(3 + γf'(α))c_3)e_k^2 + (β^3(1 + γf'(α))^3 + β(10 + γf'(α)(14 + γf'(α)(7 + γf'(α))))c_2^2 + (2 + γf'(α))(4 + γf'(α)(3 + γf'(α)))c_2^3 − β(5 + γf'(α)(8 + γf'(α)(5 + γf'(α))))c_3 + c_2(β^2(1 + γf'(α))(5 + γf'(α)(4 + γf'(α))) − 2(5 + γf'(α)(7 + γf'(α)(4 + γf'(α))))c_3) + (1 + γf'(α))(3 + γf'(α)(3 + γf'(α)))c_4)e_k^3 + (−β^4(1 + γf'(α))^4 − β(30 + γf'(α)(50 + γf'(α)(34 + γf'(α)(11 + γf'(α)))))c_2^3 − (2 + γf'(α))(10 + γf'(α)(10 + γf'(α)(5 + γf'(α))))c_2^4 + β^2(1 + γf'(α))(7 + γf'(α)(10 + γf'(α)(6 + γf'(α))))c_3 − (8 + γf'(α)(15 + γf'(α)(13 + γf'(α)(6 + γf'(α)))))c_3^2 + c_2^2(−β^2(20 + γf'(α)(41 + γf'(α)(32 + γf'(α)(11 + γf'(α))))) + (37 + γf'(α)(3 + γf'(α))(20 + γf'(α)(8 + 3γf'(α))))c_3) − β(7 + γf'(α)(3 + γf'(α))(5 + γf'(α)(3 + γf'(α))))c_4 + c_2(−β^3(1 + γf'(α))^2(7 + γf'(α)(5 + γf'(α))) + β(2 + γf'(α))(3 + γf'(α))(5 + γf'(α)(5 + 2γf'(α)))c_3 − (14 + γf'(α)(5 + 2γf'(α))(5 + γf'(α)(2 + γf'(α))))c_3 − (14 + γf'(α)(5 + 2γf'(α))(5 + γf'(α)(2 + γf'(α))))c_4) + (1 + γf'(α))(2 + γf'(α))(2 + γf'(α)(2 + γf'(α)))c_5)e_k^4 + O(e_k^5).

Expanding H at 0 yields

(20) H(s_k) = H(0) + H'(0)s_k + H''(0)s_k^2/2,

and

H(s_k) = H(0) + H'(0)(1 + γf'(α))(β + c_2)e_k + (1/2)(H''(0)(1 + γf'(α))^2(β + c_2)^2 − 2H'(0)(β^2(1 + γf'(α))^2 + (3 + γf'(α)(3 + γf'(α)))c_2(β + c_2) − 2c_3 − γf'(α)(3 + γf'(α))c_3))e_k^2 + (−H''(0)(1 + γf'(α))(β + c_2)(β^2(1 + γf'(α))^2 + (3 + γf'(α)(3 + γf'(α)))c_2(β + c_2) − 2c_3 − γf'(α)(3 + γf'(α))c_3) + H'(0)(β^3(1 + γf'(α))^3 + β(10 + γf'(α)(14 + γf'(α)(7 + γf'(α))))c_2^2 + (2 + γf'(α))(4 + γf'(α)(3 + γf'(α)))c_2^3 − β(5 + γf'(α)(8 + γf'(α)(5 + γf'(α))))c_3 + c_2(β^2(1 + γf'(α))(5 + γf'(α)(4 + γf'(α))) − 2(5 + γf'(α)(7 + γf'(α)(4 + γf'(α))))c_3) + (1 + γf'(α))(3 + γf'(α)(3 + γf'(α)))c_4))e_k^3 + (1/2)(−(2H'(0) − 3H''(0))β^4(1 + γf'(α))^4 + 2β(H''(0)(3 + γf'(α)(3 + γf'(α)))(9 + γf'(α)(11 + 3γf'(α))) − H'(0)(30 + γf'(α)(50 + γf'(α)(34 + γf'(α)(11 + γf'(α))))))c_2^3 + (H''(0)(25 + 3γf'(α)(3 + γf'(α))(6 + γf'(α)(3 + γf'(α)))) − 2H'(0)(2 + γf'(α))(10 + γf'(α)(10 + γf'(α)(5 + γf'(α)))))c_2^4 + 2β^2(1 + γf'(α))(H'(0)(7 + γf'(α)(10 + γf'(α)(6 + γf'(α)))) − H''(0)(7 + γf'(α)(13 + γf'(α)(9 + 2γf'(α)))))c_3 + (H''(0)(1 + γf'(α))^2(2 + γf'(α))^2 − 2H'(0)(8 + γf'(α)(15 + γf'(α)(13 + γf'(α)(6 + γf'(α))))))c_3^2 + c_2^2(−2H'(0)(20 + γf'(α)(41 + γf'(α)(32 + γf'(α)(11 + γf'(α))))) + H''(0)(45 + γf'(α)(112 + γf'(α)(105 + γf'(α)(44 + 7γf'(α))))) + 2(H'(0)(37 + γf'(α)(3 + γf'(α))(20 + γf'(α)(8 + 3γf'(α)))) − H''(0)(1 + γf'(α))(16 + γf'(α)(23 + γf'(α)(13 + 3γf'(α)))))c_3) + 2β(H''(0)(1 + γf'(α))^2(3 + γf'(α)(3 + γf'(α))) − H'(0)(7 + γf'(α)(3 + γf'(α))(5 + γf'(α)(3 + γf'(α)))))c_4 − 2c_2(β^2(1 + γf'(α))^2(−3H''(0)(3 + γf'(α)(3 + γf'(α))) + H'(0)(7 + γf'(α)(5 + γf'(α)))) + β(H'(0)(2 + γf'(α))(3 + γf'(α))(5 + γf'(α)(5 + 2γf'(α))) − H''(0)(1 + γf'(α))(21 + γf'(α)(31 + 2γf'(α)(9 + 2γf'(α)))))c_3 + (H''(0)(1 + γf'(α))^2(3 + γf'(α)(3 + γf'(α))) − H'(0)(14 + γf'(α)(5 + 2γf'(α))(5 + γf'(α)(2 + γf'(α)))))c_4) + 2H'(0)(1 + γf'(α))(2 + γf'(α))(2 + γf'(α)(2 + γf'(α)))c_5)e_k^4 + O(e_k^5).

Finally, by setting H(0) = H'(0) = H''(0) = 1 as well as using equations (16), (18) and (20), the error equation of the memoryless method (5) is obtained as

(21) e_{k+1} = (2f'(α))^{−1}(1 + γf'(α))^2(β + c_2)(f'(α)β^2(1 + γf'(α)) + 2λ + f'(α)c_2(2β(3 + γf'(α)) + (5 + γf'(α))c_2) − 2f'(α)c_3)e_k^4 + O(e_k^5),

which finishes the proof of the theorem.

3. ACCELERATION OF THE TWO-POINT METHOD

It is easy to recognize from (21) that the order of convergence of (5) is four when γ ≠ −1/f'(α), β ≠ −f''(α)/(2f'(α)) and λ ≠ f'''(α)/6. By taking the values γ_k = −1/f'(α), β_k = −f''(α)/(2f'(α)) and λ_k = f'''(α)/6, it can be established that the order of the method (5) would be 6, 7, 7.22 and 7.53. For this type of acceleration of convergence the exact values of f'(α), f''(α) and f'''(α) are, in actual fact, not obtainable, so we replace the parameters γ, β and λ by approximations γ_k, β_k and λ_k. In the remainder of this section, we consider the following parametric methods:

(a) If we only update the parameter γ_k by Newton interpolation, a sixth-order method with memory is obtained:

(22) γ_k = −1/N_3'(x_k), k = 1, 2, 3, …,
     H(0) = 1, H'(0) = 1, |H''(0)| < ∞, s_k = f(y_k)/f(x_k), k = 0, 1, 2, …,
     w_k = x_k + γ_k f(x_k), y_k = x_k − f(x_k)/(f[x_k, w_k] + βf(w_k)),
     x_{k+1} = y_k − H(s_k) f(y_k)/(f[y_k, w_k] + βf(w_k) + λ(y_k − x_k)(y_k − w_k)).


(b) We attempt to prove that the method with memory has convergence order seven provided that we use the accelerators γ_k, β_k:

(23) γ_k = −1/N_3'(x_k), β_k = −N_4''(w_k)/(2N_4'(w_k)), k = 1, 2, 3, …,
     H(0) = 1, H'(0) = 1, |H''(0)| < ∞, s_k = f(y_k)/f(x_k), k = 0, 1, 2, …,
     w_k = x_k + γ_k f(x_k), y_k = x_k − f(x_k)/(f[x_k, w_k] + β_k f(w_k)),
     x_{k+1} = y_k − H(s_k) f(y_k)/(f[y_k, w_k] + β_k f(w_k) + λ(y_k − x_k)(y_k − w_k)).

(c) Bassiri et al. approximated the self-accelerating parameters as

(24) γ_k = −1/N_3'(x_k) ≈ −1/f'(α), β_k = −N_4''(w_k)/(2N_4'(w_k)) ≈ −f''(α)/(2f'(α)), λ_k = N_5'''(w_k)/6 ≈ f'(α)c_3 = f'''(α)/6,

and thus a three-parameter family with memory is given by (BBAM):

(25) γ_k = −1/N_3'(x_k), β_k = −N_4''(w_k)/(2N_4'(w_k)), λ_k = N_5'''(w_k)/6, k = 1, 2, 3, …,
     H(0) = 1, H'(0) = 1, |H''(0)| < ∞, s_k = f(y_k)/f(x_k), k = 0, 1, 2, …,
     w_k = x_k + γ_k f(x_k), y_k = x_k − f(x_k)/(f[x_k, w_k] + β_k f(w_k)),
     x_{k+1} = y_k − H(s_k) f(y_k)/(f[y_k, w_k] + β_k f(w_k) + λ_k(y_k − x_k)(y_k − w_k)).

(d) The self-accelerating parameters γ_k, β_k and λ_k are calculated by using the formulas:

(26) γ_k = −1/N_3'(x_k) ≈ −1/f'(α), β_k = −N_4''(w_k)/(2N_4'(w_k)) ≈ −f''(α)/(2f'(α)), λ_k = N_5'''(y_k)/6 ≈ f'(α)c_3 = f'''(α)/6,

where N_3(x_k), N_4(w_k) and N_5(y_k) are defined by:

(27) N_3(x_k) = N_3(t; x_k, x_{k−1}, w_{k−1}, y_{k−1}),
     N_4(w_k) = N_4(t; w_k, x_k, x_{k−1}, w_{k−1}, y_{k−1}),
     N_5(y_k) = N_5(t; y_k, w_k, x_k, x_{k−1}, w_{k−1}, y_{k−1}).

Now, we obtain the following three-parameter iterative with-memory (TM1) method:

(28) γ_k = −1/N_3'(x_k), β_k = −N_4''(w_k)/(2N_4'(w_k)), λ_k = N_5'''(y_k)/6, k = 1, 2, 3, …,
     H(0) = 1, H'(0) = 1, |H''(0)| < ∞, s_k = f(y_k)/f(x_k), k = 0, 1, 2, …,
     w_k = x_k + γ_k f(x_k), y_k = x_k − f(x_k)/(f[x_k, w_k] + β_k f(w_k)),
     x_{k+1} = y_k − H(s_k) f(y_k)/(f[y_k, w_k] + β_k f(w_k) + λ_k(y_k − x_k)(y_k − w_k)).
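To make the construction concrete, here is a sketch of one possible implementation of the with-memory family (28), using sympy to build the Newton interpolants (27) and extract the accelerators. The test function, starting point, initial parameter values and the weight function H_1 are illustrative assumptions, not prescriptions from the paper. (For the cubic test function the interpolants reproduce f exactly, which keeps the sketch well behaved.)

```python
import sympy as sp

t = sp.symbols('t')

def newton_poly(nodes, f):
    # Newton interpolating polynomial through the given nodes, as a sympy expression
    return sp.interpolate([(v, f(v)) for v in nodes], t)

def tm1(f, x0, gamma=0.01, beta=0.01, lam=0.01, steps=2):
    H = lambda s: 1 + s + s**2 / 2       # weight function H_1 from (9)
    xk, history = x0, None               # history = (x_{k-1}, w_{k-1}, y_{k-1})
    for _ in range(steps):
        if history:                      # update the accelerators as in (28)
            xp, wp, yp = history
            N3 = newton_poly([xk, xp, wp, yp], f)
            gamma = -1 / float(sp.diff(N3, t).subs(t, xk))
        wk = xk + gamma * f(xk)
        if history:
            N4 = newton_poly([wk, xk, xp, wp, yp], f)
            beta = -float(sp.diff(N4, t, 2).subs(t, wk)) / (2 * float(sp.diff(N4, t).subs(t, wk)))
        yk = xk - f(xk) / ((f(xk) - f(wk)) / (xk - wk) + beta * f(wk))
        if history:
            N5 = newton_poly([yk, wk, xk, xp, wp, yp], f)
            lam = float(sp.diff(N5, t, 3).subs(t, yk)) / 6
        sk = f(yk) / f(xk)
        denom = ((f(yk) - f(wk)) / (yk - wk)
                 + beta * f(wk) + lam * (yk - xk) * (yk - wk))
        xk, history = yk - H(sk) * f(yk) / denom, (xk, wk, yk)
    return xk

root = tm1(lambda x: x**3 - 2.0, 1.5)
```

The first pass runs without memory (arbitrary small γ, β, λ); from the second pass onward the three parameters are recomputed from the interpolants, which is exactly what accelerates the convergence.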

It should be noted that the convergence order varies as the iterations proceed. First, we need the following lemma:


Lemma 2. If γ_k = −1/N_3'(x_k), β_k = −N_4''(w_k)/(2N_4'(w_k)), and λ_k = N_5'''(y_k)/6, then

(29) (1 + γ_k f'(α)) ∼ e_{k−1} e_{k−1,w} e_{k−1,y},

(30) (c_2 + β_k) ∼ e_{k−1} e_{k−1,w} e_{k−1,y},

(31) (f'(α)β^2(1 + γf'(α)) + 2λ + f'(α)c_2(2β(3 + γf'(α)) + (5 + γf'(α))c_2) − 2f'(α)c_3) ∼ e_{k−1} e_{k−1,w} e_{k−1,y},

where e_k = x_k − α, e_{k,w} = w_k − α, e_{k,y} = y_k − α.

Proof. The proof is similar to Lemma 3.1 in [30].

Now we state the following convergence theorem:

Theorem 3. Let an initial approximation x_0 be sufficiently close to the zero α of f(x), and let the parameters γ_k, β_k and λ_k in the iterative schemes (22), (23), (25) and (28) be recursively calculated by the forms given in (24) and (26). Then the R-order of convergence of the methods (22), (23), (25) and (28), with the corresponding expressions for γ_k, β_k and λ_k, is at least 6, 7, 7.22 and 7.53, respectively.

Proof. Here, we obtain the convergence orders 6 and 7.53 for the methods (22) and (28). Bassiri and his colleagues [2] established the convergence order of the method given in Equation (25), and the proof for method (23) is similar to these three cases.

First, we assume that the C-orders of convergence of the sequences {x_k}, {w_k}, {y_k} are at least r, p and q, respectively. Hence:

(32) e_{k+1} ∼ e_k^r ∼ e_{k−1}^{r^2},
(33) e_{k,w} ∼ e_k^p ∼ e_{k−1}^{rp},
and
(34) e_{k,y} ∼ e_k^q ∼ e_{k−1}^{rq}.
By (32), (33), (34) and Lemma 2, we obtain
(35) 1 + γ_k f'(α) ∼ e_{k−1}^{p+q+1}.
On the other hand, we get
(36) e_{k,w} ∼ (1 + γ_k f'(α))e_k,
(37) e_{k,y} ∼ (1 + γ_k f'(α))e_k^2,
(38) e_{k+1} ∼ (1 + γ_k f'(α))^2 e_k^4.

Combining (32)–(38), (33)–(36), and (34)–(37), we conclude

(39) e_{k,w} ∼ e_{k−1}^{(1+p+q)+r},


(40) e_{k,y} ∼ e_{k−1}^{(1+p+q)+2r},
and
(41) e_{k+1} ∼ e_{k−1}^{2(1+p+q)+4r}.

Equating the powers of the error exponents of e_{k−1} in the pairs of relations (32)–(41), (33)–(39), and (34)–(40), we have

(42) rp − r − (p + q + 1) = 0,
     rq − 2r − (p + q + 1) = 0,
     r^2 − 4r − 2(p + q + 1) = 0.

This system has the solution p = 2, q = 3 and r = 6, which specifies the C-order of convergence of the derivative-free scheme with memory (22). Varying the parameters γ_k, β_k and λ_k in (28) using (26), we obtain the family of two-point with-memory methods of order 7.53, which is an improvement of the convergence rate by 88.25%. Similarly to the first part of Theorem 3:

(43) e_{k+1} ∼ e_k^r ∼ e_{k−1}^{r^2},
(44) e_{k,w} ∼ e_k^p ∼ e_{k−1}^{rp},
and
(45) e_{k,y} ∼ e_k^q ∼ e_{k−1}^{rq}.
By (43), (44), (45) and Lemma 2, we obtain
(46) 1 + γ_k f'(α) ∼ e_{k−1}^{p+q+1}.
On the other hand, we get
(47) e_{k,w} ∼ (1 + γ_k f'(α))e_k,
(48) e_{k,y} ∼ (1 + γ_k f'(α))(β_k + c_2)e_k^2,
(49) e_{k+1} ∼ (1 + γf'(α))^2(β + c_2)(f'(α)β^2(1 + γf'(α)) + 2λ + f'(α)c_2(2β(3 + γf'(α)) + (5 + γf'(α))c_2) − 2f'(α)c_3)e_k^4.

Combining (46)–(47), (46)–(48), and (46)–(49), we conclude

(50) e_{k,w} ∼ e_{k−1}^{(1+p+q)+r},

(51) e_{k,y} ∼ e_{k−1}^{2(1+p+q)+2r},

and

(52) e_{k+1} ∼ e_{k−1}^{4(1+p+q)+4r}.
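Equating the powers of e_{k−1} in the pairs (43)–(52), (44)–(50) and (45)–(51) gives a system analogous to (42). As an illustrative check (not part of the original proof), solving both exponent systems with sympy confirms the orders claimed in Theorem 3 for the schemes (22) and (28): the positive solutions give r = 6 and r = (7 + √65)/2 ≈ 7.53113, respectively.

```python
import sympy as sp

r, p, q = sp.symbols('r p q', positive=True)

# System (42), from the exponent matching for scheme (22):
sys22 = [r*p - r - (p + q + 1),
         r*q - 2*r - (p + q + 1),
         r**2 - 4*r - 2*(p + q + 1)]

# Analogous system from (50), (51), (52) for scheme (28):
sys28 = [r*p - (1 + p + q) - r,
         r*q - 2*(1 + p + q) - 2*r,
         r**2 - 4*(1 + p + q) - 4*r]

order22 = [s[r] for s in sp.solve(sys22, [r, p, q], dict=True)]
order28 = [s[r] for s in sp.solve(sys28, [r, p, q], dict=True)]
print(order22)  # [6]
print(order28)  # positive root (7 + sqrt(65))/2 ~ 7.53113
```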
