
J. Numer. Anal. Approx. Theory, vol. 45 (2016) no. 1, pp. 14–26 ictp.acad.ro/jnaat

NEWTON TYPE ITERATIVE METHODS WITH HIGHER ORDER OF CONVERGENCE

PANKAJ JAIN, CHET RAJ BHATTA† and JIVANDHAR JNAWALI∗∗

Abstract. Newton type iterative methods with higher order of convergence are obtained. The order of convergence is further increased by combining these methods with the standard secant method. The methods are compared with similar recent methods.

MSC 2010. 65H05.

Keywords. Newton method, secant method, iterative method, nonlinear equation, order of convergence.

1. INTRODUCTION

Quite often we come across nonlinear equations which need to be solved. If the equation is not a polynomial equation, it is not always easy to deal with directly, and to this end one or another numerical iterative method is employed. One such classical method is the Newton method

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},

which is quadratically convergent. Over the years, a great many methods have appeared, each claiming to improve on the others in some respect. We mention here the method given by Weerakoon and Fernando [8],

which is based on Newton's theorem

f(x) = f(x_n) + \int_{x_n}^{x} f'(\lambda)\, d\lambda,

in which the integral is approximated by the trapezoidal rule, i.e.,

\int_{x_n}^{x} f'(\lambda)\, d\lambda \approx \frac{x - x_n}{2} \left( f'(x) + f'(x_n) \right).
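Setting f(x) = 0 in this approximation and solving for x shows where the corrector comes from; the following display is our own spelling-out of that step, not part of the original text:

0 \approx f(x_n) + \frac{x - x_n}{2} \left( f'(x) + f'(x_n) \right) \quad \Longrightarrow \quad x \approx x_n - \frac{2 f(x_n)}{f'(x) + f'(x_n)},

and replacing the unknown x on the right-hand side by the Newton predictor z_{n+1} = x_n - f(x_n)/f'(x_n) yields method (1) below.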

Department of Mathematics, South Asian University, Akbar Bhawan, Chanakya Puri, New Delhi-110021, India, e-mail: [email protected] and [email protected].

†Central Department of Mathematics, Tribhuvan University, Kirtipur, Kathmandu, Nepal, e-mail: [email protected].

∗∗Central Department of Mathematics, Tribhuvan University, Kirtipur, Kathmandu, Nepal, e-mail: [email protected].


As a result, Weerakoon and Fernando obtained the following iterative method for solving the nonlinear equation f(x) = 0:

(1) x_{n+1} = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_{n+1})}, where z_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
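As an illustration only (the names f, df and weerakoon_fernando are ours, and the stopping rule is arbitrary), method (1) can be coded in a few lines of Python:

```python
def weerakoon_fernando(f, df, x0, tol=1e-14, max_iter=50):
    """Illustrative implementation of the third-order method (1)."""
    x = x0
    for _ in range(max_iter):
        z = x - f(x) / df(x)                      # Newton predictor z_{n+1}
        x_new = x - 2 * f(x) / (df(x) + df(z))    # trapezoidal corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```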

The method (1) so obtained is of third order. In the present paper, the aim is to modify method (1). In (1), the derivative f' is evaluated at the previously computed iterate; in our modification, f' is evaluated at another, conveniently chosen point. It is proved that the corresponding method has order of convergence 5.1925. We follow the technique of McDougall and Wotherspoon [7], who modified Newton's method in a similar way and obtained a method of order 1 + \sqrt{2}.

Further, in [3], it was proved that if any method for solving nonlinear equations is used in conjunction with the standard secant method, then the order of the resulting method is increased by 1. We shall show in this paper (see Theorem 5) that this order can be increased by more than 1. In fact, we prove that if our own method (which is of order 5.1925) is combined with the secant method, then the new method is of order 7.275.

2. THE METHOD AND THE CONVERGENCE

We propose the following method:

If x_0 is the initial approximation, then

(2)
x_0^* = x_0,
x_1 = x_0 - \frac{2 f(x_0)}{f'(x_0) + f'(z_1)}, where z_1 = x_0 - \frac{f(x_0)}{f'(x_0)},
x_1^* = x_1 - \frac{2 f(x_1)}{f'(x_1) + f'(z_1^*)}, with z_1^* = x_1 - \frac{f(x_1)}{f'\left[ \frac{1}{2}(x_0 + x_0^*) \right]} = x_1 - \frac{f(x_1)}{f'(x_0)}.

Subsequently, for n ≥ 1, the iterations can be obtained as follows:

(3)
x_n^* = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_n^*)}, where z_n^* = x_n - \frac{f(x_n)}{f'\left[ \frac{1}{2}(x_{n-1} + x_{n-1}^*) \right]},
x_{n+1} = x_n^* - \frac{2 f(x_n^*)}{f'(x_n^*) + f'(z_{n+1})}, with z_{n+1} = x_n - \frac{f(x_n)}{f'\left[ \frac{1}{2}(x_n + x_n^*) \right]}.
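To fix the bookkeeping between the main iterates x_n and the auxiliary iterates x_n^*, here is a minimal Python sketch of (2)–(3); the names, the tolerance and the iteration cap are our own choices and are not prescribed by the method.

```python
def modified_wf(f, df, x0, tol=1e-14, max_iter=20):
    """Illustrative sketch of method (2)-(3): Weerakoon-Fernando-type steps
    in which the derivative inside the predictor z is taken at an averaged point."""
    xs_prev = x0                                  # x_0^* = x_0
    z = x0 - f(x0) / df(x0)                       # z_1
    x = x0 - 2 * f(x0) / (df(x0) + df(z))         # x_1
    x_prev = x0
    for _ in range(max_iter):
        z = x - f(x) / df(0.5 * (x_prev + xs_prev))   # z_n^*
        xs = x - 2 * f(x) / (df(x) + df(z))           # x_n^*
        z = x - f(x) / df(0.5 * (x + xs))             # z_{n+1}
        x_new = xs - 2 * f(xs) / (df(xs) + df(z))     # x_{n+1}
        if abs(x_new - x) < tol:
            return x_new
        x_prev, xs_prev, x = x, xs, x_new
    return x
```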

Below, we prove the convergence result for the method (2)–(3).


Theorem 1. Let α be a simple zero of a function f which has a sufficient number of smooth derivatives in a neighborhood of α. Then the method (2)–(3) is convergent and has order of convergence 5.1925.

Proof. Let e_n and e_n^* denote, respectively, the errors in the terms x_n and x_n^*. Also, we denote c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}, j = 2, 3, 4, \ldots, which are constants. The error equation for the method (1), as obtained by Weerakoon and Fernando [8], is given by

e_{n+1} = a e_n^3,

where a = c_2^2 + \frac{1}{2} c_3 and higher powers of e_n have been neglected. In particular, the error e_1 in x_1 in the equations (2) is given by

(4) e_1 = a e_0^3.

We now proceed to calculate the error e_1^* in x_1^*. By using Taylor series expansion and binomial expansion, we get

\frac{f(x_1)}{f'(x_0)} = \frac{f(\alpha + e_1)}{f'(\alpha + e_0)} = \left( e_1 + c_2 e_1^2 + c_3 e_1^3 + O(e_1^4) \right) \left( 1 + 2 c_2 e_0 + 3 c_3 e_0^2 + O(e_0^3) \right)^{-1} = e_1 - 2 c_2 e_0 e_1 + O(e_0^5),

so that

z_1^* = x_1 - \frac{f(x_1)}{f'(x_0)} = \alpha + 2 c_2 e_0 e_1 + O(e_0^5).

Consequently, by Taylor series expansion, it can be calculated that

f'(z_1^*) = f'(\alpha) \left( 1 + 4 c_2^2 e_0 e_1 + O(e_0^5) \right).

Also

f'(x_1) = f'(\alpha) \left( 1 + 2 c_2 e_1 + 3 c_3 e_1^2 + O(e_1^3) \right),

so that

(5) f'(x_1) + f'(z_1^*) = 2 f'(\alpha) \left( 1 + c_2 e_1 + 2 c_2^2 e_0 e_1 + O(e_0^5) \right).

Now, using (4) and (5), the error e_1^* in x_1^* in the equations (2) can be calculated as

e_1^* = e_1 - \left( e_1 + c_2 e_1^2 + O(e_1^3) \right) \left( 1 + c_2 e_1 + 2 c_2^2 e_0 e_1 + O(e_0^5) \right)^{-1} = 2 c_2^2 e_0 e_1^2 = a^2 b e_0^7,

where b = 2 c_2^2. Using e_1^*, we now compute the error e_2 in the term

x_2 = x_1^* - \frac{2 f(x_1^*)}{f'(x_1^*) + f'(z_2)}, where z_2 = x_1 - \frac{f(x_1)}{f'\left( \frac{x_1 + x_1^*}{2} \right)}.


Now

f'\left( \frac{x_1 + x_1^*}{2} \right) = f'\left( \alpha + \frac{e_1 + e_1^*}{2} \right) = f'(\alpha) \left( 1 + c_2 e_1 + c_2 e_1^* + \tfrac{3}{4} c_3 e_1^2 + O(e_0^9) \right),

so that

\frac{f(x_1)}{f'\left( \frac{x_1 + x_1^*}{2} \right)} = \left( e_1 + c_2 e_1^2 + O(e_1^3) \right) \left( 1 + c_2 e_1 + c_2 e_1^* + \tfrac{3}{4} c_3 e_1^2 + O(e_0^9) \right)^{-1} = e_1 + \tfrac{1}{4} c_3 e_1^3 - c_2 e_1 e_1^*,

and therefore

z_2 = \alpha - \tfrac{1}{4} c_3 e_1^3 + c_2 e_1 e_1^*,

where the higher power terms are neglected. Thus

f'(z_2) = f'(\alpha) \left( 1 - \tfrac{1}{2} c_2 c_3 e_1^3 + 2 c_2^2 e_1 e_1^* \right)

and

f'(x_1^*) = f'(\alpha) \left( 1 + 2 c_2 e_1^* + 3 c_3 e_1^{*2} \right).

Using the above considerations, the error e_2 in x_2 is given by

e_2 = e_1^* - \left( e_1^* + c_2 e_1^{*2} + c_3 e_1^{*3} \right) \left( 1 + c_2 e_1^* - \tfrac{1}{4} c_2 c_3 e_1^3 \right)^{-1} = -\tfrac{1}{4} c_2 c_3 e_1^3 e_1^* = c e_1^3 e_1^*,

where c = -\tfrac{1}{4} c_2 c_3. In fact, it can be worked out that for n ≥ 1, the following relation holds:

(6) e_{n+1} = c e_n^3 e_n^*.

In order to compute e_{n+1} explicitly, we need to compute e_n^*. We already know e_1^*; we now compute e_2^*. We have

x_2^* = x_2 - \frac{2 f(x_2)}{f'(x_2) + f'(z_2^*)}, where z_2^* = x_2 - \frac{f(x_2)}{f'\left( \frac{x_1 + x_1^*}{2} \right)}.

As above, it can be calculated that the error e_2^* is given by

e_2^* = d e_1 e_2^2,

where d = c_2^2 and, again, it can be checked that, in general, for n ≥ 2 the following relation holds:

(7) e_n^* = d e_{n-1} e_n^2.

In view of (6) and (7), the errors at each stage in x_n and x_n^* can be calculated; they are tabulated below:


n | e_n | e_n^*
0 | e_0 | e_0
1 | a e_0^3 | a^2 b e_0^7
2 | a^5 b c e_0^{16} | a^{11} b^2 c^2 d e_0^{35}
3 | a^{26} b^5 c^6 d e_0^{83} | a^{57} b^{11} c^{13} d^3 e_0^{182}
4 | a^{135} b^{26} c^{32} d^6 e_0^{431} | a^{296} b^{57} c^{70} d^{14} e_0^{945}
5 | a^{701} b^{135} c^{167} d^{32} e_0^{2238} |
... | ... | ...

Table 1. Successive errors.

It is observed that the powers of e_0 in the errors at each iterate form the sequence

(8) 3, 16, 83, 431, 2238, ...

and the sequence of their successive ratios is

16/3, 83/16, 431/83, 2238/431, ...

or

5.3334, 5.1875, 5.1927, 5.1925, ....

This sequence appears to converge to approximately 5.1925. Indeed, if the terms of the sequence (8) are denoted by {α_i}, then it can be seen that

(9) α_i = 5 α_{i-1} + α_{i-2}, i = 2, 3, 4, ....

If we set the limit

\lim_{i \to \infty} \frac{\alpha_i}{\alpha_{i-1}} = \lim_{i \to \infty} \frac{\alpha_{i-1}}{\alpha_{i-2}} = R,

then dividing (9) by α_{i-1}, we obtain

R^2 - 5R - 1 = 0,

whose positive root is R = \frac{5 + \sqrt{29}}{2} ≈ 5.1925. Hence the order of convergence of the method is at least 5.1925.
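The exponent sequence (8) can also be regenerated mechanically from the recursions (6) and (7); the following short Python check is our own illustration and is not part of the original argument.

```python
from math import sqrt

# powers of e_0 in e_n and e_n^*, driven by (6) and (7):
# e_{n+1} = c e_n^3 e_n^*  ->  p[n+1] = 3*p[n] + q[n]
# e_n^*   = d e_{n-1} e_n^2 ->  q[n]   = p[n-1] + 2*p[n]
p, q = [1, 3], [1, 7]              # n = 0, 1 (e_1 = a e_0^3, e_1^* = a^2 b e_0^7)
for n in range(1, 5):
    p.append(3 * p[n] + q[n])
    q.append(p[n] + 2 * p[n + 1])

print(p[1:])                        # [3, 16, 83, 431, 2238], i.e. sequence (8)
print(p[5] / p[4])                  # ~5.1926
print((5 + sqrt(29)) / 2)           # 5.19258...
```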

Next, we give two variants of the method (2)–(3). Note that in (2)–(3) the arithmetic average of the points x_n, x_n^*, n = 0, 1, 2, \ldots, has been used. We propose methods in which the arithmetic average is replaced by the harmonic and by the geometric average. With the harmonic average, we propose the following


method: If x_0 is the initial approximation, then

(10)
x_0^* = x_0,
x_1 = x_0 - \frac{2 f(x_0)}{f'(x_0) + f'(z_1)}, where z_1 = x_0 - \frac{f(x_0)}{f'\left( \frac{2 x_0 x_0^*}{x_0 + x_0^*} \right)} = x_0 - \frac{f(x_0)}{f'(x_0)},
x_1^* = x_1 - \frac{2 f(x_1)}{f'(x_1) + f'(z_1^*)}, with z_1^* = x_1 - \frac{f(x_1)}{f'\left( \frac{2 x_0 x_0^*}{x_0 + x_0^*} \right)} = x_1 - \frac{f(x_1)}{f'(x_0)}.

Subsequently, for n ≥ 1, the iterations can be obtained as follows:

(11)
x_n^* = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_n^*)}, where z_n^* = x_n - \frac{f(x_n)}{f'\left( \frac{2 x_{n-1} x_{n-1}^*}{x_{n-1} + x_{n-1}^*} \right)},
x_{n+1} = x_n^* - \frac{2 f(x_n^*)}{f'(x_n^*) + f'(z_{n+1})}, with z_{n+1} = x_n - \frac{f(x_n)}{f'\left( \frac{2 x_n x_n^*}{x_n + x_n^*} \right)}.

For the geometric average of the points x_n, x_n^*, n = 0, 1, 2, \ldots, the following method is proposed:

(12)
x_0^* = x_0,
x_1 = x_0 - \frac{2 f(x_0)}{f'(x_0) + f'(z_1)}, where z_1 = x_0 - \frac{f(x_0)}{f'(\sqrt{x_0 x_0^*})} = x_0 - \frac{f(x_0)}{f'(x_0)},
x_1^* = x_1 - \frac{2 f(x_1)}{f'(x_1) + f'(z_1^*)}, with z_1^* = x_1 - \frac{f(x_1)}{f'(\sqrt{x_0 x_0^*})} = x_1 - \frac{f(x_1)}{f'(x_0)}.

Subsequently, for n ≥ 1, the iterations can be obtained as follows:

(13)
x_n^* = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_n^*)}, where z_n^* = x_n - \frac{f(x_n)}{f'(\sqrt{x_{n-1} x_{n-1}^*})},
x_{n+1} = x_n^* - \frac{2 f(x_n^*)}{f'(x_n^*) + f'(z_{n+1})}, with z_{n+1} = x_n - \frac{f(x_n)}{f'(\sqrt{x_n x_n^*})}.

The convergence of the methods (10)–(11) and (12)–(13) can be proved along the same lines as Theorem 1; a brief code illustration of how the three averaging choices differ is given below, after which we only state the results:
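In a code sketch such as the one given after (3), only the averaging step changes; for instance (our own illustration, reusing the variable names from that sketch):

```python
# Averages that can replace 0.5 * (x_prev + xs_prev) and 0.5 * (x + xs) in the
# sketch given after (3), producing (10)-(11) and (12)-(13), respectively.
def harmonic(u, v):
    return 2.0 * u * v / (u + v)

def geometric(u, v):
    return (u * v) ** 0.5   # meaningful when u * v > 0, e.g. near a positive root
```

Substituting harmonic or geometric for the arithmetic mean in that sketch yields the variants (10)–(11) and (12)–(13); the geometric variant presupposes that the two points have the same sign.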


Theorem 2. Let α be a simple zero of a function f which has a sufficient number of smooth derivatives in a neighborhood of α. Then, for solving the nonlinear equation f(x) = 0, the method (10)–(11) is convergent with order of convergence 5.1925.

Theorem 3. Let α be a simple zero of a function f which has a sufficient number of smooth derivatives in a neighborhood of α. Then, for solving the nonlinear equation f(x) = 0, the method (12)–(13) is convergent with order of convergence 5.1925.

3. METHODS WITH HIGHER ORDER CONVERGENCE

In this section, we obtain a new iterative method by combining the iterations of method (2)–(3) with the secant method and prove that the order of convergence exceeds 5.1925. Precisely, we propose the following method: If x_0 is the initial approximation, then

(14)
x_0^* = x_0,
x_0^{**} = x_0 - \frac{2 f(x_0)}{f'(x_0) + f'(z_1)}, where z_1 = x_0 - \frac{f(x_0)}{f'(x_0)},
x_1 = x_0^{**} - \frac{x_0^{**} - x_0}{f(x_0^{**}) - f(x_0)}\, f(x_0^{**}).

Subsequently, for n ≥ 1, the iterations can be obtained as follows:

(15)
x_n^* = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_n^*)}, where z_n^* = x_n - \frac{f(x_n)}{f'\left( \frac{x_{n-1} + x_{n-1}^*}{2} \right)},
x_n^{**} = x_n^* - \frac{2 f(x_n^*)}{f'(x_n^*) + f'(z_{n+1})}, where z_{n+1} = x_n - \frac{f(x_n)}{f'\left( \frac{x_n + x_n^*}{2} \right)},
x_{n+1} = x_n^{**} - \frac{x_n^{**} - x_n^*}{f(x_n^{**}) - f(x_n^*)}\, f(x_n^{**}).
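A hedged Python sketch of (14)–(15), again with names and stopping rule of our own choosing, shows how the secant correction is appended to each cycle of the previous method (no safeguard against equal function values is included; this is only a sketch):

```python
def modified_wf_secant(f, df, x0, tol=1e-14, max_iter=20):
    """Illustrative sketch of method (14)-(15): each cycle of (2)-(3) is
    followed by a secant correction between the two newest points."""
    xs_prev = x0                                        # x_0^* = x_0
    z = x0 - f(x0) / df(x0)                             # z_1
    xss = x0 - 2 * f(x0) / (df(x0) + df(z))             # x_0^{**}
    x = xss - (xss - x0) / (f(xss) - f(x0)) * f(xss)    # x_1, secant step
    x_prev = x0
    for _ in range(max_iter):
        z = x - f(x) / df(0.5 * (x_prev + xs_prev))
        xs = x - 2 * f(x) / (df(x) + df(z))             # x_n^*
        z = x - f(x) / df(0.5 * (x + xs))
        xss = xs - 2 * f(xs) / (df(xs) + df(z))         # x_n^{**}
        x_new = xss - (xss - xs) / (f(xss) - f(xs)) * f(xss)  # secant correction
        if abs(x_new - x) < tol:
            return x_new
        x_prev, xs_prev, x = x, xs, x_new
    return x
```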

Remark 4. In [3], it was proved that if the iterations of any method of order p for solving nonlinear equations are used alternately with the secant method, then the new method will be of order p + 1. Thus, in view of that result, the method (14)–(15) is certainly of order at least 6.1925. However, we prove below that the order is higher.

Theorem 5. Let f be a function having a sufficient number of smooth derivatives in a neighborhood of α, where α is a simple root of the equation f(x) = 0. Then the method (14)–(15) for approximating the root α is convergent with order of convergence 7.275.


Proof. We argue along the lines of the proof of Theorem 1, together with the error equation of the standard secant method. In particular, the errors e_0^*, e_0^{**} and e_1, respectively, in x_0^*, x_0^{**} and x_1 in equations (14) are given by

e_0^* = e_0,
e_0^{**} = a e_0^3, where a = c_2^2 + \tfrac{1}{2} c_3,
e_1 = \lambda a e_0^4, where \lambda = c_2.

Also, the error e_1^* in x_1^* in equations (15) is given by

e_1^* = 2 c_2^2 e_0 e_1^2 = \lambda^2 a^2 b e_0^9, where b = 2 c_2^2,

and the error e_1^{**} in x_1^{**} in equations (15) is given by

e_1^{**} = -\tfrac{1}{4} c_2 c_3 e_1^3 e_1^* = c e_1^3 e_1^*,

where c = -\tfrac{1}{4} c_2 c_3. In fact, it can be worked out that for n ≥ 1, the following relation holds:

(16) e_n^{**} = c e_n^3 e_n^*.

In order to compute e_n^{**} explicitly, we need to compute e_n and e_n^*. We have already computed e_1 and e_1^*. From the proof of Theorem 1,

e_2^* = d e_1 e_2^2,

where d = c_2^2 and, again, it can be checked that the following relation holds:

(17) e_n^* = d e_{n-1} e_n^2.

Also, from (15), it can be shown that

e_2 = \lambda e_1^* e_1^{**}.

Thus, for n ≥ 1, it can be shown that the error e_{n+1} in x_{n+1} in the method (14)–(15) satisfies the following recursion formula

(18) e_{n+1} = \lambda e_n^* e_n^{**}.

Using the above information, the errors at each stage in x_n, x_n^* and x_n^{**} are obtained and tabulated in Table 2.

We analyze Table 2 as was done with Table 1 in the proof of Theorem 1. Note that the powers of e_0 in the error at each iterate form the sequence

(19) 4, 30, 218, 1586, 11538, ...,

and the sequence of their successive ratios is

30/4, 218/30, 1586/218, 11538/1586, ...

or

7.5, 7.2667, 7.2752, 7.2749, ....


n | e_n | e_n^* | e_n^{**}
0 | e_0 | e_0 | a e_0^3
1 | λ a e_0^4 | λ^2 a^2 b e_0^9 | λ^5 a^5 b c e_0^{21}
2 | λ^8 a^7 b^2 c e_0^{30} | λ^{17} a^{15} b^5 c^2 e_0^{64} | λ^{42} a^{36} b^{11} c^6 e_0^{154}
3 | λ^{60} a^{51} b^{13} c^8 e_0^{218} | λ^{128} a^{109} b^{29} c^{17} e_0^{466} | λ^{308} a^{260} b^{68} c^{42} e_0^{1120}
4 | λ^{437} a^{369} b^{97} c^{59} e_0^{1586} | λ^{934} a^{789} b^{208} c^{126} e_0^{3390} | λ^{2245} a^{1896} b^{499} c^{304} e_0^{8148}
5 | λ^{3180} a^{2685} b^{707} c^{430} e_0^{11538} | |
... | ... | ... | ...

Table 2. Successive errors.

If the terms of the sequence (19) are denoted by {N_i}, then it can be seen that

N_i = 7 N_{i-1} + 2 N_{i-2}, i = 2, 3, 4, ....

Thus, arguing as in Theorem 1, the limit R of the successive ratios satisfies R^2 - 7R - 2 = 0, whose positive root is R = \frac{7 + \sqrt{57}}{2} ≈ 7.275. Hence the rate of convergence of the method (14)–(15) is at least 7.275.

It is natural to consider variants of the method (14)–(15) in which, in the expressions for z_n^* and z_{n+1}, the arithmetic mean is replaced by the harmonic or the geometric mean, as was done in methods (10)–(11) and (12)–(13), respectively. Precisely, with the harmonic mean, we propose the following method:

(20)
x_0^* = x_0,
x_0^{**} = x_0 - \frac{2 f(x_0)}{f'(x_0) + f'(z_1)}, where z_1 = x_0 - \frac{f(x_0)}{f'\left( \frac{2 x_0 x_0^*}{x_0 + x_0^*} \right)} = x_0 - \frac{f(x_0)}{f'(x_0)},
x_1 = x_0^{**} - \frac{x_0^{**} - x_0}{f(x_0^{**}) - f(x_0)}\, f(x_0^{**}),

followed by (for n ≥ 1)

(21)
x_n^* = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_n^*)}, where z_n^* = x_n - \frac{f(x_n)}{f'\left( \frac{2 x_{n-1} x_{n-1}^*}{x_{n-1} + x_{n-1}^*} \right)},
x_n^{**} = x_n^* - \frac{2 f(x_n^*)}{f'(x_n^*) + f'(z_{n+1})}, where z_{n+1} = x_n - \frac{f(x_n)}{f'\left( \frac{2 x_n x_n^*}{x_n + x_n^*} \right)},
x_{n+1} = x_n^{**} - \frac{x_n^{**} - x_n^*}{f(x_n^{**}) - f(x_n^*)}\, f(x_n^{**}),


and with the geometric mean, we propose the following:

(22)
x_0^* = x_0,
x_0^{**} = x_0 - \frac{2 f(x_0)}{f'(x_0) + f'(z_1)}, where z_1 = x_0 - \frac{f(x_0)}{f'(\sqrt{x_0 x_0^*})} = x_0 - \frac{f(x_0)}{f'(x_0)},
x_1 = x_0^{**} - \frac{x_0^{**} - x_0}{f(x_0^{**}) - f(x_0)}\, f(x_0^{**}),

followed by (for n ≥ 1)

(23)
x_n^* = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_n^*)}, where z_n^* = x_n - \frac{f(x_n)}{f'(\sqrt{x_{n-1} x_{n-1}^*})},
x_n^{**} = x_n^* - \frac{2 f(x_n^*)}{f'(x_n^*) + f'(z_{n+1})}, where z_{n+1} = x_n - \frac{f(x_n)}{f'(\sqrt{x_n x_n^*})},
x_{n+1} = x_n^{**} - \frac{x_n^{**} - x_n^*}{f(x_n^{**}) - f(x_n^*)}\, f(x_n^{**}).

The convergence of the methods (20)–(21) and (22)–(23) can be proved by the same arguments as those used in the proof of Theorem 5. We skip the details for conciseness.

4. ALGORITHMS AND NUMERICAL EXAMPLES

We give below an algorithm to implement the method (2)–(3):

Algorithm 6.
Step 1: For a given tolerance ε > 0 and a maximum number of iterations N, choose the initial approximation x_0 and set n = 0.

Step 2: Compute the following sequence of expressions:
x_0^* = x_0,
x_1 = x_0 - \frac{2 f(x_0)}{f'(x_0) + f'(z_1)}, where z_1 = x_0 - \frac{f(x_0)}{f'(x_0)},
x_1^* = x_1 - \frac{2 f(x_1)}{f'(x_1) + f'(z_1^*)}, where z_1^* = x_1 - \frac{f(x_1)}{f'\left( \frac{x_0 + x_0^*}{2} \right)} = x_1 - \frac{f(x_1)}{f'(x_0)}.

Step 3: For n = 1, 2, 3, \ldots, calculate x_2, x_3, x_4, \ldots by the following sequence of expressions:
x_n^* = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(z_n^*)}, where z_n^* = x_n - \frac{f(x_n)}{f'\left( \frac{x_{n-1} + x_{n-1}^*}{2} \right)},
x_{n+1} = x_n^* - \frac{2 f(x_n^*)}{f'(x_n^*) + f'(z_{n+1})}, where z_{n+1} = x_n - \frac{f(x_n)}{f'\left( \frac{x_n + x_n^*}{2} \right)}.

Step 4: Stop if either |x_{n+1} - x_n| < ε or n > N.

Step 5: Set n = n + 1 and repeat Step 3.

Example 7. We apply method (2)–(3) to the nonlinear equation

(24) \cos x - x e^x + x^2 = 0.

This equation has a simple root in the interval (0, 1). Taking the initial approximation x_0 = 1, Table 3 shows the iterations of the third-order method (1) of Weerakoon and Fernando, the McDougall–Wotherspoon method and our method (2)–(3).

n  | W-F method (1)      | M-W method          | method (2)–(3)
1  | 1.1754860092539474  | 0.89033621746836966 | 0.64406452481689269
2  | 0.7117526001461193  | 0.66469560530044569 | 0.63915407608296659
3  | 0.63945030188514695 | 0.63928150457301036 | 0.63915411559451774
4  | 0.63915408656045591 | 0.63915408990276223 | 0.6391540955014231
5  | 0.63915410631623149 | 0.63915410965853769 | 0.63915407540832936
6  | 0.63915412607200606 | 0.6391540698096656  | 0.6391541149198805
7  | 0.63915408622313585 | 0.63915408956544117 | 0.63915409482678587
8  | 0.63915410597891142 | 0.63915410932121663 | 0.63915407473369212
9  | 0.639154125734686   | 0.63915406947234454 | 0.63915411424524327
10 | 0.63915408588581579 | 0.63915408922812    | 0.63915409415214863
11 | 0.63915410564159136 | 0.63915410898389557 | 0.63915407405905489
12 | 0.63915412539736594 | 0.63915406913502348 | 0.63915411357060603
13 | 0.63915408554849573 | 0.63915408889079894 | 0.6391540934775114
14 | 0.63915410530427119 | 0.63915410864657451 | 0.63915407338441765
15 | 0.63915412506004576 | 0.63915406879770231 | 0.6391541128959688
16 | 0.63915408521117556 | 0.63915408855347788 | 0.63915409280287416
17 | 0.63915410496695113 | 0.63915410830925345 | 0.63915407270978042
18 | 0.6391541247227257  | 0.63915406846038125 | 0.63915411222133156
19 | 0.6391540848738555  | 0.63915408821615682 | 0.63915409212823693
20 | 0.63915410462963107 | 0.63915410797193239 | 0.63915407203514318

Table 3. Numerical results for different methods.
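An experiment of this kind can be reproduced, up to rounding differences, with the sketch given after (2)–(3); the call below assumes that the illustrative function modified_wf from that sketch is available.

```python
import math

f  = lambda x: math.cos(x) - x * math.exp(x) + x**2          # equation (24)
df = lambda x: -math.sin(x) - math.exp(x) * (1 + x) + 2 * x  # its derivative

# modified_wf is the illustrative sketch given after method (2)-(3)
root = modified_wf(f, df, x0=1.0)
print(root)   # approximately 0.639154...
```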
