J. Numer. Anal. Approx. Theory, vol. 46 (2017) no. 1, pp. 25–37 ictp.acad.ro/jnaat

ON NEWTON’S METHOD FOR SUBANALYTIC EQUATIONS

IOANNIS K. ARGYROS and SANTHOSH GEORGE

Abstract. We present local and semilocal convergence results for Newton's method in order to approximate solutions of subanalytic equations. The local convergence results are given under weaker conditions than in earlier studies such as [9], [10], [14], [15], [24], [25], [26], resulting in a larger convergence ball and a smaller ratio of convergence. In the semilocal convergence case, contravariant conditions not used before are employed to show the convergence of Newton's method. Numerical examples illustrating the advantages of our approach are also presented in this study.

MSC 2010. 65H10, 65G99, 65K10, 47H17, 49M15.

Keywords. Newton’s methods, convergence ball, local-semilocal convergence, subanalytic functions.

1. INTRODUCTION

In this study we are concerned with the problem of approximating a solution x∗ of the equation

(1) F(x) = 0,

where F is a continuous mapping from a subset D of R^n into R^n.

Many problems in computational sciences and other disciplines can be brought into a form like (1) using mathematical modeling [3], [7], [8], [9], [14], [16], [17], [22], [24]–[28]. In general, the solutions of equation (1) cannot be found in closed form. Therefore, iterative methods are used for obtaining approximate solutions of (1). In numerical functional analysis, the search for a solution x∗ of equation (1) is essentially connected to variants of Newton's method. Newton's method is defined by

(2) x_{k+1} = x_k − F′(x_k)^{−1}F(x_k), for each k = 0, 1, 2, …,

where x_0 is an initial point and F is a continuously Fréchet-differentiable function on D, i.e., F is a smooth function.

Cameron University, Department of Mathematical Sciences, Lawton, OK 73505, USA, e-mail: [email protected].

National Institute of Technology Karnataka, Department of Mathematical and Computational Sciences, India-575 025, e-mail: [email protected].
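As a concrete illustration of iteration (2), here is a minimal sketch of our own (not from the paper); the equation F(x) = x^2 − 2, the starting point and the stopping rule are illustrative choices:

```python
# Minimal sketch (ours, not the paper's) of the classical Newton iteration (2):
# x_{k+1} = x_k - F'(x_k)^{-1} F(x_k), written for a scalar smooth equation.

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Iterate (2) from x0 until |F(x)| <= tol or max_iter steps."""
    x = x0
    for _ in range(max_iter):
        x = x - F(x) / dF(x)       # the Newton step (2)
        if abs(F(x)) <= tol:
            break
    return x

# Illustrative choice: F(x) = x^2 - 2, whose positive solution is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

From x_0 = 1 the iterates 1, 1.5, 1.41667, … converge quadratically to √2, as the smooth theory predicts.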


The study of the convergence of iterative procedures is usually based on two types of analysis: semilocal and local. The semilocal convergence analysis uses information around an initial point to give conditions ensuring the convergence of the iterative procedure, while the local analysis uses information around a solution to find estimates of the radii of convergence balls. There exist many studies which deal with the local and semilocal convergence analysis of Newton's method (2) under various Lipschitz-type conditions on F′. We refer the reader to [1–28] and the references therein for results of this type.

However, in many interesting applications F is not a smooth function [3], [7], [8], [23], [24], [26], [28]. In particular, we are interested in the case when F is a semismooth function. Then, we define Newton’s method

(3) x_{k+1} = x_k − Λ(x_k)^{−1}F(x_k), for each k = 0, 1, 2, …,

where x_0 ∈ R^n is an initial point and Λ(x_k) ∈ ∂F(x_k), the generalized Jacobian of F as defined by Clarke [14]. We present local as well as semilocal convergence results under weaker conditions than in earlier studies such as [9], [10], [14], [15], [24], [25], [26]. In the case of local convergence, our convergence ball is larger and the ratio of convergence smaller than before [9], [10], [14], [15], [24]–[26]. These advantages are also obtained under weaker hypotheses. This type of improved convergence result is important in computational mathematics, since this way we have a wider choice of initial guesses and we compute fewer iterates in order to obtain a desired error tolerance.
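Iteration (3) can be sketched in the same way, replacing the derivative by one element of the generalized Jacobian. In the toy example below (our own; the function, the starting point and the stopping rule are illustrative choices), F(x) = |x| + x/2 − 1 is semismooth but not differentiable at 0, and Λ(x) = sign(x) + 1/2 is an element of ∂F(x):

```python
# Hedged sketch of the nonsmooth Newton iteration (3) on the toy equation
# F(x) = |x| + x/2 - 1 = 0 (our own example, not from the paper).

def clarke_element(x):
    """One element Lambda(x) of the generalized Jacobian of F at x."""
    return (1.0 if x >= 0 else -1.0) + 0.5

def nonsmooth_newton(F, Lam, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x = x - F(x) / Lam(x)      # step (3): x_{k+1} = x_k - Lambda(x_k)^{-1} F(x_k)
        if abs(F(x)) <= tol:
            break
    return x

F = lambda x: abs(x) + 0.5 * x - 1.0
x_star = nonsmooth_newton(F, clarke_element, x0=5.0)   # converges to 2/3
```

Since F is piecewise linear here, the iteration lands on the solution 2/3 in a single step from any positive starting point; the point of the sketch is only the shape of iteration (3).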

The rest of the paper is organized as follows. In order to make the paper as self-contained as possible, we provide the definitions of semismooth, semianalytic and subanalytic functions as well as earlier results in Section 2. The semilocal and local convergence analysis of Newton's method is given in Section 3. Finally, numerical examples illustrating the theoretical results are given in the concluding Section 4.

2. SEMISMOOTH, SEMIANALYTIC AND SUBANALYTIC FUNCTIONS

In order to make the paper as self-contained as possible, we state some standard definitions and results. In [24], Mifflin introduced the concept of semismoothness for functionals; later, in [26], L. Qi and J. Sun extended this concept to functions of several variables. In fact, they showed that semismoothness is equivalent to the uniform convergence of directional derivatives in all directions.

Definition 1. (see [14, p. 70]) Let F : R^n → R^n be a locally Lipschitz continuous function. The limiting Jacobian of F at x ∈ R^n is defined as

∂_B F(x) = {A ∈ L(R^n, R^n) : there exist u_k ∈ D_F with u_k → x and F′(u_k) → A as k → ∞},

where D_F denotes the set of points of differentiability of F. The Clarke Jacobian of F at x ∈ R^n, denoted ∂F(x), is defined as the closed convex hull of ∂_B F(x).


Definition 2. [26] We say that F is semismooth at x ∈ R^n if F is locally Lipschitzian at x and

lim_{V ∈ ∂F(x+th′), h′→h, t↓0} {V h′} exists for any h ∈ R^n.

Note that convex functions and smooth functions are semismooth. Further, products and sums of semismooth functions are semismooth (see [10]). Moreover, semismoothness of F implies

lim_{h′→h, t↓0} [F(x + th′) − F(x)]/t = lim_{V ∈ ∂F(x+th′), h′→h, t↓0} {V h′}.

Definition 3. [15] A subset X of R^n is semianalytic if for each a ∈ R^n there is a neighbourhood U of a and real analytic functions f_{i,j} on U such that

X ∩ U = ⋃_{i=1}^{r} ⋂_{j=1}^{s_i} {x ∈ U | f_{i,j}(x) ε_{i,j} 0},

where ε_{i,j} ∈ {<, >, =}.

Remark 4. X is said to be semialgebraic if, in Definition 3, one can take U = R^n and the f_{i,j} polynomials.

Definition 5. [15] A subset X of R^n is subanalytic if each a ∈ R^n admits a neighborhood U such that X ∩ U is the projection of a relatively compact semianalytic set: there is a bounded semianalytic set A in R^{n+p} such that X ∩ U = Q(A), where Q : R^{n+p} → R^n is the projection.

Definition 6. Let X be a subset of R^n. A function F : X → R^n is semianalytic (resp. subanalytic) if its graph is semianalytic (resp. subanalytic).

It can be seen that the class of semianalytic (resp. subanalytic) sets is closed under elementary set operations; further, the closure, the interior and the connected components of a semianalytic (resp. subanalytic) set are semianalytic (resp. subanalytic). However, the image of a bounded semianalytic set by a semianalytic function need not be semianalytic (see [23], [13]). That is why subanalytic functions were introduced. If X is a subanalytic and relatively compact set, the image of X by a subanalytic function is subanalytic (see [23], [9]). Further, if F and g are subanalytic continuous functions defined on a compact subanalytic set K, then F + g is subanalytic.

For examples and properties of subanalytic or semianalytic functions we refer the interested reader to [1], [14], [16], [17], [24], [25], [27], [28]. The following propositions and remark can be found in [10].

Proposition 7. [10] If F : X ⊂ R^n → R^n is a subanalytic locally Lipschitz mapping, then for all x ∈ X

‖F(x + d) − F(x) − F′(x; d)‖ = o_x(‖d‖).


Remark 8. [28] A subanalytic function t ↦ c(t) admits a Puiseux development; so there exist a constant c > 0, a real number ε > 0 and a rational number γ > 0 such that ‖F(x + d) − F(x) − F′(x; d)‖ ≤ c‖d‖^{1+γ} whenever ‖d‖ ≤ ε.

Proposition 9. [10] Let F : R^n → R^n be locally Lipschitz and subanalytic. Then there exists a positive rational number γ such that

‖F(y) − F(x) − Λ(y)(y − x)‖ ≤ C_x‖y − x‖^{1+γ}

for y close to x, where Λ(y) is any element of ∂F(y) and C_x is a positive constant.

3. CONVERGENCE

We present semilocal and local convergence results for Newton’s method.

First, we present a semilocal result for Newton's method. Let U(x, ρ) and Ū(x, ρ) denote the open and closed balls in R^n with center x and radius ρ > 0.

Theorem 10. Let F : D ⊆ R^n → R^n be locally Lipschitz and subanalytic on D. Suppose:

for any Λ(x) ∈ ∂F(x), x ∈ D, Λ(x) is nonsingular;

(1) ‖Λ(x_0)^{−1}F(x_0)‖ ≤ η, for some x_0 ∈ D;

(2) ‖Λ(y)^{−1}[F(y) − F(x) − Λ(y)(y − x)]‖ ≤ K‖y − x‖^{1+γ}, for each x, y ∈ D and some γ ≥ 0;

(3) ‖Λ(y)^{−1}(Λ(y) − Λ(x))(y − x)‖ ≤ M‖y − x‖^{1+γ_0}, for each x, y ∈ D and some γ_0 ≥ 0;

(4) 0 ≤ α := Kη^γ + Mη^{γ_0} < 1; and, for

(5) r = η/(1 − α),

Ū(x_0, r) ⊆ D.

Then the sequence {x_k} generated by Newton's method (3) is well defined, remains in U(x_0, r) for each k = 0, 1, 2, … and converges to a solution x∗ ∈ Ū(x_0, r) of the equation F(x) = 0. Moreover, the following estimates hold:

(6) ‖x_{k+1} − x_k‖ ≤ α‖x_k − x_{k−1}‖, for each k = 1, 2, …, and

(7) ‖x_k − x∗‖ ≤ α^k η/(1 − α), for each k = 0, 1, 2, ….
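The contraction estimate (6) can be observed numerically. The following sketch is our own illustration (the function, the choice Λ(x) ∈ ∂F(x) and the starting point are ours, and the constants K, M, γ, γ_0 are not computed explicitly): it simply monitors the step ratios ‖x_{k+1} − x_k‖/‖x_k − x_{k−1}‖, which should stay below some α < 1.

```python
# Our own illustrative check of estimate (6), not from the paper: for the
# semismooth function F(x) = 1.5 x + 0.1 x |x| with root x* = 0, take
# Lambda(x) = 1.5 + 0.2 |x|, an element of the generalized Jacobian of F,
# and monitor the ratios of successive Newton steps.

F = lambda x: 1.5 * x + 0.1 * x * abs(x)
Lam = lambda x: 1.5 + 0.2 * abs(x)

xs = [1.0]                                  # starting point x0 = 1
for _ in range(20):
    x = xs[-1]
    xs.append(x - F(x) / Lam(x))            # Newton step (3)

steps = [abs(b - a) for a, b in zip(xs, xs[1:]) if abs(b - a) > 0]
ratios = [s1 / s0 for s0, s1 in zip(steps, steps[1:])]
# every observed ratio lies below 1, consistent with (6) for some alpha < 1
```

The ratios in fact decay rapidly, because for this example the convergence is superlinear; (6) only asserts the linear envelope.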


Proof. It follows from (1), (4), (5) and Newton's method for k = 0 that

‖x_1 − x_0‖ = ‖Λ(x_0)^{−1}F(x_0)‖ ≤ η ≤ η/(1 − α) = r.

Hence, x_1 ∈ Ū(x_0, r). Using Newton's method (3) for k = 1, we get the approximation

(8) F(x_1) = F(x_1) − F(x_0) − Λ(x_0)(x_1 − x_0)
         = [F(x_1) − F(x_0) − Λ(x_1)(x_1 − x_0)] + [Λ(x_1) − Λ(x_0)](x_1 − x_0).

Then, since x_1 ∈ D, Λ(x_1)^{−1} exists. In view of (1), (2), (3), (4) and (8) we get that

‖x_2 − x_1‖ = ‖Λ(x_1)^{−1}F(x_1)‖
  ≤ ‖Λ(x_1)^{−1}(F(x_1) − F(x_0) − Λ(x_1)(x_1 − x_0))‖ + ‖Λ(x_1)^{−1}(Λ(x_1) − Λ(x_0))(x_1 − x_0)‖
  ≤ K‖x_1 − x_0‖^{1+γ} + M‖x_1 − x_0‖^{1+γ_0}
  = (K‖x_1 − x_0‖^γ + M‖x_1 − x_0‖^{γ_0})‖x_1 − x_0‖
  ≤ (Kη^γ + Mη^{γ_0})‖x_1 − x_0‖ = α‖x_1 − x_0‖

and

(9) ‖x_2 − x_0‖ ≤ ‖x_2 − x_1‖ + ‖x_1 − x_0‖ ≤ (α + 1)‖x_1 − x_0‖
         = [(1 − α^2)/(1 − α)]‖x_1 − x_0‖ ≤ [(1 − α^2)/(1 − α)]η ≤ r,

which shows that (6) holds for k = 1 and x_2 ∈ Ū(x_0, r).

Let us assume that (6) holds for all i ≤ k and that x_i ∈ Ū(x_0, r). Then, by simply using x_{i−1}, x_i in place of x_0, x_1 in (8)–(9), we get that

‖x_{i+1} − x_i‖ ≤ α‖x_i − x_{i−1}‖, so

‖x_{i+1} − x_0‖ ≤ [(1 − α^{i+1})/(1 − α)]‖x_1 − x_0‖ ≤ [(1 − α^{i+1})/(1 − α)]η ≤ r,

which completes the induction for (6) and shows x_{i+1} ∈ Ū(x_0, r). It follows that the sequence {x_k} is a Cauchy sequence in R^n and, as such, it converges to some x∗ ∈ Ū(x_0, r) (since Ū(x_0, r) is a closed set). By letting i → ∞ in the estimate

‖Λ(x_i)^{−1}F(x_i)‖ = ‖x_{i+1} − x_i‖ ≤ α^i η

and using the nonsingularity of Λ(x_i), we get that F(x∗) = 0. We also have that

(10) ‖x_{k+i} − x_k‖ ≤ ‖x_{k+i} − x_{k+i−1}‖ + ‖x_{k+i−1} − x_{k+i−2}‖ + … + ‖x_{k+1} − x_k‖
   ≤ (α^{k+i−1} + α^{k+i−2} + … + α^k)‖x_1 − x_0‖
   = α^k[(1 − α^i)/(1 − α)]‖x_1 − x_0‖ ≤ α^k[(1 − α^i)/(1 − α)]η.

By letting i → ∞ in (10) we obtain (7). □


Remark 11. (a) Condition (3) does not necessarily imply that Λ is Lipschitz, and it cannot be avoided if one wants to show convergence.

(b) If we use

(11) ‖Λ(y)^{−1}‖ ≤ K_1;

(12) ‖F(y) − F(x) − Λ(y)(y − x)‖ ≤ K_2‖y − x‖^{1+γ}, for each x, y ∈ D and some γ ≥ 0;

(13) ‖(Λ(y) − Λ(x))(y − x)‖ ≤ M_1‖y − x‖^{1+γ_0}, for each x, y ∈ D and some γ_0 ≥ 0,

then we have the estimates

‖Λ(y)^{−1}[F(y) − F(x) − Λ(y)(y − x)]‖ ≤ ‖Λ(y)^{−1}‖ ‖F(y) − F(x) − Λ(y)(y − x)‖ ≤ K_1K_2‖y − x‖^{1+γ},

‖Λ(y)^{−1}[Λ(y) − Λ(x)](y − x)‖ ≤ ‖Λ(y)^{−1}‖ ‖[Λ(y) − Λ(x)](y − x)‖ ≤ K_1M_1‖y − x‖^{1+γ_0}.

Set K = K_1K_2 and M = K_1M_1. If (2), (3) are replaced by (11), (12) and (13), then the conclusions of Theorem 10 hold in this stronger setting.

(c) Notice that, due to the estimate

‖Λ(y)^{−1}[Λ(y) − Λ(x)](y − x)‖ ≤ ‖Λ(y)^{−1}[Λ(y) − Λ(x)]‖ ‖y − x‖,

M in (3) can be chosen to be an upper bound on ‖Λ(y)^{−1}[Λ(y) − Λ(x)]‖; that is,

‖Λ(y)^{−1}[Λ(y) − Λ(x)]‖ ≤ M.

Notice also that ‖Λ(y) − Λ(x)‖ ≤ M_2 holds if, e.g., Λ is continuous. Then we can choose, e.g., M = K_1M_2 for γ_0 = 0.

(d) If γ_0 = γ = 0 and (1)–(3) are given in non-affine invariant form, then Theorem 10 reduces to the corresponding one in [26]. Otherwise, our Theorem 10 is an extension of the one in [26]. Moreover, it is an improvement in the case γ_0 = γ = 0, since our results are given in affine invariant form. The advantages of affine over non-affine invariant results are well known in the literature [17].

(e) It was shown in [10] that if F : D ⊆ R^n → R^n is locally Lipschitz and subanalytic, then (12) always holds. Therefore, (2) holds for K = K_1K_2.

Next, we present a local convergence result for Newton’s method.

Theorem 12. Suppose: F : D ⊆ R^n → R^n is locally Lipschitz and subanalytic; there exists a regular point x∗ ∈ D such that F(x∗) = 0; for any Λ(x) ∈ ∂F(x), x ∈ D, Λ(x) is nonsingular;

(14) ‖Λ(y)^{−1}[F(y) − F(x∗) − Λ(y)(y − x∗)]‖ ≤ λ‖y − x∗‖^{1+β}

for each y ∈ D and some λ > 0, β > 0; and, for R = min{1/λ, (1/λ)^{1/β}},

Ū(x∗, R) ⊆ D.

Then the sequence {x_n} generated by Newton's method (3) converges to x∗ provided that x_0 ∈ U(x∗, R). Moreover, the following estimates hold for each n = 0, 1, 2, …:

(15) ‖x_{n+1} − x∗‖ ≤ λ‖x_n − x∗‖^{1+β} ≤ ‖x_n − x∗‖ < R

and

(16) ‖x_{n+1} − x∗‖ ≤ λ^{−1/β}(λ^{1/β}‖x_0 − x∗‖)^{(1+β)^{n+1}}.

Proof. We have that x_1 ∈ U(x∗, R) by the choice of R. Then, using the estimate

x_2 − x∗ = −Λ(x_1)^{−1}[F(x_1) − F(x∗) − Λ(x_1)(x_1 − x∗)]

and (14), we get that

‖x_2 − x∗‖ = ‖Λ(x_1)^{−1}[F(x_1) − F(x∗) − Λ(x_1)(x_1 − x∗)]‖ ≤ λ‖x_1 − x∗‖^{1+β} ≤ ‖x_1 − x∗‖ < R

by the choice of R, which shows (15) and (16) for n = 0. Suppose that (15) and (16) hold for each k ≤ n. Then, we have that

(17) x_{k+1} − x∗ = −Λ(x_k)^{−1}[F(x_k) − F(x∗) − Λ(x_k)(x_k − x∗)],

so by (14) and (17) we get that

‖x_{k+1} − x∗‖ = ‖Λ(x_k)^{−1}[F(x_k) − F(x∗) − Λ(x_k)(x_k − x∗)]‖
  ≤ λ‖x_k − x∗‖^{1+β} ≤ λ(λ‖x_{k−1} − x∗‖^{1+β})^{1+β}
  ≤ λ·λ^{1+β}‖x_{k−1} − x∗‖^{(1+β)^2} ≤ …
  ≤ λ^{[(1+β)^{k+1} − 1]/β}‖x_0 − x∗‖^{(1+β)^{k+1}}
  = λ^{−1/β}(λ^{1/β}‖x_0 − x∗‖)^{(1+β)^{k+1}},

which shows (15), (16) for all n and that lim_{k→∞} x_k = x∗. □

Remark 13. Condition (14) certainly holds if it is replaced by the stronger condition

(18) ‖Λ(y)^{−1}[F(y) − F(x) − Λ(y)(y − x)]‖ ≤ λ_1‖y − x‖^{1+β}, for each x, y ∈ D.

In this case, however,

(19) λ ≤ λ_1

holds in general, and λ_1/λ can be arbitrarily large [3, 7, 8]. Moreover, if λ_1 = λ and (14) is replaced by (18), then our result reduces to the corresponding one in [26]. Otherwise (i.e., if λ < λ_1), it constitutes an improvement with the following advantages:

(i) (14) is weaker than (18); that is, (18) implies (14), but (14) does not necessarily imply (18).


(ii) If λ < λ_1, the new error bounds on the distances ‖x_n − x∗‖ are tighter and the ratio of convergence smaller. That means that in practice fewer iterates are required to achieve a given error tolerance. Hence, the applicability of Newton's method is expanded at less computational cost. Notice also that the computation of the constant λ requires the computation of the constant λ_1 only as a special case (see Example 18).

Next, we present the corresponding semilocal convergence results under contravariant conditions [3, 6, 7, 17, 25].

Theorem 14. Suppose: F : D ⊆ R^n → R^n is locally Lipschitz and subanalytic; for any Λ(x) ∈ ∂F(x), x ∈ D, Λ(x) is nonsingular;

(20) ‖F(y) − F(x) − Λ(y)(y − x)‖ ≤ μ_0‖Λ(x)(y − x)‖^{1+γ}, for some μ_0 > 0, γ > 0 and each x, y ∈ D;

(21) ‖(Λ(y) − Λ(x))(y − x)‖ ≤ μ_1‖Λ(x)(y − x)‖^{1+γ}, for some μ_1 > 0 and each x, y ∈ D.

Define the set Q by

Q = {x ∈ D : ‖F(x)‖^γ < 1/μ},

where μ = μ_0 + μ_1, and let Q be bounded; let x_0 ∈ D be such that ‖F(x_0)‖ ≤ ξ and

ξμ^{1/γ} < 1

(i.e., x_0 ∈ Q). Then the sequence {x_k} generated for x_0 ∈ Q by Newton's method (3) is well defined, remains in Q for each k = 0, 1, 2, … and converges to some x∗ ∈ Q̄ such that F(x∗) = 0. Moreover, the sequence {F(x_k)} converges to zero and satisfies

‖F(x_{k+1})‖ ≤ μ‖F(x_k)‖^{1+γ}, for each k = 0, 1, 2, ….

Proof. By hypothesis, x_0 ∈ Q. Suppose x_k ∈ Q. Then, using Newton's method (3), we get the approximation

(22) F(x_{k+1}) = [F(x_{k+1}) − F(x_k) − Λ(x_{k+1})(x_{k+1} − x_k)] + [Λ(x_{k+1}) − Λ(x_k)](x_{k+1} − x_k).

Using (20), (21), (22) and the identity Λ(x_k)(x_{k+1} − x_k) = −F(x_k), we get in turn that

‖F(x_{k+1})‖ ≤ ‖F(x_{k+1}) − F(x_k) − Λ(x_{k+1})(x_{k+1} − x_k)‖ + ‖[Λ(x_{k+1}) − Λ(x_k)](x_{k+1} − x_k)‖
  ≤ μ_0‖Λ(x_k)(x_{k+1} − x_k)‖^{1+γ} + μ_1‖Λ(x_k)(x_{k+1} − x_k)‖^{1+γ}
  = μ‖Λ(x_k)(x_{k+1} − x_k)‖^{1+γ} = μ‖F(x_k)‖^{1+γ}.

Since x_k ∈ Q, we have that ‖F(x_k)‖^γ < 1/μ. Therefore, we get that

‖F(x_{k+1})‖ ≤ μ‖F(x_k)‖^{1+γ} = μ‖F(x_k)‖^γ‖F(x_k)‖ < ‖F(x_k)‖.

Notice also that

‖F(x_{k+1})‖^γ < ‖F(x_k)‖^γ < 1/μ,

so x_{k+1} ∈ Q, and by induction {x_k} ⊆ Q.

Set s_k = μ^{1/γ}‖F(x_k)‖. Then we have in turn

s_{k+1} ≤ s_k^{1+γ} ≤ (s_{k−1}^{1+γ})^{1+γ} ≤ … ≤ s_0^{(1+γ)^{k+1}}.

But we have that s_0 = ξμ^{1/γ} < 1. Hence, we obtain lim_{k→∞} s_k = 0, which implies lim_{k→∞} ‖F(x_k)‖ = 0. The set Q is bounded, so the sequence {x_k} has an accumulation point x∗ ∈ Q̄ and, by the continuity of F, F(x∗) = 0. □

If F is Fréchet differentiable and D is a convex set, then, due to the estimate

(23) F(x_{k+1}) = ∫_0^1 [F′(x_k + θ(x_{k+1} − x_k)) − F′(x_k)](x_{k+1} − x_k) dθ,

by repeating the proof of Theorem 14 using (23), we arrive at the following semilocal convergence result for Newton's method (2) under contravariant conditions.

Theorem 15. Suppose: F : D ⊆ R^n → R^n is Fréchet differentiable; for any x ∈ D, F′(x)^{−1} ∈ L(R^n, R^n);

‖(F′(y) − F′(x))(y − x)‖ ≤ μ_1‖F′(x)(y − x)‖^{1+γ} for some μ_1 > 0 and each x, y ∈ D.

Define the set Q_1 by

Q_1 = {x ∈ D : ‖F(x)‖^γ < (1 + γ)/μ_1};

let x_0 ∈ D be such that ‖F(x_0)‖ ≤ ξ and

ξμ_1^{1/γ} < (1 + γ)^{1/γ}.

Then the sequence {x_k} generated by Newton's method (2) is well defined, remains in Q_1 for each k = 0, 1, 2, … and converges to some x∗ ∈ Q̄_1 such that F(x∗) = 0. Moreover, the sequence {F(x_k)} converges to zero and satisfies

‖F(x_{k+1})‖ ≤ [μ_1/(1 + γ)]‖F(x_k)‖^{1+γ}, for each k = 0, 1, 2, ….
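The residual-based conclusion of Theorems 14 and 15 can be observed numerically. In the sketch below (our own; the equation F(x) = e^x − 1, the starting point x_0 = 1 and the empirical constant are illustrative choices, and γ = 1 is observed rather than derived), the residuals ‖F(x_k)‖ decay at the predicted rate ‖F(x_{k+1})‖ ≤ const · ‖F(x_k)‖^{1+γ}:

```python
import math

# Monitor the residuals |F(x_k)| of Newton's method (2) for F(x) = e^x - 1
# (our own example). Theorems 14/15 predict |F(x_{k+1})| <= const * |F(x_k)|^{1+gamma};
# here we simply observe the bound with gamma = 1 and constant 1.

F = lambda x: math.exp(x) - 1.0
dF = lambda x: math.exp(x)

x = 1.0
residuals = [abs(F(x))]
for _ in range(20):
    if residuals[-1] <= 1e-9:
        break
    x = x - F(x) / dF(x)          # Newton step (2)
    residuals.append(abs(F(x)))

quadratic = all(r1 <= r0 ** 2 for r0, r1 in zip(residuals, residuals[1:]))
# quadratic is True: each residual is bounded by the square of the previous one
```

Note that the loop is stopped well above machine precision, since at the level of rounding errors the observed order of the residual decay is no longer meaningful.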

Remark 16. If γ = 1, Theorem 15 reduces to the corresponding theorem in [26]. However, there are examples where γ ≠ 1 (see Example 17); in this case the results in [26] cannot be applied.

4. NUMERICAL EXAMPLES

We present numerical examples to illustrate the theoretical results. First, we present an example under contravariant conditions.

Example 17. Let X = Y = R^2, D = {x = (x_1, x_2) : 1/2 ≤ x_i ≤ 2, i = 1, 2}, equipped with the max-norm, and define F on D for x = (x_1, x_2) by

F(x) = (x_1^{3/2} − 2x_1 + x_2, x_1 + x_2^{3/2} − 2x_2).

Then the Fréchet derivative is given by

F′(x) = [ (3/2)x_1^{1/2} − 2    1 ; 1    (3/2)x_2^{1/2} − 2 ].

Therefore, for any x, y ∈ D, we have in turn that

‖F′(x) − F′(y)‖ = ‖diag((3/2)(x_1^{1/2} − y_1^{1/2}), (3/2)(x_2^{1/2} − y_2^{1/2}))‖
  = (3/2) max{|x_1^{1/2} − y_1^{1/2}|, |x_2^{1/2} − y_2^{1/2}|}
  ≤ (3/2) max{|x_1 − y_1|^{1/2}, |x_2 − y_2|^{1/2}}
  ≤ (3/2) [max{|x_1 − y_1|, |x_2 − y_2|}]^{1/2} = (3/2)‖x − y‖^{1/2}.

We also have that ‖F′(x)‖ ≤ 2 for each x ∈ D. Then we get that

‖(F′(x) − F′(y))(x − y)‖ ≤ (3√2/8)‖F′(x)(x − y)‖^{3/2}.

Therefore, we can choose γ = 1/2, μ_1 = 3√2/8 and Q_1 = {x ∈ D : ‖F(x)‖ < 8}. The conclusions of Theorem 15 hold, and Newton's method converges to x∗ = (1, 1).
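The iteration of Example 17 can be run directly. The sketch below is our own numerical companion (the starting point (2, 2) and the use of Cramer's rule for the 2×2 Newton system are our choices, not the paper's):

```python
# Our own numerical companion to Example 17: Newton's method (2) on
# F(x1, x2) = (x1^{3/2} - 2 x1 + x2, x1 + x2^{3/2} - 2 x2), starting
# from (2, 2) in D; the 2x2 Newton system is solved by Cramer's rule.

def F(x1, x2):
    return x1 ** 1.5 - 2.0 * x1 + x2, x1 + x2 ** 1.5 - 2.0 * x2

def J(x1, x2):
    # the Frechet derivative F'(x) computed in the example
    return (1.5 * x1 ** 0.5 - 2.0, 1.0), (1.0, 1.5 * x2 ** 0.5 - 2.0)

x1, x2 = 2.0, 2.0
for _ in range(25):
    (a, b), (c, d) = J(x1, x2)
    f1, f2 = F(x1, x2)
    det = a * d - b * c
    x1 -= (f1 * d - b * f2) / det      # Newton step: x <- x - F'(x)^{-1} F(x)
    x2 -= (a * f2 - c * f1) / det
# (x1, x2) is now close to the solution x* = (1, 1)
```

By symmetry the iterates stay on the diagonal x_1 = x_2 and decrease monotonically from 2 toward 1, so every iterate remains in D.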

Next, we present an example for the local convergence case.


Example 18. Let X = Y = R, D = U(0, 1) and define the function F on D by

(24) F(x) = e^x − 1.

Then we have that x∗ = 0. Using (24), we get in turn that

F′(y)^{−1}(F(y) − F(x∗) − F′(y)(y − x∗)) = e^{−y}(e^y − 1 − e^y y)
  = 1 − y − (1 − y + y^2/2! − y^3/3! + y^4/4! − …)
  = −(1/2! − y/3! + y^2/4! − …)y^2,

so

‖F′(y)^{−1}(F(y) − F(x∗) − F′(y)(y − x∗))‖ = |1/2! − y/3! + y^2/4! − …| |y|^2
  ≤ (1/2! + |y|/3! + |y|^2/4! + …)|y|^2
  ≤ (1/2! + 1/3! + 1/4! + …)|y|^2
  = (e − 2)|y|^2.

Hence, we can choose λ = e − 2 and β = 1. Moreover, we have that

F′(y)^{−1}(F(y) − F(x) − F′(y)(y − x)) = e^{−y}(e^y − 1 − e^x + 1 − e^y(y − x))
  = 1 + x − y − e^{x−y}
  = 1 + x − y − [1 + (x − y) + (x − y)^2/2! + (x − y)^3/3! + …]
  = −(1/2! + (x − y)/3! + (x − y)^2/4! + …)(x − y)^2,

so

‖F′(y)^{−1}(F(y) − F(x) − F′(y)(y − x))‖ = |1/2! + (x − y)/3! + (x − y)^2/4! + …| |y − x|^2
  ≤ (1/2! + |x − y|/3! + |x − y|^2/4! + …)|y − x|^2
  ≤ (e^2 − 3)|y − x|^2,

since |x − y| < 2 on D. Hence, we can choose λ_1 = e^2 − 3 and β = 1. Therefore, we obtain

R_1 = 1/λ_1 = 0.227839421… < 1.392211191… = 1/λ = R.

That is, we deduce that the new convergence ball is larger and the ratio of convergence smaller than the old convergence ball and the old ratio of convergence. Finally, the convergence of Newton's method is guaranteed by Theorem 12, provided that x_0 ∈ U(x∗, R).
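The constants of Example 18 and the local estimate (15) can be checked numerically. The following companion sketch is our own (the starting point x_0 = 0.5 and the stopping threshold are our choices):

```python
import math

# Our own companion check for Example 18: for F(x) = e^x - 1 with x* = 0,
# Theorem 12 with lambda = e - 2, beta = 1 predicts estimate (15),
# |x_{n+1}| <= (e - 2) |x_n|^2; x0 = 0.5 is our illustrative choice.

lam, lam1 = math.e - 2.0, math.e ** 2 - 3.0
R, R1 = 1.0 / lam, 1.0 / lam1        # new radius R vs old radius R1

xs = [0.5]                            # x0 = 0.5 lies in U(0, R)
while abs(xs[-1]) > 1e-8 and len(xs) < 10:
    x = xs[-1]
    xs.append(x - (math.exp(x) - 1.0) / math.exp(x))   # Newton step (2)

ok = all(abs(xn1) <= lam * xn * xn for xn, xn1 in zip(xs, xs[1:]))
# ok is True: every iterate satisfies the local estimate (15)
```

The computed radii reproduce the paper's digits, R ≈ 1.392211191 and R_1 ≈ 0.227839421, confirming that the new ball U(x∗, R) is larger than the old one.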


REFERENCES

[1] S. Amat, S. Busquier, J.M. Gutiérrez, Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math., 157 (2003), 197–205.

[2] I.K. Argyros, A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach spaces, J. Math. Anal. Appl., 298 (2004), 374–397.

[3] I.K. Argyros, Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15, C.K. Chui and L. Wuytack (eds.), Elsevier Publ. Co., New York, 2007.

[4] I.K. Argyros, Concerning the convergence of Newton's method and quadratic majorants, J. Appl. Math. Comput., 29 (2009), 391–400.

[5] I.K. Argyros, A semilocal convergence analysis for directional Newton methods, Math. Comput., 80 (2011), 327–343.

[6] I.K. Argyros, S. Hilout, Weaker conditions for the convergence of Newton's method, J. Complexity, 28 (2012), 364–387.

[7] I.K. Argyros, Y.J. Cho, S. Hilout, Numerical Methods for Equations and its Applications, CRC Press/Taylor and Francis Publ., New York, 2012.

[8] I.K. Argyros, S. Hilout, Computational Methods in Nonlinear Analysis, World Scientific Publ. Comp., New Jersey, 2013.

[9] E. Bierstone, P.D. Milman, Semianalytic and subanalytic sets, Publ. Math. IHES, 67 (1988), 5–42.

[10] J. Bolte, A. Daniilidis, A.S. Lewis, Tame functions are semismooth, Math. Programming (Series B), 117 (2009), 5–19.

[11] V. Candela, A. Marquina, Recurrence relations for rational cubic methods I: The Halley method, Computing, 44 (1990), 169–184.

[12] V. Candela, A. Marquina, Recurrence relations for rational cubic methods II: The Chebyshev method, Computing, 45 (1990), 355–367.

[13] C. Chun, P. Stanica, B. Neta, Third order family of methods in Banach spaces, Computers Math. Appl., 61 (2011), 1665–1675.

[14] F.H. Clarke, Optimization and Nonsmooth Analysis, Society for Industrial and Applied Mathematics, Philadelphia, 1990.

[15] J.P. Dedieu, Penalty functions in subanalytic optimization, Optimization, 26 (1992), 27–32.

[16] J.E. Dennis Jr., R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, 1982.

[17] P. Deuflhard, Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer-Verlag, Berlin, 2004.

[18] J.M. Gutiérrez, M.A. Hernández, Recurrence relations for the super-Halley method, Computers Math. Appl., 36 (1998), 1–8.

[19] J.M. Gutiérrez, M.A. Hernández, Third-order iterative methods for operators with bounded second derivative, J. Comput. Appl. Math., 82 (1997), 171–183.

[20] M.A. Hernández, M.A. Salanova, Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method, J. Comput. Appl. Math., 126 (2000), 131–143.

[21] M.A. Hernández, Chebyshev's approximation algorithms and applications, Computers Math. Appl., 41 (2001), 433–455.

[22] L.V. Kantorovich, G.P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982.

[23] S. Łojasiewicz, Ensembles semi-analytiques, IHES mimeographed notes, 1964.

[24] R. Mifflin, Semismooth and semiconvex functions in constrained optimization, SIAM J. Control Optim., 15 (1977), 959–972.

[25] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

[26] L. Qi, J. Sun, A nonsmooth version of Newton's method, Mathematical Programming, 58 (1993), 353–367.

[27] R.T. Rockafellar, Favorable classes of Lipschitz-continuous functions in subgradient optimization, in: E. Nurminski (ed.), Nondifferentiable Optimization, Pergamon Press, New York, 1982, 125–143.

[28] L. van den Dries, C. Miller, Geometric categories and o-minimal structures, Duke Math. J., 84 (1996), 497–540.

Received by the editors: September 23, 2014.
