REVUE D'ANALYSE NUMÉRIQUE ET DE THÉORIE DE L'APPROXIMATION, TOME XXVII, No. 2, 1998, pp. 277-295

AN INFEASIBLE-INTERIOR-POINT METHOD FOR THE P*(κ)-MATRIX LCP

JUN JI, FLORIAN A. POTRA

AMS Subject Classification: 65F10, 65Y20.

1. INTRODUCTION

The P*-matrix linear complementarity problem requires the computation of a vector pair (x, s) ∈ R^{2n} satisfying

(1.1) s = Mx + q,  xᵀs = 0,  (x, s) ≥ 0,

where q ∈ Rⁿ and M ∈ R^{n×n} is a P*-matrix. The class of P*-matrices was introduced by Kojima et al. [7] and it contains many types of matrices encountered in practical applications. Let κ be a nonnegative number.

A matrix M is called a P*(κ)-matrix if

(1.2) (1 + 4κ) Σ_{i ∈ J₊(x)} xᵢ[Mx]ᵢ + Σ_{i ∈ J₋(x)} xᵢ[Mx]ᵢ ≥ 0,  ∀ x ∈ Rⁿ,

where

J₊(x) = {i : xᵢ[Mx]ᵢ > 0},  J₋(x) = {i : xᵢ[Mx]ᵢ < 0},

or, equivalently, if

(1.3) xᵀMx ≥ -4κ Σ_{i ∈ J₊(x)} xᵢ[Mx]ᵢ,  ∀ x ∈ Rⁿ.
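Condition (1.3) is easy to evaluate numerically for a given matrix and a given test vector. The following sketch (our illustration only, assuming NumPy; the function name is ours) computes, for a particular x, the smallest κ for which (1.3) holds at that x.

```python
import numpy as np

def kappa_at(M, x):
    """Smallest kappa >= 0 such that (1.3) holds at this particular x,
    i.e. x'Mx >= -4*kappa * sum_{i in J+(x)} x_i [Mx]_i."""
    Mx = M @ x
    prod = x * Mx                      # componentwise x_i [Mx]_i
    pos = prod[prod > 0].sum()         # sum over J+(x)
    quad = x @ Mx                      # x'Mx
    if quad >= 0:
        return 0.0                     # inequality already holds with kappa = 0
    if pos == 0:                       # quad < 0 and no positive terms: (1.3) fails for all kappa
        return np.inf
    return -quad / (4.0 * pos)

# A positive semi-definite matrix should give kappa = 0 for every x (P*(0) = PSD).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A @ A.T
xs = rng.standard_normal((1000, 4))
print(max(kappa_at(M, x) for x in xs))   # 0.0
```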

The class of all P*(κ)-matrices is denoted by P*(κ), and the class P* is defined by

P* = ∪_{κ ≥ 0} P*(κ),

i.e., M is a P*-matrix if M ∈ P*(κ) for some κ ≥ 0. Obviously, P*(0) = PSD (the class of positive semi-definite matrices).

Every convex quadratic optimization problem can be written as a monotone LCP and therefore the P* LCP generalizes this case. Also, we have P ⊂ P*, where P is the class of all matrices with positive principal minors. This follows from the fact that a P-matrix M is a P*(κ)-matrix for κ = max{-λ_min(M)/(4γ(M)²), 0}, where λ_min(M) is the smallest eigenvalue of (M + Mᵀ)/2 and γ(M) > 0 is the so-called P-matrix number of M (see [7, Lemma 3.3]).

Most interior-point methods for linear programming have been successfully extended to the monotone LCP. However, there are comparatively fewer results for the P*-matrix LCP. The potential reduction method given by Kojima et al. [7] solves a P*(κ)-matrix LCP in at most O((κ + 1)√n L) iterations. Nevertheless, no superlinear convergence results have been proved so far for that method. The first algorithm for this new class of LCP having both polynomial complexity and quadratic convergence has been recently proposed by Miao [11]. His method is actually an extension of the Mizuno-Todd-Ye predictor-corrector algorithm for linear programming [14].

In the above mentioned algorithms it is assumed that the starting point (x⁰, s⁰) satisfies exactly the linear constraints (i.e., s⁰ = Mx⁰ + q) and lies in the interior of the region defined by the inequality constraints (i.e., the vectors x⁰ and s⁰ are strictly positive). Such a starting point is called strictly feasible or simply interior. All the points generated by the algorithm are also strictly feasible, which accounts for the name interior-point method. However, in practice it is sometimes very difficult to obtain feasible starting points. Numerical experiments have shown that it is possible to obtain good practical performance by using starting points that lie in the interior of the region defined by the inequality constraints, but do not satisfy the equality constraints (cf. [10]). The points generated by the algorithm will remain in the interior of the region defined by the inequality constraints but, in general, will not satisfy the equality constraints. This property is reflected in the name infeasible-interior-point algorithm that has been suggested for such methods.

While there is an enormous literature dedicated to the study of interior-point methods, the first results on infeasible-interior-point methods were obtained only a couple of years ago. For a recent survey of the results we refer the reader to [20].

Most of the results on infeasible-interior-point algorithms have been obtained for linear programming. The best computational complexity results obtained so far show that infeasible-interior-point algorithms can solve standard form linear programs with integer data of length L in O(nL) iterations. This complexity is shared by the algorithms proposed in [2], [9], [12], [13], [18] and [19]. The algorithms of [12] and [19] are also quadratically convergent. Ye, Todd and Mizuno [27] have obtained O(√n L)-iteration complexity by applying the Mizuno-Todd-Ye algorithm to a homogeneous self-dual reformulation of the original linear programming problem.


condition is necessary for superlinear convergence even in the case of the monotone LCP.

The notation used throughout the paper is rather standard: capital letters denote matrices, lower-case letters denote vectors, script capital letters denote sets, and Greek letters denote scalars. All vectors are considered to be column vectors. The components of a vector u ∈ Rⁿ will be denoted by [u]ᵢ (and, when there is no danger of confusion, by uᵢ), i = 1, ..., n. The relation u > 0 is equivalent to [u]ᵢ > 0, i = 1, ..., n, while u ≥ 0 means [u]ᵢ ≥ 0, i = 1, ..., n. If u ∈ Rⁿ, w ∈ Rᵐ, then (u, w) denotes the column vector formed by the components of u and w, i.e., (u, w) ∈ R^{n+m}, [(u, w)]ᵢ = [u]ᵢ for 1 ≤ i ≤ n and [(u, w)]_{n+i} = [w]ᵢ for 1 ≤ i ≤ m. We denote Rⁿ₊ = {u ∈ Rⁿ : u ≥ 0}, Rⁿ₊₊ = {u ∈ Rⁿ : u > 0}. If u ∈ Rⁿ, then U := Diag(u) denotes the diagonal matrix having the components of u as diagonal entries. The most used norm is the l₂-norm, so we write ‖·‖ instead of ‖·‖₂, both for vector norms and for the corresponding matrix norms ‖A‖ = max{‖Ax‖ : ‖x‖ = 1}. Whenever we need other norms, like ‖·‖₁ or ‖·‖∞, we use the corresponding symbol.

2. THE PREDICTOR-CORRECTOR ALGORITHM

We denote the feasible set of the problem (1.1) and its solution set respectively by

F = {(x, s) ∈ R^{2n}₊ : s = Mx + q}  and  F* = {(x*, s*) ∈ F : (x*)ᵀs* = 0}.

Throughout this paper it will be assumed that F* is not empty. It is easily seen that (x*, s*) ∈ F* if and only if (x*, s*) ≥ 0 is the solution of the following nonlinear system

(2.1) F(x, s) := ( Xs ; Mx - s + q ) = 0.

For any given ε > 0 we define the set of ε-approximate solutions of (1.1) as

F*_ε = {(x*, s*) ∈ R^{2n}₊ : (x*)ᵀs* ≤ ε, ‖Mx* - s* + q‖ ≤ ε}.

In what follows we shall present an algorithm that finds a point in this set in a finite number of steps. The algorithm depends on two positive constants α and β given by

(2.2) α = ⋯ ,  β = ⋯ ,

where

(2.3) λ = 1/(1 + 4κ(1 + 2κ)).

It follows successively that

(2.4a) 1 - β - κβ²/n ≥ 1 - β - 2κβ²/n > 0,

(2.4b) (1 + 4κ(1 + 2κ))β²/2 ≤ 2(1 - β) - 2κβ²/n,

(2.4c) (1 + 4κ(1 + 2κ))β²/(2(1 - β)) = α,

(2.4d) 2(1 + 2κ)β/(1 - β) + (1 + 4κ(1 + 2κ))β²/(2(1 - β)²) < 2,

(2.4e) β - α = O(1/(1 + κ)),  β - α < 0.5.

The starting point of the algorithm can be any pair of strictly positive vectors (x⁰, s⁰) ∈ R^{2n}₊₊ that is α-centered in the sense that it belongs to the following set

N₀ = {(x, s) ∈ R^{2n}₊₊ : ‖Xs - μe‖ ≤ αμ},

where, as throughout this paper, we have denoted μ = xᵀs/n.

At a typical step of our algorithm we are given a pair (x, s) ∈ R^{2n}₊₊ and obtain a predictor direction (u, v) by solving the linear system

(2.5a) Su + Xv = -Xs,

(2.5b) Mu - v = r,

where r is the residual r = s - Mx - q. Clearly, this is just the Newton direction for the nonlinear system (2.1), whose Jacobian

F'(x, s) := ( S  X ; M  -I )

is nonsingular whenever x > 0 and s > 0 and M is a P*-matrix (see [7, Lemma 4.1]). If we take a step length θ along this direction we obtain

x(θ) = x + θu,  s(θ) = s + θv,  μ(θ) = x(θ)ᵀs(θ)/n.
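The predictor direction can be computed by assembling the 2n×2n Newton system (2.5) directly. The sketch below is our illustration only (it assumes dense NumPy data; the function names are ours); it solves (2.5) and evaluates a trial point x(θ), s(θ).

```python
import numpy as np

def predictor_direction(M, q, x, s):
    """Solve (2.5): S u + X v = -X s,  M u - v = r,  with r = s - M x - q."""
    n = len(x)
    r = s - M @ x - q
    J = np.block([[np.diag(s), np.diag(x)],        # Jacobian F'(x, s)
                  [M,          -np.eye(n)]])
    rhs = np.concatenate([-x * s, r])
    d = np.linalg.solve(J, rhs)
    return d[:n], d[n:]                            # (u, v)

def step(x, s, u, v, theta):
    """Trial point x(theta), s(theta) and its normalized gap mu(theta)."""
    xt, st = x + theta * u, s + theta * v
    return xt, st, xt @ st / len(x)
```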


We define θ̄ as the largest step length for which

(2.6) ‖x(θ)s(θ) - (1 - θ)μe‖ ≤ β(1 - θ)μ,  for all 0 ≤ θ ≤ θ̄,

and consider the predicted pair

(2.7) x̄ = x + θ̄u,  s̄ = s + θ̄v.

We shall see later that these are strictly positive vectors. Therefore the Jacobian F'(x̄, s̄) is again nonsingular and we can define the corrector direction (ū, v̄) as the solution of the following linear system

(2.8a) S̄ū + X̄v̄ = (1 - θ̄)μe - X̄s̄,

(2.8b) Mū - v̄ = 0.

Along this direction we consider the family of pairs

x̄(θ) = x̄ + θū,  s̄(θ) = s̄ + θv̄.

By using (2.8) and the fact that (x̄, s̄) > 0, we have

(2.9) X̄(θ)s̄(θ) ≥ θ(1 - θ̄)μe + θ²Ūv̄,  for 0 ≤ θ ≤ 1.

With a unit step length along the corrector direction we obtain a new pair

(2.10) x̂ = x̄ + ū,  ŝ = s̄ + v̄.

It is easily seen that

(2.11) X̂ŝ = (1 - θ̄)μe + Ūv̄,  μ̂ := x̂ᵀŝ/n = (1 - θ̄)μ + ūᵀv̄/n.

If ūᵀv̄ = 0, then we have μ̂ = (1 - θ̄)μ, and by defining the new current pair as (x⁺, s⁺) = (x̂, ŝ) we obtain the same rate of decrease in feasibility and optimality, i.e.,

(2.12) r⁺ := s⁺ - Mx⁺ - q = (1 - θ̄)r,  μ⁺ := (x⁺)ᵀs⁺/n = (1 - θ̄)μ.

n

Otherwise a new corrector direction (ri,

í)

is obtained by solving the linear system

(2.13a) Sît+

_T_

u'v

An Infeasible-interior-point Method

Along this direction we consider the family of pairs

i(e)= i+0û,

31e¡=3+0û.

By using (2.8) and (2.13) we obrain

(Z.r4a) Îie;

elo¡ =

(t-0) pe+tv -e-{!e+0(Vfi+02ûî,

n

6 7 283

(2.r4b) where (2. r 5)

û(e)=(l-o) p+-

Iô(e),

n

ô(o) = ur v

Q-

o) + (vr û+ùT Ð o + ûr îa2

Finally, let θ̂ be the smallest positive root of the quadratic equation δ(θ) = 0. (In the proof of Theorem 2.4 we shall show that such a θ̂ exists and 0 < θ̂ < 2.) The new current pair (x⁺, s⁺) is defined as

(2.16) x⁺ = x̂ + θ̂û,  s⁺ = ŝ + θ̂v̂.

It is easily seen that (2.12) holds in this case, too. In order to have a well-defined algorithm, we have to show that (x⁺, s⁺) ∈ N₀, so that the above steps can be repeated with (x⁺, s⁺) instead of (x, s).
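Both corrector systems share the Jacobian F'(x̄, s̄), so one factorization can serve both solves. The sketch below follows (2.8), (2.10), (2.13), (2.15) and (2.16) as displayed above; it is our illustration only (dense NumPy data assumed, function name ours), not the authors' implementation.

```python
import numpy as np

def corrector_phase(M, xbar, sbar, mu, theta_bar):
    """Corrector phase of one predictor-corrector step."""
    n = len(xbar)
    J = np.block([[np.diag(sbar), np.diag(xbar)],
                  [M,             -np.eye(n)]])
    Jinv = np.linalg.inv(J)        # computed once and reused (a factorization in practice)
    e = np.ones(n)

    # (2.8): first corrector direction (ubar, vbar); (2.10): unit step
    d = Jinv @ np.concatenate([(1 - theta_bar) * mu * e - xbar * sbar, np.zeros(n)])
    ubar, vbar = d[:n], d[n:]
    xhat, shat = xbar + ubar, sbar + vbar

    uv = ubar @ vbar
    if uv == 0.0:
        return xhat, shat          # accept (xhat, shat), cf. (2.12)

    # (2.13): second corrector direction (uhat, vhat)
    d = Jinv @ np.concatenate([-(uv / n) * e, np.zeros(n)])
    uhat, vhat = d[:n], d[n:]

    # (2.15): delta(theta) = uv(1-theta) + (vbar'uhat + ubar'vhat)theta + uhat'vhat theta^2
    c2, c1, c0 = uhat @ vhat, (vbar @ uhat + ubar @ vhat) - uv, uv
    roots = np.roots([c2, c1, c0]) if c2 != 0 else np.array([-c0 / c1])
    candidates = [t.real for t in roots if abs(t.imag) < 1e-12 and t.real > 0]
    theta_hat = min(candidates) if candidates else 1.0

    # (2.16): new current pair
    return xhat + theta_hat * uhat, shat + theta_hat * vhat
```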

Using the technique of [5] (see also [20]), we can compute explicitly the largest number θ̄ ∈ [0, 1] satisfying (2.6). The result is summarized in the following lemma.

LEMMA 2.1. If (x, s) ∈ N₀, then the largest number θ̄ ∈ [0, 1] satisfying (2.6) is given by

f = (1/μ)Xs - e,  g = (1/μ)Uv,  δ = ‖g‖,  α₀ = β² - ‖f‖²,  α₁ = fᵀg,

(2.17a) θ₁ = α₀/(|α₁| + √(α₁² + α₀δ²)),

(2.17b) θ̄ = 2/(1 + √(1 + 4/θ₁)),

where (u, v) is the solution of the linear system (2.5). Moreover, the pair (x̄, s̄) defined by (2.7) satisfies

(2.18) ‖X̄s̄ - (1 - θ̄)μe‖ ≤ β(1 - θ̄)μ,  x̄ > 0,  s̄ > 0.
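If one prefers not to rely on the closed form (2.17), the defining property (2.6) can also be exploited numerically: the admissible step lengths form an interval [0, θ̄], so θ̄ can be bracketed by bisection. A small illustrative sketch (our own, not part of the paper's algorithm; NumPy assumed):

```python
import numpy as np

def largest_theta(x, s, u, v, beta, tol=1e-12):
    """Approximate the largest theta in [0, 1] such that
    ||x(t)s(t) - (1-t)*mu*e|| <= beta*(1-t)*mu for all t in [0, theta]  (cf. (2.6))."""
    mu = x @ s / len(x)

    def ok(t):
        xt, st = x + t * u, s + t * v
        return np.linalg.norm(xt * st - (1 - t) * mu) <= beta * (1 - t) * mu

    if ok(1.0):
        return 1.0
    lo, hi = 0.0, 1.0                 # ok(lo) holds since (x, s) is centered; ok(hi) fails
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo
```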


COROLLARY 2.2. Let x, s, a, b be four n-dimensional vectors with x > 0 and s > 0, and let M ∈ R^{n×n} be a P*(κ)-matrix. Then the solution (u, v) of the linear system

(2.19a) Su + Xv = a,

(2.19b) Mu - v = b

satisfies the following relations:

(2.20a) ‖Du‖ ≤ ‖b̃‖ + √(‖ã‖² + ‖b̃‖² + 2κ‖r̃‖²),

(2.20b) ‖D⁻¹v‖ ≤ √(‖ã‖² + ‖b̃‖² + 2κ‖r̃‖²),

(2.20c) ‖Du‖² + ‖D⁻¹v‖² ≤ ‖ã‖² + 2κ‖r̃‖² + 2‖b̃‖² + 2‖b̃‖√(‖ã‖² + ‖b̃‖² + 2κ‖r̃‖²),

(2.20d) ‖Uv‖ ≤ (√2/4)(‖b̃‖ + √(‖ã‖² + ‖b̃‖² + 2κ‖r̃‖²))²,

where

D = X^{-1/2}S^{1/2},  ã = (XS)^{-1/2}a,  b̃ = D⁻¹b,  r̃ = ã + b̃.

Proof. By premultiplying (2.19a) by (XS)^{-1/2} and (2.19b) by D⁻¹ we get

(2.21a) ũ + (ṽ + b̃) = ã + b̃,

(2.21b) M̃ũ - (ṽ + b̃) = 0,

where ũ = Du, ṽ = D⁻¹v and M̃ = D⁻¹MD⁻¹. It is easily seen that M̃ ∈ P*(κ) (see [7, Theorem 3.5]) and it follows from [7, Lemma 3.4] that

ũᵀM̃ũ = ũᵀ(ṽ + b̃) ≥ -κ‖ã + b̃‖².

Then we have

(2.22) ‖ũ‖² + ‖ṽ‖² = ‖ã‖² - 2ũᵀ(ṽ + b̃) + 2ũᵀb̃ ≤ ‖ã‖² + 2κ‖r̃‖² + 2‖ũ‖‖b̃‖.

Therefore,

(2.23) (‖ũ‖ - ‖b̃‖)² + ‖ṽ‖² ≤ ‖ã‖² + ‖b̃‖² + 2κ‖r̃‖².

Finally, (2.20a)-(2.20b) follow from (2.23), and (2.20c) follows from (2.20a) and (2.22). It is easily seen from ‖Uv‖ = ‖Ũṽ‖ that (2.20d) follows from Proposition 2.2 of Potra [20]. □
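The scaling D = X^{-1/2}S^{1/2} underlying Corollary 2.2 is easy to exercise numerically. The following sketch (our illustration, assuming NumPy) solves (2.19) for random data and verifies the scaled identities (2.21), i.e. Du + D⁻¹v = ã and M̃(Du) = D⁻¹v + b̃.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T                                     # positive semi-definite, hence P*(0)
x, s = rng.random(n) + 0.1, rng.random(n) + 0.1
a, b = rng.standard_normal(n), rng.standard_normal(n)

# Solve (2.19): S u + X v = a,  M u - v = b.
J = np.block([[np.diag(s), np.diag(x)], [M, -np.eye(n)]])
u, v = np.split(np.linalg.solve(J, np.concatenate([a, b])), 2)

d = np.sqrt(s / x)                              # diagonal of D = X^{-1/2} S^{1/2}
a_t = a / np.sqrt(x * s)                        # a~ = (XS)^{-1/2} a
b_t = b / d                                     # b~ = D^{-1} b
M_t = M / np.outer(d, d)                        # M~ = D^{-1} M D^{-1}

print(np.allclose(d * u + v / d, a_t))          # (2.21a) after cancelling b~
print(np.allclose(M_t @ (d * u), v / d + b_t))  # (2.21b)
```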

We are now ready to prove that the algorithm described in this section is well defined. For ease of later reference, let us first formally define our algorithm.

Algorithm 2.3. Choose (x⁰, s⁰) ∈ N₀ and set ν₀ = 1. For k = 0, 1, ..., do A1 through A7:

A1. Set x = xᵏ, s = sᵏ and define μ = xᵀs/n, r = s - Mx - q, ν = νₖ.

A2. If xᵀs ≤ ε and ‖r‖ ≤ ε, then report (x, s) ∈ F*_ε and terminate.

A3. Find the solution u, v of the linear system (2.5), define x̄, s̄ as in (2.7), and set ν⁺ = (1 - θ̄)ν, where θ̄ is given by (2.17).

A4. Find the solution ū, v̄ of the linear system (2.8), and define x̂, ŝ as in (2.10).

A5. If ūᵀv̄ = 0, then set x⁺ = x̂, s⁺ = ŝ and go to A7.

A6. Find the solution û, v̂ of the linear system (2.13), and define x⁺, s⁺ as in (2.16), with θ̂ being the smallest positive root of (2.15).

A7. Set xᵏ⁺¹ = x⁺, sᵏ⁺¹ = s⁺, θₖ = θ̄, μₖ₊₁ = μ⁺, rᵏ⁺¹ = r⁺, νₖ₊₁ = ν⁺.
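Putting the pieces together, one full iteration consists of the predictor solve, the step length θ̄, and the corrector phase. A compact driver sketch (ours, not the authors' code; it assumes the hypothetical helpers predictor_direction, largest_theta and corrector_phase from the earlier sketches, dense NumPy data, and illustrative parameter values):

```python
import numpy as np

def pc_iteration(M, q, x, s, beta):
    """One predictor-corrector iteration (steps A1, A3-A6 of Algorithm 2.3)."""
    u, v = predictor_direction(M, q, x, s)                 # A3: predictor direction (2.5)
    mu = x @ s / len(x)
    theta_bar = largest_theta(x, s, u, v, beta)            # A3: step length, cf. (2.6)/(2.17)
    xbar, sbar = x + theta_bar * u, s + theta_bar * v      # (2.7)
    return corrector_phase(M, xbar, sbar, mu, theta_bar)   # A4-A6

def solve_lcp(M, q, x, s, beta=0.4, eps=1e-10, max_iter=100):
    """Run Algorithm 2.3 until the stopping test A2 is met (illustrative parameters)."""
    for _ in range(max_iter):
        r = s - M @ x - q
        if x @ s <= eps and np.linalg.norm(r) <= eps:      # A2
            break
        x, s = pc_iteration(M, q, x, s, beta)
    return x, s
```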

THEOREM 2.4. For any integer k ≥ 0, Algorithm 2.3 defines a pair

(2.24) (xᵏ, sᵏ) ∈ N₀,

and the corresponding residuals satisfy

(2.25) rᵏ = νₖ r⁰,  μₖ = νₖ μ₀,

where

(2.26) ν₀ = 1,  νₖ = ∏_{i=0}^{k-1} (1 - θᵢ).

Proof. The proof is by induction. For k = 0, (2.24) and (2.25) are clearly satisfied. Suppose that they are satisfied for some k ≥ 0. As in Algorithm 2.3, we shall omit the index k. Therefore we can write

(x, s) ∈ N₀,  r = νr⁰,  μ = νμ₀.

The fact that (2.25) holds for k + 1 follows immediately from (2.12). By applying Corollary 2.2 to (2.8) and using (2.18), we deduce that

(2.27a) ‖Ūv̄‖ ≤ [(1 + 4κ(1 + 2κ))/(8(1 - β))] β²(1 - θ̄)μ,

(2.27b) ‖D̄ū‖² + ‖D̄⁻¹v̄‖² ≤ (1 + 2κ)β²(1 - θ̄)μ/(1 - β),


where D̄ = X̄^{-1/2}S̄^{1/2}. On the other hand, from [7, Lemma 3.4] we have

(2.28) ūᵀv̄ ≥ -κ‖(X̄S̄)^{-1/2}(X̄s̄ - (1 - θ̄)μe)‖² ≥ -κβ²(1 - θ̄)μ/(1 - β).

It is easily seen from (2.11) and (2.28) that

(2.29) μ̂ ≥ (1 - θ̄)(1 - κβ²/((1 - β)n))μ.

By using (2.4) and (2.27)-(2.29), together with the fact that n⁻¹(ūᵀv̄)e is the orthogonal projection of Ūv̄ on span(e), we can write

(2.30) ‖X̂ŝ - μ̂e‖ = ‖Ūv̄ - n⁻¹(ūᵀv̄)e‖ ≤ ‖Ūv̄‖ ≤ [(1 + 4κ(1 + 2κ))/(8(1 - β))] β²(1 - θ̄)μ ≤ [(1 + 4κ(1 + 2κ))/(8(1 - β - κβ²/n))] β² μ̂ ≤ αμ̂.

By using (2.4), (2.9) and (2.27a), we obtain

(2.31) X̄(θ)s̄(θ) ≥ θ(1 - θ̄)(1 - α)μe > 0,  for 0 < θ ≤ 1.

Obviously, we have x̄(0) = x̄ > 0, s̄(0) = s̄ > 0 and x̄(1) = x̂, s̄(1) = ŝ, so that if x̂ > 0, ŝ > 0 fails, then there must exist a θ ∈ (0, 1] and an index i such that [x̄(θ)]ᵢ[s̄(θ)]ᵢ = 0, which contradicts (2.31). Therefore we have that x̂ > 0, ŝ > 0. In the case when ūᵀv̄ = 0 we have x⁺ = x̂, s⁺ = ŝ, so that (2.24) holds for k + 1 as well. It remains to prove that (2.24) is also satisfied when ūᵀv̄ ≠ 0. Applying Corollary 2.2 to (2.13), we deduce that

(2.32a) ‖D̄û‖ ≤ √(1 + 2κ) ‖n⁻¹(ūᵀv̄)(X̄S̄)^{-1/2}e‖ ≤ √(1 + 2κ) |ūᵀv̄| / √(n(1 - β)(1 - θ̄)μ),

(2.32b) ‖D̄û‖² + ‖D̄⁻¹v̂‖² ≤ (1 + 2κ)(ūᵀv̄)² / (n(1 - β)(1 - θ̄)μ),

which further implies that

‖V̄û + Ūv̂‖ ≤ ‖D̄⁻¹v̄‖‖D̄û‖ + ‖D̄ū‖‖D̄⁻¹v̂‖ ≤ (1 + 2κ)β|ūᵀv̄| / ((1 - β)√n).

Similarly, we have

(2.33) ‖Ûv̂‖ ≤ (1 + 2κ)(ūᵀv̄)² / (2n(1 - β)(1 - θ̄)μ).

By substituting the above inequalities in (2.15), we get δ(θ) ≤ (ūᵀv̄/n)q(θ) if ūᵀv̄ > 0, and δ(θ) ≥ (ūᵀv̄/n)q(θ) if ūᵀv̄ < 0, where

q(θ) = 1 - θ + [(1 + 2κ)β/(1 - β)]θ + [(1 + 2κ)(1 + 4κ(1 + 2κ))β²/(16(1 - β)²)]θ².

Together with (2.4d), which implies q(2) < 0 for n ≥ 2, we have q(0) > 0 > q(2). Therefore the positive root θ̂ of the quadratic equation δ(θ) = 0 satisfies 0 < θ̂ < 2. Because n⁻¹(ūᵀv̄)e is the orthogonal projection of Ūv̄ onto span(e), and 0 < θ̂ < 2, we have

‖Ūv̄ - n⁻¹(ūᵀv̄)e‖ ≤ ‖Ūv̄‖.

By using (2.4), (2.12), (2.14), (2.16), (2.27) and (2.32)-(2.33), we get

‖X⁺s⁺ - μ⁺e‖ ≤ 2‖Ūv̄‖ ≤ [(1 + 4κ(1 + 2κ))/(4(1 - β))] β²(1 - θ̄)μ ≤ αμ⁺.

The positivity of x⁺ and s⁺ can also be proved by continuity, based on the following inequality, which is obtained from (2.4), (2.14a), (2.27a) and (2.32)-(2.33):

X̂(θ)ŝ(θ) ≥ (1 - θ̄)(1 - α)μe > 0,  ∀ 0 ≤ θ ≤ 2.

Finally, it follows that (2.24) is satisfied for k + 1 and the proof of our theorem is complete. □

3. GLOBAL CONVERGENCE AND POLYNOMIAL COMPLEXITY

In what follows we assume that F* is nonempty. Under this assumption we shall prove that Algorithm 2.3, with ε = 0, is globally convergent in the sense that

lim_{k→∞} μₖ = 0  and  lim_{k→∞} rᵏ = 0.

LEMMA 3.1. Assume that F* is nonempty. Let (x*, s*) ∈ F* and let the sequence (xᵏ, sᵏ) be generated by Algorithm 2.3. Then

(3.1a) νₖ((xᵏ)ᵀs⁰ + (sᵏ)ᵀx⁰) ≤ (1 + 4κ)(2 + ζ)nμₖ,

(3.1b) (1 - νₖ)((xᵏ)ᵀs* + (sᵏ)ᵀx*) ≤ (1 + 4κ)((1 + νₖ) + (1 - νₖ)ζ)nμₖ,


where

(3.2) ζ = ((x⁰)ᵀs* + (s⁰)ᵀx*)/((x⁰)ᵀs⁰).

Proof. By writing x, s, ν for xᵏ, sᵏ, νₖ, respectively, and by using (2.25), we have

νs⁰ + (1 - ν)s* - s = M(νx⁰ + (1 - ν)x* - x).

By the definition of a P*(κ)-matrix (cf. (1.3)) and using the fact that (x*, s*) ≥ 0 and (x, s) > 0, we have

(3.3) [νx⁰ + (1 - ν)x* - x]ᵀ[νs⁰ + (1 - ν)s* - s] ≥ -4κ(ν²(x⁰)ᵀs⁰ + (1 - ν)ν((x*)ᵀs⁰ + (x⁰)ᵀs*) + xᵀs),

where J₊ = {i : [νx⁰ + (1 - ν)x* - x]ᵢ[νs⁰ + (1 - ν)s* - s]ᵢ > 0}.

On the other hand, we have

(3.4) [νx⁰ + (1 - ν)x* - x]ᵀ[νs⁰ + (1 - ν)s* - s] = ν²nμ₀ + (1 - ν)ν((x⁰)ᵀs* + (s⁰)ᵀx*) - ν((x⁰)ᵀs + (s⁰)ᵀx) + xᵀs - (1 - ν)(sᵀx* + xᵀs*) + (1 - ν)²(x*)ᵀs*.

The desired inequalities (3.1) follow from (3.3) and (3.4) by using (2.25) and the fact that (x*)ᵀs* = 0, sᵀx* + xᵀs* ≥ 0 and sᵀx⁰ + xᵀs⁰ > 0. □

From Lemma 3.1 and Corollary 2.2 we shall derive a useful bound for the quantities

(3.5) δₖ = ‖Uᵏvᵏ‖/μₖ,  k ≥ 0,

where (uᵏ, vᵏ) is obtained at step A3 of Algorithm 2.3. This bound is going to play an important role in our analysis.

LEMMA 3.2. Let (uᵏ, vᵏ) be obtained in the k-th iteration at step A3 of Algorithm 2.3 and let δₖ be defined by (3.5). Then

δₖ ≤ δ* = √2 n(1 + η)²(1 + 2κ),

where

η = √n (1 + 4κ)(2 + ζ)‖(S⁰)⁻¹r⁰‖∞ / √(1 - α),

with ζ given by (3.2).

Proof. We omit the index k. Applying Corollary 2.2 to the linear system (2.5) and using Lemma 3.1, we can write

(3.6a) ‖ã‖ = ‖(XS)^{1/2}e‖ = √(nμ),

(3.6b) ‖b̃‖ = ‖(XS)^{-1/2}Xr‖ ≤ (1 - α)^{-1/2} μ^{-1/2} ‖Xr‖ ≤ ν(1 - α)^{-1/2} μ^{-1/2} ‖(S⁰)⁻¹r⁰‖∞ xᵀs⁰ ≤ η√(nμ),

(3.6c) ‖r̃‖ ≤ ‖ã‖ + ‖b̃‖ ≤ (1 + η)√(nμ).

Finally, the required inequality follows by substituting (3.6) in (2.20d). □

It is easily seen from (2.17a) that

1/θ₁ = (|α₁| + √(α₁² + α₀δ²))/α₀.

The right-hand side of the above equality is increasing in |α₁| and decreasing in α₀. Using the fact that α₀ ≥ β² - α² > 0 and |α₁| ≤ ‖f‖‖g‖ ≤ αδ, we obtain

(3.7) 1/θ₁ ≤ δ/(β - α).

Finally, from Lemma 3.2, (2.17b) and (3.7) we obtain

(3.8) θ̄ₖ ≥ θ* = 2/(1 + √(1 + 4δ*/(β - α))),  k ≥ 0.

With the help of (3.8) and Theorem 2.4 we can easily prove the main result of this section, which basically states that Algorithm 2.3 is globally convergent at a linear rate.

THEOREM 3.3. Suppose that the optimal set F* is nonempty.

(i) If ε = 0, then Algorithm 2.3 either finds an optimal solution z* ∈ F* in a finite number of steps or produces an infinite sequence zᵏ = (xᵏ, sᵏ) such that

lim_{k→∞} (xᵏ)ᵀsᵏ = 0  and  lim_{k→∞} rᵏ = 0.

(ii) If ε > 0, then Algorithm 2.3 terminates with a z ∈ F*_ε in at most

K_ε = ⌈ln(ε₀/ε)/|ln(1 - θ*)|⌉

iterations, where ε₀ = max{μ₀, ‖r⁰‖}, and ⌈y⌉ denotes the smallest integer greater than or equal to y.
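For concreteness, the iteration bound in part (ii) is straightforward to evaluate once a lower bound θ* for the step lengths is available. A two-line illustration (ours), using hypothetical values for θ*, ε₀ and ε:

```python
import math

theta_star, eps0, eps = 0.05, 1.0e3, 1.0e-8     # illustrative values only
K = math.ceil(math.log(eps0 / eps) / abs(math.log(1.0 - theta_star)))
print(K)   # number of iterations guaranteed by Theorem 3.3 (ii)
```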


From the above theorem we can obtain polynomial complexity under certain assumptions on the starting point. For the case when the starting point is feasible, or close to being feasible, it is easily seen from (2.4e), (3.8), Lemma 3.2 and Theorem 3.3 that the following corollary holds.

COROLLARY 3.4. Assume that F* is nonempty and that the starting point is chosen such that there is a constant C, independent of n, satisfying the inequality

(2 + ζ)‖(S⁰)⁻¹r⁰‖∞ ≤ C(1 + κ)/n.

Then Algorithm 2.3 terminates in at most K_ε = O((1 + κ)√n ln(ε₀/ε)) iterations.

Most of the complexity results on infeasible-interior-point methods are obtained for starting points of the form

(3.9) x⁰ = ρ_p e,  s⁰ = ρ_d e,

where ρ_p and ρ_d are sufficiently large positive constants (big-M initialization). For such starting points we clearly have (x⁰, s⁰) ∈ N₀ and

ζ = ‖x*‖₁/(nρ_p) + ‖s*‖₁/(nρ_d),  for some (x*, s*) ∈ F*,

‖(S⁰)⁻¹r⁰‖∞ ≤ 1 + (ρ_p/ρ_d)‖Me‖∞ + (1/ρ_d)‖q‖∞.

Therefore, if ρ_p and ρ_d satisfy the inequalities

(3.10) ρ_p ≥ n⁻¹‖x*‖₁,  ρ_d ≥ max{ρ_p‖Me‖∞, ‖q‖∞, n⁻¹‖s*‖₁},

for some (x*, s*) ∈ F*, then η = O((1 + κ)√n) and therefore we obtain the following complexity result from (2.4e), (3.8), Lemma 3.2 and Theorem 3.3.

COROLLARY 3.5. Assume that F* is nonempty and that the starting point is chosen of the form (3.9) such that (3.10) is satisfied for some (x*, s*) ∈ F*. Then Algorithm 2.3 terminates in at most

(3.11) K_ε = O((1 + κ)² n ln(ε₀/ε))

iterations.
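A big-M starting point of the form (3.9) is trivial to construct, and it is automatically α-centered because X⁰s⁰ = ρ_pρ_d e is an exact multiple of e. A small sketch (ours, assuming NumPy; the safety factor is arbitrary) picks ρ_p and ρ_d from the problem data in the spirit of (3.10); the conditions involving the unknown solution (x*, s*) cannot be checked here.

```python
import numpy as np

def big_m_start(M, q, scale=10.0):
    """Starting point x0 = rho_p * e, s0 = rho_d * e in the spirit of (3.9)-(3.10).
    The parts of (3.10) involving the unknown (x*, s*) are not enforced; 'scale'
    is an arbitrary safety factor."""
    n = len(q)
    e = np.ones(n)
    rho_p = scale
    rho_d = scale * max(rho_p * np.linalg.norm(M @ e, np.inf),
                        np.linalg.norm(q, np.inf), 1.0)
    x0, s0 = rho_p * e, rho_d * e
    # x0 * s0 is an exact multiple of e, so ||X0 s0 - mu0 e|| = 0 <= alpha * mu0.
    return x0, s0
```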

4. QUADRATIC CONVERGENCE

In the previous section we have proved that Algorithm 2.3 is globally Q-linearly convergent under very general assumptions. Polynomial complexity was obtained under certain assumptions on the starting point. In what follows we study the quadratic convergence of Algorithm 2.3 under the assumption that (1.1) has a strictly complementary solution.

Let us denote by F** the set of all such solutions, i.e.,

F** = {(x, s) ∈ F* : [x]ᵢ + [s]ᵢ > 0, i = 1, 2, ..., n}.

It is well known that there is a unique partition

B ∪ N = {1, 2, ..., n},  B ∩ N = ∅,

such that for any (x, s) ∈ F** we have

[x]ᵢ > 0, [s]ᵢ = 0, ∀ i ∈ B  and  [x]ᵢ = 0, [s]ᵢ > 0, ∀ i ∈ N.

This means that the "basic" and "non-basic" variables are invariant for any strictly complementary solution. Let us denote the corresponding partition of M by

M = ( M_BB  M_BN ; M_NB  M_NN ).

Also, for any vector y ∈ Rⁿ we denote by y_B the vector of components [y]ᵢ, i ∈ B, and by y_N the vector of components [y]ᵢ, i ∈ N.

LEMMA 4.1. The iteration sequence {(xᵏ, sᵏ)} generated by Algorithm 2.3 is bounded, i.e.,

(4.1) 0 ≤ [xᵏ]ᵢ, [sᵏ]ᵢ ≤ (1 + 4κ)(2 + ζ)(x⁰)ᵀs⁰ · max_j{1/[s⁰]ⱼ, 1/[x⁰]ⱼ}.

Proof. It is easily seen from (2.25) and (3.1a) that

(xᵏ)ᵀs⁰ + (sᵏ)ᵀx⁰ ≤ (1 + 4κ)(2 + ζ)(x⁰)ᵀs⁰,

which further implies our desired result. □

LEMMA 4.2. Let {zᵏ = (xᵏ, sᵏ)} be generated by Algorithm 2.3. For any solution z* = (x*, s*) ∈ F*, there is a constant γ₁ such that

(4.2) |[xᵏ(1)]ᵢ - [x*]ᵢ|, |[sᵏ(1)]ᵢ - [s*]ᵢ| ≤ γ₁‖zᵏ - z*‖²/μₖ.

Proof. For any z* = (x*, s*) ∈ F*, using (2.5) we have

( Sᵏ  Xᵏ ; M  -I ) ( xᵏ(1) - x* ; sᵏ(1) - s* ) = ( (Xᵏ - X*)(sᵏ - s*) ; 0 ).


Applying Corollary 2.2 to the above linear system, we have

(4.3) ‖Dᵏ(xᵏ(1) - x*)‖ ≤ √(1 + 2κ) ‖(XᵏSᵏ)^{-1/2}(xᵏ - x*)(sᵏ - s*)‖ ≤ [√(1 + 2κ)/√((1 - α)μₖ)] ‖xᵏ - x*‖ ‖sᵏ - s*‖,

where Dᵏ = (Xᵏ)^{-1/2}(Sᵏ)^{1/2} = Diag(dᵏ). Thus

|[xᵏ(1)]ᵢ - [x*]ᵢ| = [dᵏ]ᵢ⁻¹ |[dᵏ]ᵢ([xᵏ(1)]ᵢ - [x*]ᵢ)| ≤ ([xᵏ]ᵢ/[sᵏ]ᵢ)^{1/2} ‖Dᵏ(xᵏ(1) - x*)‖ ≤ γ₁‖zᵏ - z*‖²/μₖ,

with γ₁ = √(1 + 2κ) γ₀/(1 - α), where γ₀ denotes the bound from (4.1). The inequality involving s can be obtained similarly. □

LEMMA 4.3. Let F** ≠ ∅. Then there is a constant γ₂ such that

[xᵏ]ᵢ ≤ γ₂(xᵏ)ᵀsᵏ, ∀ i ∈ N,   [sᵏ]ᵢ ≤ γ₂(xᵏ)ᵀsᵏ, ∀ i ∈ B.

Proof. Let (x*, s*) ∈ F**. It is easily seen from (3.1b) and the fact that νₖ ≤ ν₁, k ≥ 1, that

(xᵏ)ᵀs* + (sᵏ)ᵀx* ≤ (1 + 4κ)((1 + νₖ)/(1 - νₖ) + ζ)nμₖ ≤ (1 + 4κ)((2 - θ₀)/θ₀ + ζ)(xᵏ)ᵀsᵏ.

Therefore,

[xᵏ]ᵢ ≤ [(1 + 4κ)((2 - θ₀)/θ₀ + ζ)/[s*]ᵢ] (xᵏ)ᵀsᵏ,  ∀ i ∈ N,

and

[sᵏ]ᵢ ≤ [(1 + 4κ)((2 - θ₀)/θ₀ + ζ)/[x*]ᵢ] (xᵏ)ᵀsᵏ,  ∀ i ∈ B.

Hence the desired result holds with

γ₂ = (1 + 4κ)((2 - θ₀)/θ₀ + ζ) max{ max_{i∈B} 1/[x*]ᵢ, max_{i∈N} 1/[s*]ᵢ }.  □

292 Jun Ji, Florian A. Potra

llDo (ro

(1)-x-)ll<

t6

Applying Corollary 2.2tothe above linear system, we have

I

(4.3)

llDo (*o

(1)-x.ll<Jr-x¡¡ (xoso) 1(xo - x-)("0 -t-)ll<

-

^ll+2.

llto

- t.llo

Jt

_

o Jrr*

ll

where Dk

=(Xu;-715t

¡1 = Diag(d). Thus

l["u (1)],

-[x.],1= [d]l'lldl,[ro

(1)],

-[x-], )l<

lro ),

The inequality involving s can be obtained similarly'

!

LElvff\4a 4.3. Let

g"

+ ø. Then there is a constant y

,

such that

[¡o], I yr(ro)''

ro

, Yie !tt, ltu], I yrlxl¡r sk, Yie'Ø].

Proof.

Let

(;r.,

s*¡e!v,. It is

easily seen from (3.1b) and the fact that

Vr SVr, k>l

that

(ro

)tr.

+(to

)t x.

< (l +arc) ((l + Vr

)/ (1-

Vr

)+Ç)npt

<

<(l+4r)((2-60)/eo +O (*o)"0

.

Therefore,

(1+4r)((2-0n)/00 +o ("u)tr', vie9,t,

[s- ], and

trr,, .W (*o)r ro,

v

i

e Ø).

(["0 ], lro l, ) t

r.k -

-* 12

<Y,!T, with

T

t=

tr

(l+2rc)

yo 1-cr

["u ],

(

Hence the desired result holds with

Tz = (1 +

r)

((2

-

0o) / 0o

+O

max

11

tftÍtX:, lnâX

"

i e%

fx 1- i.rr [s

l,


THEOREM 4.6. If the linear complementarity problem (1.1) has a strictly complementary solution, then there are two constants γ̄ and γ̃, independent of k, such that the points produced by Algorithm 2.3 satisfy

(4.9) μₖ₊₁ ≤ γ̄μₖ²,  ‖rᵏ⁺¹‖ ≤ γ̃‖rᵏ‖²,  k ≥ 1.

Proof. From (2.17), (3.7), (3.5) and (4.8) it follows that

θ̄ₖ ≥ 1 - δₖ/(β - α) ≥ 1 - γ̄μₖ,  with γ̄ = √n γ₅²/(β - α).

From (4.7) we see that (4.9) holds with γ̃ = γ̄μ₀/‖r⁰‖. □

REFERENCES

1. J. F. Bonnans and C. C. Gonzaga, Convergence of interior point algorithms for the monotone linear complementarity problem, Mathematics of Operations Research 21, 1 (1996), 1-25.

2. R. M. Freund, An Infeasible-Start Algorithm for Linear Programming Whose Complexity Depends on the Distance from the Starting Point to the Optimal Solution, Working Paper 3559-93-MSA, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02139, USA, 1993.

3. O. Güler, Generalized linear complementarity problems, Mathematics of Operations Research 20, 2 (1995), 441-448.

4. A. J. Hoffman, On approximate solutions of systems of linear inequalities, Journal of Research of the National Bureau of Standards 49 (1952), 263-265.

5. J. Ji and F. A. Potra, On the Average Complexity of Finding an ε-Optimal Solution for Linear Programming, Reports on Computational Mathematics 25, Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA, 1992.

6. J. Ji, F. A. Potra and S. Huang, A predictor-corrector method for linear complementarity problems with polynomial complexity and superlinear convergence, Journal of Optimization Theory and Applications 85 (1995), 187-199.

7. M. Kojima, N. Megiddo, T. Noma and A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Lecture Notes in Comput. Sci. 538, Springer Verlag, New York, 1991.

8. M. Kojima, N. Megiddo and S. Mizuno, A primal-dual infeasible-interior-point algorithm for linear programming, Mathematical Programming 61 (1993), 263-280.

9. M. Kojima, S. Mizuno and M. J. Todd, Infeasible-interior-point primal-dual potential-reduction algorithms for linear programming, SIAM J. Optimization 5, 1 (1995), 52-67.

10. I. J. Lustig, R. E. Marsten and D. F. Shanno, Computational experience with a globally convergent primal-dual predictor-corrector algorithm for linear programming, Mathematical Programming 66 (1994), 123-135.

11. J. Miao, A quadratically convergent O((1+κ)√n L)-iteration algorithm for the P*(κ)-matrix linear complementarity problem, Mathematical Programming 69 (1995), 355-368.

12. J. Miao, Two infeasible interior-point predictor-corrector algorithms for linear programming, SIAM J. Optimization 6, 3 (1996), 587-599.

13. S. Mizuno, Polynomiality of infeasible-interior-point algorithms for linear programming, Mathematical Programming 67, 1 (1994), 109-119.

14. S. Mizuno, M. J. Todd and Y. Ye, On adaptive-step primal-dual interior-point algorithms for linear programming, Mathematics of Operations Research 18, 4 (1993), 964-981.

15. R. D. C. Monteiro and S. J. Wright, Local convergence of interior-point algorithms for degenerate monotone LCP, Computational Optimization and Applications 3 (1994), 131-155.

16. R. D. C. Monteiro and S. J. Wright, Superlinear primal-dual affine scaling algorithms for LCP, SIAM J. Optimization 6, 1 (1996), 1-18.

17. A. M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, Academic Press, New York, 1973.

18. F. A. Potra, An infeasible interior-point predictor-corrector algorithm for linear programming, SIAM J. Optimization 6, 1 (1996), 19-32.

19. F. A. Potra, A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points, Mathematical Programming 67 (1994), 383-406.

20. F. A. Potra, An O(nL) infeasible-interior-point algorithm for LCP with quadratic convergence, Annals of Operations Research 62 (1996), 81-102.

21. …, … Horizontal Linear Complementarity Problems, School of Operations Research and Industrial Engineering, Cornell University, ….

22. ….

23. S. …, … -point algorithm for linear complementarity problems, … (1993).

24. S. J. Wright, A path-following infeasible-interior-point algorithm for linear complementarity problems, Optimization Methods and Software 2 (1993), 79-106.

25. Y. Ye and K. Anstreicher, On quadratic and O(√n L) convergence of a predictor-corrector algorithm for LCP, Mathematical Programming 62, 3 (1993), 537-551.

26. Y. Ye, O. Güler, R. A. Tapia and Y. Zhang, A quadratically convergent O(√n L)-iteration algorithm for linear programming, Mathematical Programming 59, 2 (1993), 151-162.

27. Y. Ye, M. J. Todd and S. Mizuno, An O(√n L)-iteration homogeneous and self-dual linear programming algorithm, Mathematics of Operations Research 19 (1994), 53-67.

28. Y. Zhang, On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem, SIAM J. Optimization 4, 1 (1994), 208-227.

29. Y. Zhang and D. Zhang, Superlinear convergence of infeasible-interior-point methods for linear programming, Mathematical Programming 66, 3 (1994), 361-377.

Jun Ji
Department of Mathematics and Computer Science
Valdosta State University
Valdosta, GA 31698, USA
E-mail: junji@valdosta.edu

Florian A. Potra
Department of Mathematics
The University of Iowa
Iowa City, Iowa 52242, USA
E-mail: potra@math.uiowa.edu

Received May 10, 1997
