
Rev. Anal. Numér. Théor. Approx., vol. 40 (2011) no. 2, pp. 95–106, ictp.acad.ro/jnaat

COMPLEXITY ANALYSIS OF PRIMAL-DUAL ALGORITHMS FOR THE SEMIDEFINITE LINEAR COMPLEMENTARITY PROBLEM

MOHAMED ACHACHE and NAIMA BOUDIAF

Abstract. In this paper a primal-dual path-following interior-point algorithm for the monotone semidefinite linear complementarity problem is presented. The algorithm is based on Nesterov-Todd search directions and on a suitable proximity measure for tracing the central path approximately. We provide a unified analysis for both long- and small-update primal-dual algorithms. Finally, the iteration bounds for these algorithms are obtained.

MSC 2000. 90C30, 90C33.

Keywords. Semidefinite linear complementarity problems, interior point methods, long- and small-update primal-dual algorithms, polynomial complexity.

1. INTRODUCTION

Let $S^n$ be the space of all real symmetric matrices of order $n$, $S^n_+$ the cone of positive semidefinite matrices in $S^n$, and $S^n_{++}$ the cone of all symmetric positive definite matrices. The expression $X \succeq 0$ ($X \succ 0$) means that $X \in S^n_+$ ($X \in S^n_{++}$). Let $L : S^n \to S^n$ be a given linear transformation and $Q \in S^n$. The semidefinite linear complementarity problem (SDLCP) is to find a couple of matrices $(X, Y) \in \mathcal{A}$ such that:

(1) $X \succeq 0$, $Y \succeq 0$, and $XY = 0$,

where

$\mathcal{A} = \{(X, Y) \in S^n \times S^n : Y - L(X) = Q\}$

is an affine subset of $S^n \times S^n$.

This problem has been the object of many studies in recent years. Its growing importance can be measured by its applications in control theory and in various areas of optimization. It can also be viewed as a generalization of the standard linear complementarity problem (LCP), and it includes the geometric monotone semidefinite linear

Department of Mathematics, Faculty of Science, University Ferhat Abbas of Setif, 19000, Algeria, Laboratory of Fundamental and Numerical Mathematics, e-mail: achache [email protected].

Department of Mathematics, Faculty of Science, University El Hadj Lakhdar, Batna 05000, Algeria, e-mail: [email protected].


complementarity problem introduced by Kojima et al. [9], which contains the pair of primal-dual semidefinite programming problems (SDP). For more details on (SDLCP) we refer the reader to the references [6-8, 11] and the PhD thesis of Song [20]. Moreover, it turns out that primal-dual path-following interior point algorithms can efficiently solve many problems such as linear, semidefinite, convex quadratic, conic and complementarity problems. These algorithms have polynomial complexity and numerical efficiency (see [1-4, 9, 10, 12-19, 21, 22]). It has been shown that most primal-dual interior point methods (IPMs) and their analysis extend naturally from linear programming to (SDP), and so to the more general context of (SDLCP).

The goal of this paper is to analyze the polynomial complexity of a primal-dual path-following interior-point algorithm for solving (SDLCP). Here, we reconsider the analysis used by Peng et al. (Ref. [17]) for (SDP) and adapt it to the (SDLCP) case. At each interior point iteration the algorithm uses a full Nesterov-Todd (NT) step and a suitable proximity measure for tracing the central path approximately. We also provide a unified analysis for both long- and small-update primal-dual algorithms. Finally, the total iteration bounds for these algorithms are obtained. These polynomial complexities are analogous to those of such methods for linear, quadratic, semidefinite, conic and complementarity problems.

The rest of the paper is organized as follows. In Section 2, the central path with its properties and the Nesterov-Todd search directions are discussed. In Section 3, the algorithm and its complexity analysis are stated. In Section 4, a conclusion and future research directions are given.

Throughout the paper we use the following notation. $\mathbb{R}^{n\times n}$ is the space of all real $n\times n$ matrices and $\|\cdot\|$ denotes the Frobenius norm of matrices. For a given matrix $A \in \mathbb{R}^{n\times n}$, $\det A$ denotes its determinant; if in addition $A$ is nonsingular, then $A^{-1}$ denotes its inverse, whereas $A^T$ is the transpose of $A$. For $X \in S^n_+$, $X^{1/2}$ represents its square root matrix. The trace of a matrix $A$ is the sum of its diagonal entries and is denoted by $\operatorname{Tr}(A)$. Recall that for any two matrices $A = (a_{ij})$ and $B = (b_{ij})$ in $\mathbb{R}^{n\times n}$, their (trace) inner product is defined by $\langle A, B\rangle := \operatorname{Tr}(A^T B) = \sum_{i,j} a_{ij} b_{ij}$. The identity matrix of order $n$ is denoted by $I$. Finally, $g(t) = O(f(t))$ if and only if $g(t) \le k f(t)$ for some positive constant $k$, where $f(t)$ and $g(t)$ are two real valued functions and $t > 0$.
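As a quick numerical aside (ours, not the paper's), the trace inner product and the Frobenius norm defined above can be checked with numpy:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

inner = np.trace(A.T @ B)       # <A, B> = Tr(A^T B)
elementwise = np.sum(A * B)     # = sum over i, j of a_ij * b_ij
fro = np.linalg.norm(A, 'fro')  # ||A|| = sqrt(<A, A>)

print(inner, elementwise, fro**2)  # the first two agree; fro**2 equals <A, A>
```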

2. CENTRAL PATH AND NESTEROV-TODD SEARCH DIRECTIONS FOR SDLCP

In this section we define the central path for SDLCP and its properties and we determine Nesterov-Todd search directions.

The feasible set and the strictly feasible set of (1) are the subsets of $S^n \times S^n$:

$\mathcal{F} = \{(X, Y) \in \mathcal{A} : X \succeq 0,\ Y \succeq 0\},$

$\mathcal{F}^0 = \{(X, Y) \in \mathcal{F} : X \succ 0,\ Y \succ 0\}.$


In the sequel we make the following hypotheses.

Hypothesis 1. The linear transformation $L$ is monotone [6, 20], i.e. $\langle L(X), X\rangle \ge 0$ for all $X \in S^n$.

Hypothesis 2. $\mathcal{F}^0 \neq \emptyset$.

Under our hypotheses it is shown that the set

$S_{cp} = \{(X, Y) \in \mathcal{F} : XY = 0\}$

of solutions of (1) is compact and nonempty [6, 20].

In addition, (1) is equivalent to the following optimization problem:

(2) $(OP)\quad \min_{X,Y}\ [\operatorname{Tr}(XY) : (X, Y) \in \mathcal{A},\ X \succeq 0,\ Y \succeq 0].$

Hence solving (1) is equivalent to finding a minimizer of (2) whose objective value is zero.

Now, to introduce an interior point method for solving (2), we use the logarithmic barrier technique. With (2) we associate the following minimization problem:

(3) $(OP)_\mu\quad \min_{X,Y}\ [\operatorname{Tr}(XY) - \mu \log\det(XY) : (X, Y) \in \mathcal{A},\ X \succ 0,\ Y \succ 0],$

where $\mu > 0$ is the barrier parameter and $\log\det(XY)$ is the logarithmic barrier function associated with (2), [4].

Under our hypotheses we deduce the following useful properties of (3).

• $(OP)_\mu$ is a convex optimization problem.

• $(OP)_\mu$ has a unique solution $(X(\mu), Y(\mu))$ for all $\mu > 0$.

• If $\mathcal{A} \neq \emptyset$, then $\lim_{\mu \to 0}(X(\mu), Y(\mu)) = (X, Y)$ is a solution of (1), [9].

• The solution $(X(\mu), Y(\mu))$ is characterized by the first-order necessary and sufficient optimality (Karush-Kuhn-Tucker) conditions of $(OP)_\mu$, given by the following nonlinear system of equations:

(4) $Y - L(X) = Q,\quad XY = \mu I,\quad X \succ 0,\ Y \succ 0.$

• Hence solving $(OP)_\mu$ is equivalent to solving (4).

• The set

$\{(X(\mu), Y(\mu)) : \mu > 0\}$

of solutions of the system (4) defines the central path, which is a smooth curve.

For solving (4) we use primal-dual path-following interior point (IPM) algorithms. The basic idea behind these algorithms is to follow the central path approximately and to approach a solution of (4), as $\mu$ goes to zero, by using Newton steps.

Suppose now that we have $(X, Y) \in \mathcal{F}^0$. Applying Newton's method to the system (4), we obtain a class of search directions given by the following linear system of equations:

(5) $L(\Delta X) = \Delta Y,\quad X\Delta Y + \Delta X\, Y = \mu I - XY.$

Note that the system (5) has a unique solution $(\Delta X, \Delta Y)$, but unfortunately this solution is not symmetric. To remedy this defect, many symmetrization schemes have been proposed in the literature for the second equation in (5). Most suggestions introduce an invertible scaling matrix $P$ and consider the linear transformation $H_P$ given by:

$H_P(M) = \tfrac{1}{2}(P M P^{-1} + P^{-T} M^T P^T)$ for a given $M$ in $\mathbb{R}^{n\times n}$.

Then the second equation in system (5) becomes:

(6) $L(\Delta X) = \Delta Y,$

$P(X\Delta Y + \Delta X\, Y)P^{-1} + P^{-T}(Y\Delta X + \Delta Y\, X)P^T = 2\mu I - P X Y P^{-1} - P^{-T} Y X P^T.$

We now recall some popular directions used in IPMs. For $P = I$ we get the Alizadeh-Haeberly-Overton (AHO) direction, and $P = Y^{1/2}$ or $P = X^{-1/2}$ defines the so-called HKM direction.

Here we use the Nesterov-Todd (NT) direction, where $P$ is given by $P = D^{-1/2}$ and

$D = X^{1/2}(X^{1/2} Y X^{1/2})^{-1/2} X^{1/2} = Y^{-1/2}(Y^{1/2} X Y^{1/2})^{1/2} Y^{-1/2}.$

The role of the symmetric positive definite matrix $D$ is to rescale the two matrices $X$ and $Y$ to the same symmetric positive definite matrix $V$ given by:

(7) $V = D^{-1/2} X D^{-1/2} = D^{1/2} Y D^{1/2}.$

Then the scaled Newton directions are:

(8) $D_X = D^{-1/2}\, \Delta X\, D^{-1/2},\quad D_Y = D^{1/2}\, \Delta Y\, D^{1/2}.$

Hence, using (7) and (8), the system (6) can be written as:

(9) $\bar L(D_X) = D_Y,\quad V D_V + D_V V = 2\mu I - 2V^2,$

where

(10) $D_V = D_X + D_Y$

and $\bar L$ is given by:

$\bar L(D_X) = D^{1/2} L(D^{1/2} D_X D^{1/2}) D^{1/2}.$

The linear transformation $\bar L$ is also monotone on $S^n$. Under our hypotheses the new linear system (9) has a unique symmetric solution $(D_X, D_Y)$. Furthermore, these directions satisfy:

(11) $\operatorname{Tr}(D_Y D_X) = \operatorname{Tr}(D_X D_Y) \ge 0.$
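The NT scaling above can be verified numerically. The sketch below (our own numpy/scipy illustration on randomly generated data, not code from the paper) builds $D$ from symmetric positive definite $X$ and $Y$, checks that both formulas for $D$ agree, that $D^{-1/2} X D^{-1/2} = D^{1/2} Y D^{1/2}$ so $V$ is well defined, and that $V^2$ shares its eigenvalues with $XY$:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); X = A @ A.T + n * np.eye(n)  # X symmetric positive definite
B = rng.standard_normal((n, n)); Y = B @ B.T + n * np.eye(n)  # Y symmetric positive definite

Xh, Yh = sqrtm(X).real, sqrtm(Y).real
D1 = Xh @ np.linalg.inv(sqrtm(Xh @ Y @ Xh).real) @ Xh            # X^{1/2}(X^{1/2}YX^{1/2})^{-1/2}X^{1/2}
D2 = np.linalg.inv(Yh) @ sqrtm(Yh @ X @ Yh).real @ np.linalg.inv(Yh)
ok_D = np.allclose(D1, D2, atol=1e-8)                            # the two formulas for D agree

Dh = sqrtm((D1 + D1.T) / 2).real
Dhi = np.linalg.inv(Dh)
V1 = Dhi @ X @ Dhi                                               # D^{-1/2} X D^{-1/2}
V2 = Dh @ Y @ Dh                                                 # D^{1/2}  Y D^{1/2}
ok_V = np.allclose(V1, V2, atol=1e-8)                            # the common value is V

# V^2 is similar to XY, so the two share eigenvalues
lam_V2 = np.sort(np.linalg.eigvalsh((V1 + V1.T) / 2))**2
lam_XY = np.sort(np.linalg.eigvals(X @ Y).real)
print(ok_D, ok_V, np.allclose(lam_V2, lam_XY, rtol=1e-5, atol=1e-6))
```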

(5)

This inequality shows that the Newton directions in the primal-dual space are not orthogonal, in contrast to the SDP case. This makes the complexity analysis slightly different from the SDP one. As mentioned before, the proximity measure used here is defined as:

(12) $\delta(XY, \mu) = \tfrac{1}{2}\left\| \tfrac{1}{\sqrt{\mu}} V - \sqrt{\mu}\, V^{-1} \right\|.$

One can easily verify that:

$\delta^2(XY, \mu) = \tfrac{1}{4}\left( \tfrac{1}{\mu}\operatorname{Tr}(XY) - 2n + \mu \operatorname{Tr}((XY)^{-1}) \right).$

Remark 1. The unique solution of the Sylvester equation $V D_V + D_V V = 2\mu I - 2V^2$ stated in (9) is

(13) $D_V = \mu V^{-1} - V,$

and using (12) we deduce

(14) $\|D_V\|^2 = 4\mu\delta^2.$
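These identities are straightforward to confirm numerically. In the sketch below (our own illustration; the matrix $V$ and the value of $\mu$ are arbitrary choices), we use $\operatorname{Tr}(XY) = \operatorname{Tr}(V^2)$ and $\operatorname{Tr}((XY)^{-1}) = \operatorname{Tr}(V^{-2})$, which hold because $V^2$ is similar to $XY$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 4, 0.7
A = rng.standard_normal((n, n))
V = A @ A.T + n * np.eye(n)      # a symmetric positive definite V
Vi = np.linalg.inv(V)

# (12): delta = (1/2) * || V / sqrt(mu) - sqrt(mu) * V^{-1} ||_F
delta = 0.5 * np.linalg.norm(V / np.sqrt(mu) - np.sqrt(mu) * Vi, 'fro')

# the trace identity for delta^2
delta_sq = 0.25 * (np.trace(V @ V) / mu - 2 * n + mu * np.trace(Vi @ Vi))

# (13)-(14): D_V = mu V^{-1} - V satisfies ||D_V||^2 = 4 mu delta^2
DV = mu * Vi - V
lhs = np.linalg.norm(DV, 'fro')**2

print(abs(delta**2 - delta_sq) < 1e-8, abs(lhs - 4 * mu * delta**2) < 1e-8)
```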

3. THE ALGORITHM AND ITS COMPLEXITY ANALYSIS

In this section we present the algorithm, study its complexity, and compute the total number of iterations it produces. We start by stating some technical results needed for the analysis of the algorithm. Let $A$ be a given matrix in $\mathbb{R}^{n\times n}$. We decompose $A$ into its symmetric part $\hat A$ and its skew-symmetric part $\tilde A$; thus we have

$A = \hat A + \tilde A,\quad \hat A = \tfrac{A + A^T}{2},\quad \tilde A = \tfrac{A - A^T}{2}.$

Lemma 2. [14, Lemma 2.1] If $A$ is a positive definite matrix (not necessarily symmetric), then

$\operatorname{Tr}(A^{-1}) \le \operatorname{Tr}(\hat A^{-1}),$

with equality if and only if $A = A^T$.

Lemma 3. [14, Lemma 2.2] Suppose that $A_1$ and $A_2$ are both symmetric positive definite and $\alpha, \beta \ge 0$, $\alpha + \beta = 1$. Then

$\operatorname{Tr}((\alpha A_1 + \beta A_2)^{-1}) \le \alpha \operatorname{Tr}(A_1^{-1}) + \beta \operatorname{Tr}(A_2^{-1}),$

with equality only if $\alpha = 1$ or $\beta = 1$.

Now the generic form of this algorithm is stated as follows.


3.1. The Algorithm. (Generic primal-dual algorithm for (SDLCP))

Input:

an accuracy parameter $\varepsilon > 0$;

an update parameter $\theta$, $0 < \theta < 1$;

a proximity parameter $\tau$;

a feasible step size $\alpha$;

a strictly feasible point $(X^0, Y^0)$ and $\mu^0 = \tfrac{1}{n}\operatorname{Tr}(X^0 Y^0)$ such that $\delta(X^0 Y^0, \mu^0) \le \tau$;

begin

set $X := X^0$; $Y := Y^0$; $\mu := \mu^0$;

while $n\mu \ge \varepsilon$ do

begin

$\mu := (1-\theta)\mu$;

while $\delta(XY, \mu) \ge \tau$ do

begin

compute $(\Delta X, \Delta Y)$ via (9);

set $X := X + \alpha\Delta X$; $Y := Y + \alpha\Delta Y$;

end

end

end.
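To make the loop structure concrete, here is a small numerical sketch (our own illustration, not code from the paper). It runs the algorithm on the simplest monotone instance $L(X) = X$, for which $\bar L(D_X) = D\,D_X\,D$ and the scaled system (9) reduces to $D_X + D\,D_X\,D = D_V$, solvable by Kronecker vectorization. The data $Q$, the tolerances, the inner-loop safety cap and the damped step size $\alpha = \min(1, 1/(2\sigma_+))$ are choices made for this demo:

```python
import numpy as np
from scipy.linalg import sqrtm

def nt_scaling(X, Y):
    """Return D^{1/2}, D^{-1/2}, and V = D^{-1/2} X D^{-1/2} = D^{1/2} Y D^{1/2}."""
    Xh = sqrtm(X).real
    D = Xh @ np.linalg.inv(sqrtm(Xh @ Y @ Xh).real) @ Xh
    D = (D + D.T) / 2
    Dh = sqrtm(D).real
    Dhi = np.linalg.inv(Dh)
    V = (Dhi @ X @ Dhi + Dh @ Y @ Dh) / 2     # average the two equal expressions
    return Dh, Dhi, V

def proximity(X, Y, mu):
    lam = np.linalg.eigvals(X @ Y).real       # eigenvalues of XY = eigenvalues of V^2
    return 0.5 * np.sqrt(max(np.sum(lam / mu - 2.0 + mu / lam), 0.0))

def newton_step(X, Y, mu):
    """Solve (9) for L(X) = X: D_X + D D_X D = D_V = mu V^{-1} - V, D_Y = D_V - D_X."""
    Dh, Dhi, V = nt_scaling(X, Y)
    D = Dh @ Dh
    n = X.shape[0]
    DV = mu * np.linalg.inv(V) - V
    K = np.eye(n * n) + np.kron(D, D)         # vectorized Lyapunov-type system
    DX = np.linalg.solve(K, DV.reshape(-1)).reshape(n, n)
    DY = DV - DX
    Dtil = (DX @ DY + DY @ DX) / (2 * mu)     # (15)
    sigma_plus = np.clip(np.linalg.eigvalsh(Dtil), 0.0, None).sum()
    return Dh @ DX @ Dh, Dhi @ DY @ Dhi, sigma_plus   # un-scaled (Delta X, Delta Y)

n = 3
Q = np.diag([0.5, -0.2, 0.1])
X, Y = np.eye(n), np.eye(n) + Q               # strictly feasible: Y - L(X) = Q
mu = np.trace(X @ Y) / n
eps, theta, tau = 1e-6, 0.5, 1.0

while n * mu >= eps:
    mu *= 1.0 - theta
    inner = 0
    while proximity(X, Y, mu) >= tau and inner < 100:   # cap is a safety guard
        dX, dY, sp = newton_step(X, Y, mu)
        alpha = min(1.0, 1.0 / (2.0 * sp)) if sp > 1e-12 else 1.0
        X, Y = X + alpha * dX, Y + alpha * dY
        inner += 1

print(n * mu < eps, np.linalg.norm(X @ Y))    # complementarity gap driven to ~eps
```

On this instance the iterates stay diagonal and approach the complementary pair $X = \operatorname{diag}(0, 0.2, 0)$, $Y = \operatorname{diag}(0.5, 0, 0.1)$; for a general monotone $L$, only the solve for $D_X$ changes.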

3.2. Complexity analysis. Define the symmetric matrix

(15) $\tilde D = \tfrac{1}{2\mu}(D_X D_Y + D_Y D_X).$

Since $\tilde D$ is symmetric, its eigenvalues $\lambda_i(\tilde D)$ are real for all $i \in I = \{1, 2, \ldots, n\}$, and due to (11) there exist two nonnegative numbers (Ref. [18])

$\sigma_+ = \sum_{i\in I_+} \lambda_i(\tilde D), \qquad \sigma_- = -\sum_{i\in I_-} \lambda_i(\tilde D),$

where

$I_+ = \{ i\in I : \lambda_i(\tilde D) \ge 0 \}$

and

$I_- = I - I_+ = \{ i\in I : \lambda_i(\tilde D) < 0 \}$

are index sets.

Lemma 4. One has

$\sigma_- \le \sigma_+ \le 2\delta^2$ and $0 \le \operatorname{Tr}(\tilde D) \le 2\delta^2.$


Proof. For the left-hand part of the first statement, it is clear from (11) and (15) that

$\operatorname{Tr}(\tilde D) = \sum_{i\in I} \lambda_i(\tilde D) = \sigma_+ - \sigma_- \ge 0.$

For the right-hand part, since $\tilde D$ is symmetric it is diagonalizable, and there exists an orthogonal matrix $Q_{\tilde D}$ such that

$Q_{\tilde D}^T \tilde D Q_{\tilde D} = \operatorname{diag}\big(\lambda_i(\tilde D),\ i\in I_+;\ \lambda_i(\tilde D),\ i\in I_-\big).$

Let

$\bar D = \tfrac{1}{2\mu} Q_{\tilde D}^T (D_X^2 + D_Y^2) Q_{\tilde D}.$

Since the matrix

$\tfrac{1}{2\mu} Q_{\tilde D}^T (D_X + D_Y)^2 Q_{\tilde D} = \bar D + \operatorname{diag}\big(\lambda_i(\tilde D),\ i\in I_+;\ \lambda_i(\tilde D),\ i\in I_-\big)$

is positive semidefinite, it holds that $\bar D_{ii} + \lambda_i(\tilde D) \ge 0$ for all $i\in I$, so $\bar D_{ii} \ge -\lambda_i(\tilde D) > 0$ for all $i\in I_-$, and

$\sum_{i\in I_-} \bar D_{ii} \ge \sum_{i\in I_-} \big(-\lambda_i(\tilde D)\big) = \sigma_-.$

On the other hand, the matrix

$\tfrac{1}{2\mu} Q_{\tilde D}^T (D_X - D_Y)^2 Q_{\tilde D} = \bar D - \operatorname{diag}\big(\lambda_i(\tilde D),\ i\in I_+;\ \lambda_i(\tilde D),\ i\in I_-\big)$

is also positive semidefinite, whence $\bar D_{ii} - \lambda_i(\tilde D) \ge 0$, i.e. $\bar D_{ii} \ge \lambda_i(\tilde D) \ge 0$ for all $i\in I_+$, and

$\sum_{i\in I_+} \bar D_{ii} \ge \sum_{i\in I_+} \lambda_i(\tilde D) = \sigma_+.$

Together these lead to

$\sigma_+ \le \operatorname{Tr}(\bar D) = \tfrac{1}{2\mu}\operatorname{Tr}(D_X^2 + D_Y^2) \le \tfrac{1}{2\mu}\operatorname{Tr}\big((D_X + D_Y)^2\big) = \tfrac{1}{2\mu}\|D_V\|^2 = 2\delta^2,$

where the middle inequality uses (11). This gives

$\sigma_+ \le 2\delta^2, \qquad \sigma_- \le 2\delta^2.$

For the last result it follows from (11) and (15) that

$2\mu\operatorname{Tr}(\tilde D) = \operatorname{Tr}(D_X D_Y + D_Y D_X) \le \|D_X + D_Y\|^2.$

Then using (10) and (14) we deduce:

$0 \le \operatorname{Tr}(\tilde D) \le \tfrac{1}{2\mu}\|D_X + D_Y\|^2 = \tfrac{1}{2\mu}\|D_V\|^2 = 2\delta^2.$

This completes the proof.

Next we want to estimate the quantity $\delta_+^2 - \delta^2$, i.e. the effect of a damped Newton iteration on the proximity. Here $\delta_+$ denotes the proximity after a step of size $\alpha$. Let

$\delta_+ = \delta(X_+ Y_+, \mu), \qquad X_+ = X + \alpha\Delta X \quad \text{and} \quad Y_+ = Y + \alpha\Delta Y.$

First we give a bound for $\delta_+$.

Lemma 5. Suppose that the step size $\alpha$ is strictly feasible. Then

$\delta_+^2 \le \tfrac{1-\alpha}{4\mu}\operatorname{Tr}(V^2) + \tfrac{\alpha n}{4} - \tfrac{n}{2} + \tfrac{\alpha^2}{4}\sum_{i=1}^n \lambda_i(\tilde D) + \tfrac{\mu(1-\alpha)}{4}\operatorname{Tr}(V^{-2}) + \tfrac{\alpha}{4}\sum_{i=1}^n \tfrac{1}{1+\alpha\lambda_i(\tilde D)},$

where $\tilde D$ is defined in (15).

Proof. We deal first with the quantity $\operatorname{Tr}(X_+ Y_+)$. We have:

$\operatorname{Tr}(X_+ Y_+) = \operatorname{Tr}\big((V + \alpha D_X)(V + \alpha D_Y)\big) = \operatorname{Tr}\big(V^2 + \alpha(D_X V + V D_Y) + \alpha^2 D_X D_Y\big) = \operatorname{Tr}\big(V^2 + \alpha V D_V + \alpha^2 D_X D_Y\big),$

since $\operatorname{Tr}(V D_X) = \operatorname{Tr}(D_X V)$ and $\operatorname{Tr}(V D_Y) = \operatorname{Tr}(D_Y V)$.

Now, substituting $D_V = \mu V^{-1} - V$ from (13) and using (11) and (15), we deduce:

$\operatorname{Tr}(X_+ Y_+) = \operatorname{Tr}\big(V^2 + \alpha(\mu I - V^2)\big) + \alpha^2 \mu \operatorname{Tr}(\tilde D) = (1 - \alpha)\operatorname{Tr}(V^2) + \alpha n \mu + \alpha^2 \mu \sum_{i=1}^n \lambda_i(\tilde D).$

For the quantity $\operatorname{Tr}\big((X_+)^{-1}(Y_+)^{-1}\big)$ we have, by a simple computation and using Lemma 2, that:

$\operatorname{Tr}\big((X_+)^{-1}(Y_+)^{-1}\big) = \operatorname{Tr}\big((V + \alpha D_X)^{-1}(V + \alpha D_Y)^{-1}\big)$

$\le 2\operatorname{Tr}\big[\big((V + \alpha D_X)(V + \alpha D_Y) + (V + \alpha D_Y)(V + \alpha D_X)\big)^{-1}\big]$

$= 2\operatorname{Tr}\big[\big(2V^2 + \alpha V D_V + \alpha D_V V + \alpha^2(D_X D_Y + D_Y D_X)\big)^{-1}\big]$

$= 2\operatorname{Tr}\big[\big(2V^2 + \alpha V D_V + \alpha D_V V + 2\alpha^2 \mu \tilde D\big)^{-1}\big]$

$= 2\operatorname{Tr}\big[\big(2(1-\alpha)V^2 + 2\alpha\mu(I + \alpha\tilde D)\big)^{-1}\big],$

where the last equality uses $V D_V + D_V V = 2\mu I - 2V^2$.

Now, for the last term on the right-hand side, if $0 < \alpha < 1$ is sufficiently small that the matrix $I + \alpha\tilde D$ is positive definite, then by Lemma 3 we get:

(16) $\operatorname{Tr}\big[\big(2(1-\alpha)V^2 + 2\alpha\mu(I + \alpha\tilde D)\big)^{-1}\big] \le \tfrac{1-\alpha}{2}\operatorname{Tr}(V^{-2}) + \tfrac{\alpha}{2\mu}\sum_{i=1}^n \tfrac{1}{1 + \alpha\lambda_i(\tilde D)}.$

Combining these two estimates with the trace expression of $\delta_+^2$ completes the proof of the lemma.

Theorem 6. If $0 < \alpha < \tfrac{1}{\sigma_+}$, then the step size is strictly feasible and the matrix

$(V + \alpha D_X)(V + \alpha D_Y)$

is positive definite. This result is an immediate consequence of (16).

Now, by Lemma 4, the eigenvalues $\lambda_i(\tilde D)$ of $\tilde D$ satisfy $|\lambda_i(\tilde D)| \le \sigma \le 2\delta^2$ for all $i \in I$, where $\sigma = \sigma_+$ (since $\operatorname{Tr}(\tilde D) \ge 0$ implies $\sigma_+ \ge \sigma_-$). Then the following result holds.

Theorem 7. Let $\delta = \delta(XY, \mu)$ and $\sigma = \sigma_+$. For all $0 < \alpha < \tfrac{1}{\sigma}$, one has

$\delta_+^2 \le (1 - \alpha)\delta^2 + \dfrac{2\alpha^3\delta^4}{1 - 4\alpha^2\delta^4}.$

Proof. By Lemma 5 it follows that:

$\delta_+^2 \le \tfrac{1-\alpha}{4\mu}\operatorname{Tr}(V^2) + \tfrac{\alpha n}{4} - \tfrac{n}{2} + \tfrac{\alpha^2}{4}\sum_{i=1}^n \lambda_i(\tilde D) + \tfrac{\mu(1-\alpha)}{4}\operatorname{Tr}(V^{-2}) + \tfrac{\alpha}{4}\sum_{i=1}^n \tfrac{1}{1+\alpha\lambda_i(\tilde D)}$

$= \tfrac{1-\alpha}{4}\operatorname{Tr}\big(\tfrac{1}{\mu}V^2 + \mu V^{-2} - 2I\big) - \tfrac{\alpha n}{4} + \tfrac{\alpha^2}{4}\sum_{i=1}^n \lambda_i(\tilde D) + \tfrac{\alpha}{4}\sum_{i=1}^n \tfrac{1}{1+\alpha\lambda_i(\tilde D)}$

$= (1-\alpha)\delta^2 + \tfrac{\alpha^2}{4}\sum_{i=1}^n \lambda_i(\tilde D) + \tfrac{\alpha}{4}\sum_{i=1}^n \Big(\tfrac{1}{1+\alpha\lambda_i(\tilde D)} - 1\Big)$

$= (1-\alpha)\delta^2 + \tfrac{\alpha^2}{4}\sum_{i=1}^n \lambda_i(\tilde D) - \tfrac{\alpha^2}{4}\sum_{i=1}^n \tfrac{\lambda_i(\tilde D)}{1+\alpha\lambda_i(\tilde D)}.$

Splitting the last sum over $I_+$ and $I_-$ and using $\sum_{i\in I}\lambda_i(\tilde D) = \sigma_+ - \sigma_-$:

$\delta_+^2 \le (1-\alpha)\delta^2 + \tfrac{\alpha^2}{4}\Big(\sigma_+ - \sum_{i\in I_+}\tfrac{\lambda_i(\tilde D)}{1+\alpha\lambda_i(\tilde D)}\Big) + \tfrac{\alpha^2}{4}\Big(-\sigma_- + \sum_{i\in I_-}\tfrac{-\lambda_i(\tilde D)}{1+\alpha\lambda_i(\tilde D)}\Big).$

For the $I_+$ part, $\lambda_i(\tilde D) \le \sigma_+$ gives $\sum_{i\in I_+}\tfrac{\lambda_i(\tilde D)}{1+\alpha\lambda_i(\tilde D)} \ge \tfrac{\sigma_+}{1+\alpha\sigma_+}$, so this part is at most $\tfrac{\alpha^3\sigma_+^2}{4(1+\alpha\sigma_+)}$. For the $I_-$ part,

$-\sigma_- + \sum_{i\in I_-}\tfrac{-\lambda_i(\tilde D)}{1+\alpha\lambda_i(\tilde D)} = \sum_{i\in I_-}\tfrac{\alpha\lambda_i(\tilde D)^2}{1+\alpha\lambda_i(\tilde D)} \le \tfrac{\alpha\sigma_-^2}{1-\alpha\sigma_-},$

so this part is at most $\tfrac{\alpha^3\sigma_-^2}{4(1-\alpha\sigma_-)}$. Hence

$\delta_+^2 \le (1-\alpha)\delta^2 + \tfrac{\alpha^3\sigma_+^2}{4(1+\alpha\sigma_+)} + \tfrac{\alpha^3\sigma_-^2}{4(1-\alpha\sigma_-)}.$

Now, using the facts that $\sigma_- \le \sigma_+ \le 2\delta^2$ and that $t \mapsto \tfrac{t^2}{1+\alpha t}$ and $t \mapsto \tfrac{t^2}{1-\alpha t}$ are nondecreasing for $t \ge 0$, we deduce:

$\tfrac{\alpha^3\sigma_+^2}{4(1+\alpha\sigma_+)} \le \tfrac{\alpha^3\delta^4}{1+2\alpha\delta^2} \qquad \text{and} \qquad \tfrac{\alpha^3\sigma_-^2}{4(1-\alpha\sigma_-)} \le \tfrac{\alpha^3\delta^4}{1-2\alpha\delta^2},$

and the sum of the two right-hand sides equals $\tfrac{2\alpha^3\delta^4}{1-4\alpha^2\delta^4}$. This completes the proof of the theorem.

Theorem 8. If $\alpha = \tfrac{1}{2\sigma}$, then

$\delta_+^2 - \delta^2 \le -\tfrac{5}{24}.$

Proof. Since $\alpha = \tfrac{1}{2\sigma} < \tfrac{1}{\sigma}$, the step size is strictly feasible. Substituting $\alpha\sigma_+ = \tfrac{1}{2}$ and $\sigma_- \le \sigma_+ = \sigma$ into the intermediate bound in the proof of Theorem 7 gives

$\delta_+^2 - \delta^2 \le -\alpha\delta^2 + \tfrac{\alpha}{24} + \tfrac{\alpha}{8} = -\alpha\big(\delta^2 - \tfrac{1}{6}\big),$

and since $\sigma \le 2\delta^2$ implies $\alpha \ge \tfrac{1}{4\delta^2}$, we obtain

$\delta_+^2 - \delta^2 \le -\tfrac{1}{4} + \tfrac{1}{24\delta^2} \le -\tfrac{5}{24},$

where the last inequality uses $\delta \ge \tau \ge 1$. This completes the proof of the theorem.

Now we are ready to compute the iteration bounds of the algorithm.

3.3. Iteration bounds.

Lemma 9. [19, Lemma IV.36] Let $(X, Y)$ be a strictly feasible point and $\mu > 0$. If $\mu_+ = (1-\theta)\mu$, then

(17) $\delta(XY, \mu_+)^2 \le \dfrac{(2\delta + \theta\sqrt{n})^2}{4(1-\theta)},$

where $\delta = \delta(XY, \mu)$.

Lemma 10. If $\delta(XY, \mu) \le \tau$ and $\tau \ge 1$, then after an update of the barrier parameter no more than

$\Big\lceil \tfrac{6}{5(1-\theta)}\big(\theta n + 4\tau\sqrt{n} + 4\tau^2\big) \Big\rceil$

iterations are needed to recenter.

Proof. By (17) in Lemma 9, after the update

$\delta^2(XY, \mu_+) \le \dfrac{(2\tau + \theta\sqrt{n})^2}{4(1-\theta)}.$

By Theorem 8, every damped Newton step decreases $\delta^2$ by at least $\tfrac{5}{24}$. Hence after at most

$\Big\lceil \tfrac{24}{5}\Big(\tfrac{(2\tau + \theta\sqrt{n})^2}{4(1-\theta)} - \tau^2\Big) \Big\rceil$

iterations the proximity will have passed the threshold value $\tau$. Since $0 < \theta < 1$ gives $(2\tau + \theta\sqrt{n})^2 = 4\tau^2 + 4\tau\theta\sqrt{n} + \theta^2 n \le 4\tau^2 + 4\tau\sqrt{n} + \theta n$, this number does not exceed the stated bound. This completes the proof of the lemma.

Consequently we have the following theorem.


Theorem 11. If $\tau \ge 1$, then the total number of iterations produced by the primal-dual algorithm is not more than

$\Big\lceil \tfrac{6}{5(1-\theta)}\big(\theta n + 4\tau\sqrt{n} + 4\tau^2\big) \Big\rceil \Big\lceil \tfrac{1}{\theta}\log\tfrac{n\mu^0}{\varepsilon} \Big\rceil.$

Proof. It is shown (see Ref. [19, Lemma II.18, page 116]) that the number of barrier parameter updates performed by the algorithm is bounded by

$\tfrac{1}{\theta}\log\tfrac{n\mu^0}{\varepsilon}.$

Multiplying this number by the bound in Lemma 10 yields the theorem.

Omitting the round-off brackets in Theorem 11 (this does not change the order of magnitude of the iteration bound), our algorithm does not exceed the following upper bound of iterations:

$O\Big(\tfrac{6}{5(1-\theta)}\big(\theta n + 4\tau\sqrt{n} + 4\tau^2\big)\log\tfrac{n\mu^0}{\varepsilon}\Big).$

Thus, if $\theta = \tfrac{1}{2}$ and $\tau = 1$, the bound becomes

$O\Big(\tfrac{12}{5}\big(\tfrac{n}{2} + 4\sqrt{n} + 4\big)\log\tfrac{n\mu^0}{\varepsilon}\Big) = O\Big(n\log\tfrac{n\mu^0}{\varepsilon}\Big),$

which is the complexity bound of such long-update primal-dual algorithms. Meanwhile, if $\theta = \tfrac{1}{\sqrt{n}}$ and $\tau = 1$, we obtain the best well-known iteration bound for small-update primal-dual algorithms, namely:

$O\Big(\sqrt{n}\log\tfrac{n\mu^0}{\varepsilon}\Big).$
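As a quick numeric illustration of these bounds (our own sketch; the values $n = 100$, $\mu^0 = 1$ and $\varepsilon = 10^{-6}$ are arbitrary choices):

```python
import math

def iteration_bound(n, theta, tau, mu0, eps):
    """Total bound of Theorem 11: recentering bound times number of mu-updates."""
    inner = math.ceil(6.0 / (5.0 * (1.0 - theta))
                      * (theta * n + 4 * tau * math.sqrt(n) + 4 * tau**2))
    outer = math.ceil(math.log(n * mu0 / eps) / theta)
    return inner, outer, inner * outer

n, mu0, eps = 100, 1.0, 1e-6
print(iteration_bound(n, 0.5, 1.0, mu0, eps))                 # long-update: theta fixed
print(iteration_bound(n, 1.0 / math.sqrt(n), 1.0, mu0, eps))  # small-update: theta = 1/sqrt(n)
```

For fixed $\theta$ the first factor grows like $n$, while for $\theta = 1/\sqrt{n}$ it grows like $\sqrt{n}$; the bound is a worst-case guarantee, and for moderate $n$ the constants dominate.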

4. CONCLUSION AND FUTURE RESEARCH

In this paper we have extended the complexity analysis of a primal-dual path-following algorithm designed for (SDP) to the monotone (SDLCP). We have given a unified analysis of its polynomial complexity. For the long-update algorithm we have an $O\big(n\log\tfrac{n\mu^0}{\varepsilon}\big)$ iteration bound; this complexity is similar to that of comparable primal-dual (IPM) methods. For the small-update algorithm the iteration bound is $O\big(\sqrt{n}\log\tfrac{n\mu^0}{\varepsilon}\big)$, which is the best known iteration bound. Finally, an important topic for further research is the numerical implementation of this algorithm and its extension to other optimization problems.

REFERENCES

[1] M. Achache, A new primal-dual path-following method for convex quadratic programming, Comput. Appl. Math., 25, No. 1, pp. 97–110, 2006.

[2] M. Achache, Complexity analysis and numerical implementation of a short-step primal-dual algorithm for linear complementarity problems, Applied Mathematics and Computation, 216, pp. 1889–1895, 2010.

[3] F. Alizadeh, J.A. Haeberly and M. Overton, Primal-dual interior point methods for semidefinite programming: convergence rates, stability and numerical results, SIAM J. Optimization, 8, pp. 746–768, 1998.

[4] J.M. Borwein and A.S. Lewis, Convex Analysis and Nonlinear Optimization: Theory and Examples, Springer-Verlag, New York Berlin Heidelberg, 2000.

[5] M. El Ghami, New primal-dual interior point methods based on kernel functions, PhD thesis, Delft University, The Netherlands, 2005.

[6] M.S. Gowda and Y. Song, On semidefinite linear complementarity problems, Mathematical Programming, Series A, 88, pp. 575–587, 2000.

[7] M.S. Gowda, Y. Song and G. Ravindran, Some interconnections between strong monotonicity, GUS and P properties in semidefinite linear complementarity problems, Linear Algebra and its Applications, 370, pp. 355–386, 2003.

[8] M.S. Gowda and Y. Song, Some new results for the semidefinite linear complementarity problem, SIAM Journal on Matrix Analysis and Applications, 24, No. 1, pp. 25–39, 2003.

[9] M. Kojima, S. Shindoh and S. Hara, Interior point methods for monotone semidefinite linear complementarity problems in symmetric matrices, SIAM J. Optimization, 7, pp. 86–125, 1997.

[10] Z. Liu and W. Sun, An infeasible interior-point algorithm with full-Newton step for linear optimization, Numer. Algor., 46, pp. 173–188, 2007.

[11] M. Malik and S.R. Mohan, Some geometrical aspects of semidefinite linear complementarity problems, Linear and Multilinear Algebra, 54, No. 1, pp. 55–70, 2006.

[12] Y. Nesterov and M. Todd, Self-scaled barriers and interior point methods for convex programming, Mathematics of Operations Research, 22, No. 1, pp. 1–42, 1997.

[13] J. Peng, C. Roos and T. Terlaky, A new and efficient large-update interior point method for linear optimization, Vychisl. Tekhnol., 6, pp. 61–80, 2001.

[14] J. Peng, C. Roos and T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior Point Algorithms, Princeton University Press, Princeton, NJ, 2002.

[15] J. Peng, C. Roos and T. Terlaky, A new class of polynomial primal-dual methods for linear and semidefinite optimization, European J. Oper. Res., 143, pp. 234–256, 2002.

[16] J. Peng, C. Roos and T. Terlaky, Self-regular functions and new search directions for linear and semidefinite optimization, Mathematical Programming, 93, pp. 129–171, 2002.

[17] J. Peng, C. Roos and T. Terlaky, New complexity analysis of the primal-dual method for semidefinite optimization based on the Nesterov-Todd direction, Journal of Optimization Theory and Applications, 109, No. 2, pp. 327–343, 2001.

[18] J. Peng, C. Roos and T. Terlaky, New complexity analysis of primal-dual Newton methods for P∗(κ) linear complementarity problems, Manuscript, 1998.

[19] C. Roos, T. Terlaky and J.-Ph. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, John Wiley and Sons, Chichester, UK, 1997.

[20] Y. Song, The P and globally uniquely solvable properties in semidefinite linear complementarity problems, PhD thesis, University of Maryland, USA, 2000.

[21] J.F. Sturm and S. Zhang, Symmetric primal-dual path-following algorithms for semidefinite programming, Technical report 9554/A, Tinbergen Institute, Erasmus University, Rotterdam, The Netherlands, 1995.

[22] S.J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, USA, 1997.

Received by the editors: December 25, 2010.
