
Rev. Anal. Numér. Théor. Approx., vol. 39 (2010) no. 2, pp. 122–133, ictp.acad.ro/jnaat

VECTOR OPTIMIZATION PROBLEMS AND APPROXIMATED VECTOR OPTIMIZATION PROBLEMS

EUGENIA DUCA and DOREL I. DUCA

Abstract. In this paper, a so-called approximated vector optimization problem associated to a vector optimization problem is considered. The equivalence between the efficient solutions of the approximated vector optimization problem and the efficient solutions of the original optimization problem is established.

MSC 2000. Primary: 90C29; Secondary: 90C30, 90C46.

Keywords. Efficient solution, invex function, pseudoinvex function, approximation.

1. INTRODUCTION

We consider the vector optimization problem

(VOP)   C-min  f(x)
        s.t.   x ∈ X,
               g(x) ∈ −K,

where X is a subset of R^n, C is a convex cone in R^p, K is a convex cone in R^m, and f : X → R^p, g : X → R^m are functions.

Let

F(VOP) := {x ∈ X : g(x) ∈ −K}

denote the set of all feasible solutions of Problem (VOP).

Let L : X×K → Rn be the lagrangian of Problem (VOP), i.e. the function defined by

L(x, v) :=f(x) +hg(x), vie, for all (x, v)∈X×K, where

K:={u∈Rm : hu, vi=0, for all v∈K}

is the polar of the convex cone K,and e= (1, ...,1)∈Rn.

Technical University, Department of Mathematics, Bariţiu Street, no. 25–28, 400027 Cluj-Napoca, Romania, e-mail: [email protected], [email protected].

"Babeş-Bolyai" University, Faculty of Mathematics and Computer Science, 1 M. Kogălniceanu Street, 400084 Cluj-Napoca, Romania, e-mail: [email protected], [email protected].


Definition 1.1. Let x^0 be a point of F(VOP). We say that x^0 is an efficient solution for Problem (VOP) if there exists no point x ∈ F(VOP) such that

f(x^0) − f(x) ∈ C \ {0}.

Remark 1.2. The point x^0 is an efficient solution for Problem (VOP) if and only if

f(x^0) − f(x) ∉ C \ {0}, for all x ∈ F(VOP).

Definition 1.3. Let x^0 be a point of F(VOP). We say that x^0 is a weak efficient solution for Problem (VOP) if there exists no point x ∈ F(VOP) such that

f(x^0) − f(x) ∈ int C.

Remark 1.4. The point x^0 is a weak efficient solution for Problem (VOP) if and only if

f(x^0) − f(x) ∉ int C, for all x ∈ F(VOP).
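For C = R^p_+ these definitions can be tested against a finite sample of feasible points. The following Python sketch (using NumPy; the function names are ours and merely illustrative) refutes efficiency when a dominating sample point is found; passing the test is only a necessary indication, not a proof.

```python
import numpy as np

def dominates(fx, fx0, tol=1e-12):
    # True if fx0 - fx lies in C \ {0} for C = R^p_+ (componentwise order).
    d = fx0 - fx
    return bool(np.all(d >= -tol) and np.any(d > tol))

def looks_efficient(f, x0, feasible_points):
    # Necessary check only: x0 passes if no sampled feasible point dominates it.
    fx0 = f(x0)
    return not any(dominates(f(x), fx0) for x in feasible_points)
```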

If C is the closed convex cone R^p_+ and K is the closed convex cone R^m_+, then Problem (VOP) becomes the multicriteria optimization problem

(MOP)   v-min  f(x)
        s.t.   x ∈ X,
               g(x) ≦ 0.

There are various ways to approach the vector optimization problem (VOP). One of them is to attach to Problem (VOP) another optimization problem whose solutions give us the solutions of (or information about the solutions of) the initial problem (VOP).

If x^0 is a feasible solution for (MOP) and f is differentiable at x^0, C. R. Bector, S. Chandra and C. Singh [3] attached to Problem (MOP) the linearized problem

(LMOP)  v-min  ∇f(x^0)(x)
        s.t.   x ∈ X,
               g(x) ≦ 0,

and obtained connections between the efficient solutions of the original problem (MOP) and the efficient solutions of the linearized multicriteria optimization problem (LMOP).

If x^0 ∈ F(MOP) is an interior point of X and f is differentiable at x^0, Antczak [2] proposed the following approximated multicriteria optimization problem

(ηMOP)  v-min  ∇f(x^0)(η(x, x^0))
        s.t.   x ∈ X,
               g(x) ≦ 0,

where η : X × X → R^n is a function, and obtained results connecting (MOP) and (ηMOP).


In this paper, assuming that x^0 ∈ F(VOP) is an interior point of X and that η : X × X → R^n is a function, we attach to Problem (VOP) the following problems:

a) assuming that f is differentiable at x^0,

(FAVOP)  C-min  f(x^0) + ∇f(x^0)(η(x, x^0))
         s.t.   x ∈ X,
                g(x) ∈ −K,

and b) assuming that g is differentiable at x^0,

(CAVOP)  C-min  f(x)
         s.t.   x ∈ X,
                g(x^0) + ∇g(x^0)(η(x, x^0)) ∈ −K.
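Once the Jacobians of f and g at x^0 are available, the criteria function of (FAVOP) and the approximated constraint map of (CAVOP) are simple expressions in η. A minimal Python sketch (NumPy; the callables f, jac_f, g, jac_g and eta are assumed to be supplied by the user and are not part of the paper):

```python
import numpy as np

def favop_objective(f, jac_f, eta, x0):
    """Criteria function of (FAVOP): F(x) = f(x0) + [Jac f(x0)] eta(x, x0)."""
    fx0, Jf = np.asarray(f(x0)), np.asarray(jac_f(x0))
    return lambda x: fx0 + Jf @ np.asarray(eta(x, x0))

def cavop_constraint(g, jac_g, eta, x0):
    """Constraint map of (CAVOP): x is feasible iff the returned value lies in -K."""
    gx0, Jg = np.asarray(g(x0)), np.asarray(jac_g(x0))
    return lambda x: gx0 + Jg @ np.asarray(eta(x, x0))
```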

2. NOTIONS AND PRELIMINARY RESULTS

In recent years, attempts have been made to weaken the convexity hypotheses and thus to broaden the applicability of optimality conditions. Various classes of generalized convex functions have been suggested for the purpose of weakening the convexity limitation of the results. Among these, the concept of an invex function, proposed by Hanson [11], has received the most attention. The name invex (invariant convex) was given by Craven [5].

Definition 2.1. Let X be a nonempty subset of R^n, x^0 be an interior point of X, C be a closed convex cone in R^p, f : X → R^p be a function differentiable at x^0, and η : X × X → R^n be a function.

a) We say that the function f is C-invex at x^0 with respect to (w.r.t.) η if

(2.1)   f(x) − f(x^0) − ∇f(x^0)(η(x, x^0)) ∈ C, for all x ∈ X.

b) We say that the function f is C-incave at x^0 with respect to (w.r.t.) η if

f(x) − f(x^0) − ∇f(x^0)(η(x, x^0)) ∈ −C, for all x ∈ X.

Remark 2.2. The function f is C-incave at x^0 w.r.t. η if and only if the function f is (−C)-invex at x^0 w.r.t. η.

Example 2.3. Let f : R^2 → R^2 be the function defined by

f(x) := (x_1^2 + sin(πx_2/2), x_2^2 + sin(πx_1/3)), for all x = (x_1, x_2) ∈ R^2.

a) The function f is R^2_+-invex at x^0 = (0, 0) w.r.t. η : R^2 × R^2 → R^2 defined by

η(x, u) := ((3/π) sin(πx_1/3), (2/π) sin(πx_2/2)),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2.

b) Also, the function f is R^2_+-invex at x^0 = (0, 0) w.r.t. μ : R^2 × R^2 → R^2 defined by

μ(x, u) := ((3/π) sin(πx_1/3) − 4, (2/π) sin(πx_2/2) − 7),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2.

Let us remark that μ(x, u) ≠ (0, 0), for all (x, u) ∈ R^2 × R^2.

c) Also, the function f is R^2_+-invex at x^0 = (0, 0) w.r.t. ζ : R^2 × R^2 → R^2 defined by

ζ(x, u) := ((3/π) sin(πx_1/3) − x_1^2, (2/π) sin(πx_2/2) − x_2^2),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2.
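The invexity inequalities asserted in Example 2.3 can be spot-checked numerically. A short Python sketch (NumPy; a random-sample check only, using the formulas as written above):

```python
import numpy as np

f = lambda x: np.array([x[0]**2 + np.sin(np.pi*x[1]/2),
                        x[1]**2 + np.sin(np.pi*x[0]/3)])
Jf0 = np.array([[0.0, np.pi/2],      # Jacobian of f at x0 = (0, 0)
                [np.pi/3, 0.0]])
eta  = lambda x, u: np.array([3/np.pi*np.sin(np.pi*x[0]/3),
                              2/np.pi*np.sin(np.pi*x[1]/2)])
mu   = lambda x, u: eta(x, u) - np.array([4.0, 7.0])
zeta = lambda x, u: eta(x, u) - np.array([x[0]**2, x[1]**2])

x0 = np.zeros(2)
rng = np.random.default_rng(0)
for h in (eta, mu, zeta):
    ok = all(np.all(f(x) - f(x0) - Jf0 @ h(x, x0) >= -1e-12)
             for x in rng.uniform(-5, 5, size=(1000, 2)))
    print(ok)   # True: the invexity inequality holds on the sample
```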

After the works of Hanson and Craven, other types of differentiable functions have appeared with the intent of generalizing invex functions from different points of view. Ben-Israel and Mond [4] defined the so-called pseudoinvex functions, generalizing pseudoconvex functions in the same way that invex functions generalize convex functions. Here we give the following notion of pseudoinvexity:

Definition 2.4. Let X be a nonempty subset of R^n, x^0 be an interior point of X, K and L be two convex cones in R^p, f : X → R^p be a function differentiable at x^0, and η : X × X → R^n be a function. We say that f is (K, L)-pseudoinvex at x^0 with respect to (w.r.t.) η if, for each x ∈ X \ {x^0} with the property that

∇f(x^0)(η(x, x^0)) ∈ K,

we have

f(x) − f(x^0) ∈ L.

Remark 2.5. The notion of K-pseudoinvexity is equivalent to the notion of (K, K)-pseudoinvexity.

Definition 2.6. Let X be a nonempty subset of R^n, x^0 be an interior point of X, K and L be two convex cones in R^p, f : X → R^p be a function differentiable at x^0, and η : X × X → R^n be a function. We say that f is (K, L)-quasiinvex at x^0 with respect to (w.r.t.) η if, for each x ∈ X \ {x^0} with the property that

f(x^0) − f(x) ∈ K,

we have

∇f(x^0)(η(x, x^0)) ∈ L.

Remark 2.7. The notion of K-quasiinvexity is equivalent to the notion of (K, −K)-quasiinvexity.

3. THE MODIFIED CRITERIA FUNCTION OF VECTOR OPTIMIZATION PROBLEMS

In this section, X is a subset of R^n, x^0 is an interior point of X, f : X → R^p is a function differentiable at x^0, C is a convex cone in R^p, K is a convex cone in R^m, and g : X → R^m is a function.


For η : X × X → R^n, we attach to Problem (VOP) the following optimization problem:

(FAVOP)  C-min  f(x^0) + ∇f(x^0)(η(x, x^0))
         s.t.   x ∈ X,
                g(x) ∈ −K.

Let

F(FAVOP) := {x ∈ X : g(x) ∈ −K}

denote the set of all feasible solutions of Problem (FAVOP).

Obviously,

F(FAVOP) = F(VOP).

Let F : X → R^p be the function defined by

F(x) = f(x^0) + ∇f(x^0)(η(x, x^0)), for all x ∈ X,

i.e. the criteria function of Problem (FAVOP).

Theorem 3.1. Let X be a subset of R^n, x^0 be an interior point of X, K be a closed convex cone in R^m, C be a closed convex cone in R^p, g : X → R^m be a function, η : X × X → R^n be a function such that η(x^0, x^0) = 0, and f : X → R^p be a function differentiable at x^0 and (−C\{0}, −C\{0})-pseudoinvex at x^0 w.r.t. η.

If x^0 is an efficient solution for (VOP), then x^0 is an efficient solution for (FAVOP).

Proof. Assume that x^0 is not an efficient solution for (FAVOP); then there exists a feasible solution x^1 ∈ F(FAVOP) such that

F(x^0) − F(x^1) ∈ C \ {0}.

Since η(x^0, x^0) = 0, we have

F(x^0) − F(x^1) = [f(x^0) + ∇f(x^0)(η(x^0, x^0))] − [f(x^0) + ∇f(x^0)(η(x^1, x^0))] = −∇f(x^0)(η(x^1, x^0)),

hence

∇f(x^0)(η(x^1, x^0)) ∈ −C \ {0}.

But f is (−C\{0}, −C\{0})-pseudoinvex at x^0 w.r.t. η (and x^1 ≠ x^0, since F(x^0) − F(x^1) ≠ 0), and then

f(x^1) − f(x^0) ∈ −C \ {0},

i.e. x^0 is not an efficient solution for (VOP). The theorem is proved.

Remark 3.2. The hypothesis that f is (−C\{0}, −C\{0})-pseudoinvex at x^0 w.r.t. η is essential, as seen in the following example.


Example 3.3. Let us consider Problem (VOP) with X := R^2, C := R^2_+, K := R^2_+, and f : R^2 → R^2, g : R^2 → R^2 and η : R^2 × R^2 → R^2 the functions defined by

f(x) = (x_1^2 + sin(πx_2/2), x_2^2 + sin(πx_1/3)),  g(x) = (x_1^2 − x_2, x_2^2 − x_1),

for all x = (x_1, x_2) ∈ R^2, and

η(x, u) = ((3/π) sin(πx_1/3) − (3/π) x_2^2, (2/π) sin(πx_2/2) − (4/π) x_1^2),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2.

The point x^0 = (0, 0) ∈ F(VOP) is an efficient solution for Problem (VOP). On the other hand,

F(x) = f(x^0) + ∇f(x^0)(η(x, x^0)) = (sin(πx_2/2) − 2x_1^2, sin(πx_1/3) − x_2^2),

for all x = (x_1, x_2) ∈ R^2, and for x^1 = (1, 1) ∈ F(VOP) = F(FAVOP),

F(x^0) − F(x^1) = (1, 1 − √3/2) ∈ C \ {0} = R^2_+ \ {0}.

Consequently, x^0 is not an efficient solution for (FAVOP).

Let us remark that f is not (−C\{0}, −C\{0})-pseudoinvex at x^0 w.r.t. η, because

∇f(x^0)(η(x^1, x^0)) = (−1, √3/2 − 1) ∈ −C \ {0} = −R^2_+ \ {0}

and

f(x^1) − f(x^0) = (2, 1 + √3/2) ∉ −C \ {0} = −R^2_+ \ {0}.
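The numbers in Example 3.3 can be reproduced directly; a Python sketch (NumPy, with the formulas as written above):

```python
import numpy as np

x0, x1 = np.zeros(2), np.ones(2)
f   = lambda x: np.array([x[0]**2 + np.sin(np.pi*x[1]/2),
                          x[1]**2 + np.sin(np.pi*x[0]/3)])
Jf0 = np.array([[0.0, np.pi/2], [np.pi/3, 0.0]])   # Jacobian of f at x0 = (0, 0)
eta = lambda x, u: np.array([3/np.pi*np.sin(np.pi*x[0]/3) - 3/np.pi*x[1]**2,
                             2/np.pi*np.sin(np.pi*x[1]/2) - 4/np.pi*x[0]**2])
F   = lambda x: f(x0) + Jf0 @ eta(x, x0)

print(F(x0) - F(x1))       # ~ [1.0, 0.134] = (1, 1 - sqrt(3)/2), in R^2_+ \ {0}
print(Jf0 @ eta(x1, x0))   # ~ [-1.0, -0.134] = (-1, sqrt(3)/2 - 1), in -R^2_+ \ {0}
print(f(x1) - f(x0))       # ~ [2.0, 1.866] = (2, 1 + sqrt(3)/2), not in -R^2_+ \ {0}
```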

Remark 3.4. The hypothesis that η(x^0, x^0) = 0 is essential, as seen in the following example.

Example 3.5. Let us consider Problem (VOP) with X := R^2, C := R^2_+, K := R^2_+, and f : R^2 → R^2, g : R^2 → R^2 and η : R^2 × R^2 → R^2 the functions defined by

f(x) = (x_1, x_2),  g(x) = (x_1^2 − x_2, x_2^2 − x_1),

for all x = (x_1, x_2) ∈ R^2, and

η(x, u) = (x_1 + (x_1 − 1)^2, x_2 + (x_2 − 1)^2),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2. The point x^0 = (0, 0) is an efficient solution for Problem (VOP). The function f is (−C)-invex at x^0 w.r.t. η, because, for all x = (x_1, x_2) ∈ R^2, we have

f(x) − f(x^0) − ∇f(x^0)(η(x, x^0)) = −((1 − x_1)^2, (1 − x_2)^2) ∈ −C.

It follows that f is (−C\{0}, −C\{0})-pseudoinvex at x^0 w.r.t. η.


Since, for each x = (x_1, x_2) ∈ R^2,

F(x) = f(x^0) + ∇f(x^0)(η(x, x^0)) = (x_1 + (x_1 − 1)^2, x_2 + (x_2 − 1)^2),

we deduce that x^0 is not an efficient solution for Problem (FAVOP). Why? Because, for x^1 = (1/2, 1/2), we have

F(x^0) − F(x^1) = [f(x^0) + ∇f(x^0)(η(x^0, x^0))] − [f(x^0) + ∇f(x^0)(η(x^1, x^0))] = (1, 1) − (3/4, 3/4) = (1/4, 1/4) ∈ C \ {0}.
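A short numerical illustration of Example 3.5 (Python/NumPy sketch; the identifiers are ours):

```python
import numpy as np

x0  = np.zeros(2)
f   = lambda x: np.array([x[0], x[1]])
Jf0 = np.eye(2)                                   # Jacobian of f at x0
eta = lambda x, u: np.array([x[0] + (x[0]-1)**2, x[1] + (x[1]-1)**2])
F   = lambda x: f(x0) + Jf0 @ eta(x, x0)

print(eta(x0, x0))                 # [1. 1.]  (not the zero vector)
x1 = np.array([0.5, 0.5])          # feasible: g(x1) = (-1/4, -1/4) in -R^2_+
print(F(x0) - F(x1))               # [0.25 0.25], in R^2_+ \ {0}
```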

Theorem 3.6. Let X be a subset of R^n, x^0 be an interior point of X, K be a closed convex cone in R^m, C be a closed convex cone with nonempty interior in R^p, g : X → R^m be a function, η : X × X → R^n be a function such that η(x^0, x^0) = 0, and f : X → R^p be a function differentiable at x^0 and (−int C, −int C)-pseudoinvex at x^0 w.r.t. η.

If x^0 is a weak efficient solution for (VOP), then x^0 is a weak efficient solution for (FAVOP).

Proof. The proof is similar to the proof of Theorem 3.1.

Theorem 3.7. Let X be a subset of R^n, x^0 be an interior point of X, K be a closed convex cone in R^m, C be a closed convex cone in R^p, g : X → R^m be a function, η : X × X → R^n be a function such that η(x^0, x^0) = 0, and f : X → R^p be a function differentiable at x^0 and (C\{0}, −C\{0})-quasiinvex at x^0 w.r.t. η.

If x^0 is an efficient solution for (FAVOP), then x^0 is an efficient solution for (VOP).

Proof. Assume that x^0 is not an efficient solution for (VOP); then there exists a feasible solution x^1 ∈ F(VOP) such that

f(x^0) − f(x^1) ∈ C \ {0}.

But f is (C\{0}, −C\{0})-quasiinvex at x^0 w.r.t. η (and x^1 ≠ x^0), and hence

∇f(x^0)(η(x^1, x^0)) ∈ −C \ {0}.

It follows that

F(x^0) − F(x^1) = [f(x^0) + ∇f(x^0)(η(x^0, x^0))] − [f(x^0) + ∇f(x^0)(η(x^1, x^0))] = −∇f(x^0)(η(x^1, x^0)) ∈ C \ {0},

because η(x^0, x^0) = 0. Consequently, x^0 is not an efficient solution for (FAVOP), which is a contradiction. The theorem is proved.

Remark 3.8. In Theorem 3.7, the hypothesis that f is (C\{0}, −C\{0})-quasiinvex at x^0 w.r.t. η is essential, as seen in the following example.


Example 3.9. Let us consider Problem (VOP) with X := R^2, C := R^2_+, K := R^2_+, and f : R^2 → R^2, g : R^2 → R^2 and η : R^2 × R^2 → R^2 the functions defined by

f(x) = (x_1, x_2),  g(x) = (x_1^2 − x_2, x_2^2 − x_1),

for all x = (x_1, x_2) ∈ R^2, and

η(x, u) = (x_1 + (x_1 − 1)^2, x_2 + (x_2 − 1)^2),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2. For x^0 = (0, 0) ∈ F(VOP), we have

f(x) − f(x^0) − ∇f(x^0)(η(x, x^0)) = −((x_1 − 1)^2, (x_2 − 1)^2) ∈ −C = −R^2_+,

for all x ∈ R^2. Consequently, the function f is not C-invex at x^0 w.r.t. η; moreover, for every x with f(x^0) − f(x) ∈ C\{0} (i.e. x_1 ≤ 0, x_2 ≤ 0, x ≠ (0, 0)) we have ∇f(x^0)(η(x, x^0)) = (x_1^2 − x_1 + 1, x_2^2 − x_2 + 1) ∈ int R^2_+, so f is not (C\{0}, −C\{0})-quasiinvex at x^0 w.r.t. η either.

Since, for each x = (x_1, x_2) ∈ R^2,

F(x) = f(x^0) + ∇f(x^0)(η(x, x^0)) = (x_1 + (x_1 − 1)^2, x_2 + (x_2 − 1)^2) = ((x_1 − 1/2)^2 + 3/4, (x_2 − 1/2)^2 + 3/4),

it follows that x^1 = (1/2, 1/2) ∈ F(VOP) = F(FAVOP) is an efficient solution for Problem (FAVOP) (both components of F attain their minimum only at x^1).

On the other hand,

f(x^1) − f(x^0) = (1/2, 1/2) ∈ C \ {0}.

Consequently, x^1, which is an efficient solution for Problem (FAVOP), is not an efficient solution for Problem (VOP).
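The claims of Example 3.9 can be checked on a sample of feasible points; a Python sketch (NumPy; the identifiers are ours):

```python
import numpy as np

x0  = np.zeros(2)
f   = lambda x: np.array([x[0], x[1]])
eta = lambda x, u: np.array([x[0] + (x[0]-1)**2, x[1] + (x[1]-1)**2])
F   = lambda x: f(x0) + np.eye(2) @ eta(x, x0)    # F(x) = ((x1-1/2)^2+3/4, (x2-1/2)^2+3/4)
g   = lambda x: np.array([x[0]**2 - x[1], x[1]**2 - x[0]])

rng = np.random.default_rng(1)
sample = [x for x in rng.uniform(0, 1, size=(5000, 2)) if np.all(g(x) <= 0)]
x1 = np.array([0.5, 0.5])
print(all(np.all(F(x) >= F(x1) - 1e-12) for x in sample))  # True: x1 minimizes F componentwise
print(f(x1) - f(x0))                                       # [0.5 0.5], in C \ {0}
```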

Remark 3.10. In Theorem 3.7, the hypothesis that η(x^0, x^0) = 0 is essential, as seen in the following example.

Example 3.11. Let us consider Problem (VOP) with X := R^2, C := R^2_+, K := R^2_+, and f : R^2 → R^2, g : R^2 → R^2 and η : R^2 × R^2 → R^2 the functions defined by

f(x) = (x_1^2 + x_1, x_2^2 + x_2),  g(x) = (x_1^2 − x_2, x_2^2 − x_1),

for all x = (x_1, x_2) ∈ R^2, and

η(x, u) = (x_1 − (x_1^2 + 1)^2, x_2 − (x_2^2 + 1)^2),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2.

For x^0 = (0, 0) ∈ F(VOP), we have

f(x) − f(x^0) − ∇f(x^0)(η(x, x^0)) = (x_1^2 + (x_1^2 + 1)^2, x_2^2 + (x_2^2 + 1)^2) ∈ C,

for all x ∈ R^2. Consequently, the function f is C-invex at x^0 w.r.t. η.

Since, for each x = (x_1, x_2) ∈ R^2,

F(x) = f(x^0) + ∇f(x^0)(η(x, x^0)) = (x_1 − (x_1^2 + 1)^2, x_2 − (x_2^2 + 1)^2),

and on F(VOP) = F(FAVOP) ⊆ [0, 1] × [0, 1] each component of F attains its minimum only at (1, 1), it follows that x^1 = (1, 1) ∈ F(VOP) = F(FAVOP) is an efficient solution for Problem (FAVOP).

On the other hand,

f(x^1) − f(x^0) = (2, 2) ∈ C \ {0}.

Consequently, x^1, which is an efficient solution for Problem (FAVOP), is not an efficient solution for Problem (VOP).

Let us remark that η(x^0, x^0) = (−1, −1) ≠ (0, 0).
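Similarly, the claims of Example 3.11 can be checked numerically (Python/NumPy sketch, with the formulas as written above):

```python
import numpy as np

x0  = np.zeros(2)
f   = lambda x: np.array([x[0]**2 + x[0], x[1]**2 + x[1]])
Jf0 = np.eye(2)                                            # Jacobian of f at x0
eta = lambda x, u: np.array([x[0] - (x[0]**2 + 1)**2, x[1] - (x[1]**2 + 1)**2])
F   = lambda x: f(x0) + Jf0 @ eta(x, x0)
g   = lambda x: np.array([x[0]**2 - x[1], x[1]**2 - x[0]])

rng = np.random.default_rng(2)
print(all(np.all(f(x) - f(x0) - Jf0 @ eta(x, x0) >= 0)     # C-invexity on a sample
          for x in rng.uniform(-3, 3, size=(2000, 2))))
print(eta(x0, x0))                                         # [-1. -1.]
x1 = np.ones(2)
feas = [x for x in rng.uniform(0, 1, size=(5000, 2)) if np.all(g(x) <= 0)]
print(all(np.all(F(x) >= F(x1) - 1e-9) for x in feas))     # True: (1, 1) minimizes F componentwise
print(f(x1) - f(x0))                                       # [2. 2.], in C \ {0}
```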

Theorem 3.12. Let X be a subset of R^n, x^0 be an interior point of X, K be a closed convex cone in R^m, C be a closed convex cone in R^p, g : X → R^m be a function, η : X × X → R^n be a function such that η(x^0, x^0) = 0, and f : X → R^p be a function differentiable at x^0 and (int C, −int C)-quasiinvex at x^0 w.r.t. η.

If x^0 is a weak efficient solution for (FAVOP), then x^0 is a weak efficient solution for (VOP).

Proof. The proof is similar to the proof of Theorem 3.7.

4. THE MODIFIED CONSTRAINT FUNCTION OF VECTOR OPTIMIZATION PROBLEMS

In this section, X is a subset of R^n, x^0 is an interior point of X, f : X → R^p is a function, C is a convex cone in R^p, K is a convex cone in R^m, and g : X → R^m is a function differentiable at x^0.

For η : X × X → R^n, we attach to Problem (VOP) the following vector optimization problem:

(CAVOP)  C-min  f(x)
         s.t.   x ∈ X,
                g(x^0) + ∇g(x^0)(η(x, x^0)) ∈ −K.

Let

F(CAVOP) := {x ∈ X : g(x^0) + ∇g(x^0)(η(x, x^0)) ∈ −K}

denote the set of all feasible solutions of Problem (CAVOP).


Theorem 4.1. Let X be a subset of R^n, x^0 be an interior point of X, K be a closed convex cone in R^m, C be a closed convex cone in R^p, η : X × X → R^n and f : X → R^p be two functions, and g : X → R^m be a function differentiable at x^0.

If the function g is K-incave at x^0 w.r.t. η, then every feasible solution for Problem (CAVOP) is a feasible solution for Problem (VOP), i.e.

F(CAVOP) ⊆ F(VOP).

Proof. Let x^1 ∈ F(CAVOP), i.e.

g(x^0) + ∇g(x^0)(η(x^1, x^0)) ∈ −K.

Since g is K-incave at x^0 w.r.t. η, we have

g(x^1) − g(x^0) − ∇g(x^0)(η(x^1, x^0)) ∈ −K.

From this, it follows that

g(x^1) ∈ −K + {g(x^0) + ∇g(x^0)(η(x^1, x^0))} ⊆ −K + (−K) = −K,

hence x^1 ∈ F(VOP).

Example 4.2. Let us consider Problem (VOP) with X = R^2, C = K = R^2_+, and f : R^2 → R^2, g : R^2 → R^2 and η : R^2 × R^2 → R^2 the functions defined by

f(x) = (sin(x_1 + x_2^4), x_1^2 (x_2 − 7)^2),  g(x) = (x_1^2 − x_2, x_2^2 − x_1),

for all x = (x_1, x_2) ∈ R^2, and

η(x, u) = (x_1 − u_1, x_2 − u_2),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2.

The function g is not R^2_+-incave at x^0 = (0, 0) w.r.t. η, because

g(1, 1) − g(x^0) − ∇g(x^0)(η((1, 1), x^0)) = (1, 1) ∉ −R^2_+.

Since

g(x^0) + ∇g(x^0)(η(x, x^0)) = (−x_2, −x_1), for all x = (x_1, x_2) ∈ R^2,

the set of feasible solutions for Problem (CAVOP) is F(CAVOP) = R^2_+. Consequently,

F(CAVOP) = R^2_+ ⊇ F(VOP) = {(x_1, x_2) : x_1^2 − x_2 ≤ 0, x_2^2 − x_1 ≤ 0}.

Obviously, the point

x^1 = (0, 7) ∈ F(CAVOP) \ F(VOP).

The point x^0 = (0, 0) is an efficient solution for Problem (VOP), and x^0 is not an efficient solution for Problem (CAVOP), because

f(x^0) − f(0, 7) = (1, 0) ∈ C \ {0} = R^2_+ \ {0}.
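The constraint-side computations of Example 4.2 can be verified as follows (Python/NumPy sketch; the identifiers are ours):

```python
import numpy as np

x0  = np.zeros(2)
g   = lambda x: np.array([x[0]**2 - x[1], x[1]**2 - x[0]])
Jg0 = np.array([[0.0, -1.0], [-1.0, 0.0]])        # Jacobian of g at x0
eta = lambda x, u: x - u

x = np.ones(2)
print(g(x) - g(x0) - Jg0 @ eta(x, x0))            # [1. 1.], not in -R^2_+: g is not K-incave
approx_g = lambda x: g(x0) + Jg0 @ eta(x, x0)     # equals (-x2, -x1)
x1 = np.array([0.0, 7.0])
print(np.all(approx_g(x1) <= 0), np.all(g(x1) <= 0))   # True False: x1 in F(CAVOP) \ F(VOP)
```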


Theorem 4.3. Let X be a subset of R^n, x^0 be an interior point of X, K be a closed convex cone in R^m, C be a closed convex cone in R^p, η : X × X → R^n and f : X → R^p be two functions, and g : X → R^m be a function differentiable at x^0.

If the function g is K-invex at x^0 w.r.t. η, then every feasible solution for Problem (VOP) is a feasible solution for Problem (CAVOP), i.e.

F(VOP) ⊆ F(CAVOP).

Proof. Let x^1 ∈ F(VOP), i.e. g(x^1) ∈ −K. Since g is K-invex at x^0 w.r.t. η, we have

g(x^1) − g(x^0) − ∇g(x^0)(η(x^1, x^0)) ∈ K.

From this, it follows that

g(x^0) + ∇g(x^0)(η(x^1, x^0)) ∈ −K + {g(x^1)} ⊆ −K + (−K) = −K,

hence x^1 ∈ F(CAVOP).

Example 4.4. Let us consider Problem (VOP) with X := R^2, C := R^2_+, K := R^2_+, and f : R^2 → R^2, g : R^2 → R^2 and η : R^2 × R^2 → R^2 the functions defined by

f(x) = (sin(x_1 + x_2^4), x_1^2 (x_2 − 7)^2),  g(x) = (x_1^2 − x_2, x_2^2 − x_1),

for all x = (x_1, x_2) ∈ R^2, and

η(x, u) = (x_1 − u_1, x_2 − u_2),

for all (x, u) = ((x_1, x_2), (u_1, u_2)) ∈ R^2 × R^2.

We have

F(VOP) = {(x_1, x_2) : x_1^2 − x_2 ≤ 0, x_2^2 − x_1 ≤ 0} ⊆ [0, 1] × [0, 1];

the point x^0 = (0, 0) is an efficient solution for (VOP), and the function g is R^2_+-invex at x^0 w.r.t. η.

Since

g(x^0) + ∇g(x^0)(η(x, x^0)) = (−x_2, −x_1), for all x = (x_1, x_2) ∈ R^2,

the set of feasible solutions for Problem (CAVOP) is

F(CAVOP) = R^2_+ ⊇ F(VOP).

It is easy to remark that x^0 is not an efficient solution for Problem (CAVOP), because

f(x^0) − f(0, 7) = (1, 0) ∈ C \ {0} = R^2_+ \ {0}.


5. CONCLUSIONS

In this paper we have shown that, under suitable hypotheses, in order to obtain a solution of a vector optimization problem it is sufficient to solve an associated approximated vector optimization problem.

REFERENCES

[1] T. Antczak, Saddle Point Criteria and Duality in Multiobjective Programming via an η-Approximation Method, ANZIAM J., 47, pp. 155–172, 2005.

[2] T. Antczak, A New Approach to Multiobjective Programming with a Modified Objective Function, Journal of Global Optimization, 27, pp. 485–495, 2003.

[3] C.R. Bector, S. Chandra and C. Singh, A Linearization Approach to Multiobjective Programming Duality, Journal of Mathematical Analysis and Applications, 175, pp. 268–279, 1993.

[4] A. Ben-Israel and B. Mond, What is Invexity?, Journal of the Australian Mathematical Society, 28B, pp. 1–9, 1986.

[5] B.D. Craven, Invex Functions and Constrained Local Minima, Bulletin of the Australian Mathematical Society, 24, pp. 357–366, 1981.

[6] J.W. Chen, Y.J. Cho, J.K. Kim and J. Li, Multiobjective Optimization Problems with Modified Objective Functions and Cone Constraints and Applications, Journal of Global Optimization, DOI 10.1007/s10898-010-9539-3.

[7] D.I. Duca, On the Higher-Order in Nonlinear Programming in Complex Space, Seminar on Optimization Theory Cluj-Napoca, pp. 39–50, Preprint 85-5, Univ. Babeş-Bolyai, Cluj-Napoca, 1985.

[8] D.I. Duca, Multicriteria Optimization in Complex Space, House of the Book of Science, Cluj-Napoca, 2006.

[9] D.I. Duca and E. Duca, Optimization Problems and η-Approximated Optimization Problems, Studia Univ. "Babeş-Bolyai", Mathematica, 54, no. 4, pp. 49–62, 2009.

[10] M. Hachimi and B. Aghezzaf, Sufficiency and Duality in Differentiable Multiobjective Programming Involving Generalized Type I Functions, Journal of Mathematical Analysis and Applications, 296, pp. 382–392, 2004.

[11] M.A. Hanson, On Sufficiency of Kuhn-Tucker Conditions, Journal of Mathematical Analysis and Applications, 80, pp. 545–550, 1981.

[12] O.L. Mangasarian, Nonlinear Programming, McGraw-Hill Book Company, New York, NY, 1969.

[13] O.L. Mangasarian, Second- and Higher-Order Duality in Nonlinear Programming, Journal of Mathematical Analysis and Applications, 51, pp. 607–620, 1975.

[14] D.H. Martin, The Essence of Invexity, Journal of Optimization Theory and Applications, 47, pp. 65–76, 1985.

[15] S.K. Mishra and K.K. Lai, Second Order Symmetric Duality in Multiobjective Programming Involving Generalized Cone-Invex Functions, European Journal of Operational Research, 178, no. 1, pp. 20–26, 2007.

[16] J. Zhang and B. Mond, Second Order B-Invexity and Duality in Mathematical Programming, Utilitas Mathematica, 50, pp. 19–31, 1996.

Received by the editors: August 18, 2010.
