# Multicriteria Decision Aiding


(1)

### Compensating and Non-Compensating Methods

Iryna Yevseyeva

Research Associate

School of Computing Science

Centre for Cybercrime and Computer Security (CCCS) http://cccs.ncl.ac.uk

Newcastle University, UK

Evolutionary Computation, July 18-24 (2015), Iasi, Romania

(2)

## Learning objectives

•  Basics of Multi Criteria Decision Aiding (MCDA)

•  What is the subject area of Multi Criteria Decision Aiding?

•  How are MultiObjective Optimisation and MultiCriteria Decision Aiding/Analysis related, and how do they differ?

•  Decision Maker’s Preferences

•  How do different MCDA approaches model the Decision Maker's preferences?

•  Ways of expressing preferences of the Decision Maker.

•  MCDA Compensatory and Non-compensatory Approaches

•  MAVT vs. ELECTRE.

•  When is it appropriate to use each of the learned algorithms?

(3)

## Security decision making

### •  Fitting to existing model vs. adapting to new

(4)

### Parallel Coordinate Diagram

[Figure: a parallel coordinate plot of four alternatives L1-L4 on four criteria, all to be minimised: CPU → min, Price → min, −Usability → min, −Popularity → min; the axes carry low/medium/high and percentage scales.]

All criteria are to be minimised.

L2 dominates L1.
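The dominance relation illustrated in the diagram can be sketched in a few lines (Python used for illustration; the score vectors below are made-up):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (all criteria to be minimised):
    a is no worse on every criterion and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical scores on (CPU, Price, -Usability, -Popularity)
L1 = [0.6, 900, -0.5, -0.8]
L2 = [0.5, 800, -0.6, -0.9]   # no worse than L1 anywhere, better somewhere

print(dominates(L2, L1))  # True: L2 dominates L1
print(dominates(L1, L2))  # False
```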

(5)

### MultiCriteria Decision Analysis/Aiding

•  In MultiObjective Optimisation (MOO)

•  addresses continuous and discrete combinatorial problems;

•  solutions/alternatives are defined by a set of constraints;

•  objective functions are then optimised in this region.

•  MultiCriteria Decision Analysis/Aiding (MCDA)

•  mainly discrete problems with small sets of alternatives;

•  for combinatorial problems find rules for decision making;

•  constraints are taken into account implicitly: in the set of criteria and/or alternatives

•  Both MOO and MCDA

•  use evaluation of solutions/alternatives based on multiple criteria/objectives

•  and search for trade-off solution(s) (e.g. using Pareto dominance).

(6)

### MultiCriteria Decision Aiding/Analysis

•  Terminology:

•  MultiObjective Optimization -> MultiCriteria Decision Aiding/Analysis

•  MOO -> MCDA

•  Solutions (to be found) -> Alternatives (often given)

•  Objectives (given) -> Criteria (given)

•  Usually impersonal -> Decision Maker(s) (DM)

•  Objective -> Subjective

(7)

### MCDA used after MOO

[Figure: a mapping f from the search space (decision space), containing solutions x1, x2, x3, to the objective space (solution space); part of the Pareto front is highlighted as the set of interesting solutions.]

•  MCDA can be used in combination with, before, or after MOO, e.g. concentrating the search on a smaller region of the search space.

(8)

(9)

## Types of MCDA problems

•  To structure the problem: to identify alternatives and criteria (especially if they are hierarchically structured);

•  To select one alternative among many;

•  To rank all alternatives;

•  To sort them into several (ordered) groups;

•  To select a portfolio of alternatives.

(10)

### MCDA Examples

•  To select an investment plan;

•  To rank the students participating in a “best assignment” competition;

•  To sort research projects into those to be funded and those not;

•  To choose a team of employees;

•  To choose a new airport location;

•  To decide on giving credit to a client or not;

•  To rank universities by their scientific and academic performance;

•  To find the best combination of assets for a financial portfolio.

(11)

(12)

(13)

(14)

## MCDA problems type: Combination

Cars a company prefers to buy

(15)

## Value-Focused Thinking

•  “When you are faced with a difficult decision situation, start by thinking about your values. Do not start by thinking about alternatives, as that will limit you.”

•  Why is each criterion important? Answering this helps to set priorities and importance for the criteria. E.g., thinking about why good gasoline mileage is important for a car, you may arrive at more fundamental criteria, like operational costs, pollution, etc.

•  Ralph L. Keeney, Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press, Cambridge, MA, 1992.

(16)

## What is the best car?

[Figure: four cars (Luxury, Speedy, Tiny, Fancy) plotted against the Speed and Cost axes.]

Criteria/objectives: Cost → min, Speed → max, Length → min

(17)

## What is the best anti-virus?

[Figure: four anti-virus products (Speedy, “Fancy”, Luxury, Tiny) plotted against the CPU, Cost, and Usability axes.]

Criteria/objectives: Cost → min, CPU → max, Usability → max

(18)

## Problem representation

•  Performance matrix

•  vj(ai) is the performance value of alternative ai, i = 1,…,m, on criterion gj, j = 1,…,n, and kj is the importance coefficient of criterion gj.

| | g1(·) | g2(·) | … | gn(·) |
|---|---|---|---|---|
| k | k1 | k2 | … | kn |
| a1 | v1(a1) | v2(a1) | … | vn(a1) |
| a2 | v1(a2) | v2(a2) | … | vn(a2) |
| … | | | | |
| am | v1(am) | v2(am) | … | vn(am) |

(19)

## Example: What is the best car?

| | Cost (min) | Speed (max) | Length (min) |
|---|---|---|---|
| Importance | 3 | 2 | 1 |
| Tiny | 10,000 | 100 | 2 |
| Fancy | 60,000 | 150 | 4 |
| Speedy | 100,000 | 200 | 4 |
| Luxury | 300,000 | 100 | 7 |

(20)

## Criteria scales

•  Ratio/Quantitative/Cardinal: There is information on the difference between scale values with respect to a non-arbitrary origin, e.g. meters, kg, etc.

•  Ordinal/Qualitative/Verbal: There is order on the scale but no information on the difference between scale

values, e.g. excellent, good, satisfactory, bad.

•  Interval: There is information on difference between scale values with respect to an arbitrary origin, e.g., Fahrenheit and Celsius temperature scales.

(21)

## MCDA Approaches

•  Basic techniques: Dominance test, MaxiMin, Maximax, Lexicographic and Lexicographic semi-order methods, Simple Additive Weighting (SAW).

•  American schools: Multi-Attribute Utility Theory (MAUT) / Multi-Attribute Value Theory (MAVT) and Analytical

Hierarchy Process (AHP).

•  European school: Outranking-based approaches, such as ELECTRE, PROMETHEE;

•  Uncertainty: fuzzy sets, rough sets, Stochastic Multicriteria Acceptability Analysis (SMAA),Verbal analysis.
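Two of the basic techniques listed above, MaxiMin and the Lexicographic method, can be sketched as follows (a minimal illustration with made-up, already-normalised values, all to be maximised):

```python
def maximin_choice(alts):
    """MaxiMin: pick the alternative whose worst criterion value is best
    (all values assumed commensurable and to be maximised)."""
    return max(alts, key=lambda a: min(alts[a]))

def lexicographic_choice(alts, order):
    """Lexicographic: compare on the most important criterion first,
    breaking ties with the next one in `order`, and so on."""
    return max(alts, key=lambda a: tuple(alts[a][j] for j in order))

alts = {"a1": [0.2, 0.9], "a2": [0.6, 0.5]}
print(maximin_choice(alts))                # 'a2' (its worst value 0.5 beats 0.2)
print(lexicographic_choice(alts, [1, 0]))  # 'a1' (wins on criterion 1)
```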

(22)

## Simple Additive Weighting (SAW)

•  Simple and intuitive, but has drawbacks.

•  The alternatives are selected/ranked/sorted based on their weighted-sum score:

S(ai) = ∑ kj*gj(ai),

•  where kj is the importance coefficient (weight) of criterion gj;

gj(ai) is the value of alternative ai on criterion gj.

•  All criteria must be oriented the same way: either all to be minimised or all to be maximised.

•  All scales are to be homogenised (normalised) for commensurable comparison (roughly speaking, to be able to aggregate kilograms and metres).
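The weighted-sum score can be sketched in a few lines (weights and performances below are made-up, assumed already normalised and to be maximised):

```python
def saw_score(performance, weights):
    """SAW score S(a) = sum_j k_j * g_j(a)."""
    return sum(k * g for k, g in zip(weights, performance))

weights = [0.5, 0.3, 0.2]   # hypothetical importance coefficients, summing to 1
alts = {"a1": [0.2, 0.9, 0.5], "a2": [0.8, 0.1, 0.4]}

ranking = sorted(alts, key=lambda a: saw_score(alts[a], weights), reverse=True)
print(ranking)  # ['a2', 'a1'] (scores 0.51 vs 0.47)
```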

(23)

## SAW: Normalisation

•  Ratio normalisation with all criteria scales in [0,1] interval:

•  For criterion to be maximised xnorm= x/xbest

•  For criterion to be minimised xnorm= xbest/x

•  Normalisation of ratio difference with all criteria scales in [0,1] interval:

•  For all criteria scales xnorm = (x − xworst)/(xbest − xworst)

•  Euclidean normalisation (non-linear transformation):

•  xnorm=x/√∑x2

•  Drawbacks:

•  Adding/removing alternatives may change ranking.

•  Difficult for qualitative scales.

•  SAW depends heavily on the normalisation used!
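The three normalisations can be sketched as follows; running them on the Speed column of the car example shows how the relative values differ between schemes, which is one source of SAW's sensitivity to the normalisation chosen:

```python
import math

def norm_ratio(xs, maximise=True):
    """Ratio normalisation onto [0,1]: x/x_best (max) or x_best/x (min)."""
    best = max(xs) if maximise else min(xs)
    return [x / best if maximise else best / x for x in xs]

def norm_minmax(xs, maximise=True):
    """Ratio-difference normalisation: (x - x_worst) / (x_best - x_worst)."""
    lo, hi = min(xs), max(xs)
    best, worst = (hi, lo) if maximise else (lo, hi)
    return [(x - worst) / (best - worst) for x in xs]

def norm_euclidean(xs):
    """Euclidean normalisation (non-linear): x / sqrt(sum of squares)."""
    d = math.sqrt(sum(x * x for x in xs))
    return [x / d for x in xs]

speeds = [100, 150, 200, 100]  # Speed column of the car example, to be maximised
print(norm_ratio(speeds))      # [0.5, 0.75, 1.0, 0.5]
print(norm_minmax(speeds))     # [0.0, 0.5, 1.0, 0.0]
```

Note how the two linear schemes already disagree on the relative merit of the slowest cars (0.5 vs 0.0), so the final SAW ranking can change with the scheme.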

(24)

## Multi-Attribute Value Theory (MAVT)

•  Developed in the late 60's by Ralph Keeney and Howard Raiffa. Similarly to Multi-Attribute Utility Theory (MAUT)*, it assumes a rational Decision Maker and adapts MAUT by modelling a rigorous preference structure.

•  All criteria evaluations are aggregated into an overall score for each alternative: V(ai)=∑kj*vj(ai)

•  For each criterion gj a value function vj(gj(·)) should be constructed that reflects the subjective value of the

criterion for the decision maker.

•  MAVT preference relation is complete and transitive.

* Ralph Keeney and Howard Raiffa (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York.

(25)

## Multi-Attribute Value Theory (MAVT)

•  For each gj a value function vj(gj(·)) should be

constructed that reflects the subjective criterion value of the decision maker.

•  Compare to SAW, where normalisation is impersonal.

•  vj(gj(ai)) or vj(ai) denotes the value or performance of the alternative ai on criterion gj.

•  The most common is additive form of aggregation for the global score (but there are more complex forms of the value function):

V(a) = f(v1(a), v2(a), … , vn(a))

(26)

## MAVT: Preference relation

•  MAVT constructs a complete order on the set of alternatives.

•  Only preference and indifference relations are possible between alternatives:

•  vj(a)> vj(b) if and only if alternative a is considered to be better than alternative b on criterion gj (gj(a) is preferred to gj(b));

•  vj(a) = vj(b) if and only if alternative a is considered as good as alternative b on criterion gj (gj(a) is indifferent to gj(b)).

•  Transitivity of preference and indifference is assumed.

(27)

[Figures, slides 27-29: pairwise preference (>) comparisons among “Speedy”, Tiny, and Fancy, building up to a complete order.]

(30)

## MAVT: Transitivity of preference

•  If a is preferred to b and b is preferred to c, then a is preferred to c.

•  Suppose that 3 judges are evaluating alternatives a, b, c.

•  Alex deems “Speedy” is better than “Fancy” and “Fancy” is better than “Luxury”

•  Mary deems “Fancy” is better than “Luxury” and “Luxury” is better than “Speedy”

•  John deems “Luxury” is better than “Speedy” and “Speedy” is better than “Fancy”

•  Voting claims 2:1 majority for “Speedy” being better than “Fancy”;

and a 2:1 majority claim “Fancy” is better than “Luxury”.

•  By transitivity “Speedy” should be better than “Luxury”, but a 2:1 majority dictates “Luxury” is better than “Speedy”.

(31)


(32)

## MAVT: Transitivity of indifference

•  If a is indifferent to b and b is indifferent to c, then a is indifferent to c.

•  Alex is indifferent between drinking coffee with 3g of sugar (alternative a3) and drinking coffee with 4g of sugar (alternative a4).

•  Alex is indifferent between drinking coffee with 4g of sugar (alternative a4) and drinking coffee with 5g of sugar (alternative a5).

•  Similarly, he is indifferent between a5 and a6, between a6 and a7, and so on.

•  By transitivity Alex should be indifferent between, say, a3 and a7, but in practice he probably would not be.

This type of reasoning cannot be reproduced using value functions!

(33)

### Example: Selecting a new security manager

| | Security-management skills | Intelligence | Communication skills | Work experience |
|---|---|---|---|---|
| kj | Most important | 2nd most important | 2nd most important | 2nd most important |
| a1 | Poor | High IQ | Very good | 8 years |
| a2 | Very good | Very high IQ | Poor | 1 year |
| a3 | Excellent | Low IQ | Satisfactory | 1 year |
| a4 | Good | High IQ | Good | 4 years |

(34)

### Example: Selecting a new security manager

| | Security-management skills | Intelligence | Communication skills | Work experience | V(a) | Conclusion |
|---|---|---|---|---|---|---|
| kj | 0.4 | 0.2 | 0.2 | 0.2 | | TM twice as important |
| a1 | 0.2 | 0.4 | 0.5 | 0.8 | 0.42 | Experienced |
| a2 | 0.5 | 0.6 | 0.1 | 0.1 | 0.36 | TM & I |
| a3 | 0.8 | 0.1 | 0.2 | 0.1 | 0.40 | TM (most important crit.) |
| a4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.40 | Balanced |

Compensation effect!
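The V(a) column above can be reproduced with the additive aggregation; the dictionary keys abbreviate the slide's columns (TM = management skills, I = intelligence, C = communication, E = experience):

```python
weights = {"TM": 0.4, "I": 0.2, "C": 0.2, "E": 0.2}  # scale coefficients from the slide

candidates = {
    "a1": {"TM": 0.2, "I": 0.4, "C": 0.5, "E": 0.8},
    "a2": {"TM": 0.5, "I": 0.6, "C": 0.1, "E": 0.1},
    "a3": {"TM": 0.8, "I": 0.1, "C": 0.2, "E": 0.1},
    "a4": {"TM": 0.4, "I": 0.4, "C": 0.4, "E": 0.4},
}

def mavt_value(values, weights):
    """Additive MAVT aggregation V(a) = sum_j k_j * v_j(a)."""
    return sum(weights[c] * values[c] for c in weights)

for name, vals in candidates.items():
    print(name, round(mavt_value(vals, weights), 2))
# a1 0.42, a2 0.36, a3 0.4, a4 0.4
```

Note how a1's poor management score (0.2) is fully compensated by its work experience (0.8): the compensation effect.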

(35)

## MAVT: Constructing “values”

•  A value function reflects the subjective preferences of the client.

•  SAW uses normalisation, which is an impersonal operation.

•  Assuming transitivity of preference and indifference the client’s preferences can be defined by value functions.

•  The idea is to build a function that the client perceives as adequate to represent his or her judgment about strengths of preference. If the client feels comfortable with the representation, then it is acceptable.

•  There are different techniques for building criteria value functions, e.g. direct rating, curve fitting, bisection, and standard differences, and for extracting importance coefficients.

(36)

## MAVT: Value function with Direct rating

•  Ask the client for a numerical value for each performance level on a given criterion’s scale.

•  Or ask the client to adjust a graphical representation of performance levels on a line segment representing value.

•  In doing so the client should recall that

•  vj(a)> vj(b) if and only if alternative a is considered better than alternative b on criterion gj (gj(a) is preferred to gj(b));

•  vj(a)−vj(c) > vj(b)−vj(d) if and only if the strength of preference for a over c is higher than the strength of preference for b over d on criterion gj.

(37)

## Example: What is the best car?

| | Cost (min) | Speed (max) | Length (min) |
|---|---|---|---|
| Importance | 3 | 2 | 1 |
| Tiny | 10,000 | 100 | 2 |
| Fancy | 60,000 | 150 | 4 |
| Speedy | 100,000 | 200 | 4 |
| Luxury | 300,000 | 100 | 7 |

(38)

## MAVT: Value function with Curve fitting

•  Ask the client to adjust the parameters defining a given curve

•  E.g. a negative exponential curve ranging from extremely concave to extremely convex cases

•  or a sigmoidal curve
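One common parametric family the client could adjust is the negative exponential; this sketch (the parameterisation is illustrative, not one prescribed by the lecture) maps a criterion range onto [0,1], with the sign of c switching between concave and convex shapes:

```python
import math

def exp_value(x, worst, best, c):
    """Negative-exponential value function on [worst, best], scaled to [0,1].
    c > 0 gives a concave curve, c < 0 a convex one; near 0 it approaches linear."""
    t = (x - worst) / (best - worst)
    if abs(c) < 1e-9:
        return t          # limiting linear case
    return (1 - math.exp(-c * t)) / (1 - math.exp(-c))

# Concave: early speed gains matter most to this hypothetical client
print(round(exp_value(150, 100, 200, c=2.0), 3))  # 0.731, i.e. above the linear 0.5
```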

(39)

## MAVT: Value function with Bisection

•  Ask the client to indicate a performance level that corresponds to the best value and set it to 1 and the worst value and set it to 0.

•  Ask the client to indicate a performance level that splits the interval in two in terms of value (such that changing from the 0 value performance to the chosen midpoint increases value as much as does changing from this midpoint to the 1 value performance).

•  Then the chosen midpoint corresponds to value 0.5.

•  Use the same process to bisect the intervals [0, 0.5] and [0.5, 1]. And so on.

(40)

## MAVT: Value function with differences

•  Define an improvement in a different criterion to serve as a comparison standard (like a ruler for measuring distances).

•  Take an initial value xj1 in criterion gj

•  Ask the client to indicate a second value xj2 such that the increase of value of going from xj1 to xj2 is equal to the comparison standard defined before.

•  Ask the client to indicate a third value xj3 such that the increase of value of going from xj2 to xj3 is equal to the comparison standard defined before, and so on.

•  Then, vj(xj2) − vj(xj1) = vj(xj3) − vj(xj2) = ...

(41)

## MAVT: Aggregation

•  For aggregating criteria values an additive function can be used, e.g.

V(ai) = f(v1(ai), v2(ai), … , vn(ai))

•  E.g.

V(ai) = k1v1(ai) + k2v2(ai) + ··· + knvn(ai),

where kj is the scale coefficient for vj(·), such that kj > 0 and ∑kj = 1.

Note that a scale coefficient alone does not represent criterion importance!

•  Different weight elicitation techniques can be used, such as ratio weighting with swings, or the Saaty scale with swings.

(42)

## MAVT: Construct scale coefficients

•  If r1 units (value to be asked) in v1(·) are worth the same as r2 units in v2(·), then we must have k2/k1 = r1/r2.

•  One can ask similar type of questions to obtain k2/k1 = r21, k3/k1 = r31,…, kn/k1 = rn1,

•  and then use the equality ∑kj =1 to determine the solution for the resulting system of equations.
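The resulting system of equations can be solved directly: with all ratios expressed against k1, the normalisation ∑kj = 1 gives k1 = 1/(1 + ∑ rj1). A sketch, assuming the client answered the ratio questions (the ratios below are made-up):

```python
def coefficients_from_ratios(ratios):
    """Given r_j = k_j / k_1 for j = 2..n (elicited from the client),
    recover k_1..k_n using the normalisation sum(k_j) = 1."""
    k1 = 1.0 / (1.0 + sum(ratios))
    return [k1] + [r * k1 for r in ratios]

# Hypothetical answers: k2/k1 = 0.5, k3/k1 = 0.25
print(coefficients_from_ratios([0.5, 0.25]))  # approx [0.571, 0.286, 0.143]
```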

(43)

### Example: Selecting a new security manager

| | Time-management skills | Intelligence | V(a) |
|---|---|---|---|
| kj | ? | ? | |
| a1 | 0.45 | 0.90 | ? |
| a2 | 0.90 | 0.45 | ? |
| a3 | 0.65 | 0.65 | 0.65 |

(44)

### Example: Selecting a new security manager

| | Time-management skills | Intelligence | V(a) |
|---|---|---|---|
| kj | 0.3 | 0.7 | |
| a1 | 0.45 | 0.90 | 0.765 |
| a2 | 0.90 | 0.45 | 0.585 |
| a3 | 0.65 | 0.65 | 0.65 |

Convexly dominated alternatives cannot win!

(45)

## MAVT Conclusions

•  Compared with SAW, MAVT constructs functions eliciting preferences rather than normalising performances.

•  There is an underlying compensation effect (in the sense that bad performances in some criteria may be

compensated by good performances in other criteria).

•  The mathematical aggregation is still a weighted sum, but the numbers have a meaning very different from SAW.

•  The result is therefore “tailored” to the preferences of a specific client; it is subjective.

•  The same DM interviewed at different moments in time may give different preference values.

(46)

## MAVT Conclusions

Pros:

•  The simplicity of additive aggregation.

•  It matches the intuitive way people aggregate.

•  A rigorous way of obtaining commensurable scales.

Specific feature:

•  Shows the role of “weights” as trade-offs, not intuitive importance.

Cons:

•  It can be difficult to explain the method and elicit answers (but this encourages thinking).

•  Requires strong independence conditions (but it may be possible to restructure the set of criteria).

•  Very poor performances can be compensated (but such alternatives can be eliminated beforehand).

(47)

## Outranking approach

•  The development of the ELECTRE family of methods started in the late 60's by Bernard Roy and colleagues, with ELECTRE I for the task of selecting the best alternative.

•  The approach evaluates an outranking relation that is neither necessarily complete (that is, incomparability is possible), nor necessarily transitive.

•  Note that incomparability ≠ indifference.

•  All alternatives are compared to each other pairwise, as in a tournament.

(48)

## Basic ideas of ELECTRE

•  The outranking relation is constructed based on two concepts:

•  Concordance: If gj(a) is not worse than gj(b), then the criterion gj is concordant with a S b.

•  Discordance: If gj(a) is worse than gj(b), then the criterion gj is discordant with a S b.

•  “a outranks b” (a S b) if there are enough arguments to decide that “a is at least as good as b” (concordance holds), and there is no essential argument to oppose this statement (no discordance holds); in other words, “a is not worse than b”.

•  Note that a criterion can be concordant with both a S b and b S a, namely when gj(a) = gj(b).

•  The outranking relation is checked for all pairs of alternatives, in both directions.

(49)

## Basic ideas of ELECTRE

•  One of four preference situations can be established between two alternatives after evaluating the outranking relation:

•  a P b Preference, if a S b and ¬b S a

•  b P a Preference, if b S a and ¬a S b

•  a R b Incomparable, if ¬a S b and ¬b S a

•  a I b Indifferent, if a S b and b S a

•  No need to compute a global value for each alternative; only pairwise comparisons.

•  Incomparability between alternatives is accepted.

(50)

## Conditions for ELECTRE application

•  Ordinal or interval scales that are not suitable for comparison of differences.

•  Strong heterogeneity of evaluations on criteria scales, which is difficult to aggregate into a unique criterion.

•  A loss in one criterion cannot be compensated by a gain in another criterion, which requires the use of a non-compensatory aggregation procedure.

•  For at least one criterion, small differences are not significant in terms of preferences, but an accumulation of several small differences may become significant; this requires the introduction of thresholds, which makes indifference intransitive.

(51)


(52)

## Preferences example

| | Security-management skills g1 | Intelligence g2 | Communication skills g3 | Work experience g4 |
|---|---|---|---|---|
| a1 | 0.2 | 0.6 | 0.5 | 0.6 |
| a2 | 0.5 | 0.4 | 0.1 | 0.1 |
| better | a2 | a1 | a1 | a1 |

a1 S a2: the weight of (g2, g3, g4) is big enough and the opposition from criterion g1 is small.

¬(a2 S a1): the weight of g1 alone is not big enough.

a1 S a2 and ¬(a2 S a1) → a1 P a2.

(53)

## Indifference example

| | Security-management skills g1 | Intelligence g2 | Communication skills g3 | Work experience g4 |
|---|---|---|---|---|
| a1 | 0.2 | 0.4 | 0.5 | 0.6 |
| a2 | 0.5 | 0.6 | 0.1 | 0.1 |
| better | a2 | a2 | a1 | a1 |

a1 S a2: the weight of (g3, g4) is big enough and the opposition from criteria (g1, g2) is small.

a2 S a1: the weight of (g1, g2) is big enough and the opposition from criteria (g3, g4) is small.

a1 S a2 and a2 S a1 → a1 I a2.

(54)

## Incomparability example

| | Security-management skills g1 | Intelligence g2 | Communication skills g3 | Work experience g4 |
|---|---|---|---|---|
| a1 | 0.2 | 0.4 | 0.5 | 0.8 |
| a2 | 0.5 | 0.6 | 0.1 | 0.1 |
| better | a2 | a2 | a1 | a1 |

a1 S a2: the weight of (g3, g4) is big enough and the opposition from criteria (g1, g2) is small.

a2 S a1: the weight of (g1, g2) is big enough, but the opposition from criteria (g3, g4) is very big (g4 has a veto effect).

(55)

## ELECTRE I (1967)

•  1. Construct a crisp outranking relation (which tells whether an alternative outranks another or not). For each pair of alternatives a and b, check if a S b and if b S a.

•  2. Exploit the outranking relation to find a kernel, that is, select a minimal set of candidates to become the most preferred alternative.

(56)

## ELECTRE I: Constructing S

•  Given (a,b), a S b holds if the following conditions are both true:

•  Concordance: c(a,b) = ∑{j: Δj(a,b) ≥ 0} kj ≥ c

•  (non)Discordance: −Δj(a,b) < vj for every criterion gj

•  where Δj(a,b) is the advantage of a over b on criterion gj;

kj is the weight of criterion gj (it is assumed that k1,...,kn ≥ 0 and ∑kj = 1);

c(a,b) is the concordance index;

c is the concordance threshold;

vj is the veto threshold of gj.

(57)

## ELECTRE I: Constructing S

•  The concordance index is equal to the sum of the weights of the criteria that agree with a S b (the criteria in which a is as good as b, or better).

•  The concordance condition holds if this sum (the total concordant weight) attains the required majority threshold c.

•  The discordance condition for a S b holds if there is no discordant criterion in which b is better than a by a difference greater than the criterion’s veto threshold.
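The two conditions can be sketched as a crisp ELECTRE I test (Python for illustration; the veto thresholds are made-up, while the weights and performances follow the security-manager example, all criteria to be maximised):

```python
def outranks(a, b, weights, vetoes, c_threshold):
    """ELECTRE I crisp test of a S b.
    Concordance: the total weight of criteria where a is at least as good as b
    must reach c_threshold; discordance: no criterion may favour b over a
    by its veto threshold or more."""
    concordance = sum(k for k, x, y in zip(weights, a, b) if x >= y)
    no_veto = all(y - x < v for x, y, v in zip(a, b, vetoes))
    return concordance >= c_threshold and no_veto

weights = [0.4, 0.2, 0.2, 0.2]   # as in the security-manager example
vetoes  = [0.6, 0.6, 0.6, 0.6]   # hypothetical veto thresholds
a1 = [0.2, 0.6, 0.5, 0.6]
a2 = [0.5, 0.4, 0.1, 0.1]

print(outranks(a1, a2, weights, vetoes, 0.6))  # True: g2,g3,g4 concord (0.6)
print(outranks(a2, a1, weights, vetoes, 0.6))  # False: only g1 concords (0.4)
```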

(58)

## Finding kernel

•  Purpose: to select a minimal set of candidates to become the most preferred alternative.

Definition of the kernel K of a graph:

•  External stability: ∀ b ∉ K, ∃ a ∈ K: a S b

(justification for excluding alternatives outside of the kernel).

•  Internal stability: ∀ a, b ∈ K: ¬(a S b)

(absence of justification to exclude any alternative from the kernel).

•  Algorithm for finding the kernel:

Start from the alternatives not outranked by any other alternative.
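A minimal sketch of this kernel-finding idea for an acyclic crisp outranking relation (the relation used below is illustrative):

```python
def kernel(alternatives, S):
    """Kernel of a crisp outranking relation.
    S maps each alternative to the set of alternatives it outranks.
    Assumes the outranking graph is acyclic."""
    remaining = set(alternatives)
    K = set()
    while remaining:
        # Admit everything not outranked by any other remaining alternative.
        admitted = {a for a in remaining
                    if not any(a in S.get(b, set()) for b in remaining if b != a)}
        if not admitted:
            raise ValueError("outranking graph contains a cycle; no unique kernel")
        K |= admitted
        # Discard the admitted alternatives and everything they outrank.
        remaining -= admitted
        remaining -= {b for a in admitted for b in S.get(a, set())}
    return K

# Illustrative relation: Speedy outranks Fancy, Fancy outranks Tiny
S = {"Speedy": {"Fancy"}, "Fancy": {"Tiny"}}
print(sorted(kernel(["Speedy", "Fancy", "Tiny"], S)))  # ['Speedy', 'Tiny']
```

Fancy is excluded because the kernel member Speedy outranks it (external stability), while Speedy and Tiny do not outrank each other (internal stability).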

(59)

## No kernel case

•  The existence of a unique kernel is guaranteed if the graph is acyclic.

•  A solution: to consider alternatives in a cycle as being indifferent, treating them as a single class inheriting incoming and outgoing arcs.

[Figure: a cycle of preferences among “Speedy”, Tiny, and Fancy, for which no unique kernel exists.]

(60)

## Weights and veto

•  The weight of a criterion in ELECTRE methods shows its voting power in favour of outranking.

•  Weights do not depend on the ranges or encoding of scales.

•  They cannot be interpreted as substitution rates, unlike the scale coefficients in MAVT.

•  Veto shows the level of difference in criterion values that is big enough to make the outranking assertion “a outranks b” invalid.

(61)

## ELECTRE I: Credibility degree

•  Let Δj denote the advantage of an alternative a over another alternative b according to a criterion gj(·):

Δj = gj(a) − gj(b) if criterion gj is to be maximised; Δj = gj(b) − gj(a) if criterion gj is to be minimised.

•  For ELECTRE I, if Δj≥0, then criterion gj is fully concordant with aSb.

•  On the other hand, if Δj <0, even if the difference is almost zero, then gj(·) is not concordant with aSb.

•  For ELECTRE I, if -Δj≥vj then criterion gj opposes a veto to aSb, even if the threshold is surpassed by a negligible amount.

•  This is changed in ELECTRE III.

(62)

## ELECTRE I vs. MAVT

Pros:

•  No strong axioms and conditions to verify.

•  Works with any type of scales, including qualitative scales.

•  Importance coefficients kj truly reflect the “criteria” weights (“voting power” analogy) independently of the scales.

•  Alerts for incomparabilities (alternatives that are too different).

Cons:

•  Specific to the problem of selecting the best (one) alternative (it does not allow ranking the alternatives).

•  Exploitation difficulties (multiple alternatives in the kernel).

•  Lack of independence with regard to third alternatives.

•  Sudden transition from S to not S as data changes.

(63)

## Valued outranking relations

•  ELECTRE I works with crisp outranking relations: given a pair (a,b), the statement aSb is established to be true or false.

•  Crisp S means a Yes/No relation (either outranks or not).

•  In later versions, e.g. ELECTRE III, outranking can be partially true, computing a credibility degree for it.

•  Valued S means that a credibility degree for the outranking is computed in the interval [0,1].

(64)

## ELECTRE III: Concordance

•  The concordance index for each criterion gj defines how much the criterion agrees with a S b:

cj(a,b) = 1 if Δj ≥ −qj; cj(a,b) = 0 if Δj ≤ −pj; linear interpolation in between,

where

qj = indifference threshold (the biggest difference that keeps two values on criterion gj indifferent);

pj = preference threshold (the smallest difference between values on criterion gj that is enough to consider one value strictly preferred to the other).

(65)

## ELECTRE III: Discordance

•  The discordance index for each criterion gj defines how much the criterion opposes a veto to a S b:

dj(a,b) = 0 if −Δj ≤ uj; dj(a,b) = 1 if −Δj ≥ vj; linear interpolation in between,

where

uj = non-discordance threshold (the disadvantage at which a partial veto begins). Originally uj = pj.

vj = veto threshold (the disadvantage originating a total veto).

(66)

## ELECTRE III: Aggregation

•  The global concordance index, given weights kj (and still assuming ∑kj = 1), is a weighted sum: C(a,b) = ∑ kj·cj(a,b)

•  The global discordance index is the maximum discordance: D(a,b) = max j dj(a,b)

(67)

## ELECTRE III: Credibility degree

•  The credibility degree for a S b aggregates concordance and discordance.

•  The classic credibility index for a S b:

σ(a,b) = C(a,b) · ∏{j: dj(a,b) > C(a,b)} (1 − dj(a,b)) / (1 − C(a,b))

•  Alternative aggregation formulas have also been proposed.
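The ELECTRE III indices and the classic credibility aggregation can be sketched as follows (thresholds q, p, u, v are shared across criteria for simplicity; all numbers are made-up):

```python
def partial_concordance(delta, q, p):
    """c_j(a,b): 1 within the indifference threshold q, 0 beyond the
    preference threshold p, linear interpolation in between."""
    if delta >= -q:
        return 1.0
    if delta <= -p:
        return 0.0
    return (p + delta) / (p - q)

def partial_discordance(delta, u, v):
    """d_j(a,b): 0 while the disadvantage stays below u, 1 at the veto v,
    linear interpolation in between."""
    if -delta <= u:
        return 0.0
    if -delta >= v:
        return 1.0
    return (-delta - u) / (v - u)

def credibility(deltas, weights, q, p, u, v):
    """Classic ELECTRE III credibility: the global concordance C(a,b)
    weakened by every criterion whose discordance exceeds it."""
    C = sum(k * partial_concordance(d, q, p) for k, d in zip(weights, deltas))
    sigma = C
    for d in deltas:
        dj = partial_discordance(d, u, v)
        if dj > C:
            sigma *= (1 - dj) / (1 - C)
    return sigma

# Two criteria: a slightly better on the first, clearly worse on the second
print(credibility([0.3, -0.5], [0.6, 0.4], q=0.1, p=0.3, u=0.3, v=0.6))  # ≈ 0.5
```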

(68)

## Pros vs. additive MAVT model

•  No strong axioms and conditions to verify.

•  Works with any type of scales, including qualitative.

•  Importance coefficients kj truly reflect the “criteria” weights (“voting power” analogy) independently of the scales.

•  ELECTRE I alerts for incomparabilities (alternatives that are too different).

•  ELECTRE III allows putting a penalty on very weak performances on some criterion.

(69)

## Cons vs. additive MAVT model

•  ELECTRE I:

•  Specific to the problem of selecting the best (one) alternative (it does not allow ranking the alternatives).

•  Exploitation difficulties (multiple alternatives in the kernel).

•  Lack of independence with regard to third alternatives (for instance, if a3 did not outrank a4, then alternatives {a1, a2} would be in the kernel).

•  ELECTRE III:

•  Large number of parameters.

•  Relatively complex computations.

(70)

## How to select an MCDA method?

•  Check scales: ordinal or interval scales that are not suitable for comparison of differences in MAVT can still be treated in ELECTRE.

•  Check the homogeneity/heterogeneity of evaluations on criteria scales (heterogeneous evaluations may be difficult to aggregate into a unique criterion).

•  Check whether compensation is allowed or not.

•  An accumulation of several small differences may become significant, which may require the introduction of thresholds.

(71)

## Literature

Ralph L. Keeney, Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press, Cambridge, MA, 1992.

Martin G. Rodgers, Michael Bruen, Lucien-Yves Maystre, ELECTRE and Decision Support: Methods and Applications in Engineering and Infrastructure Investment. Springer, 2010.

Valerie Belton and Theodore Stewart, Multiple Criteria Decision Analysis: An Integrated Approach. Kluwer Academic Publishers, Boston, 2002.
