**Multicriteria Decision Aiding: Compensating and Non-Compensating Methods**

Iryna Yevseyeva

Research Associate

School of Computing Science

Centre for Cybercrime and Computer Security (CCCS) http://cccs.ncl.ac.uk

Newcastle University, UK

Evolutionary Computation, July 18-24 (2015), Iasi, Romania

## Learning objectives

• Basics of Multi Criteria Decision Aiding (MCDA)

• What is the subject area of Multi Criteria Decision Aiding?

• How are MultiObjective Optimisation and MultiCriteria Decision Aiding/Analysis related, and how do they differ?

• Decision Maker’s Preferences

• How do different MCDA approaches model the Decision Maker’s preferences?

• Ways of expressing preferences of the Decision Maker.

• MCDA Compensatory and Non-compensatory Approaches

• MAVT vs. ELECTRE.

• When is it appropriate to use each of the learned algorithms?

## Security decision making

### • Examples:

### • Which publicly available Wi-Fi should I use?

### • How do I choose a new password?

### • Should I put this person’s USB stick in my laptop?

### • These decisions are about a security vs. productivity trade-off

### • Examples from data mining:

### • Accuracy vs. cost trade-off;

### • Fitting to the existing model vs. adapting to new knowledge trade-off.

*Parallel Coordinate Diagram: alternatives L1–L4 plotted on criteria CPU → min, Price → min, −Usability → min, −Popularity → min. All criteria are to be minimised; L2 dominates L1.*

### MultiCriteria Decision Analysis/Aiding

• MultiObjective Optimisation (MOO)

• addresses continuous and discrete combinatorial problems;

• solutions/alternatives are defined by a set of constraints;

• objective functions are then optimised in this region.

• **MultiCriteria Decision Analysis/Aiding (MCDA)**

• mainly discrete problems with small sets of alternatives;

• for combinatorial problems, finds rules for decision making;

• constraints are taken into account implicitly, in the set of criteria and/or alternatives.

• Both MOO and MCDA

• use evaluation of solutions/alternatives based on multiple criteria/objectives

• and search for trade-off solution(s) (e.g. using Pareto dominance).

### MultiCriteria Decision Aiding/Analysis

• Terminology:

• MultiObjective Optimisation -> MultiCriteria Decision Aiding/Analysis

• MOO -> MCDA

• Solutions (to be found) -> Alternatives (often given)

• Objectives (given) -> Criteria (given)

• Usually impersonal -> Decision Maker(s) (DMs)

• Objective -> Subjective

### MCDA used after MOO

*Figure: the map performed by f takes points x1, x2, x3 from the search space (decision space) to their image in the objective space (solution space); a part of the Pareto front is the set of interesting solutions.*

• MCDA can be used in combination with, before, or after MOO, e.g. concentrating the search on a smaller region of the search space.

## MCDA Difficulties

### • Modelling: a well-defined problem is a half-solved problem.

### • Preferences of the Decision Maker(s) (DMs) are crucial.

### • Preferences may be expressed in different forms.

### • Different DMs may attach different meanings to the same form of preference.

### • The same DM interviewed at different moments in time may give different preference values.

### • Multiple criteria evaluations should be aggregated.

### • Aggregation implies losing some information.

## Types of MCDA problems

• To structure the problem: to identify alternatives, criteria (especially, if they are hierarchically structured);

• To select one alternative among many;

• To rank all alternatives;

• To sort them into several (ordered) groups;

• To select a portfolio of alternatives.

### MCDA Examples

• To select an investment plan;

• To rank students-participants of a “best assignment” competition;

• To sort research projects to be funded or not;

• To choose a team of employees;

• To choose a new airport location;

• To decide on giving credit to a client or not;

• To rank universities by their scientific and academic performance;

• To find best combination of assets for a financial portfolio.

## MCDA problems type: Choice

## MCDA problems type: Ranking

## MCDA problems type: Sorting

**Cars I like and ready to buy**

**Cars I like but not ready to buy**

**Cars I don’t like and not ready to buy**

## MCDA problems type: Combination

**Cars a company prefers to buy**

## Value-Focused Thinking

• “When you are faced with a difficult decision situation, start by thinking about your values. Do not start by thinking about alternatives, as that will limit you.”

• Why is each criterion important? Answering this helps to set the priority and importance of criteria. E.g., thinking of why good gasoline mileage is important for a car may lead you to more fundamental criteria, like operational costs, pollution, etc.

• Ralph L. Keeney, Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press, Cambridge, MA, 1992.

## What is the best car?

Criteria/objectives: Cost → min, Speed → max, Length → min

*Figure: the four cars “Tiny”, “Fancy”, “Speedy” and “Luxury” plotted against these criteria.*

## What is the best anti-virus?

Criteria/objectives: Cost → min, CPU → max, Usability → max

*Figure: the four anti-virus products “Tiny”, “Fancy”, “Speedy” and “Luxury” plotted against these criteria.*

## Problem representation

• Performance matrix

• v_{j}(a_{i}) is the performance value of alternative a_{i}, i ∈ {1,…,m}, on criterion g_{j}, j ∈ {1,…,n}, and k_{j} is the importance coefficient of criterion g_{j}.

|         | g_{1}(·)     | g_{2}(·)     | … | g_{n}(·)     |
|---------|--------------|--------------|---|--------------|
| k       | k_{1}        | k_{2}        | … | k_{n}        |
| a_{1}   | v_{1}(a_{1}) | v_{2}(a_{1}) | … | v_{n}(a_{1}) |
| a_{2}   | v_{1}(a_{2}) | v_{2}(a_{2}) | … | v_{n}(a_{2}) |
| …       | …            | …            | … | …            |
| a_{m}   | v_{1}(a_{m}) | v_{2}(a_{m}) | … | v_{n}(a_{m}) |

## Example: What is the best car?

|            | Cost (*min*) | Speed (*max*) | Length (*min*) |
|------------|--------------|---------------|----------------|
| Importance | 3            | 2             | 1              |
| “Tiny”     | 10,000       | 100           | 2              |
| “Fancy”    | 60,000       | 150           | 4              |
| “Speedy”   | 100,000      | 200           | 4              |
| “Luxury”   | 300,000      | 100           | 7              |

## Criteria scales

• Ratio/Quantitative/Cardinal: There is information on difference between scale values with respect to a non- arbitrary origin, e.g. meters, kg, etc.

• Ordinal/Qualitative/Verbal: There is order on the scale but no information on the difference between scale values, e.g. excellent, good, satisfactory, bad.

• Interval: There is information on difference between scale values with respect to an arbitrary origin, e.g., Fahrenheit and Celsius temperature scales.

## MCDA Approaches

• Basic techniques: Dominance test, MaxiMin, Maximax, Lexicographic and Lexicographic semi-order methods, Simple Additive Weighting (SAW).

• American school: Multi-Attribute Utility Theory (MAUT) / Multi-Attribute Value Theory (MAVT) and Analytic Hierarchy Process (AHP).

• European school: Outranking-based approaches, such as ELECTRE and PROMETHEE.

• Uncertainty: fuzzy sets, rough sets, Stochastic Multicriteria Acceptability Analysis (SMAA), Verbal Decision Analysis.

## Simple Additive Weighting (SAW)

• Simple and intuitive, but has drawbacks.

• The alternatives are selected/ranked/sorted based on their weighted-sum score:

S(a_{i}) = ∑_{j} k_{j}·g_{j}(a_{i}),

• where k_{j} is criterion g_{j}’s importance / weight, and g_{j}(a_{i}) is the value of alternative a_{i} on criterion g_{j}.

• All criteria must be aligned: either all to be minimised or all to be maximised.

• All scales are to be homogenised (normalised) for a commensurable comparison (roughly speaking, to be able to aggregate kilograms and metres).

## SAW: Normalisation

• Ratio normalisation, with all criteria scales in the [0,1] interval:

• For a criterion to be maximised: x_{norm} = x/x_{best}

• For a criterion to be minimised: x_{norm} = x_{best}/x

• Normalisation of ratio difference, with all criteria scales in the [0,1] interval:

• For all criteria scales: x_{norm} = (x − x_{worst})/(x_{best} − x_{worst})

• Euclidean normalisation (non-linear transformation):

• x_{norm} = x/√(∑x²)

• Drawbacks:

• Adding/removing alternatives may change the ranking.

• Difficult for qualitative scales.

• SAW depends heavily on the normalisation used!
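A minimal sketch of this normalisation dependence, using the car data from the example later in the slides (weights 3/2/1): scoring the same four cars with ratio normalisation and with min–max normalisation produces different rankings.

```python
# SAW sketch: same data, two normalisations, different rankings.

def ratio_norm(values, maximise):
    # x / x_best for criteria to maximise, x_best / x for criteria to minimise
    best = max(values) if maximise else min(values)
    return [v / best if maximise else best / v for v in values]

def minmax_norm(values, maximise):
    # (x - x_worst) / (x_best - x_worst), flipped for criteria to minimise
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if maximise else [1 - s for s in scaled]

alternatives = ["Tiny", "Fancy", "Speedy", "Luxury"]
criteria = {  # name: (values per alternative, maximise?, weight)
    "Cost":   ([10_000, 60_000, 100_000, 300_000], False, 3),
    "Speed":  ([100, 150, 200, 100],               True,  2),
    "Length": ([2, 4, 4, 7],                       False, 1),
}

def saw_scores(norm):
    scores = [0.0] * len(alternatives)
    for values, maximise, weight in criteria.values():
        for i, x in enumerate(norm(values, maximise)):
            scores[i] += weight * x
    return dict(zip(alternatives, scores))

for norm in (ratio_norm, minmax_norm):
    scores = saw_scores(norm)
    print(norm.__name__, sorted(scores, key=scores.get, reverse=True))
```

With ratio normalisation “Tiny” comes first; with min–max normalisation “Speedy” does — although the weighted sum itself never changed.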

## Multi-Attribute Value Theory (MAVT)

• Developed in the late 60s by Ralph Keeney and Howard Raiffa. Like Multi-Attribute Utility Theory (MAUT)*, it assumes a rational Decision Maker; MAVT adapts MAUT by modelling a rigorous preference structure.

• All criteria evaluations are aggregated into an overall score for each alternative: V(a_{i}) = ∑_{j} k_{j}·v_{j}(a_{i})

• For each criterion g_{j} a value function v_{j}(g_{j}(·)) should be constructed that reflects the subjective value of the criterion for the decision maker.

• MAVT preference relation is complete and transitive.

* Ralph Keeney and Howard Raiffa (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York.

## Multi-Attribute Value Theory (MAVT)

• For each g_{j} a value function v_{j}(g_{j}(·)) should be constructed that reflects the subjective criterion value for the decision maker.

• Compare to SAW, where normalisation is impersonal.

• v_{j}(g_{j}(a_{i})), or v_{j}(a_{i}) for short, denotes the value of the performance of alternative a_{i} on criterion g_{j}.

• The most common aggregation for the global score is the additive form (but more complex forms of the value function exist):

V(a) = f(v_{1}(a), v_{2}(a), …, v_{n}(a))

## MAVT: Preference relation

• MAVT constructs a complete order on the set of alternatives.

• Only preference and indifference relations are possible between alternatives:

• v_{j}(a) > v_{j}(b) if and only if alternative a is considered to be better than alternative b on criterion g_{j} (g_{j}(a) is preferred to g_{j}(b));

• v_{j}(a) = v_{j}(b) if and only if alternative a is considered as good as alternative b on criterion g_{j} (g_{j}(a) is indifferent to g_{j}(b)).

• Transitivity of preference and indifference is assumed.

## MAVT: Incompleteness

*Diagram: pairwise comparisons among “Speedy”, “Tiny” and “Fancy”; two pairs are ordered (>) but one comparison is missing (?) — the relation is incomplete.*

## MAVT: Completeness

*Diagram: all three pairs among “Speedy”, “Tiny” and “Fancy” are ordered (>) — under MAVT the preference relation is complete.*

## MAVT: Intransitivity of preference

*Diagram: cyclic pairwise preferences among “Speedy”, “Tiny” and “Fancy” — an intransitive preference relation.*

## MAVT: Transitivity of preference

• If a is preferred to b and b is preferred to c, then a is preferred to c.

Condorcet’s paradox

• Suppose that three judges are evaluating the cars “Speedy”, “Fancy” and “Luxury”.

• Alex deems “Speedy” better than “Fancy” and “Fancy” better than “Luxury”.

• Mary deems “Fancy” better than “Luxury” and “Luxury” better than “Speedy”.

• John deems “Luxury” better than “Speedy” and “Speedy” better than “Fancy”.

• Voting yields a 2:1 majority for “Speedy” being better than “Fancy”, and a 2:1 majority for “Fancy” being better than “Luxury”.

• By transitivity “Speedy” should be better than “Luxury”, but a 2:1 majority dictates “Luxury” is better than “Speedy”.

## MAVT: Transitivity of indifference

*Diagram: a = b and b = c; consequently a = c (transitivity of indifference).*
## MAVT: Transitivity of indifference

• If a is indifferent to b and b is indifferent to c, then a is indifferent to c.

• Alex is indifferent between drinking coffee with 3g of sugar (alternative a3) and drinking coffee with 4g of sugar (alternative a4).

• Alex is indifferent between drinking coffee with 4g of sugar (alternative a4) and drinking coffee with 5g of sugar (alternative a5).

• Similarly, he is indifferent between a5 and a6, between a6 and a7, and so on.

• By transitivity Alex should be indifferent between, say, a3 and a7, but he probably would not be.

This type of reasoning cannot be reproduced using value functions!

### Example: Selecting a new security manager

|       | Security-management skills | Intelligence | Communication skills | Work experience |
|-------|----------------------------|--------------|----------------------|-----------------|
| k_{j} | Most important             | 2nd most important | 2nd most important | 2nd most important |
| a_{1} | Poor                       | High IQ      | Very good            | 8 years         |
| a_{2} | Very good                  | Very high IQ | Poor                 | 1 year          |
| a_{3} | Excellent                  | Low IQ       | Satisfactory         | 1 year          |
| a_{4} | Good                       | High IQ      | Good                 | 4 years         |

### Example: Selecting a new security manager

|       | Security-management skills | Intelligence | Communication skills | Work experience | V(a) | Conclusion |
|-------|----------------------------|--------------|----------------------|-----------------|------|------------|
| k_{j} | 0.4                        | 0.2          | 0.2                  | 0.2             |      | Security management twice as important |
| a_{1} | 0.2                        | 0.4          | 0.5                  | 0.8             | 0.42 | Experienced |
| a_{2} | 0.5                        | 0.6          | 0.1                  | 0.1             | 0.36 | Strong on security management & intelligence |
| a_{3} | 0.8                        | 0.1          | 0.2                  | 0.1             | 0.40 | Best on security management (most important criterion) |
| a_{4} | 0.4                        | 0.4          | 0.4                  | 0.4             | 0.40 | Balanced |

Compensation effect!
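The additive aggregation behind the table can be reproduced in a few lines (weights and scores as in the table): a_{1}’s poor security-management score is compensated by experience.

```python
# Additive aggregation from the table: V(a) = sum_j k_j * v_j(a).
weights = [0.4, 0.2, 0.2, 0.2]  # security-mgmt, intelligence, communication, experience
candidates = {
    "a1": [0.2, 0.4, 0.5, 0.8],
    "a2": [0.5, 0.6, 0.1, 0.1],
    "a3": [0.8, 0.1, 0.2, 0.1],
    "a4": [0.4, 0.4, 0.4, 0.4],
}

totals = {
    name: round(sum(k * v for k, v in zip(weights, scores)), 2)
    for name, scores in candidates.items()
}
print(totals)  # a1 wins despite the worst score on the most important criterion
```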

## MAVT: Constructing “values”

• A value function reflects the subjective preferences of the client.

• SAW uses normalisation, which is an impersonal operation.

• Assuming transitivity of preference and indifference the client’s preferences can be defined by value functions.

• The idea is to build a function that the client perceives as adequate to represent his or her judgement about strengths of preference. If the client feels comfortable with the representation, then it is acceptable.

• There are different techniques for building criteria value functions, e.g. direct rating, curve fitting, bisection, and standard differences; importance coefficients are elicited separately.

## MAVT: Value function with Direct rating

• Ask the client for a numerical value for each performance level on a given criterion’s scale.

• Or ask the client to adjust a graphical representation of performance levels on a line segment representing value.

• In doing so the client should recall that

• v_{j}(a) > v_{j}(b) if and only if alternative a is considered better than alternative b on criterion g_{j} (g_{j}(a) is preferred to g_{j}(b));

• v_{j}(a) − v_{j}(c) > v_{j}(b) − v_{j}(d) if and only if the strength of preference for a over c is higher than the strength of preference for b over d on criterion g_{j}.

## Example: What is the best car?

|            | Cost (*min*) | Speed (*max*) | Length (*min*) |
|------------|--------------|---------------|----------------|
| Importance | 3            | 2             | 1              |
| “Tiny”     | 10,000       | 100           | 2              |
| “Fancy”    | 60,000       | 150           | 4              |
| “Speedy”   | 100,000      | 200           | 4              |
| “Luxury”   | 300,000      | 100           | 7              |

## MAVT: Value function with Curve fitting

• Ask the client to adjust the parameters defining a given curve

• E.g. a negative exponential curve ranging from extremely concave to extremely convex cases

• or a sigmoidal curve

## MAVT: Value function with Bisection

• Ask the client to indicate the performance levels that correspond to the best value (set to 1) and the worst value (set to 0).

• Ask the client to indicate a performance level that splits the interval in two in terms of value (such that changing from the 0 value performance to the chosen midpoint increases value as much as does changing from this midpoint to the 1 value performance).

• Then the chosen midpoint corresponds to value 0.5.

• Use the same process to bisect the intervals [0, 0.5] and [0.5, 1]. And so on.
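A minimal sketch of turning such bisection answers into a usable value function; the elicited points below are hypothetical answers for a speed criterion, not from the lecture.

```python
# Piecewise-linear value function from bisection answers (hypothetical data).
from bisect import bisect_right

# Elicited (performance, value) points for a speed criterion: worst 100 -> 0,
# best 200 -> 1, midpoint answers v(130)=0.5, then v(112)=0.25 and v(160)=0.75.
points = [(100, 0.0), (112, 0.25), (130, 0.5), (160, 0.75), (200, 1.0)]
xs = [p for p, _ in points]

def value(x):
    # clamp outside the elicited range, interpolate linearly inside it
    if x <= xs[0]:
        return points[0][1]
    if x >= xs[-1]:
        return points[-1][1]
    i = bisect_right(xs, x)
    (x0, v0), (x1, v1) = points[i - 1], points[i]
    return v0 + (v1 - v0) * (x - x0) / (x1 - x0)

print(value(145))  # halfway between the elicited 0.5 and 0.75 points
```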

## MAVT: Value function with differences

• Define an improvement on a different criterion to serve as a comparison standard (like a ruler for measuring distances).

• Take an initial value x_{j1} on criterion g_{j}.

• Ask the client to indicate a second value x_{j2} such that the increase in value of going from x_{j1} to x_{j2} is equal to the comparison standard defined before.

• Ask the client to indicate a third value x_{j3} such that the increase in value of going from x_{j2} to x_{j3} is equal to the comparison standard defined before, and so on.

• Then v_{j}(x_{j2}) − v_{j}(x_{j1}) = v_{j}(x_{j3}) − v_{j}(x_{j2}) = ...

## MAVT: additive model

• For aggregating criteria values an additive function can be used:

V(a_{i}) = f(v_{1}(a_{i}), v_{2}(a_{i}), …, v_{n}(a_{i}))

• E.g.

V(a_{i}) = k_{1}v_{1}(a_{i}) + k_{2}v_{2}(a_{i}) + ··· + k_{n}v_{n}(a_{i}),

where k_{j} is the scale coefficient for v_{j}(·), such that k_{j} > 0 and ∑k_{j} = 1.

• Note that a scale coefficient alone does not represent criterion importance!

• Different weight elicitation techniques exist, such as ratio with swings and the Saaty scale with swings.

## MAVT: Construct scale coefficients

• **Indifferences involving trade-offs**

• If r_{1} units (value to be asked) on v_{1}(·) are worth the same as r_{2} units on v_{2}(·), then we must have k_{2}/k_{1} = r_{1}/r_{2}.

• One can ask similar questions to obtain k_{2}/k_{1} = r_{21}, k_{3}/k_{1} = r_{31}, …, k_{n}/k_{1} = r_{n1},

• and then use the equality ∑k_{j} = 1 to determine the solution of the resulting system of equations.
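A small sketch of solving the resulting system (the ratios below are made up): since every k_{j} is expressed as a ratio to k_{1}, normalising the ratio vector so it sums to 1 yields the coefficients.

```python
# From elicited ratios k_j / k_1 to coefficients summing to 1 (made-up ratios).
ratios_to_k1 = {"k1": 1.0, "k2": 2.0, "k3": 0.5}  # i.e. k2/k1 = 2, k3/k1 = 0.5

total = sum(ratios_to_k1.values())                 # enforces sum(k_j) = 1
weights = {name: r / total for name, r in ratios_to_k1.items()}
print(weights)
```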

### Example: Selecting a new security manager

|       | Time-management skills | Intelligence | V(a) |
|-------|------------------------|--------------|------|
| k_{j} | ?                      | ?            |      |
| a_{1} | 0.45                   | 0.90         | ?    |
| a_{2} | 0.90                   | 0.45         | ?    |
| a_{3} | 0.65                   | 0.65         | 0.65 |

### Example: Selecting a new security manager

|       | Time-management skills | Intelligence | V(a)  |
|-------|------------------------|--------------|-------|
| k_{j} | 0.3                    | 0.7          |       |
| a_{1} | 0.45                   | 0.90         | 0.765 |
| a_{2} | 0.90                   | 0.45         | 0.585 |
| a_{3} | 0.65                   | 0.65         | 0.65  |

Convexly dominated alternatives cannot win!

## MAVT Conclusions

• Compared with SAW, MAVT constructs functions eliciting preferences rather than normalising performances.

• There is an underlying compensation effect (in the sense that bad performances on some criteria may be compensated by good performances on other criteria).

• The mathematical aggregation is still a weighted sum, but the numbers have a meaning very different from SAW.

• The result is therefore “tailored” to the preferences of a specific client; it is subjective.

• The same DM interviewed at different moments in time may give different preference values.

## MAVT Conclusions

Pros:

• The simplicity of additive aggregation.

• It matches the intuitive way people aggregate.

• A rigorous way of obtaining commensurable scales.

Specific feature:

• Shows the role of “weights” as trade-offs, not intuitive importance.

Cons:

• It can be difficult to explain the method and elicit answers (but this encourages thinking).

• Requires strong independence conditions (but it may be possible to restructure the set of criteria).

• Very poor performances can be compensated (but such alternatives can be eliminated beforehand).

## Outranking approach

• The development of the ELECTRE family of methods was started in the late 60s by Bernard Roy and colleagues, with ELECTRE I addressing the task of selecting the best alternative.

• The approach builds an outranking relation that is neither necessarily complete (i.e. incomparability is possible) nor necessarily transitive.

• Note that incomparability ≠ indifference.

• All alternatives are compared to each other pairwise, as in a tournament.

## Basic ideas of ELECTRE

• The outranking relation is constructed based on two concepts:

• Concordance: if g_{j}(a) is not worse than g_{j}(b), then criterion g_{j} is concordant with a S b.

• Discordance: if g_{j}(a) is worse than g_{j}(b), then criterion g_{j} is discordant with a S b.

• “a outranks b” (a S b) if there are enough arguments to decide that “a is at least as good as b” (concordance holds), and there is no essential argument to oppose this statement (no discordance holds) — in other words, “a is not worse than b”.

• Note that a criterion can be concordant with both a S b and b S a, namely when g_{j}(a) = g_{j}(b).

• The outranking relation is checked for all pairs of alternatives in both directions.

## Basic ideas of ELECTRE

• One of four preference situations can be established between two alternatives after evaluating the outranking relation:

• a P b — preference, if a S b and ¬(b S a)

• b P a — preference, if b S a and ¬(a S b)

• a R b — incomparability, if ¬(a S b) and ¬(b S a)

• a I b — indifference, if a S b and b S a

• No need for computing global value for each alternative, but pairwise comparison.

• Incomparability between alternatives is accepted.
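Once both directions of S have been evaluated for a pair, the four situations follow directly; a minimal sketch:

```python
# The four preference situations, given both directions of S for a pair (a, b).
def situation(a_S_b: bool, b_S_a: bool) -> str:
    if a_S_b and not b_S_a:
        return "a P b"   # preference for a
    if b_S_a and not a_S_b:
        return "b P a"   # preference for b
    if a_S_b and b_S_a:
        return "a I b"   # indifference
    return "a R b"       # incomparability

print(situation(True, True))    # a I b
print(situation(False, False))  # a R b
```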

## Conditions for ELECTRE application

• Ordinal or interval scales that are not suitable for comparison of differences.

• Strong heterogeneity of evaluations on criteria scales that is difficult to aggregate into a unique criterion.

• A loss on one criterion cannot be compensated by a gain on another criterion, which requires the use of a non-compensatory aggregation procedure.

• For at least one criterion, small differences are not significant in terms of preferences, but an accumulation of several small differences may become significant; this requires the introduction of thresholds, which makes indifference intransitive.

## Intransitivity of indifference

*Diagram: a = b and b = c, but a ≠ c — with thresholds, indifference is not transitive.*

## Preferences example

|       | Security-management skills g_{1} | Intelligence g_{2} | Communication skills g_{3} | Work experience g_{4} |
|-------|----------------------------------|--------------------|----------------------------|-----------------------|
| a_{1} | 0.2                              | 0.6                | 0.5                        | 0.6                   |
| a_{2} | 0.5                              | 0.4                | 0.1                        | 0.1                   |
| >     | a_{2}                            | a_{1}              | a_{1}                      | a_{1}                 |

a_{1} S a_{2}: the weight of (g_{2}, g_{3}, g_{4}) is “big enough” and the opposition from criterion g_{1} is “small”.

¬(a_{2} S a_{1}): the weight of g_{1} is not “big enough”.

a_{1} S a_{2} and ¬(a_{2} S a_{1}) → a_{1} P a_{2}

## Indifference example

|       | Security-management skills g_{1} | Intelligence g_{2} | Communication skills g_{3} | Work experience g_{4} |
|-------|----------------------------------|--------------------|----------------------------|-----------------------|
| a_{1} | 0.2                              | 0.4                | 0.5                        | 0.6                   |
| a_{2} | 0.5                              | 0.6                | 0.1                        | 0.1                   |
| >     | a_{2}                            | a_{2}              | a_{1}                      | a_{1}                 |

a_{1} S a_{2}: the weight of (g_{3}, g_{4}) is “big enough” and the opposition from criteria (g_{1}, g_{2}) is “small”.

a_{2} S a_{1}: the weight of (g_{1}, g_{2}) is “big enough” and the opposition from criteria (g_{3}, g_{4}) is “small”.

a_{1} S a_{2} and a_{2} S a_{1} → a_{1} I a_{2}

## Incomparability example

|       | Security-management skills | Intelligence | Communication skills | Work experience |
|-------|----------------------------|--------------|----------------------|-----------------|
| a_{1} | 0.2                        | 0.4          | 0.5                  | 0.8             |
| a_{2} | 0.5                        | 0.6          | 0.1                  | 0.1             |
| >     | a_{2}                      | a_{2}        | a_{1}                | a_{1}           |

a_{1} S a_{2}: the weight of (g_{3}, g_{4}) is “big enough” and the opposition from criteria (g_{1}, g_{2}) is “small”.

a_{2} S a_{1}: the weight of (g_{1}, g_{2}) is “big enough”, but the opposition from criteria (g_{3}, g_{4}) is “very big” (g_{4} has a veto effect).

## ELECTRE I (1967)

• 1. Construct a crisp outranking relation (it tells whether one alternative outranks another or not). For each pair of alternatives a and b, check whether a S b and whether b S a.

• 2. Exploit the outranking relation to find a kernel, that is, select a minimal set of candidates to become the most preferred alternative.

## ELECTRE I: Constructing S

• Given (a, b), a S b if the following conditions are both true:

• **Concordance:** c(a, b) = ∑_{j: Δ_{j}(a,b) ≥ 0} k_{j} ≥ c

• **(non)Discordance:** −Δ_{j}(a, b) < v_{j} for every criterion g_{j}

• where Δ_{j}(a,b) is the advantage of a over b on criterion g_{j};

k_{j} is the weight of criterion g_{j} (it is assumed that k_{1}, …, k_{n} ≥ 0 and ∑k_{j} = 1);

c(a,b) is the concordance index;

c is the concordance threshold;

v_{j} is the veto threshold of g_{j}.

## ELECTRE I: Constructing S

• The concordance index is equal to the sum of the weights of the criteria that agree with a S b (the criteria on which a is as good as b, or better).

• The concordance condition holds if this sum (the total concordant weight) attains the required majority threshold c.

• The discordance condition for a S b holds if there is no discordant criterion on which b is better than a by a difference greater than the criterion’s veto threshold.
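A sketch of the two conditions in code, using the weights and the alternatives a_{1}, a_{2} from the earlier preference example; the concordance threshold (0.6) and the veto thresholds (0.4) are illustrative choices, not from the lecture.

```python
# ELECTRE I outranking test: concordant weight >= c, and no veto-sized loss.
def outranks(a, b, weights, maximise, c_threshold, vetoes):
    concordance = sum(
        k for ga, gb, k, mx in zip(a, b, weights, maximise)
        if (ga >= gb if mx else ga <= gb)       # a at least as good as b
    )
    no_veto = all(
        (gb - ga if mx else ga - gb) < v        # b's advantage stays below veto
        for ga, gb, v, mx in zip(a, b, vetoes, maximise)
    )
    return concordance >= c_threshold and no_veto

weights, maximise = [0.4, 0.2, 0.2, 0.2], [True] * 4
a1 = [0.2, 0.6, 0.5, 0.6]   # from the preferences example above
a2 = [0.5, 0.4, 0.1, 0.1]
print(outranks(a1, a2, weights, maximise, 0.6, [0.4] * 4))  # True
print(outranks(a2, a1, weights, maximise, 0.6, [0.4] * 4))  # False -> a1 P a2
```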

## Finding kernel

• Purpose: to select a minimal set of candidates to become the most preferred alternative.

Definition of the kernel of a graph:

• External stability: every alternative outside the kernel is outranked by at least one alternative in the kernel (justification for excluding alternatives outside of the kernel).

• Internal stability: no alternative in the kernel outranks another alternative in the kernel (absence of justification to exclude any alternative from the kernel).

• Algorithm for finding the kernel: find all alternatives not outranked by other alternatives, exclude the alternatives they outrank, and repeat on the remainder.
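A sketch of that peeling procedure for an acyclic outranking graph (the alternatives and relation below are illustrative):

```python
# Kernel of an acyclic outranking graph: collect non-outranked alternatives,
# drop everything they outrank, repeat on what is left.
def kernel(alternatives, S):
    """S: set of (a, b) pairs meaning 'a outranks b'; graph must be acyclic."""
    K, remaining = set(), set(alternatives)
    while remaining:
        safe = {a for a in remaining
                if not any((b, a) in S for b in remaining)}
        if not safe:
            raise ValueError("cycle detected: no kernel found this way")
        K |= safe
        outranked = {b for b in remaining for a in safe if (a, b) in S}
        remaining -= safe | outranked
    return K

S = {("Tiny", "Fancy"), ("Tiny", "Luxury"), ("Speedy", "Luxury")}
print(kernel({"Tiny", "Fancy", "Speedy", "Luxury"}, S))
```

Here “Tiny” and “Speedy” are not outranked by anyone, and together they outrank everything else, so the kernel is {“Tiny”, “Speedy”}.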

## No kernel case

• The existence of a unique kernel is guaranteed if the graph is acyclic.

• A solution: to consider alternatives in a cycle as being indifferent, treating them as a single class inheriting incoming and outgoing arcs.

*Diagram: a cycle of outrankings among “Speedy”, “Tiny” and “Fancy” — the graph is cyclic and has no kernel.*

## Weights and veto

• The weight of a criterion in ELECTRE methods shows its voting power in favour of outranking.

• Weights do not depend on the ranges or encoding of scales.

• They cannot be interpreted as substitution rates, unlike the scale coefficients in MAVT.

• The veto shows the level of difference in criterion values that is big enough to make the outranking assertion “a outranks b” invalid.

## ELECTRE I: Credibility degree

• Let Δ_{j}(a,b) denote the advantage of an alternative a over another alternative b according to a criterion g_{j}(·):

Δ_{j}(a,b) = g_{j}(a) − g_{j}(b), if criterion g_{j} is to be maximised;

Δ_{j}(a,b) = g_{j}(b) − g_{j}(a), if criterion g_{j} is to be minimised.

• For ELECTRE I, if Δ_{j} ≥ 0, then criterion g_{j} is fully concordant with a S b.

• On the other hand, if Δ_{j} < 0, even if the difference is almost zero, then g_{j}(·) is not concordant with a S b.

• For ELECTRE I, if −Δ_{j} ≥ v_{j}, then criterion g_{j} opposes a veto to a S b, even if the threshold is surpassed by a negligible amount.

• This is changed in ELECTRE III.

## ELECTRE I vs. MAVT

Pros:

• No strong axioms and conditions to verify.

• Works with any type of scale, including qualitative scales.

• Importance coefficients k_{j} truly reflect the criteria weights (“voting power” analogy), independently of the scales.

• Alerts for incomparabilities (alternatives that are too different).

Cons:

• Specific to the problem of selecting the best (single) alternative (it does not allow ranking of the alternatives).

• Exploitation difficulties (multiple alternatives in the kernel).

• Lack of independence with regard to third alternatives.

• Sudden transition from S to not-S as data changes.

## Valued outranking relations

• ELECTRE I works with crisp outranking relations: given a pair (a,b), the statement aSb is established to be true or false.

• Crisp S means a Yes/No relation (either outranks or not).

• In later versions, e.g. ELECTRE III, outranking can be partially true, computing a credibility degree for it.

• Valued S means that a credibility degree for the outranking is computed in the interval [0,1].

## ELECTRE III: Concordance

• The concordance index for each criterion g_{j} defines how much the criterion agrees with a S b:

c_{j}(a,b) = 1, if Δ_{j}(a,b) ≥ −q_{j};

c_{j}(a,b) = 0, if Δ_{j}(a,b) ≤ −p_{j};

c_{j}(a,b) = (p_{j} + Δ_{j}(a,b)) / (p_{j} − q_{j}), otherwise (linear in between);

where

q_{j} = indifference threshold (the biggest difference that keeps two values on criterion g_{j} indifferent);

p_{j} = preference threshold (the smallest difference between values on criterion g_{j} that is enough to consider one value strictly preferred to the other).
## ELECTRE III: Discordance

• The discordance index for each criterion g_{j} defines how much the criterion opposes a veto to a S b:

d_{j}(a,b) = 0, if Δ_{j}(a,b) ≥ −u_{j};

d_{j}(a,b) = 1, if Δ_{j}(a,b) ≤ −v_{j};

d_{j}(a,b) = (−Δ_{j}(a,b) − u_{j}) / (v_{j} − u_{j}), otherwise (linear in between);

where

u_{j} = non-discordance threshold (the disadvantage at which a partial veto begins). Originally u_{j} = p_{j}.

v_{j} = veto threshold (the disadvantage originating a total veto).

## ELECTRE III: Aggregation

• The global concordance index, given weights k_{j} (and still assuming ∑k_{j} = 1), is a weighted sum:

C(a,b) = ∑_{j} k_{j}·c_{j}(a,b)

• The global discordance index is the maximum discordance:

D(a,b) = max_{j} d_{j}(a,b)

## ELECTRE III: Credibility degree

• The credibility degree for a S b aggregates concordance and discordance.

• A commonly used credibility index for a S b is

σ(a,b) = C(a,b) · ∏_{j: d_{j}(a,b) > C(a,b)} (1 − d_{j}(a,b)) / (1 − C(a,b)),

i.e. the global concordance weakened by every criterion whose discordance exceeds it; simplified variants of this aggregation also exist.
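A sketch of the ELECTRE III indices for a single pair, using the piecewise-linear forms above with u_{j} = p_{j} and the classic credibility aggregation; all thresholds and data below are illustrative, not from the lecture.

```python
# ELECTRE III sketch for one pair: partial concordance c_j, partial
# discordance d_j (taking u_j = p_j), and the credibility sigma(a, b).
def partial_concordance(delta, q, p):
    # 1 within the indifference band, 0 beyond the preference band, linear between
    if delta >= -q:
        return 1.0
    if delta <= -p:
        return 0.0
    return (p + delta) / (p - q)

def partial_discordance(delta, p, v):
    # 0 within the preference band, 1 beyond the veto, linear between
    if delta >= -p:
        return 0.0
    if delta <= -v:
        return 1.0
    return (-delta - p) / (v - p)

def credibility(deltas, weights, q, p, v):
    C = sum(k * partial_concordance(d, q, p) for k, d in zip(weights, deltas))
    sigma = C
    for d in deltas:
        dj = partial_discordance(d, p, v)
        if dj > C:                  # only discordances exceeding C weaken S
            sigma *= (1 - dj) / (1 - C)
    return sigma

deltas = [0.3, -0.05, 0.2, -0.5]   # advantages of a over b on four criteria
weights = [0.4, 0.2, 0.2, 0.2]
print(credibility(deltas, weights, q=0.1, p=0.2, v=0.6))  # ~0.8
```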

## Pros vs. additive MAVT model

• No strong axioms and conditions to verify.

• Works with any type of scales, including qualitative.

• Importance coefficients k_{j} truly reflect the criteria weights (“voting power” analogy), independently of the scales.

• ELECTRE I alerts for incomparabilities (alternatives that are too different).

• ELECTRE III allows penalising a very weak performance on some criterion.

## Cons vs. additive MAVT model

• ELECTRE I:

• Specific to the problem of selecting the best (single) alternative (it does not allow ranking of the alternatives).

• Exploitation difficulties (multiple alternatives in the kernel).

• Lack of independence with respect to third alternatives (for instance, if a_{3} did not outrank a_{4}, then alternatives [a_{1}, a_{2}] would be in the kernel).

• ELECTRE III:

• Large number of parameters.

• Relatively complex computations.

## How to select an MCDA method?

• Check scales: ordinal or interval scales not suitable for comparison of differences in MAVT can still be treated in ELECTRE.

• Check the homogeneity/heterogeneity of evaluations on criteria scales (heterogeneous evaluations may be difficult to aggregate into a unique criterion).

• Check whether compensation is allowed or not.

• An accumulation of several small differences may become significant, which may require the introduction of thresholds.

## Literature

Ralph L. Keeney, Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press, Cambridge, MA, 1992.

Martin Rogers, Michael Bruen and Lucien-Yves Maystre, ELECTRE and Decision Support: Methods and Applications in Engineering and Infrastructure Investment. Springer, 2010.

Valerie Belton and Theodore Stewart, Multiple Criteria Decision Analysis: An Integrated
*Approach. Kluwer Academic Publishers, Boston, 2002. *