Multicriteria Decision Aiding:
Compensating and Non-Compensating Methods
Iryna Yevseyeva
Research Associate
School of Computing Science
Centre for Cybercrime and Computer Security (CCCS) http://cccs.ncl.ac.uk
Newcastle University, UK
Evolutionary Computation, July 18-24 (2015), Iasi, Romania
Learning objectives
• Basics of Multi Criteria Decision Aiding (MCDA)
• What is the subject area of Multi Criteria Decision Aiding?
• How are MultiObjective Optimisation and MultiCriteria Decision Aiding/Analysis related, and how do they differ?
• Decision Maker’s Preferences
• How do different MCDA approaches model the Decision Maker’s preferences?
• Ways of expressing preferences of the Decision Maker.
• MCDA Compensatory and Non-compensatory Approaches
• MAVT vs. ELECTRE.
• When is it appropriate to use each of the learned algorithms?
Security decision making
• Examples:
• Which publicly available Wi-Fi should I use?
• How do I choose a new password?
• Should I put this person’s USB stick in my laptop?
• These decisions are about the security vs. productivity trade-off
• Examples from data mining:
• Accuracy vs. cost trade-off;
• Fitting to an existing model vs. adapting to new knowledge trade-off.
[Figure: Parallel Coordinate Diagram comparing four alternatives L1–L4 on the criteria CPU → min, Price → min, −Usability → min, −Popularity → min. All criteria are to be minimised; L2 dominates L1.]
MultiCriteria Decision Analysis/Aiding
• In MultiObjective Optimisation (MOO)
• addresses continuous and discrete combinatorial problems;
• solutions/alternatives are defined by a set of constraints;
• objective functions are then optimised in this region.
• MultiCriteria Decision Analysis/Aiding (MCDA)
• mainly discrete problems with small sets of alternatives;
• for combinatorial problems find rules for decision making;
• constraints are taken into account implicitly: in the set of criteria and/or alternatives
• Both MOO and MCDA
• use evaluation of solutions/alternatives based on multiple criteria/objectives
• and search for trade-off solution(s) (e.g. using Pareto dominance).
MultiCriteria Decision Aiding/Analysis
• Terminology:
• MultiObjective Optimization -> MultiCriteria Decision Aiding/Analysis
• MOO -> MCDA
• Solutions (to be found) -> Alternatives (often given)
• Objectives (given) -> Criteria (given)
• Usually impersonal -> Decision Maker(s) (DMs)
• Objective -> Subjective
MCDA used after MOO
[Figure: points x1, x2, x3 in the search space (decision space) are mapped by f into the objective space (solution space); part of the Pareto front of the image under f forms the set of interesting solutions.]
• MCDA can be used in combination with, before, or after MOO, e.g. concentrating the search on a smaller region of the search space.
MCDA Difficulties
• Modelling: a well-defined problem is a half-solved problem.
• Preferences of the Decision Maker(s) (DMs) are crucial.
• Preferences may be expressed in different forms.
• Different DMs may have different meaning for the same form of preference.
• Same DM interviewed at different moments of time may have different preference values.
• Multiple criteria evaluations should be aggregated.
• Aggregation implies the loss of some information.
Types of MCDA problems
• To structure the problem: to identify alternatives, criteria (especially, if they are hierarchically structured);
• To select one alternative among many;
• To rank all alternatives;
• To sort them into several (ordered) groups;
• To select a portfolio of alternatives.
MCDA Examples
• To select an investment plan;
• To rank the student participants of a “best assignment” competition;
• To sort research projects to be funded or not;
• To choose a team of employees;
• To choose a new airport location;
• To decide on giving credit to a client or not;
• To rank universities by their scientific and academic performance;
• To find best combination of assets for a financial portfolio.
MCDA problems type: Choice
MCDA problems type: Ranking
MCDA problems type: Sorting
Cars I like and am ready to buy
Cars I like but am not ready to buy
Cars I don’t like and am not ready to buy
MCDA problems type: Combination
Cars a company prefers to buy
Value-Focused Thinking
• “When you are faced with a difficult decision situation, start by thinking about your values. Do not start by thinking about alternatives, as that will limit you.”
• Why is each criterion important? Answering this helps to set the priority and importance of criteria. E.g. when thinking about why good gasoline mileage is important for a car, you may arrive at more fundamental criteria, like operational costs, pollution, etc.
• Ralph L. Keeney, Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press, Cambridge, MA, 1992.
What is the best car?
Criteria/objectives: Cost → min, Speed → max, Length → min
[Figure: the cars “Luxury”, “Speedy”, “Tiny”, “Fancy” plotted on the Cost and Speed axes.]
What is the best anti-virus?
Criteria/objectives: Cost → min, CPU → max, Usability → max
[Figure: the anti-virus products “Speedy”, “Fancy”, “Luxury”, “Tiny” plotted on the CPU, Cost, and Usability axes.]
Problem representation
• Performance matrix
• vj(ai) is the performance value of alternative ai, i = 1,…,m, on criterion gj, j = 1,…,n, and kj is the importance coefficient of criterion gj.

     g1(·)    g2(·)    …    gn(·)
k    k1       k2       …    kn
a1   v1(a1)   v2(a1)   …    vn(a1)
a2   v1(a2)   v2(a2)   …    vn(a2)
…    …        …        …    …
am   v1(am)   v2(am)   …    vn(am)
Example: What is the best car?
             Cost (min)   Speed (max)   Length (min)
Importance   3            2             1
“Tiny”       10,000       100           2
“Fancy”      60,000       150           4
“Speedy”     100,000      200           4
“Luxury”     300,000      100           7
Criteria scales
• Ratio/Quantitative/Cardinal: there is information on the difference between scale values with respect to a non-arbitrary origin, e.g. meters, kg, etc.
• Ordinal/Qualitative/Verbal: there is an order on the scale but no information on the difference between scale values, e.g. excellent, good, satisfactory, bad.
• Interval: there is information on the difference between scale values with respect to an arbitrary origin, e.g. the Fahrenheit and Celsius temperature scales.
MCDA Approaches
• Basic techniques: Dominance test, MaxiMin, Maximax, Lexicographic and Lexicographic semi-order methods, Simple Additive Weighting (SAW).
• American schools: Multi-Attribute Utility Theory (MAUT) / Multi-Attribute Value Theory (MAVT) and Analytical
Hierarchy Process (AHP).
• European school: Outranking-based approaches, such as ELECTRE, PROMETHEE;
• Uncertainty: fuzzy sets, rough sets, Stochastic Multicriteria Acceptability Analysis (SMAA),Verbal analysis.
Simple Additive Weighting (SAW)
• Simple and intuitive, but has drawbacks.
• The alternatives are selected/ranked/sorted based on their weighted-sum score (see the sketch below):
S(ai) = ∑j kj·gj(ai),
• where kj is the importance/weight of criterion gj;
gj(ai) is the value of alternative ai on criterion gj.
• All criteria must either all be minimised or all be maximised.
• All scales must be homogenised (normalised) for commensurable comparison (roughly speaking, to be able to aggregate kilograms and meters).
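As a minimal sketch (not from the slides), the SAW score can be computed in a few lines of Python; the weights and performance values below are invented, and the values are assumed to be normalised already:

```python
# Minimal SAW sketch: weighted sum of (already normalised) criterion values.
# Weights and performances are illustrative, not from the slides.
weights = [0.5, 0.3, 0.2]            # k_j, assumed to sum to 1
alternatives = {
    "a1": [0.2, 0.9, 0.5],           # g_j(a_i), already normalised to [0, 1]
    "a2": [0.8, 0.4, 0.6],
}

def saw_score(values, weights):
    """S(a_i) = sum_j k_j * g_j(a_i)."""
    return sum(k * g for k, g in zip(weights, values))

for name, values in alternatives.items():
    print(name, saw_score(values, weights))   # a1: 0.47, a2: 0.64
```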
SAW: Normalisation
• Ratio normalisation, with all criteria scales in the [0,1] interval:
• for a criterion to be maximised: xnorm = x / xbest
• for a criterion to be minimised: xnorm = xbest / x
• Normalisation of ratio difference, with all criteria scales in the [0,1] interval:
• for all criteria scales: xnorm = (x − xworst) / (xbest − xworst)
• Euclidean normalisation (non-linear transformation):
• xnorm = x / √(∑x²)
• Drawbacks:
• Adding/removing alternatives may change the ranking.
• Difficult for qualitative scales.
• SAW depends heavily on the normalisation used, as the sketch below shows!
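A small sketch with invented numbers showing this dependence: under ratio normalisation alternative b comes first, while under normalisation of ratio difference alternative a comes first:

```python
# Illustrative rank reversal: the SAW ranking changes with the normalisation.
# Two criteria to be maximised, weights k = (0.7, 0.3); data are invented.
weights = [0.7, 0.3]
perf = {"a": [100, 10], "b": [99, 20], "c": [98, 5]}

cols = list(zip(*perf.values()))
best = [max(c) for c in cols]
worst = [min(c) for c in cols]

ratio = lambda x, j: x / best[j]                            # x / x_best
diff  = lambda x, j: (x - worst[j]) / (best[j] - worst[j])  # (x - x_worst)/(x_best - x_worst)

def saw(norm):
    return {n: sum(k * norm(x, j) for j, (k, x) in enumerate(zip(weights, xs)))
            for n, xs in perf.items()}

for norm in (ratio, diff):
    scores = saw(norm)
    print(max(scores, key=scores.get), scores)  # 'b' wins with ratio, 'a' with diff
```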
Multi-Attribute Value Theory (MAVT)
• Developed in the late 60’s by Ralph Keeney and Howard Raiffa. Like Multi-Attribute Utility Theory (MAUT)*, it assumes a rational Decision Maker, and it adapts MAUT by modelling a rigorous preference structure.
• All criteria evaluations are aggregated into an overall score for each alternative: V(ai) = ∑j kj·vj(ai)
• For each criterion gj a value function vj(gj(·)) should be constructed that reflects the subjective value of the criterion for the decision maker.
• MAVT preference relation is complete and transitive.
* Ralph Keeney and Howard Raiffa (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York.
Multi-Attribute Value Theory (MAVT)
• For each gj a value function vj(gj(·)) should be
constructed that reflects the subjective criterion value of the decision maker.
• Compare to SAW, where normalisation is impersonal.
• vj(gj(ai)) or vj(ai) denotes the value or performance of the alternative ai on criterion gj.
• The most common is the additive form of aggregation for the global score (but there are more complex forms of the value function):
V(a) = f(v1(a), v2(a), …, vn(a))
MAVT: Preference relation
• MAVT constructs a complete order on the set of alternatives.
• Only preference and indifference relations are possible between alternatives:
• vj(a)> vj(b) if and only if alternative a is considered to be better than alternative b on criterion gj (gj(a) is preferred to gj(b));
• vj(a) = vj(b) if and only if alternative a is considered as good as alternative b on criterion gj (gj(a) is indifferent to gj(b)).
• Transitivity of preference and indifference is assumed.
MAVT: Incomplete
[Figure: pairwise comparisons among “Speedy”, “Tiny”, “Fancy”: two pairs are ordered by strict preference (>), one pair is left undecided (?), so the relation is incomplete.]
MAVT: Completeness
[Figure: every pair among “Speedy”, “Tiny”, “Fancy” is ordered by strict preference (>), so the relation is complete.]
MAVT: Intransitivity of preference
[Figure: strict preferences among “Speedy”, “Tiny”, “Fancy” forming a cycle, i.e. an intransitive preference relation.]
MAVT: Transitivity of preference
• If a is preferred to b and b is preferred to c, then a is preferred to c.
Condorcet’s paradox
• Suppose that three judges are evaluating the alternatives “Speedy”, “Fancy”, “Luxury”.
• Alex deems “Speedy” better than “Fancy” and “Fancy” better than “Luxury”.
• Mary deems “Fancy” better than “Luxury” and “Luxury” better than “Speedy”.
• John deems “Luxury” better than “Speedy” and “Speedy” better than “Fancy”.
• Voting yields a 2:1 majority for “Speedy” being better than “Fancy”, and a 2:1 majority for “Fancy” being better than “Luxury”.
• By transitivity “Speedy” should be better than “Luxury”, but a 2:1 majority dictates that “Luxury” is better than “Speedy” (replayed in the sketch below).
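A tiny sketch replaying the vote counting above; the pairwise majorities form a cycle:

```python
# Pairwise majority voting over the three judges' rankings from the slide.
from itertools import permutations

rankings = [
    ["Speedy", "Fancy", "Luxury"],   # Alex, best to worst
    ["Fancy", "Luxury", "Speedy"],   # Mary
    ["Luxury", "Speedy", "Fancy"],   # John
]

def majority_prefers(x, y):
    """True if more than half of the judges rank x above y."""
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

for x, y in permutations(["Speedy", "Fancy", "Luxury"], 2):
    if majority_prefers(x, y):
        print(x, "beats", y, "2:1")
# Output: Speedy beats Fancy, Fancy beats Luxury, Luxury beats Speedy -> a cycle.
```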
MAVT: Transitivity of indifference
[Figure: if a first pair of alternatives is indifferent (=) and a second pair is indifferent (=), then consequently the remaining pair must be indifferent (=) as well.]
MAVT: Transitivity of indifference
• If a is indifferent to b and b is indifferent to c, then a is indifferent to c.
• Alex is indifferent between drinking coffee with 3g of sugar (alternative a3) and drinking coffee with 4g of sugar (alternative a4).
• Alex is indifferent between drinking coffee with 4g of sugar (alternative a4) and drinking coffee with 5g of sugar (alternative a5).
• Similarly, he is indifferent between a5 and a6, between a6 and a7, and so on.
• By transitivity Alex should be indifferent between, say, a3 and a7, but he probably would not be.
This type of reasoning cannot be reproduced using value functions!
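A minimal sketch of the effect: with an indifference threshold q (an assumed 1 g here), every single step is imperceptible, yet the accumulated difference is not:

```python
# Indifference with a sensory threshold is not transitive.
q = 1.0  # indifference threshold in grams (assumed for illustration)

def indifferent(x, y):
    return abs(x - y) <= q

sugar = [3, 4, 5, 6, 7]  # grams of sugar in alternatives a3 ... a7
print(all(indifferent(x, y) for x, y in zip(sugar, sugar[1:])))  # True: each step
print(indifferent(sugar[0], sugar[-1]))                          # False: a3 vs a7
```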
Example: Selecting a new security manager
     Security-management skills   Intelligence         Communication skills   Work experience
kj   Most important               2nd most important   2nd most important     2nd most important
a1   Poor                         High IQ              Very good              8 years
a2   Very good                    Very high IQ         Poor                   1 year
a3   Excellent                    Low IQ               Satisfactory           1 year
a4   Good                         High IQ              Good                   4 years
Example: Selecting a new security manager
     Security-mgmt skills (SM)   Intelligence (I)   Communication skills   Work experience   V(a)   Conclusion
kj   0.4                         0.2                0.2                    0.2                      SM twice as important
a1   0.2                         0.4                0.5                    0.8               0.42   Experienced
a2   0.5                         0.6                0.1                    0.1               0.36   Strong on SM & I
a3   0.8                         0.1                0.2                    0.1               0.40   Strong on SM (most important criterion)
a4   0.4                         0.4                0.4                    0.4               0.40   Balanced

Compensation effect!
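The V(a) column can be re-derived with a few lines of Python, using the weights and values from the table above:

```python
# Additive MAVT score V(a_i) = sum_j k_j * v_j(a_i) for the table above.
weights = [0.4, 0.2, 0.2, 0.2]
values = {
    "a1": [0.2, 0.4, 0.5, 0.8],
    "a2": [0.5, 0.6, 0.1, 0.1],
    "a3": [0.8, 0.1, 0.2, 0.1],
    "a4": [0.4, 0.4, 0.4, 0.4],
}
for name, v in values.items():
    print(name, round(sum(k * x for k, x in zip(weights, v)), 2))
# a1 wins (0.42): its poor score on the most important criterion is
# compensated by good scores elsewhere -- the compensation effect.
```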
MAVT: Constructing “values”
• A value function reflects the subjective preferences of the client.
• SAW uses normalisation, which is an impersonal operation.
• Assuming transitivity of preference and indifference the client’s preferences can be defined by value functions.
• The idea is to build a function that the client perceives as adequate to represent his or her judgment about strengths of preference. If the client feels comfortable with the representation, then it is acceptable.
• There are different techniques for building criteria value functions, e.g. direct rating, curve fitting, bisection, and standard differences, as well as techniques for extracting importance coefficients.
MAVT: Value function with Direct rating
• Ask the client for a numerical value for each performance level on a given criterion’s scale.
• Or ask the client to adjust a graphical representation of performance levels on a line segment representing
value.
• In doing so the client should recall that
• vj(a)> vj(b) if and only if alternative a is considered better than alternative b on criterion gj (gj(a) is preferred to gj(b));
• vj(a)-vj(c) > vj(b)-vj(d) if and only if the strength of preference for a over c is higher than the strength of preference of b over d on
criterion g.
Example: What is the best car?
             Cost (min)   Speed (max)   Length (min)
Importance   3            2             1
“Tiny”       10,000       100           2
“Fancy”      60,000       150           4
“Speedy”     100,000      200           4
“Luxury”     300,000      100           7
MAVT: Value function with Curve fitting
• Ask the client to adjust the parameters defining a given curve
• E.g. a negative exponential curve ranging from extremely concave to extremely convex cases
• or a sigmoidal curve
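As an illustrative sketch, one common one-parameter family for such curve fitting is the negative exponential value function; the parameter name rho and the example ranges below are assumptions for this example:

```python
import math

def exp_value(x, worst, best, rho):
    """Negative-exponential value function mapping [worst, best] onto [0, 1].
    rho > 0 gives a concave curve, rho < 0 a convex one, rho -> 0 is linear."""
    t = (x - worst) / (best - worst)        # rescale performance to [0, 1]
    if abs(rho) < 1e-9:
        return t                            # linear limit
    return (1 - math.exp(-rho * t)) / (1 - math.exp(-rho))

# The client adjusts rho until the curve matches their strength of preference.
for rho in (-3, 0, 3):
    print(rho, [round(exp_value(x, 0, 100, rho), 2) for x in (0, 25, 50, 75, 100)])
```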
MAVT: Value function with Bisection
• Ask the client to indicate a performance level that corresponds to the best value and set it to 1 and the worst value and set it to 0.
• Ask the client to indicate a performance level that splits the interval in two in terms of value (such that changing from the 0 value performance to the chosen midpoint increases value as much as does changing from this midpoint to the 1 value performance).
• Then the chosen midpoint corresponds to value 0.5.
• Use the same process to bisect the intervals [0, 0.5] and [0.5, 1]. And so on.
MAVT: Value function with differences
• Define an improvement in a different criterion to serve as a comparison standard (like a ruler for measuring
distances)
• Take an initial value xj1 on criterion gj.
• Ask the client to indicate a second value xj2 such that the increase in value of going from xj1 to xj2 is equal to the comparison standard defined before.
• Ask the client to indicate a third value xj3 such that the increase in value of going from xj2 to xj3 is equal to the comparison standard defined before, and so on.
• Then vj(xj2) − vj(xj1) = vj(xj3) − vj(xj2) = …
MAVT: additive model
• For aggregating criteria values an additive function, for example, can be used:
V(ai) = f(v1(ai), v2(ai), …, vn(ai))
• E.g.
V(ai) = k1v1(ai) + k2v2(ai) + ··· + knvn(ai),
where kj is the scale coefficient for vj(·), such that kj > 0 and ∑kj = 1.
Note that a scale coefficient alone does not represent criterion importance!
• Different weight elicitation techniques exist, such as ratio with swings and the Saaty scale with swings.
MAVT: Construct scale coefficients
• Indifferences involving trade-offs
• If r1 units (value to be asked) of v1(·) are worth the same as r2 units of v2(·), then we must have k2/k1 = r1/r2.
• One can ask similar types of questions to obtain k2/k1 = r21, k3/k1 = r31, …, kn/k1 = rn1,
• and then use the equality ∑kj = 1 to determine the solution of the resulting system of equations (see the sketch below).
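A short sketch of this computation, with assumed answers for the elicited ratios r21, r31, r41:

```python
# Deriving scale coefficients from elicited trade-off ratios r_j1 = k_j / k_1.
# The ratio values below are assumed answers, for illustration only.
ratios = {2: 0.5, 3: 0.25, 4: 0.25}

k1 = 1.0 / (1.0 + sum(ratios.values()))       # from k_1 * (1 + sum_j r_j1) = 1
k = {1: k1, **{j: r * k1 for j, r in ratios.items()}}
print(k)                                      # {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}
assert abs(sum(k.values()) - 1.0) < 1e-12     # coefficients sum to 1
```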
Example: Selecting a new security manager
     Time-management skills   Intelligence   V(a)
kj   ?                        ?
a1   0.45                     0.90           ?
a2   0.90                     0.45           ?
a3   0.65                     0.65           0.65
Example: Selecting a new security manager
     Time-management skills   Intelligence   V(a)
kj   0.3                      0.7
a1   0.45                     0.90           0.765
a2   0.90                     0.45           0.585
a3   0.65                     0.65           0.65

Convexly dominated alternatives cannot win!
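A brute-force check (illustrative) that a3 from the table above can never obtain the highest additive score, whatever the weights:

```python
# a3 = (0.65, 0.65) lies below the segment joining a1 and a2 (whose midpoint
# is (0.675, 0.675)), so no weight vector can make it the additive winner.
v = {"a1": (0.45, 0.90), "a2": (0.90, 0.45), "a3": (0.65, 0.65)}

for k1 in [i / 100 for i in range(101)]:      # sweep k1 from 0 to 1
    k2 = 1 - k1
    scores = {n: k1 * x1 + k2 * x2 for n, (x1, x2) in v.items()}
    assert max(scores, key=scores.get) != "a3"
print("a3 never ranks first for any (k1, k2)")
```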
MAVT Conclusions
• Compared with SAW, MAVT constructs functions eliciting preferences rather than normalising performances.
• There is an underlying compensation effect (in the sense that bad performances in some criteria may be
compensated by good performances in other criteria).
• The mathematical aggregation is still a weighted sum, but the numbers have a meaning very different from SAW.
• The result is therefore “tailored” to the preferences of a specific client; it is subjective.
• Same DM interviewed at different moments of time may have different preference values.
MAVT Conclusions
Pros:
• The simplicity of additive aggregation.
• It matches the intuitive way people aggregate.
• A rigorous way of obtaining commensurable scales.
Specific feature:
• Shows the role of “weights” as trade-offs, not intuitive importance.
Cons:
• It can be difficult to explain the method and elicit answers (but this encourages thinking).
• Requires strong independence conditions (but it may be possible to restructure the set of criteria).
• Very poor performances can be compensated (but such alternatives can be eliminated beforehand).
Outranking approach
• The development of the ELECTRE family of methods started in the late 60’s by Bernard Roy and colleagues, with ELECTRE I for the task of selecting the best alternative.
• The approach builds an outranking relation that is neither necessarily complete (that is, incomparability is possible) nor necessarily transitive.
• Note that incomparability ≠ indifference.
• All alternatives are compared to each other pairwise, as in a tournament.
Basic ideas of ELECTRE
• The outranking relation is constructed based on two concepts:
• Concordance: If gj(a) is not worse than gj(b), then the criterion gj is concordant with a S b.
• Discordance: If gj(a) is worse than gj(b), then the criterion gj is discordant with a S b.
• “a outranks b” (a S b) if there are enough arguments to decide that “a is at least as good as b” (concordance holds), and there is no essential argument to oppose this statement (no discordance holds); in other words, “a is not worse than b”.
• Note that a criterion can be concordant with both a S b and b S a, namely when gj(a) = gj(b).
• The outranking relation is checked for all pairs of alternatives in both directions.
Basic ideas of ELECTRE
• One of four preference situations can be established between two alternatives after evaluating the outranking relation:
• a P b Preference, if a S b and ¬b S a
• b P a Preference, if b S a and ¬a S b
• a R b Incomparable, if ¬a S b and ¬b S a
• a I b Indifferent, if a S b and b S a
• No need to compute a global value for each alternative; only pairwise comparisons are required.
• Incomparability between alternatives is accepted.
Conditions for ELECTRE application
• Ordinal or interval scales that are not suitable for comparison of differences.
• Strong heterogeneity of evaluations on criteria scales that is difficult to aggregate into a unique criterion.
• A loss in one criterion cannot be compensated by a gain in another criterion, which requires the use of non-compensatory aggregation procedures.
• For at least one criterion, small differences are not significant in terms of preferences, but the accumulation of several small differences may become significant; this requires the introduction of thresholds, which makes indifference intransitive.
Intransitivity of indifference
[Figure: one pair of alternatives is indifferent (=) and a second pair is indifferent (=), BUT the remaining pair is not indifferent (≠): with thresholds, indifference is intransitive.]
Preferences example
         Security-management skills g1   Intelligence g2   Communication skills g3   Work experience g4
a1       0.2                              0.6               0.5                       0.6
a2       0.5                              0.4               0.1                       0.1
better   a2                               a1                a1                        a1

a1 S a2: weight of (g2,g3,g4) is “big enough” and opposition from criterion g1 is “small”.
¬(a2 S a1): weight of g1 alone is not “big enough”.
a1 S a2 and ¬(a2 S a1) → a1 P a2.
Indifference example
         Security-management skills g1   Intelligence g2   Communication skills g3   Work experience g4
a1       0.2                              0.4               0.5                       0.6
a2       0.5                              0.6               0.1                       0.1
better   a2                               a2                a1                        a1

a1 S a2: weight of (g3,g4) is “big enough” and opposition from criteria (g1,g2) is “small”.
a2 S a1: weight of (g1,g2) is “big enough” and opposition from criteria (g3,g4) is “small”.
a1 S a2 and a2 S a1 → a1 I a2.
Incomparability example
         Security-management skills g1   Intelligence g2   Communication skills g3   Work experience g4
a1       0.2                              0.4               0.5                       0.8
a2       0.5                              0.6               0.1                       0.1
better   a2                               a2                a1                        a1

a1 S a2: weight of (g3,g4) is “big enough” and opposition from criteria (g1,g2) is “small”.
a2 S a1: weight of (g1,g2) is “big enough” BUT opposition from criteria (g3,g4) is “very big” (g4 has a veto effect), so a2 S a1 does not hold.
ELECTRE I (1967)
• 1. Construct a crisp outranking relation (it states whether an alternative outranks another or not). For each pair of alternatives a and b check if a S b and if b S a.
• 2. Exploit the outranking relation for finding a kernel, that is, select a minimal set of candidates to become the most preferred alternative.
ELECTRE I: Constructing S
• Given (a,b), a S b holds if the following conditions are both true:
• Concordance: c(a,b) = ∑ kj over the criteria gj with Δj(a,b) ≥ 0, and c(a,b) ≥ c
• (non)Discordance: Δj(a,b) > −vj for every criterion gj
• where Δj(a,b) is the advantage of a over b on criterion gj;
kj is the weight of criterion gj (it is assumed that k1,…,kn ≥ 0 and ∑kj = 1);
c(a,b) is the concordance index;
c is the concordance threshold;
vj is the veto threshold of gj.
ELECTRE I: Constructing S
• The concordance index is equal to the sum of the
weights of the criteria that agree with a S b (the criteria in which a is as good as b, or better).
• The concordance condition holds if this sum (the total concordant weight) attains the required majority
threshold c.
• The discordance condition for a S b holds if there is no discordant criterion in which b is better than a by a
difference greater than the criterion’s veto threshold.
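A compact sketch of the test described above. The weights come from the earlier security-manager example, while the concordance threshold (0.6) and the veto thresholds (0.6) are assumptions chosen for illustration; with these values, neither alternative of the later incomparability example outranks the other:

```python
# ELECTRE I outranking test "a S b"; all criteria here are to be maximised.
weights = [0.4, 0.2, 0.2, 0.2]    # k_j, summing to 1 (from the earlier example)
vetoes  = [0.6, 0.6, 0.6, 0.6]    # v_j per criterion (assumed)
c_hat   = 0.6                     # concordance (majority) threshold (assumed)

def outranks(a, b):
    delta = [x - y for x, y in zip(a, b)]                 # advantages of a over b
    concordance = sum(k for k, d in zip(weights, delta) if d >= 0)
    no_veto = all(-d < v for d, v in zip(delta, vetoes))  # no criterion vetoes
    return concordance >= c_hat and no_veto

g = {"a1": [0.2, 0.4, 0.5, 0.8], "a2": [0.5, 0.6, 0.1, 0.1]}
print(outranks(g["a1"], g["a2"]), outranks(g["a2"], g["a1"]))  # False False
```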
Finding kernel
• Purpose: to select a minimal set of candidates to become the most preferred alternative.
Definition of the kernel K of a graph:
• External stability: ∀ b ∉ K, ∃ a ∈ K such that a S b
(justification for excluding alternatives outside of the kernel).
• Internal stability: ∀ a, b ∈ K, ¬(a S b)
(absence of justification to exclude any alternative from the kernel).
• Algorithm of finding kernel:
Find all alternatives not outranked by other alternatives.
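A sketch of this exploitation step for an acyclic crisp outranking relation, applying the rule above iteratively (the relation S here is invented):

```python
# Kernel of an acyclic crisp outranking digraph.
def kernel(nodes, S):
    """S is a set of (a, b) pairs meaning 'a outranks b'."""
    K, remaining = set(), set(nodes)
    while remaining:
        # alternatives not outranked by anything still under consideration
        top = {a for a in remaining
               if not any((b, a) in S for b in remaining if b != a)}
        if not top:
            raise ValueError("outranking cycle: no kernel found this way")
        K |= top
        remaining -= top
        # drop everything outranked by a kernel member (external stability)
        remaining -= {b for b in remaining if any((a, b) in S for a in top)}
    return K

# Example: a1 outranks a2 and a2 outranks a3 -> the kernel is {a1, a3}
# (a3 must stay: no kernel member outranks it directly).
print(kernel({"a1", "a2", "a3"}, {("a1", "a2"), ("a2", "a3")}))
```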
No kernel case
• The existence of a unique kernel is guaranteed if the graph is acyclic.
• A solution: to consider alternatives in a cycle as being indifferent, treating them as a single class inheriting incoming and outgoing arcs.
[Figure: “Speedy”, “Tiny”, “Fancy” linked by strict preferences (>) forming a cycle, so no kernel exists until the cycle is collapsed.]
Weights and veto
• The weight of a criterion in ELECTRE methods shows its voting power in favour of the outranking.
• Weights do not depend on the ranges or encoding of the scales.
• They cannot be interpreted as substitution rates, unlike the importance coefficients in MAVT.
• A veto shows the level of difference in criterion values that is big enough to make the outranking assertion “a outranks b” invalid.
ELECTRE I: Credibility degree
• Let Δj(a,b) denote the advantage of an alternative a over another alternative b according to criterion gj(·):
Δj(a,b) = gj(a) − gj(b), if criterion gj is to be maximised;
Δj(a,b) = gj(b) − gj(a), if criterion gj is to be minimised.
• For ELECTRE I, if Δj ≥ 0, then criterion gj is fully concordant with a S b.
• On the other hand, if Δj < 0, even if the difference is almost zero, then gj(·) is not concordant with a S b.
• For ELECTRE I, if −Δj ≥ vj, then criterion gj opposes a veto to a S b, even if the threshold is surpassed by a negligible amount.
• This is changed in ELECTRE III.
ELECTRE I vs. MAVT
Pros:
• No strong axioms and conditions to verify.
• Works with any type of scales, including qualitative scales.
• Importance coefficients kj truly reflect the “criteria” weights ( “voting power” analogy) independently of the scales.
• Alerts for incomparabilities (alternatives that are too different).
Cons:
• Specific to the problem of selecting the best (one) alternative (it does not allow ranking the alternatives).
• Exploitation difficulties (multiple alternatives in the kernel).
• Lack of independence with regard to third alternatives.
• Sudden transition from S to not S as data changes.
Valued outranking relations
• ELECTRE I works with crisp outranking relations: given a pair (a,b), the statement aSb is established to be true or false.
• Crisp S means a Yes/No relation (either outranks or not).
• In later versions, e.g. ELECTRE III, outranking can be partially true, computing a credibility degree for it.
• Valued S means that a credibility degree for the outranking is computed in the interval [0,1].
ELECTRE III: Concordance
• The concordance index for each criterion gj defines how much the criterion agrees with a S b:
cj(a,b) = 1, if Δj(a,b) ≥ −qj;
cj(a,b) = 0, if Δj(a,b) ≤ −pj;
cj(a,b) = (pj + Δj(a,b)) / (pj − qj), otherwise;
where
qj = indifference threshold (biggest difference that keeps two values on criterion gj indifferent);
pj = preference threshold (smallest difference between values on criterion gj that is enough to consider one value strictly preferred to the other).
ELECTRE III: Discordance
• The discordance index for each criterion gj defines how much the criterion opposes a veto to a S b:
dj(a,b) = 0, if Δj(a,b) ≥ −uj;
dj(a,b) = 1, if Δj(a,b) ≤ −vj;
dj(a,b) = (−Δj(a,b) − uj) / (vj − uj), otherwise;
where
uj = non-discordance threshold (disadvantage at which a partial veto begins); originally uj = pj;
vj = veto threshold (disadvantage originating a total veto).
ELECTRE III: Aggregation
• The global concordance index, given weights kj (and still assuming ∑kj = 1), is a weighted sum:
C(a,b) = ∑j kj·cj(a,b)
• The global discordance index is the maximum discordance:
D(a,b) = maxj dj(a,b)
ELECTRE III: Credibility degree
• The credibility degree for a S b aggregates concordance and discordance.
• In the classical formulation, the credibility index for a S b is:
σ(a,b) = C(a,b), if dj(a,b) ≤ C(a,b) for all j;
σ(a,b) = C(a,b) · ∏ (1 − dj(a,b)) / (1 − C(a,b)), taken over the criteria with dj(a,b) > C(a,b), otherwise.
• Simpler variants are also used, e.g. σ(a,b) = C(a,b)·(1 − D(a,b)).
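A sketch of the three indices for a single ordered pair, following the piecewise-linear forms and the classical credibility rule given above; the thresholds are illustrative and the performance values reuse the incomparability example:

```python
# ELECTRE III partial indices and credibility for one pair (a, b);
# all criteria are to be maximised, delta_j = g_j(a) - g_j(b).
def concordance_j(delta, q, p):
    """c_j: 1 if the disadvantage is within q, 0 beyond p, linear between."""
    if delta >= -q: return 1.0
    if delta <= -p: return 0.0
    return (p + delta) / (p - q)

def discordance_j(delta, u, v):
    """d_j: 0 if the disadvantage is within u, 1 beyond v, linear between."""
    if delta >= -u: return 0.0
    if delta <= -v: return 1.0
    return (-delta - u) / (v - u)

def credibility(a, b, k, q, p, u, v):
    deltas = [x - y for x, y in zip(a, b)]
    C = sum(kj * concordance_j(d, qj, pj)
            for kj, d, qj, pj in zip(k, deltas, q, p))
    sigma = C
    for d, uj, vj in zip(deltas, u, v):
        dj = discordance_j(d, uj, vj)
        if dj > C:                         # classical ELECTRE III rule
            sigma *= (1 - dj) / (1 - C)
    return sigma

k = [0.4, 0.2, 0.2, 0.2]
a1, a2 = [0.2, 0.4, 0.5, 0.8], [0.5, 0.6, 0.1, 0.1]
q, p, u, v = [0.05] * 4, [0.15] * 4, [0.15] * 4, [0.6] * 4
print(credibility(a1, a2, k, q, p, u, v))  # 0.4 (weak support, no veto)
print(credibility(a2, a1, k, q, p, u, v))  # 0.0 (total veto on g4 zeroes it)
```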
Pros vs. additive MAVT model
• No strong axioms and conditions to verify.
• Works with any type of scales, including qualitative.
• Importance coefficients kj truly reflect the “criteria”
weights (“voting power” analogy) independently of the scales.
• ELECTRE I alerts for incomparabilities (alternatives that are too different).
• ELECTRE III allows putting a penalty on very weak performances on some criterion.
Cons vs. additive MAVT model
• ELECTRE I:
• Specific to the problem of selecting the best (one) alternative (it does not allow ranking the alternatives).
• Exploitation difficulties (multiple alternatives in the kernel).
• Lack of independence with respect to third
alternatives (for instance, if a3 did not outrank a4 then alternatives [a1 , a2] would be in the kernel).
• ELECTRE III:
• Large number of parameters.
• Relatively complex computations.
How to select an MCDA method?
• Check scales: ordinal or interval scales that are not suitable for comparison of differences in MAVT can still be treated in ELECTRE.
• Check the homogeneity/heterogeneity of evaluations on criteria scales (they may be difficult to aggregate into a unique criterion).
• Check whether compensation is allowed or not.
• The accumulation of several small differences may become significant, which may require the introduction of thresholds.
Literature
Ralph L. Keeney, Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press, Cambridge, MA, 1992.
Martin Rogers, Michael Bruen, Lucien-Yves Maystre. ELECTRE and Decision Support: Methods and Applications in Engineering and Infrastructure Investment. Springer, 2010.
Valerie Belton and Theodore Stewart, Multiple Criteria Decision Analysis: An Integrated Approach. Kluwer Academic Publishers, Boston, 2002.