(1)

Hidden Markov Models

Based on

• “Foundations of Statistical NLP” by C. Manning & H. Schütze, ch. 9, MIT Press, 2002

• “Biological Sequence Analysis”, R. Durbin et al., ch. 3 and 11.6, Cambridge University Press, 1998

(2)

PLAN

1 Markov Models
  Markov assumptions

2 Hidden Markov Models

3 Fundamental questions for HMMs
  3.1 Probability of an observation sequence:
      the Forward algorithm, the Backward algorithm
  3.2 Finding the “best” sequence: the Viterbi algorithm
  3.3 HMM parameter estimation:
      the Forward-Backward (EM) algorithm

4 HMM extensions

5 Applications

(3)

1 Markov Models (generally)

Markov Models are used to model a sequence of random variables in which each element depends on previous elements.

$X = \langle X_1, \ldots, X_T \rangle$, with $X_t \in S = \{s_1, \ldots, s_N\}$

X is also called a Markov Process or Markov Chain.

S = the set of states

Π = initial state probabilities: $\pi_i = P(X_1 = s_i)$, with $\sum_{i=1}^N \pi_i = 1$

A = transition probabilities: $a_{ij} = P(X_{t+1} = s_j \mid X_t = s_i)$, with $\sum_{j=1}^N a_{ij} = 1\ \forall i$

(4)

Markov assumptions

• Limited Horizon:

$P(X_{t+1} = s_i \mid X_1 \ldots X_t) = P(X_{t+1} = s_i \mid X_t)$ (first-order Markov model)

• Time Invariance: $P(X_{t+1} = s_j \mid X_t = s_i) = a_{ij}$, for all $t$

Probability of a Markov Chain

$P(X_1 \ldots X_T) = P(X_1)\, P(X_2 \mid X_1)\, P(X_3 \mid X_1 X_2) \cdots P(X_T \mid X_1 X_2 \ldots X_{T-1})$

$= P(X_1)\, P(X_2 \mid X_1)\, P(X_3 \mid X_2) \cdots P(X_T \mid X_{T-1})$

$= \pi_{X_1} \prod_{t=1}^{T-1} a_{X_t X_{t+1}}$
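As a quick sanity check of this formula, here is a minimal sketch in Python (the two-state chain and its numbers are purely illustrative):

```python
import numpy as np

# Illustrative two-state chain: pi[i] = P(X_1 = s_i), A[i][j] = P(X_{t+1} = s_j | X_t = s_i).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def chain_probability(states, pi, A):
    """P(X_1 ... X_T) = pi_{X_1} * prod_{t=1}^{T-1} a_{X_t X_{t+1}}."""
    p = pi[states[0]]
    for s, s_next in zip(states, states[1:]):
        p *= A[s, s_next]
    return p

print(chain_probability([0, 0, 1, 1], pi, A))  # 0.5 * 0.9 * 0.1 * 0.8 = 0.036
```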

(5)

A 1st Markov chain example: DNA

(from [Durbin et al., 1998])

[Figure: a four-state Markov chain with states A, C, G, T and transitions between them.]

Note: here we leave the transition probabilities unspecified.

(6)

A 2nd Markov chain example:

CpG islands in DNA sequences

Maximum Likelihood estimation of parameters using real data (+ and -)

$a^+_{st} = \frac{c^+_{st}}{\sum_{t'} c^+_{st'}} \qquad a^-_{st} = \frac{c^-_{st}}{\sum_{t'} c^-_{st'}}$

 +      A      C      G      T
 A   0.180  0.274  0.426  0.120
 C   0.171  0.368  0.274  0.188
 G   0.161  0.339  0.375  0.125
 T   0.079  0.355  0.384  0.182

 −      A      C      G      T
 A   0.300  0.205  0.285  0.210
 C   0.322  0.298  0.078  0.302
 G   0.248  0.246  0.298  0.208
 T   0.177  0.239  0.292  0.292
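These counts translate directly into code; a sketch, assuming the training data for one class ('+' or '−') are given as plain DNA strings (the toy sequences are hypothetical):

```python
from collections import Counter

ALPHABET = "ACGT"

def estimate_transitions(sequences):
    """Maximum likelihood estimate a_st = c_st / sum_t' c_st'
    from a list of DNA strings, all belonging to one class (+ or -)."""
    counts = Counter()
    for seq in sequences:
        for s, t in zip(seq, seq[1:]):      # count adjacent pairs c_st
            counts[(s, t)] += 1
    a = {}
    for s in ALPHABET:
        total = sum(counts[(s, t)] for t in ALPHABET)
        for t in ALPHABET:
            a[(s, t)] = counts[(s, t)] / total if total else 0.0
    return a

# Hypothetical toy data; real estimates would use annotated CpG islands.
a_plus = estimate_transitions(["CGCGC", "ACGCGT"])
print(a_plus[("C", "G")])
```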

(7)

Using log likelihood (log-odds) ratios for discrimination

$S(x) = \log_2 \frac{P(x \mid \text{model}+)}{P(x \mid \text{model}-)} = \sum_{i=1}^L \log_2 \frac{a^+_{x_{i-1} x_i}}{a^-_{x_{i-1} x_i}} = \sum_{i=1}^L \beta_{x_{i-1} x_i}$

 β       A       C       G       T
 A   −0.740   0.419   0.580  −0.803
 C   −0.913   0.302   1.812  −0.685
 G   −0.624   0.461   0.331  −0.730
 T   −1.169   0.573   0.393  −0.679
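The discrimination rule can be implemented straight from the β table; a sketch (β values copied from the table above):

```python
# Log-odds scores beta[s][t] = log2(a+_st / a-_st), from the table above.
BETA = {
    "A": {"A": -0.740, "C": 0.419, "G": 0.580, "T": -0.803},
    "C": {"A": -0.913, "C": 0.302, "G": 1.812, "T": -0.685},
    "G": {"A": -0.624, "C": 0.461, "G": 0.331, "T": -0.730},
    "T": {"A": -1.169, "C": 0.573, "G": 0.393, "T": -0.679},
}

def log_odds(x):
    """S(x) = sum_i beta_{x_{i-1} x_i}; S(x) > 0 suggests a CpG island."""
    return sum(BETA[s][t] for s, t in zip(x, x[1:]))

print(log_odds("CGCG"))  # 1.812 + 0.461 + 1.812 = 4.085 -> looks like '+'
```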

(8)

2 Hidden Markov Models

K = output alphabet = $\{k_1, \ldots, k_M\}$

B = output emission probabilities: $b_{ijk} = P(O_t = k \mid X_t = s_i, X_{t+1} = s_j)$

Notice that $b_{ijk}$ does not depend on $t$.

In HMMs we only observe a probabilistic function of the state sequence: $\langle O_1 \ldots O_T \rangle$.

When the state sequence $\langle X_1 \ldots X_T \rangle$ is also observable:

Visible Markov Model (VMM)

Remark:

In all our subsequent examples, $b_{ijk}$ is independent of $j$.

(9)

A program for a HMM

t = 1;
start in state s_i with probability π_i (i.e., X_1 = i);
forever do
    move from state s_i to state s_j with probability a_ij (i.e., X_{t+1} = j);
    emit observation symbol O_t = k with probability b_ijk;
    t = t + 1;
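The pseudocode above translates almost line by line into a sampler; a sketch, for an arc-emission HMM whose parameters π, A, B are supplied by the caller:

```python
import random

def sample_hmm(pi, A, B, T):
    """Simulate the slide's program for T steps of an arc-emission HMM,
    where B[i][j][k] = P(O_t = k | X_t = i, X_{t+1} = j)."""
    N = len(pi)
    x = random.choices(range(N), weights=pi)[0]            # X_1 ~ Pi
    states, obs = [x], []
    for _ in range(T):
        x_next = random.choices(range(N), weights=A[x])[0] # X_{t+1} ~ a_x,.
        k = random.choices(range(len(B[x][x_next])), weights=B[x][x_next])[0]
        obs.append(k)                                      # O_t ~ b_{x,x_next,.}
        states.append(x_next)
        x = x_next
    return states, obs
```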

(10)

A 1st HMM example: CpG islands

(from [Durbin et al., 1998])

[Figure: an eight-state HMM for CpG islands, with '+' states A+, C+, G+, T+ and '−' states A−, C−, G−, T−.]

Notes:

1. In addition to the transitions shown, there is also a complete set of transitions within each set ('+' respectively '−').

2. Transition probabilities in this model are set so that within each group they are close to the transition probabilities of the original model, but there is also a small chance of switching into the other component. Overall, there is more chance of switching from '+' to '−' than vice versa.

(11)

A 2nd HMM example: The occasionally dishonest casino

(from [Durbin et al., 1998])

[Figure: a two-state HMM. State F (“fair die”) emits 1–6 with probability 1/6 each; state L (“loaded die”) emits 1–5 with probability 1/10 each and 6 with probability 1/2. The arrows carry the switch probabilities F→L 0.05 and L→F 0.1, the self-loops F→F 0.95 and L→L 0.9; the figure also shows the values 0.99 and 0.01.]

(12)

A 3rd HMM example: The crazy soft drink machine

(from [Manning & Schütze, 2000])

[Figure: a two-state HMM with states CP (“cola preference”) and IP (“iced tea preference”), with π_CP = 1. Emission probabilities: in CP, P(Coke) = 0.6, P(Ice tea) = 0.1, P(Lemon) = 0.3; in IP, P(Coke) = 0.1, P(Ice tea) = 0.7, P(Lemon) = 0.2. Transition probabilities: CP→CP 0.7, CP→IP 0.3, IP→CP 0.5, IP→IP 0.5.]

(13)

[Figure: an HMM example, reproduced from [Eddy, 2004].]

(14)

3 Three fundamental questions for HMMs

1. Probability of an Observation Sequence:

Given a model µ = (A, B, Π) over S, K, how do we (efficiently) compute the likelihood of a particular sequence, P(O|µ)?

2. Finding the “Best” State Sequence:

Given an observation sequence and a model, how do we choose a state sequence $(X_1, \ldots, X_{T+1})$ to best explain the observation sequence?

3. HMM Parameter Estimation:

Given an observation sequence (or a corpus thereof), how do we acquire a model µ = (A, B, Π) that best explains the data?

(15)

3.1 Probability of an observation sequence

$P(O \mid X, \mu) = \prod_{t=1}^T P(O_t \mid X_t, X_{t+1}, \mu) = b_{X_1 X_2 O_1}\, b_{X_2 X_3 O_2} \cdots b_{X_T X_{T+1} O_T}$

$P(O \mid \mu) = \sum_X P(O \mid X, \mu)\, P(X \mid \mu) = \sum_{X_1 \ldots X_{T+1}} \pi_{X_1} \prod_{t=1}^T a_{X_t X_{t+1}}\, b_{X_t X_{t+1} O_t}$

Complexity: $(2T + 1) \cdot N^{T+1}$, too inefficient.

Better: use dynamic programming to store the partial results $\alpha_i(t) = P(O_1 O_2 \ldots O_{t-1}, X_t = s_i \mid \mu)$.

(16)

3.1.1 Probability of an observation sequence:

The Forward algorithm

1. Initialization: $\alpha_i(1) = \pi_i$, for $1 \le i \le N$

2. Induction: $\alpha_j(t+1) = \sum_{i=1}^N \alpha_i(t)\, a_{ij}\, b_{ijO_t}$, for $1 \le t \le T$, $1 \le j \le N$

3. Total: $P(O \mid \mu) = \sum_{i=1}^N \alpha_i(T+1)$

Complexity: $2N^2 T$
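A vectorized sketch of the Forward algorithm, instantiated with the crazy soft drink machine; it reproduces the α values of the worked example further below (α_CP(4) = 0.021294, α_IP(4) = 0.010206, P(O) = 0.0315):

```python
import numpy as np

# Crazy soft drink machine (states: 0 = CP, 1 = IP; symbols: 0 = coke,
# 1 = ice tea, 2 = lemon); emissions here depend only on the origin state.
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.5, 0.5]])
B = np.array([[0.6, 0.1, 0.3],
              [0.1, 0.7, 0.2]])

def forward(O, pi, A, B):
    """alpha[t][i] = P(O_1 .. O_{t-1}, X_t = s_i); returns (alpha, P(O))."""
    T, N = len(O), len(pi)
    alpha = np.zeros((T + 1, N))
    alpha[0] = pi                                   # initialization, t = 1
    for t in range(T):                              # induction
        alpha[t + 1] = (alpha[t] * B[:, O[t]]) @ A
    return alpha, alpha[T].sum()                    # total: P(O | mu)

O = [2, 1, 0]                                       # lemon, ice tea, coke
alpha, p = forward(O, pi, A, B)
print(alpha[3], p)                                  # [0.021294 0.010206] 0.0315
```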

(17)

Proof of induction step:

$\alpha_j(t+1) = P(O_1 O_2 \ldots O_{t-1} O_t, X_{t+1} = j \mid \mu)$

$= \sum_{i=1}^N P(O_1 O_2 \ldots O_{t-1} O_t, X_t = i, X_{t+1} = j \mid \mu)$

$= \sum_{i=1}^N P(O_t, X_{t+1} = j \mid O_1 O_2 \ldots O_{t-1}, X_t = i, \mu)\, P(O_1 O_2 \ldots O_{t-1}, X_t = i \mid \mu)$

$= \sum_{i=1}^N \alpha_i(t)\, P(O_t, X_{t+1} = j \mid X_t = i, \mu)$ [by the Markov assumptions]

$= \sum_{i=1}^N \alpha_i(t)\, P(O_t \mid X_t = i, X_{t+1} = j, \mu)\, P(X_{t+1} = j \mid X_t = i, \mu)$

$= \sum_{i=1}^N \alpha_i(t)\, b_{ijO_t}\, a_{ij}$

(18)

Closeup of the Forward update step

[Figure: close-up of the Forward update step. The trellis column for time t holds α_1(t), ..., α_N(t) at states s_1, ..., s_N, where α_i(t) = P(O_1 ... O_{t−1}, X_t = s_i | µ); each α_i(t) is propagated along the arc weighted a_ij b_{ijO_t}, and the contributions are summed to give α_j(t+1) at state s_j in the column for time t+1.]

(19)

Trellis

Each node $(s_i, t)$ stores information about paths through $s_i$ at time $t$.

[Figure: a trellis, with the states s_1, s_2, s_3, ..., s_N on the vertical axis and time t = 1, 2, ..., T+1 on the horizontal axis.]

(20)

3.1.2 Probability of an observation sequence:

The Backward algorithm

$\beta_i(t) = P(O_t \ldots O_T \mid X_t = i, \mu)$

1. Initialization: $\beta_i(T+1) = 1$, for $1 \le i \le N$

2. Induction: $\beta_i(t) = \sum_{j=1}^N a_{ij}\, b_{ijO_t}\, \beta_j(t+1)$, for $1 \le t \le T$, $1 \le i \le N$

3. Total: $P(O \mid \mu) = \sum_{i=1}^N \pi_i\, \beta_i(1)$

Complexity: $2N^2 T$
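The symmetric sketch for the Backward algorithm (same machine and symbol encoding as in the Forward sketch above); it yields β_CP(1) = 0.0315, β_IP(1) = 0.029 and the same P(O) = 0.0315:

```python
import numpy as np

def backward(O, pi, A, B):
    """beta[t][i] = P(O_t .. O_T | X_t = s_i); returns (beta, P(O))."""
    T, N = len(O), len(pi)
    beta = np.zeros((T + 1, N))
    beta[T] = 1.0                                   # initialization, t = T+1
    for t in range(T - 1, -1, -1):                  # induction, t = T .. 1
        beta[t] = B[:, O[t]] * (A @ beta[t + 1])
    return beta, (pi * beta[0]).sum()               # total: P(O | mu)

# With pi, A, B and O = [2, 1, 0] from the Forward sketch:
# backward(O, pi, A, B) gives beta[0] == [0.0315, 0.029] and P(O) == 0.0315.
```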

(21)

Induction:

$\beta_i(t) = P(O_t O_{t+1} \ldots O_T \mid X_t = i, \mu)$

$= \sum_{j=1}^N P(O_t O_{t+1} \ldots O_T, X_{t+1} = j \mid X_t = i, \mu)$

$= \sum_{j=1}^N P(O_t O_{t+1} \ldots O_T \mid X_t = i, X_{t+1} = j, \mu)\, P(X_{t+1} = j \mid X_t = i, \mu)$

$= \sum_{j=1}^N P(O_{t+1} \ldots O_T \mid O_t, X_t = i, X_{t+1} = j, \mu)\, P(O_t \mid X_t = i, X_{t+1} = j, \mu)\, a_{ij}$

$= \sum_{j=1}^N P(O_{t+1} \ldots O_T \mid X_{t+1} = j, \mu)\, b_{ijO_t}\, a_{ij} = \sum_{j=1}^N \beta_j(t+1)\, b_{ijO_t}\, a_{ij}$

Total:

$P(O \mid \mu) = \sum_{i=1}^N P(O_1 O_2 \ldots O_T \mid X_1 = i, \mu)\, P(X_1 = i \mid \mu) = \sum_{i=1}^N \beta_i(1)\, \pi_i$

(22)

Combining Forward and Backward probabilities

$P(O, X_t = i \mid \mu) = \alpha_i(t)\, \beta_i(t)$

$P(O \mid \mu) = \sum_{i=1}^N \alpha_i(t)\, \beta_i(t)$, for $1 \le t \le T+1$

Proofs:

$P(O, X_t = i \mid \mu) = P(O_1 \ldots O_T, X_t = i \mid \mu)$

$= P(O_1 \ldots O_{t-1}, X_t = i, O_t \ldots O_T \mid \mu)$

$= P(O_1 \ldots O_{t-1}, X_t = i \mid \mu)\, P(O_t \ldots O_T \mid O_1 \ldots O_{t-1}, X_t = i, \mu)$

$= \alpha_i(t)\, P(O_t \ldots O_T \mid X_t = i, \mu) = \alpha_i(t)\, \beta_i(t)$

$P(O \mid \mu) = \sum_{i=1}^N P(O, X_t = i \mid \mu) = \sum_{i=1}^N \alpha_i(t)\, \beta_i(t)$

Note: the “total” forward and backward formulae are special cases of the above one (for t = T+1 and t = 1, respectively).

(23)

3.2.1 Posterior decoding

One way to find the most likely state sequence underlying the observation sequence is to choose the states individually:

$\gamma_i(t) = P(X_t = i \mid O, \mu)$

$\hat{X}_t = \operatorname{argmax}_{1 \le i \le N} \gamma_i(t)$, for $1 \le t \le T+1$

Computing $\gamma_i(t)$:

$\gamma_i(t) = P(X_t = i \mid O, \mu) = \frac{P(X_t = i, O \mid \mu)}{P(O \mid \mu)} = \frac{\alpha_i(t)\, \beta_i(t)}{\sum_{j=1}^N \alpha_j(t)\, \beta_j(t)}$

Remark:

$\hat{X}$ maximizes the expected number of states that will be guessed correctly. However, it may yield a quite unlikely/unnatural state sequence.
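Given the α and β arrays produced by the Forward and Backward sketches above, posterior decoding takes only a few lines; a sketch:

```python
import numpy as np

def posterior_decode(alpha, beta):
    """gamma[t][i] = P(X_t = s_i | O) and the individually best states."""
    gamma = alpha * beta                         # P(O, X_t = s_i | mu)
    gamma /= gamma.sum(axis=1, keepdims=True)    # normalize by P(O | mu)
    return gamma, gamma.argmax(axis=1)

# With the soft drink machine arrays: gamma[1] == [0.3, 0.7], i.e. X-hat_2 = IP.
```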

(24)

Note

Sometimes it is not the state itself that is of interest, but some other property derived from it.

For instance, in the CpG islands example, let $g$ be a function defined on the set of states: $g$ takes the value 1 for A+, C+, G+, T+ and 0 for A−, C−, G−, T−.

Then

$\sum_j P(\pi_t = s_j \mid O)\, g(s_j)$

is the posterior probability that the symbol $O_t$ came from a state in the '+' set.

Thus it is possible to find the most probable label of the state at each position in the output sequence $O$.

(25)

3.2.2 Finding the “best” state sequence: the Viterbi algorithm

Compute the probability of the most likely path,

$\operatorname{argmax}_X P(X \mid O, \mu) = \operatorname{argmax}_X P(X, O \mid \mu)$

through a node in the trellis:

$\delta_i(t) = \max_{X_1 \ldots X_{t-1}} P(X_1 \ldots X_{t-1}, O_1 \ldots O_{t-1}, X_t = s_i \mid \mu)$

1. Initialization: $\delta_j(1) = \pi_j$, for $1 \le j \le N$

2. Induction (note the similarity with the Forward algorithm):

$\delta_j(t+1) = \max_{1 \le i \le N} \delta_i(t)\, a_{ij}\, b_{ijO_t}$, for $1 \le t \le T$, $1 \le j \le N$

$\psi_j(t+1) = \operatorname{argmax}_{1 \le i \le N} \delta_i(t)\, a_{ij}\, b_{ijO_t}$, for $1 \le t \le T$, $1 \le j \le N$

3. Termination and readout of the best path:

$P(\hat{\hat{X}}, O \mid \mu) = \max_{1 \le i \le N} \delta_i(T+1)$

$\hat{\hat{X}}_{T+1} = \operatorname{argmax}_{1 \le i \le N} \delta_i(T+1)$, and $\hat{\hat{X}}_t = \psi_{\hat{\hat{X}}_{t+1}}(t+1)$
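A sketch of Viterbi in the same setting; on the soft drink machine it recovers the path CP, IP, CP, CP of the worked example below:

```python
import numpy as np

def viterbi(O, pi, A, B):
    """Most likely state sequence X_1 .. X_{T+1} for an observation list O."""
    T, N = len(O), len(pi)
    delta = np.zeros((T + 1, N))                 # delta[t][j] = best prob.
    psi = np.zeros((T + 1, N), dtype=int)        # psi[t][j] = best predecessor
    delta[0] = pi
    for t in range(T):
        cand = delta[t, :, None] * A * B[:, O[t], None]   # cand[i, j]
        psi[t + 1] = cand.argmax(axis=0)
        delta[t + 1] = cand.max(axis=0)
    path = [int(delta[T].argmax())]
    for t in range(T, 0, -1):                    # backtrace
        path.append(int(psi[t][path[-1]]))
    return path[::-1], delta[T].max()

# With pi, A, B, O from the Forward sketch: path == [0, 1, 0, 0] (CP IP CP CP).
```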

(26)

Example: variable calculations for the crazy soft drink machine HMM

t                    1        2        3        4
Output O_t           lemon    ice tea  coke
α_CP(t)              1.0      0.21     0.0462   0.021294
α_IP(t)              0.0      0.09     0.0378   0.010206
P(o_1 ... o_{t−1})   1.0      0.3      0.084    0.0315
β_CP(t)              0.0315   0.045    0.6      1.0
β_IP(t)              0.029    0.245    0.1      1.0
P(o_1 ... o_T)       0.0315
γ_CP(t)              1.0      0.3      0.88     0.676
γ_IP(t)              0.0      0.7      0.12     0.324
X̂_t                 CP       IP       CP       CP
δ_CP(t)              1.0      0.21     0.0315   0.01323
δ_IP(t)              0.0      0.09     0.0315   0.00567
ψ_CP(t)                       CP       IP       CP
ψ_IP(t)                       CP       IP       CP
X̂̂_t                CP       IP       CP       CP
P(X̂̂)               0.019404

(27)

3.3 HMM parameter estimation

Given a single observation sequence for training, we want to find the model (parameters) µ = (A, B, Π) that best explains the observed data.

Under Maximum Likelihood Estimation, this means:

$\operatorname{argmax}_\mu P(O_{\text{training}} \mid \mu)$

There is no known analytic method for doing this. However, we can choose µ so as to locally maximize $P(O_{\text{training}} \mid \mu)$ by an iterative hill-climbing algorithm: Forward-Backward (or Baum-Welch), which is a special case of the EM algorithm.

(28)

3.3.1 The Forward-Backward algorithm: the idea

• Assume some (perhaps randomly chosen) model parameters, and calculate the probability of the observed data.

• Using the above calculation, we can see which transitions and signal emissions were probably used the most; by increasing the probability of these, we will get a higher probability of the observed sequence.

• Iterate, hopefully arriving at an optimal parameter setting.

(29)

The Forward-Backward algorithm: Expectations

Define the probability of traversing a certain arc at time t, given the observation sequence O:

$p_t(i, j) = P(X_t = i, X_{t+1} = j \mid O, \mu) = \frac{P(X_t = i, X_{t+1} = j, O \mid \mu)}{P(O \mid \mu)}$

$= \frac{\alpha_i(t)\, a_{ij}\, b_{ijO_t}\, \beta_j(t+1)}{\sum_{m=1}^N \alpha_m(t)\, \beta_m(t)} = \frac{\alpha_i(t)\, a_{ij}\, b_{ijO_t}\, \beta_j(t+1)}{\sum_{m=1}^N \sum_{n=1}^N \alpha_m(t)\, a_{mn}\, b_{mnO_t}\, \beta_n(t+1)}$

Summing over t:

$\sum_{t=1}^T p_t(i, j)$ = expected number of transitions from $s_i$ to $s_j$ in O

$\sum_{j=1}^N \sum_{t=1}^T p_t(i, j)$ = expected number of transitions from $s_i$ in O

(30)

[Figure: the arc from s_i to s_j between trellis columns t and t+1, weighted a_ij b_{ijO_t}; paths up to s_i at time t contribute α_i(t), and paths from s_j at time t+1 onwards contribute β_j(t+1).]

(31)

The Forward-Backward algorithm: Re-estimation

From $\mu = (A, B, \Pi)$, derive $\hat{\mu} = (\hat{A}, \hat{B}, \hat{\Pi})$:

$\hat{\pi}_i = \frac{\sum_{j=1}^N p_1(i, j)}{\sum_{l=1}^N \sum_{j=1}^N p_1(l, j)} = \sum_{j=1}^N p_1(i, j) = \gamma_i(1)$

$\hat{a}_{ij} = \frac{\sum_{t=1}^T p_t(i, j)}{\sum_{l=1}^N \sum_{t=1}^T p_t(i, l)}$

$\hat{b}_{ijk} = \frac{\sum_{t:\, O_t = k,\ 1 \le t \le T}\, p_t(i, j)}{\sum_{t=1}^T p_t(i, j)}$
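A sketch of one re-estimation pass, written on top of the Forward and Backward sketches above, for the special case used in these slides (emissions independent of the destination state); on the soft drink machine with O = (lemon, ice tea, coke) it reproduces the re-estimated model of the example below:

```python
import numpy as np

def baum_welch_step(O, pi, A, B):
    """One Forward-Backward (EM) re-estimation pass; emissions b_ik depend
    only on the origin state i."""
    T = len(O)
    alpha, _ = forward(O, pi, A, B)              # from the Forward sketch
    beta, pO = backward(O, pi, A, B)             # from the Backward sketch
    emit = B[:, O].T[:, :, None]                 # emit[t, i, 0] = b_i(O_t)
    # p[t, i, j] = P(X_t = i, X_{t+1} = j | O, mu), for t = 1 .. T
    p = alpha[:T, :, None] * A[None, :, :] * emit * beta[1:, None, :] / pO
    gamma = p.sum(axis=2)                        # gamma[t, i] = P(X_t = i | O)
    pi_new = gamma[0]                            # pi-hat_i = gamma_i(1)
    A_new = p.sum(axis=0) / gamma.sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):                  # pool the t with O_t = k
        B_new[:, k] = gamma[np.array(O) == k].sum(axis=0)
    B_new /= gamma.sum(axis=0)[:, None]
    return pi_new, A_new, B_new

# With pi, A, B, O from the Forward sketch, one pass yields approximately
# A_new = [[0.5486, 0.4514], [0.8049, 0.1951]] and
# B_new[0] = [0.4037, 0.1376, 0.4587] (coke, ice tea, lemon).
```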

(32)

The Forward-Backward algorithm: Justification

Theorem (Baum-Welch): $P(O \mid \hat{\mu}) \ge P(O \mid \mu)$

Note 1: However, it does not necessarily converge to a global optimum.

Note 2: There is a straightforward extension of the algorithm that deals with multiple observation sequences (i.e., a corpus).

(33)

Example: Re-estimation of HMM parameters

The crazy soft drink machine, after one EM iteration on the sequence O = (Lemon, Ice-tea, Coke)

[Figure: the re-estimated machine, with π_CP = 1. Transition probabilities: CP→CP 0.5486, CP→IP 0.4514, IP→CP 0.8049, IP→IP 0.1951. Emission probabilities: in CP, P(Coke) = 0.4037, P(Ice tea) = 0.1376, P(Lemon) = 0.4587; in IP, P(Coke) = 0.1463, P(Ice tea) = 0.8537, P(Lemon) = 0.]

On this HMM, we obtained P(O) = 0.1324, a significant improvement on the initial P(O) = 0.0315.

(34)

3.3.2 HMM parameter estimation: Viterbi version

Objective: maximize $P(O \mid \Pi(O), \mu)$, where $\Pi(O)$ is the Viterbi path for the sequence O.

Idea: instead of estimating the parameters $a_{ij}$, $b_{ijk}$ using the expected values of the hidden variables ($p_t(i, j)$), estimate them (as Maximum Likelihood) based on the computed Viterbi path.

Note: in practice, this method performs worse than the main (Forward-Backward / Baum-Welch) version. However, it is widely used, especially when the HMM is primarily intended to produce Viterbi paths.

(35)

3.3.3 Proof of the Baum-Welch theorem...

3.3.3.1 ...In the general EM setup (not only that of HMM)

Assume some statistical model determined by parameters $\theta$, the observed quantities $x$, and some missing data $y$ that determines/influences the probability of $x$.

The aim is to find the model (in fact, the value of the parameter $\theta$) that maximises the log-likelihood

$\log P(x \mid \theta) = \log \sum_y P(x, y \mid \theta)$

Given a valid model $\theta_t$, we want to estimate a new and better model $\theta_{t+1}$, i.e. one for which

$\log P(x \mid \theta_{t+1}) > \log P(x \mid \theta_t)$

(36)

$P(x, y \mid \theta) = P(y \mid x, \theta)\, P(x \mid \theta)$, hence $\log P(x \mid \theta) = \log P(x, y \mid \theta) - \log P(y \mid x, \theta)$.

By multiplying the last equality by $P(y \mid x, \theta_t)$ and summing over $y$, it follows (since $\sum_y P(y \mid x, \theta_t) = 1$):

$\log P(x \mid \theta) = \sum_y P(y \mid x, \theta_t) \log P(x, y \mid \theta) - \sum_y P(y \mid x, \theta_t) \log P(y \mid x, \theta)$

The first sum will be denoted $Q(\theta \mid \theta_t)$.

Since we want $P(x \mid \theta)$ larger than $P(x \mid \theta_t)$, the difference

$\log P(x \mid \theta) - \log P(x \mid \theta_t) = Q(\theta \mid \theta_t) - Q(\theta_t \mid \theta_t) + \sum_y P(y \mid x, \theta_t) \log \frac{P(y \mid x, \theta_t)}{P(y \mid x, \theta)}$

should be positive.

Note that the last sum is the relative entropy of $P(y \mid x, \theta_t)$ with respect to $P(y \mid x, \theta)$, and therefore it is non-negative. So,

$\log P(x \mid \theta) - \log P(x \mid \theta_t) \ge Q(\theta \mid \theta_t) - Q(\theta_t \mid \theta_t)$

with equality only if $\theta = \theta_t$, or if $P(x \mid \theta) = P(x \mid \theta_t)$ for some other $\theta \ne \theta_t$.

(37)

Taking $\theta_{t+1} = \operatorname{argmax}_\theta Q(\theta \mid \theta_t)$ will imply $\log P(x \mid \theta_{t+1}) - \log P(x \mid \theta_t) \ge 0$. (If $\theta_{t+1} = \theta_t$, the maximum has been reached.)

Note: the function $Q(\theta \mid \theta_t) \stackrel{\text{def}}{=} \sum_y P(y \mid x, \theta_t) \log P(x, y \mid \theta)$ is an average of $\log P(x, y \mid \theta)$ over the distribution of $y$ obtained with the current set of parameters $\theta_t$. This average can be expressed as a function of $\theta$ in which the constants are expectation values in the old model. (See details in the sequel.)

The (backbone of the) EM algorithm:

initialize $\theta$ to some arbitrary value $\theta_0$;
until a certain stop criterion is met, do:
    E-step: compute the expectations $E[y \mid x, \theta_t]$ and, from them, the $Q$ function;
    M-step: compute $\theta_{t+1} = \operatorname{argmax}_\theta Q(\theta \mid \theta_t)$.

Note: since the likelihood increases at each iteration, the procedure will always reach a local (or maybe global) maximum asymptotically as $t \to \infty$.

(38)

Note:

For many models, such as HMMs, both of these steps can be carried out analytically.

If the second step cannot be carried out exactly, we can use some numerical optimisation technique to maximise Q.

In fact, it is enough to make Q(θt+1 | θt) > Q(θt | θt), thus getting generalised EM algorithms. See [Dempster, Laird, Rubin, 1977], [Meng, Rubin, 1992], [Neal, Hinton, 1993].

(39)

3.3.3.2 Derivation of EM steps for HMM

In this case, the 'missing data' are the state paths $\pi$. We want to maximize

$Q(\theta \mid \theta_t) = \sum_\pi P(\pi \mid x, \theta_t) \log P(x, \pi \mid \theta)$

For a given path $\pi$, each parameter of the model will appear some number of times in $P(x, \pi \mid \theta)$, computed as usual. We will denote this number by $A_{kl}(\pi)$ for transitions and $E_k(b, \pi)$ for emissions. Then,

$P(x, \pi \mid \theta) = \prod_{k=1}^M \prod_b [e_k(b)]^{E_k(b, \pi)} \prod_{k=0}^M \prod_{l=1}^M a_{kl}^{A_{kl}(\pi)}$

By taking the logarithm in the above formula, it follows that

$Q(\theta \mid \theta_t) = \sum_\pi P(\pi \mid x, \theta_t) \left[ \sum_{k=1}^M \sum_b E_k(b, \pi) \log e_k(b) + \sum_{k=0}^M \sum_{l=1}^M A_{kl}(\pi) \log a_{kl} \right]$

(40)

The expected values $A_{kl}$ and $E_k(b)$ can be written as expectations of $A_{kl}(\pi)$ and $E_k(b, \pi)$ with respect to $P(\pi \mid x, \theta_t)$:

$E_k(b) = \sum_\pi P(\pi \mid x, \theta_t)\, E_k(b, \pi)$ and $A_{kl} = \sum_\pi P(\pi \mid x, \theta_t)\, A_{kl}(\pi)$

Therefore,

$Q(\theta \mid \theta_t) = \sum_{k=1}^M \sum_b E_k(b) \log e_k(b) + \sum_{k=0}^M \sum_{l=1}^M A_{kl} \log a_{kl}$

To maximise, let us look first at the $A$ term. The difference between this term for

$a^0_{kl} = \frac{A_{kl}}{\sum_{l'} A_{kl'}}$

and for any other $a_{kl}$ is

$\sum_{k=0}^M \sum_{l=1}^M A_{kl} \log \frac{a^0_{kl}}{a_{kl}} = \sum_{k=0}^M \left( \sum_l A_{kl} \right) \sum_{l=1}^M a^0_{kl} \log \frac{a^0_{kl}}{a_{kl}}$

The last sum is a relative entropy, and thus it is larger than 0 unless $a_{kl} = a^0_{kl}$. This proves that the maximum is at $a^0_{kl}$.

Exactly the same procedure can be used for the $E$ term.

(41)

For the HMM, the E-step of the EM algorithm consists of calculating the expectations $A_{kl}$ and $E_k(b)$. This is done by using the Forward and Backward probabilities. This completely determines the $Q$ function, and the maximum is expressed directly in terms of these numbers.

Therefore, the M-step just consists of plugging Akl and Ek(b) into the re-estimation formulae for akl and ek(b). (See formulae (3.18) in the R. Durbin et al. BSA book.)

(42)

4 HMM extensions

• Null (epsilon) emissions

• Initialization of parameters: improves the chances of reaching the global optimum

• Parameter tying: helps cope with data sparseness

• Linear interpolation of HMMs

• Variable-Memory HMMs

• Acquiring HMM topologies from data

(43)

5 Some applications of HMMs

◦ Speech Recognition

• Text Processing: Part Of Speech Tagging

• Probabilistic Information Retrieval

◦ Bioinformatics: genetic sequence analysis

(44)

5.1 Part Of Speech (POS) Tagging

Sample POS tags for the Brown/Penn corpora:

AT      article
BEZ     is
IN      preposition
JJ      adjective
JJR     adjective: comparative
MD      modal
NN      noun: singular or mass
NNP     noun: singular proper
PERIOD  . : ? !
PN      personal pronoun
RB      adverb
RBR     adverb: comparative
TO      to
VB      verb: base form
VBD     verb: past tense
VBG     verb: present participle, gerund
VBN     verb: past participle
VBP     verb: non-3rd singular present
VBZ     verb: 3rd singular present
WDT     wh-determiner (what, which)

(45)

POS Tagging: Methods

[Charniak, 1993]          frequency-based: 90% accuracy, now considered baseline performance
[Schmid, 1994]            decision lists; artificial neural networks
[Brill, 1995]             transformation-based learning
[Brants, 1998]            Hidden Markov Models
[Chelba & Jelinek, 1998]  lexicalized probabilistic parsing (the best!)

(46)

A fragment of a HMM for POS tagging

(from [Charniak, 1997])

[Figure: a three-state HMM fragment with states det, adj, noun and π_det = 1. Emission probabilities: in det, P(a) = 0.245, P(the) = 0.586; in adj, P(large) = 0.004, P(small) = 0.005; in noun, P(house) = 0.001, P(stock) = 0.001. Transition probabilities shown: 0.218, 0.45, 0.475, 0.016.]

(47)

Using HMMs for POS tagging

$\operatorname{argmax}_{t_{1..n}} P(t_{1..n} \mid w_{1..n}) = \operatorname{argmax}_{t_{1..n}} \frac{P(w_{1..n} \mid t_{1..n})\, P(t_{1..n})}{P(w_{1..n})} = \operatorname{argmax}_{t_{1..n}} P(w_{1..n} \mid t_{1..n})\, P(t_{1..n})$

and, using the two Markov assumptions,

$= \operatorname{argmax}_{t_{1..n}} \prod_{i=1}^n P(w_i \mid t_i)\, P(t_i \mid t_{i-1})$

Supervised POS tagging, with MLE estimations:

$P(w \mid t) = \frac{C(w, t)}{C(t)}, \qquad P(t'' \mid t') = \frac{C(t', t'')}{C(t')}$
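These MLE counts are a few lines of code; a sketch on a hypothetical two-sentence tagged corpus:

```python
from collections import Counter

def mle_estimates(tagged_sentences):
    """P(w|t) = C(w,t)/C(t) and P(t''|t') = C(t',t'')/C(t') from tagged data."""
    c_wt, c_t, c_tt = Counter(), Counter(), Counter()
    for sent in tagged_sentences:
        tags = [t for _, t in sent]
        for w, t in sent:
            c_wt[(w, t)] += 1
            c_t[t] += 1
        for t1, t2 in zip(tags, tags[1:]):      # tag bigrams C(t', t'')
            c_tt[(t1, t2)] += 1
    p_wt = {wt: c / c_t[wt[1]] for wt, c in c_wt.items()}
    p_tt = {tt: c / c_t[tt[0]] for tt, c in c_tt.items()}
    return p_wt, p_tt

corpus = [[("the", "AT"), ("house", "NN")], [("a", "AT"), ("stock", "NN")]]
p_wt, p_tt = mle_estimates(corpus)
print(p_wt[("the", "AT")], p_tt[("AT", "NN")])   # 0.5 1.0
```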

(48)

The Treatment of Unknown Words:

• use an a priori uniform distribution over all tags: error rate 40% ⇒ 20%

• feature-based estimation [Weischedel et al., 1993]:

$P(w \mid t) = \frac{1}{Z}\, P(\text{unknown word} \mid t)\, P(\text{Capitalized} \mid t)\, P(\text{Ending} \mid t)$

• using both roots and suffixes [Charniak, 1993]

Smoothing:

$P(t \mid w) = \frac{C(t, w) + 1}{C(w) + k_w}$ [Church, 1988], where $k_w$ is the number of possible tags for $w$

$P(t'' \mid t') = (1 - \epsilon)\, \frac{C(t', t'')}{C(t')} + \epsilon$ [Charniak et al., 1993]
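A sketch of the two smoothed estimators (the count values in the usage line are hypothetical):

```python
def p_tag_given_word(c_tw, c_w, k_w):
    """[Church, 1988]: P(t|w) = (C(t,w) + 1) / (C(w) + k_w)."""
    return (c_tw + 1) / (c_w + k_w)

def p_tag_given_tag(c_t1t2, c_t1, eps=1e-3):
    """[Charniak et al., 1993]: P(t''|t') = (1 - eps) C(t',t'')/C(t') + eps."""
    return (1 - eps) * c_t1t2 / c_t1 + eps

print(p_tag_given_word(c_tw=3, c_w=10, k_w=2))   # (3+1)/(10+2) = 0.333...
```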

(49)

Fine-tuning HMMs for POS tagging

See [ Brants, 1998 ]

(50)

5.2 The Google PageRank Algorithm

A Markov chain worth no. 5 on the Forbes list!

(2 × 18.5 billion USD, as of November 2007)

(51)

“Sergey Brin and Lawrence Page introduced Google in 1998, a time when the pace at which the web was growing began to outstrip the ability of current search engines to yield usable results.

In developing Google, they wanted to improve the design of search engines by moving it into a more open, academic environment.

In addition, they felt that the usage of statistics for their search engine would provide an interesting data set for research.”

From David Austin, “How Google finds your needle in the web’s haystack”, Monthly Essays on Mathematical Topics, 2006.

(52)

Notations

Let n = the number of pages on the Internet, and let H and A be two n × n matrices defined by

$h_{ij} = \begin{cases} 1/l_j & \text{if page } j \text{ points to page } i \text{ (notation: } P_j \in B_i) \\ 0 & \text{otherwise} \end{cases}$

where $l_j$ is the number of links going out of page $j$;

$a_{ij} = \begin{cases} 1/n & \text{if page } j \text{ contains no outgoing links} \\ 0 & \text{otherwise} \end{cases}$

$\alpha \in [0, 1]$ (a parameter that was initially set to 0.85).

The transition matrix of the Google Markov chain is

$G = \alpha (H + A) + \frac{1 - \alpha}{n} \cdot \mathbf{1}$

where $\mathbf{1}$ is the n × n matrix whose entries are all 1.

(53)

The significance of G is derived from:

• the Random Surfer model

• the definition of the (relative) importance of a page, combining the votes from the pages that point to it:

$I(P_i) = \sum_{P_j \in B_i} \frac{I(P_j)}{l_j}$

where $l_j$ is the number of links pointing out from $P_j$.

(54)

The PageRank algorithm

[Brin & Page, 1998]

G is a stochastic matrix ($g_{ij} \in [0, 1]$, $\sum_{i=1}^n g_{ij} = 1$), therefore $\lambda_1$, the greatest eigenvalue of G, is 1, and G has a stationary vector I (i.e., GI = I).

G is also primitive ($|\lambda_2| < 1$, where $\lambda_2$ is the second eigenvalue of G) and irreducible (I > 0).

From matrix calculus it follows that I can be computed using the power method:

if $I_1 = G I_0$, $I_2 = G I_1$, ..., $I_k = G I_{k-1}$, then $I_k \to I$.

I gives the relative importance of pages.
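A sketch of the whole pipeline on a hypothetical three-page web, following the definition of G above:

```python
import numpy as np

n = 3
# Column j of H distributes page j's vote over its l_j outgoing links;
# page 0 links to page 1, page 1 links to pages 0 and 2.
H = np.array([[0.0, 0.5, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
# Page 2 has no outgoing links, so A patches its column with 1/n.
A = np.array([[0.0, 0.0, 1/3],
              [0.0, 0.0, 1/3],
              [0.0, 0.0, 1/3]])
alpha = 0.85
G = alpha * (H + A) + (1 - alpha) / n * np.ones((n, n))

I = np.full(n, 1 / n)          # I_0: uniform start
for _ in range(100):           # power method: I_k = G I_{k-1} -> I
    I = G @ I
print(I)                       # relative importance of the three pages
```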

(55)

Suggested readings

“Using Google’s PageRank algorithm to identify important attributes of genes”, G.M. Osmani, S.M. Rahman, 2006

(56)

ADDENDA

Formalisation of HMM algorithms in

“Biological Sequence Analysis” [ Durbin et al, 1998 ]

Note

A begin state was introduced. The transition probability $a_{0k}$ from this begin state to state $k$ can be thought of as the probability of starting in state $k$.

An end state is also assumed, which is the reason for the $a_{k0}$ factor in the termination step. If ends are not modelled, this $a_{k0}$ disappears.

For convenience, we label both the begin and the end state as 0. There is no conflict, because you can only transit out of the begin state and only into the end state, so the variables are not used more than once.

The emission probabilities are considered independent of the origin state. (Thus the emission of (pairs of) symbols can be seen as being done when reaching the non-end states.) The begin and end states are silent.

(57)

Forward:

1. Initialization (i = 0): $f_0(0) = 1$; $f_k(0) = 0$, for $k > 0$
2. Induction (i = 1 ... L): $f_l(i) = e_l(x_i) \sum_k f_k(i-1)\, a_{kl}$
3. Total: $P(x) = \sum_k f_k(L)\, a_{k0}$

Backward:

1. Initialization (i = L): $b_k(L) = a_{k0}$, for all $k$
2. Induction (i = L−1, ..., 1): $b_k(i) = \sum_l a_{kl}\, e_l(x_{i+1})\, b_l(i+1)$
3. Total: $P(x) = \sum_l a_{0l}\, e_l(x_1)\, b_l(1)$

Combining f and b: $P(\pi_i = k, x) = f_k(i)\, b_k(i)$

(58)

Viterbi:

1. Initialization (i = 0): $v_0(0) = 1$; $v_k(0) = 0$, for $k > 0$
2. Induction (i = 1 ... L):
$v_l(i) = e_l(x_i) \max_k (v_k(i-1)\, a_{kl})$;
$ptr_i(l) = \operatorname{argmax}_k (v_k(i-1)\, a_{kl})$
3. Termination and readout of the best path:
$P(x, \pi^*) = \max_k (v_k(L)\, a_{k0})$;
$\pi^*_L = \operatorname{argmax}_k (v_k(L)\, a_{k0})$, and $\pi^*_{i-1} = ptr_i(\pi^*_i)$, for $i = L \ldots 1$.

(59)

Baum-Welch:

1. Initialization: pick arbitrary model parameters.

2. Induction:

For each sequence $j = 1 \ldots n$, calculate $f_k^j(i)$ and $b_k^j(i)$ for sequence $j$ using the Forward and, respectively, the Backward algorithm.

Calculate the expected number of times each transition or emission is used, given the training sequences:

$A_{kl} = \sum_j \frac{1}{P(x^j)} \sum_i f_k^j(i)\, a_{kl}\, e_l(x^j_{i+1})\, b_l^j(i+1)$

$E_k(b) = \sum_j \frac{1}{P(x^j)} \sum_{\{i \mid x^j_i = b\}} f_k^j(i)\, b_k^j(i)$

Calculate the new model parameters:

$a_{kl} = \frac{A_{kl}}{\sum_{l'} A_{kl'}}$ and $e_k(b) = \frac{E_k(b)}{\sum_{b'} E_k(b')}$

Calculate the new log likelihood of the model.

3. Termination: stop if the change in log likelihood is less than some predefined threshold or if the maximum number of iterations is exceeded.
