
(1)

Fachgebiet Datenbanksysteme und Informationsmanagement Technische Universität Berlin

An Introduction to Hidden Markov Models

Max Heimel

(2)

Agenda

Motivation

■ An Introduction to Hidden Markov Models

□ What is a Hidden Markov Model?

■ Algorithms, Algorithms, Algorithms

□ What are the main problems for HMMs?

□ What are the algorithms to solve them?

■ Hidden Markov Models for Apache Mahout

□ A short overview

■ Outlook

□ Hidden Markov Models and Map Reduce

□ Take-Home Messages

(3)

■ Pattern recognition: finding structure in sequences.

Motivation

(4)

■ Demonstrate how sequences can be modeled

□ Using so-called Markov chains.

■ Present the statistical tool of Hidden Markov Models

□ A tool to find the underlying process for a given sequence.

■ Give an understanding of the main problems associated with Hidden Markov Models

□ And the corresponding applications.

■ Present the Apache Mahout implementation of Hidden Markov Models

■ Give an outlook on implementing Hidden Markov Models for Map/Reduce

Goals of this talk

(5)

Agenda

■ Motivation

An Introduction to Hidden Markov Models

What is a Hidden Markov Model?

■ Applications for Hidden Markov Models

□ What are the main problems for HMMs?

□ What are the algorithms to solve them?

■ Hidden Markov Models for Apache Mahout

□ A short overview

■ Outlook

□ Hidden Markov Models and Map Reduce

□ Take-Home Messages

Contains Math

(6)

■ Markov Chains model sequential processes.

■ Consider a discrete random variable q with states {h_1, …, h_n}.

■ State of q changes randomly in discrete time steps.

■ Transition probability depends only on the k previous states.

□ Markov Property

Markov Chains I

[Figure: a Markov chain over time: states q1, q2, q3 taking values h_i, h_j, h_k; each random transition's probability depends only on the prior states.]

(7)

■ The simplest Markov chain:

□ Transition probability depends only on the previous state (i.e. k=1): P(q_t | q_{t-1}, …, q_1) = P(q_t | q_{t-1}).

□ Transition probability is time invariant: P(q_t = h_j | q_{t-1} = h_i) is the same for every t.

■ In this case, the Markov chain is defined by:

□ An n × n matrix T containing state change probabilities: T_{ij} = P(q_t = h_j | q_{t-1} = h_i).

□ An n-dimensional vector π containing initial state probabilities: π_i = P(q_1 = h_i).

□ Since T and π contain probabilities, they have to be normalized: Σ_j T_{ij} = 1 for every i, and Σ_i π_i = 1.

Markov Chains II

(8)

■ Markov Chains are used to model sequences of states.

■ Consider the weather:

□ Each day can either be rainy or sunny.

□ If a day is rainy, there is a 60% chance the next day will also be rainy.

□ If a day is sunny, there is a 80% chance the next day will also be sunny.

□ We can now model the weather as a Markov chain (see the simulation sketch after this slide).

■ Examples of the use of Markov chains are:

□ Google Page Rank

□ Random Number Generation, Random Text Generation

□ Queuing Theory

□ Modeling DNA sequences

□ Physical processes from thermodynamics & statistical mechanics

Markov Chains III
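
As a small illustration, the weather chain above can be simulated in a few lines of plain Java; the class name, random seed and starting state are illustrative choices, while the transition probabilities are the ones from the weather example:

    import java.util.Random;

    public class WeatherChain {
        // State 0 = rainy, state 1 = sunny.
        private static final double[][] T = {
            {0.6, 0.4},   // from rainy: 60% rainy again, 40% sunny
            {0.2, 0.8}    // from sunny: 20% rainy, 80% sunny again
        };

        public static void main(String[] args) {
            Random rng = new Random(42);
            int state = 1;                       // start on a sunny day (arbitrary choice)
            for (int day = 0; day < 10; day++) {
                System.out.println("Day " + day + ": " + (state == 0 ? "rainy" : "sunny"));
                // The next state depends only on the current one (Markov property).
                state = rng.nextDouble() < T[state][0] ? 0 : 1;
            }
        }
    }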

(9)

■ Now consider a "hidden" Markov chain:

□ We cannot directly observe the states of the hidden Markov chain.

□ However: we can observe effects of the "hidden states".

■ Hidden Markov Models (HMMs) are used to model such a situation:

□ Consider a Markov chain q and a random (not necessarily discrete) variable p.

□ The state of p is chosen randomly, based only on the current state of q.

Hidden Markov Models I

[Figure: a Hidden Markov Model: the hidden variable q moves through states q1, q2, q3 (values h_i, h_j, h_k), and each hidden state emits an observation p1, p2, p3 of the observed variable p.]

(10)

■ The “simplest” HMM has a discrete observable variable p

□ The states of p are taken from a discrete set {o_1, …, o_m}.

■ In this case, the HMM is defined by the following parameters:

□ The matrix T and vector π of the underlying Markov chain.

□ An n × m matrix O containing output probabilities: O_{ij} = P(p_t = o_j | q_t = h_i).

□ Again, O needs to be normalized: Σ_j O_{ij} = 1 for every i.

■ Consider a prisoner in solitary confinement:

□ The prisoner cannot directly observe the weather.

□ However: he can observe the condition of the boots of the prison guards.

Hidden Markov Models II

[Figure: the weather HMM: hidden states rainy and sunny with transition probabilities 0.6, 0.4, 0.2, 0.8; observed states dirty boots and clean boots with emission probabilities 0.2, 0.8, 0.9, 0.1.]
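
To make the generative view concrete, here is a small plain-Java sketch of the weather/boots HMM; pairing rainy with dirty boots (0.8/0.2) and sunny with clean boots (0.9/0.1), as well as the uniform initial distribution, are assumptions consistent with the figure rather than taken verbatim from it:

    import java.util.Random;

    public class WeatherBootsHmm {
        // Hidden states: 0 = rainy, 1 = sunny. Observations: 0 = dirty boots, 1 = clean boots.
        static final double[][] T  = {{0.6, 0.4}, {0.2, 0.8}};   // hidden state transitions
        static final double[][] O  = {{0.8, 0.2}, {0.1, 0.9}};   // emission probabilities (assumed pairing)
        static final double[]   PI = {0.5, 0.5};                 // initial distribution (assumed uniform)

        // Draw an index from a discrete probability distribution.
        static int sample(double[] dist, Random rng) {
            double r = rng.nextDouble(), cum = 0;
            for (int i = 0; i < dist.length; i++) { cum += dist[i]; if (r < cum) return i; }
            return dist.length - 1;
        }

        public static void main(String[] args) {
            Random rng = new Random(7);
            int state = sample(PI, rng);
            for (int t = 0; t < 5; t++) {
                int obs = sample(O[state], rng);           // emit based on the current hidden state only
                System.out.println("hidden=" + (state == 0 ? "rainy" : "sunny")
                        + "  observed=" + (obs == 0 ? "dirty boots" : "clean boots"));
                state = sample(T[state], rng);             // move to the next hidden state
            }
        }
    }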

(11)

Agenda

■ Motivation

■ An Introduction to Hidden Markov Models

□ What is a Hidden Markov Model?

Applications for Hidden Markov Models

What are the main problems for HMMs?

What are the algorithms to solve them?

■ Hidden Markov Models for Apache Mahout

□ A short overview

■ Outlook

□ Hidden Markov Models and Map Reduce

□ Take-Home Messages

(12)

Problems for Hidden Markov Models

Problem                  | Given                        | Wanted
-------------------------|------------------------------|----------------------------------------------------------------
Evaluation               | Model, observed sequence     | Likelihood that the model produced the observed sequence
Decoding                 | Model, observed sequence     | Most likely hidden sequence
Learning (unsupervised)  | Observed sequence            | Most likely model that produced the observed sequence
Learning (supervised)    | Observed & hidden sequence   | Most likely model that produced the observed & hidden sequence

[Diagram: the model generates a hidden sequence, which in turn generates the observed sequence.]

(13)

■ Compute the likelihood that a given model M produced a given observation sequence O.

■ The likelihood can be efficiently calculated using dynamic programming:

□ Forward algorithm:

Reproduce the observation through the HMM, computing α_t(i) = P(o_1, …, o_t, q_t = h_i | M).

□ Backward algorithm:

Backtrace the observation through the HMM, computing β_t(i) = P(o_{t+1}, …, o_T | q_t = h_i, M).

■ Typical application:

□ Selecting the most likely out of several competing models.

□ Customer behavior modeling: select the most likely customer profile

□ Physics: select the most likely thermodynamics process

Evaluation
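
A minimal plain-Java sketch of the Forward algorithm described above (not the Mahout implementation, and without the log-scaling a production version would need):

    public class ForwardAlgorithm {
        // alpha[t][i] = P(o_1..o_t, q_t = i | M); the likelihood is the sum over the last row.
        static double likelihood(double[][] T, double[][] O, double[] pi, int[] obs) {
            int n = pi.length;
            double[][] alpha = new double[obs.length][n];
            for (int i = 0; i < n; i++) {
                alpha[0][i] = pi[i] * O[i][obs[0]];                  // initialization
            }
            for (int t = 1; t < obs.length; t++) {
                for (int j = 0; j < n; j++) {
                    double sum = 0;
                    for (int i = 0; i < n; i++) {
                        sum += alpha[t - 1][i] * T[i][j];            // recursion over all predecessors
                    }
                    alpha[t][j] = sum * O[j][obs[t]];
                }
            }
            double result = 0;
            for (int i = 0; i < n; i++) result += alpha[obs.length - 1][i];
            return result;
        }

        public static void main(String[] args) {
            double[][] T = {{0.6, 0.4}, {0.2, 0.8}};
            double[][] O = {{0.8, 0.2}, {0.1, 0.9}};
            double[] pi = {0.5, 0.5};
            // Likelihood of observing: dirty, dirty, clean boots.
            System.out.println(likelihood(T, O, pi, new int[] {0, 0, 1}));
        }
    }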

(14)

■ Compute the most likely sequence of hidden states for a given model M and a given observation sequence O.

■ The most likely hidden path can be computed efficiently using the Viterbi algorithm

□ The Viterbi algorithm is based on the Forward Algorithm.

□ It traces the most likely hidden states while reproducing the output sequence.

■ Typical Applications:

□ POS tagging (observed: sentence, hidden: part-of-speech tags)

□ Speech recognition (observed: frequencies, hidden: phonemes)

□ Handwritten letter recognition (observed: pen patterns, hidden: letters)

□ Genome Analysis (observed: genome sequence, hidden: “structures”)

Decoding
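
A minimal plain-Java sketch of the Viterbi algorithm (again not the Mahout implementation; it keeps the best predecessor of every state and backtracks at the end):

    public class ViterbiAlgorithm {
        // delta[t][j]: probability of the best path ending in state j at time t; psi stores backpointers.
        static int[] decode(double[][] T, double[][] O, double[] pi, int[] obs) {
            int n = pi.length, len = obs.length;
            double[][] delta = new double[len][n];
            int[][] psi = new int[len][n];
            for (int i = 0; i < n; i++) delta[0][i] = pi[i] * O[i][obs[0]];
            for (int t = 1; t < len; t++) {
                for (int j = 0; j < n; j++) {
                    int best = 0;
                    for (int i = 1; i < n; i++) {
                        if (delta[t - 1][i] * T[i][j] > delta[t - 1][best] * T[best][j]) best = i;
                    }
                    delta[t][j] = delta[t - 1][best] * T[best][j] * O[j][obs[t]];
                    psi[t][j] = best;                                 // remember the best predecessor
                }
            }
            // Backtrack from the most likely final state.
            int[] path = new int[len];
            int last = 0;
            for (int i = 1; i < n; i++) if (delta[len - 1][i] > delta[len - 1][last]) last = i;
            path[len - 1] = last;
            for (int t = len - 1; t > 0; t--) path[t - 1] = psi[t][path[t]];
            return path;
        }

        public static void main(String[] args) {
            double[][] T = {{0.6, 0.4}, {0.2, 0.8}};
            double[][] O = {{0.8, 0.2}, {0.1, 0.9}};
            double[] pi = {0.5, 0.5};
            // Most likely weather (0 = rainy, 1 = sunny) for: dirty, dirty, clean boots.
            System.out.println(java.util.Arrays.toString(decode(T, O, pi, new int[] {0, 0, 1})));
        }
    }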

(15)

1. Supervised Learning

□ Given an observation sequence O and the corresponding hidden sequence H, compute the most likely model M that produces those sequences: M* = argmax_M P(O, H | M).

□ Solved using “instance counting”

Count the hidden state transitions and output state emissions.

Use the relative frequencies as estimates for the transition probabilities of M.

2. Unsupervised Learning

□ Given an observation sequence O, compute the most likely model M that produces this sequence: M* = argmax_M P(O | M).

□ Solved using the Baum-Welch algorithm

A rather expensive iterative algorithm, but with the EM guarantee of convergence to a local likelihood maximum.

Requires a Forward step and a Backward step through the model per iteration.

□ Alternative: Viterbi training

Not as expensive as Baum-Welch, but without the EM convergence guarantee.

Requires only a Forward step through the model per iteration.

Learning
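
A minimal plain-Java sketch of the "instance counting" idea for supervised learning; the pseudo-count argument mirrors the pseudoCount parameter that appears in the Mahout trainer later, and the emission matrix O would be estimated analogously from (hidden, observed) pairs:

    public class SupervisedTraining {
        // Estimate the transition matrix from an observed hidden-state sequence.
        static double[][] countTransitions(int numHidden, int[] hidden, double pseudoCount) {
            double[][] T = new double[numHidden][numHidden];
            for (double[] row : T) java.util.Arrays.fill(row, pseudoCount);  // avoid zero probabilities
            for (int t = 1; t < hidden.length; t++) {
                T[hidden[t - 1]][hidden[t]] += 1.0;                          // count observed state changes
            }
            for (double[] row : T) {                                         // normalize rows to probabilities
                double sum = 0;
                for (double v : row) sum += v;
                for (int j = 0; j < row.length; j++) row[j] /= sum;
            }
            return T;
        }

        public static void main(String[] args) {
            int[] hidden = {0, 0, 1, 1, 0};   // e.g. rainy, rainy, sunny, sunny, rainy
            System.out.println(java.util.Arrays.deepToString(countTransitions(2, hidden, 0.1)));
        }
    }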

(16)

Agenda

■ Motivation

■ An Introduction to Hidden Markov Models

□ What is a Hidden Markov Model?

■ Applications for Hidden Markov Models

□ What are the main problems for HMMs?

□ What are the algorithms to solve them?

Hidden Markov Models for Apache Mahout

A short overview

■ Outlook

□ Hidden Markov Models and Map Reduce

□ Take-Home Messages

(17)

■ Apache Mahout will contain an implementation of Hidden Markov Models in its upcoming 0.4 release.

□ The implementation is currently sequential (i.e. not Map/Reduce enabled).

□ The implementation covers Hidden Markov models with discrete output states.

■ The overall implementation structure is given by three main and two helper classes:

□ HmmModel: container for the HMM parameters.

□ HmmTrainer: methods to train an HmmModel from given observations.

□ HmmEvaluator: methods to analyze (evaluate / decode) a given HmmModel.

□ HmmUtils: helper methods, e.g. to validate and normalize an HmmModel.

□ HmmAlgorithms: helper methods containing implementations of the Forward, Backward and Viterbi algorithms.

Hidden Markov Models for Apache Mahout

(18)

■ HmmModel is the main class for defining a Hidden Markov Model.

□ It contains the transition matrix K, emission matrix O and initial probability vector π.

■ Construction from given Mahout matrices K, O and Mahout vector pi:

□ HmmModel model = new HmmModel(K, O, pi);

■ Construction of a random model with n hidden and m observable states:

□ HmmModel model = new HmmModel(n, m);

■ Offers serialization and deserialization to/from JSON:

□ HmmModel model = HmmModel.fromJson(String json);

□ String json = model.toJson();

HmmModel
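
A hedged usage sketch for the weather example; DenseMatrix/DenseVector from org.apache.mahout.math and the package of HmmModel are assumptions about the code layout, while the constructor and JSON calls are the ones listed on this slide:

    import org.apache.mahout.classifier.sequencelearning.hmm.HmmModel; // package assumed
    import org.apache.mahout.math.DenseMatrix;
    import org.apache.mahout.math.DenseVector;
    import org.apache.mahout.math.Matrix;
    import org.apache.mahout.math.Vector;

    public class BuildWeatherModel {
        public static void main(String[] args) {
            Matrix K  = new DenseMatrix(new double[][] {{0.6, 0.4}, {0.2, 0.8}}); // transition probabilities
            Matrix O  = new DenseMatrix(new double[][] {{0.8, 0.2}, {0.1, 0.9}}); // emission probabilities
            Vector pi = new DenseVector(new double[] {0.5, 0.5});                 // initial probabilities

            HmmModel model = new HmmModel(K, O, pi);      // construction from given matrices
            String json = model.toJson();                 // serialize the model ...
            HmmModel restored = HmmModel.fromJson(json);  // ... and restore it later
            System.out.println(json);
        }
    }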

(19)

■ Offers a collection of learning algorithms.

■ Supervised learning from hidden and observed state sequences:

□ HmmModel trainSupervised(int hiddenStates, int observedStates, int[] hiddenSequence, int[] observedSequence, double pseudoCount);

■ Unsupervised learning using the Viterbi algorithm:

□ HmmModel trainViterbi(HmmModel initialModel, int[] observedSequence, double pseudoCount, double epsilon, int maxIterations, boolean scaled);

■ Unsupervised learning using the Baum-Welch algorithm:

□ HmmModel trainBaumWelch(HmmModel initialModel, int[] observedSequence, double epsilon, int maxIterations, boolean scaled);

HmmTrainer

Notes: the pseudoCount parameter is used to avoid zero probabilities; the scaled flag selects the log-scaled implementation, which is slower but numerically more stable.
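
A hedged usage sketch of the trainer methods listed above; the toy sequences and the assumption that the methods are static are mine, the signatures are the ones on this slide:

    public class TrainingExample {
        public static void main(String[] args) {
            int[] hidden   = {0, 0, 1, 1, 0};   // e.g. rainy, rainy, sunny, sunny, rainy
            int[] observed = {0, 0, 1, 1, 0};   // e.g. dirty, dirty, clean, clean, dirty

            // Supervised: estimate a 2-state / 2-output model by instance counting.
            HmmModel supervised = HmmTrainer.trainSupervised(2, 2, hidden, observed, 0.1);

            // Unsupervised: refine a random initial model with Baum-Welch (log-scaled).
            HmmModel initial = new HmmModel(2, 2);
            HmmModel refined = HmmTrainer.trainBaumWelch(initial, observed, 1e-4, 100, true);
        }
    }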

(20)

■ Offers algorithms to evaluate an HmmModel

■ Generating a sequence of output states from the given model:

□ int[] predict(HmmModel model, int steps);

■ Computing the model likelihood for a given observation:

□ double modelLikelihood(HmmModel model, int[] observations, boolean scaled);

■ Compute most likely hidden path for given model and observation:

□ int[] decode(HmmModel model, int[] observations, boolean scaled);

HmmEvaluator

Note: the scaled flag selects the log-scaled implementation, which is slower but numerically more stable.
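
A hedged usage sketch of the evaluator methods; the random model and toy observation sequence are placeholders, the method calls are the ones listed on this slide (assumed static):

    public class EvaluationExample {
        public static void main(String[] args) {
            HmmModel model = new HmmModel(2, 2);                // random 2-state / 2-output model
            int[] observations = {0, 0, 1, 1, 0};               // e.g. boot observations

            int[] generated = HmmEvaluator.predict(model, 10);  // sample 10 output states
            double likelihood = HmmEvaluator.modelLikelihood(model, observations, true);
            int[] hiddenPath = HmmEvaluator.decode(model, observations, true);

            System.out.println("likelihood = " + likelihood);
        }
    }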

(21)

Agenda

■ Motivation

■ An Introduction to Hidden Markov Models

□ What is a Hidden Markov Model?

■ Applications for Hidden Markov Models

□ What are the main problems for HMMs?

□ What are the algorithms to solve them?

■ Hidden Markov Models for Apache Mahout

□ A short overview

Outlook

Hidden Markov Models and Map Reduce

Take-Home Messages

(22)

■ How can we make HMMs Map Reduce enabled?

■ Problem:

□ All the presented algorithms are highly sequential!

□ There is no easy way of parallelizing them.

■ However:

□ Hidden Markov Models are often compact (n, m not “too large”)

□ The most typical application on a trained HMM is decoding, which can be performed fairly efficiently on a single machine.

⇒ Trained HMMs can typically be efficiently used within Mappers/Reducers.

□ The most expensive – and data intensive – application on HMMs is training.

⇒ Main goal: parallelize HMM training.

□ Approaches to parallelizing learning:

For supervised learning: trivial, only need to count state changes (see the counting sketch after this slide).

For unsupervised learning: tricky, ongoing research:

» Merging of trained HMMs on subsequences

» Alternative representations allow training via parallelizable algorithms (e.g. SVD)

Outlook
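
To illustrate why supervised learning parallelizes trivially, here is a small sketch of chunk-wise transition counting followed by a merge of the count matrices; this is purely illustrative plain Java (no Hadoop API), and it ignores the transitions that span chunk boundaries:

    public class ParallelCounting {
        // "Map" step: count hidden-state transitions within one chunk of the sequence.
        static long[][] countChunk(int numHidden, int[] hiddenChunk) {
            long[][] counts = new long[numHidden][numHidden];
            for (int t = 1; t < hiddenChunk.length; t++) {
                counts[hiddenChunk[t - 1]][hiddenChunk[t]]++;
            }
            return counts;
        }

        // "Reduce" step: counts from different chunks can simply be added up.
        static long[][] merge(long[][] a, long[][] b) {
            for (int i = 0; i < a.length; i++) {
                for (int j = 0; j < a[i].length; j++) {
                    a[i][j] += b[i][j];
                }
            }
            return a;
        }

        public static void main(String[] args) {
            long[][] c1 = countChunk(2, new int[] {0, 0, 1, 1});
            long[][] c2 = countChunk(2, new int[] {1, 0, 0, 1});
            System.out.println(java.util.Arrays.deepToString(merge(c1, c2)));
        }
    }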

(23)

■ Hidden Markov Models (HMMs) are a statistical tool to model processes that produce observations based on a hidden state sequence:

□ HMMs consist of a discrete hidden variable that randomly and sequentially changes its state and a random observable variable.

□ Hidden state change probability depends only on the k prior hidden states.

A typical value for k is 1.

□ Probability of the observable variable depends only on current hidden state.

■ Three main problems for HMMs: evaluation, decoding and training:

□ Evaluation: Likelihood a given model generated a given observed sequence.

Forward Algorithm

□ Decoding: Most likely hidden sequence for a given observed sequence and model.

Viterbi Algorithm

□ Training: Most likely model that generated a given observed sequence (unsupervised) or a given observed and hidden sequence (supervised).

Baum-Welch Algorithm

Take-Home Messages I

(24)

■ HMMs can be applied whenever an underlying process generates sequential data:

□ Speech Recognition, Handwritten Letter Recognition, Part-of-speech tagging, Genome Analysis, Customer Behavior Analysis, Context aware Search, …

■ Mahout contains an HMM implementation in its upcoming 0.4 release.

□ Three main classes: HmmModel, HmmTrainer, HmmEvaluator

□ HmmModel is a container class for representing model parameters.

□ HmmTrainer contains implementations for the learning problem.

□ HmmEvaluator contains implementations for the evaluation and decoding problem.

■ Porting HMMs to Map/Reduce is non-trivial

□ Typically, HMMs can be used within a Mapper/Reducer.

□ Most data-intensive task for HMMs: training

□ Porting HMM training to Map/Reduce is ongoing research.

Take-Home Messages II

(25)

Thank you!

The end… my only friend, the end


SIMILAR DOCUMENTS

To reduce the ìrolitme of calculations in case of a big number of installations (the order of hund- reds) the numerical simulation method can be used in'which

An item-item approach models the preference of a user to an item based on ratings of similar items by the same user.. Nearest-neighbors methods enjoy considerable popularity due

Hull (1998) concludes that Makuva is rather an Austronesian language that is closely related to the Meher language spoken on the island of Kisar off the north coast of East Timor

The book opens with an introduction that provides a brief historical orientation to the background and methods of philosophical investigation and then discusses how the study

Motivation: Protein homology detection and sequence alignment are at the basis of protein structure prediction, function prediction and evolutionary analysis.. This work

The algorithm calculates prob, v_path, and v_prob where prob is the total probability of all paths from the start to the current state, v_path is the Viterbi path, and v_prob is the

Since endophytes behave like pathogens (except that endophytes do not elicit disease symptoms) by their activities such as stimulating the host plant to produce

Keywords: algebraic specification, integration, concurrent systems, object- oriented specification, hidden algebra, CCS, temporal logics, model checking..