Current Mood: chilly
Blogs I Commented On:
Rant:
I was always psyched going to McDonald's as a kid, thinking it would be so awesome to eat there every day. Now I have fulfilled that silly dream, but it's just not as cool anymore since that's the place where I do most of my studying. At least I can take comfort in epic unlimited drink refills to keep me hydrated. Beat that, library!
Speaking of McDonald's, have any of you swung by the place during lunch time on the weekdays? It sometimes feels like I'm back in Asia, with the amount of Chinese and Korean I hear around the place at that time. McDonald's on University Drive: the Little Asia of College Station.
Summary:
Hidden Markov Models (HMMs) are defined as a doubly stochastic process: an underlying stochastic process that is not observable, and that can only be observed through another set of stochastic processes which produce the sequence of observed symbols. Elements of HMMs consist of the following:
1. There are a finite number, say N, of states in the model.
2. At each clock time, t, a new state is entered based upon a transition probability distribution, which depends on the previous state.
3. After each transition is made, an observation symbol is produced according to a probability distribution which depends on the current state.
The “Urn and Ball” model illustrates a concrete example of an HMM in action (a quick simulation sketch follows the list). In this model, there are:
* N urns, each filled with a large number of colored balls
* M possible colors for each ball
* an observation sequence:
> choose one of N urns (according to an initial probability distribution)
> select ball from initial urn
> record color from ball
> choose a new urn according to the transition probability distribution of the current urn
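To make the urn-and-ball picture concrete, here is a minimal simulation sketch in Python. The three urns, three colors, and all of the probabilities below are made-up placeholders for illustration, not values from the paper.

```python
import random

colors = ["red", "green", "blue"]           # M = 3 possible ball colors

# Made-up probabilities, purely for illustration (N = 3 urns).
initial    = [0.5, 0.3, 0.2]                # Pr(start at urn i)
transition = [[0.6, 0.3, 0.1],              # Pr(next urn j | current urn i)
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]]
emission   = [[0.7, 0.2, 0.1],              # Pr(ball color k | urn j)
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]]

def pick(dist):
    """Sample an index according to a discrete probability distribution."""
    return random.choices(range(len(dist)), weights=dist)[0]

urn = pick(initial)                         # choose one of the N urns
observed = []
for _ in range(10):                         # record 10 ball colors
    observed.append(colors[pick(emission[urn])])   # select a ball, note its color
    urn = pick(transition[urn])             # move to a new urn
print(observed)                             # only the colors are visible to an observer
```

The point is that an outsider only ever sees the color sequence; which urn produced each ball stays hidden, which is exactly the “hidden” part of the model.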
A formal notation of a discrete observation HMM consists of the following:
* T = length of observation sequence (total number of clock times)
* N = number of states (urns) in the model
* M = number of observation symbols (colors)
* Q = {q_1, q_2, … , q_N}, states (urns)
* V = {v_1, v_2, … , v_M}, discrete set of possible symbol observations (colors)
* A = {a_i,j}, a_i,j = Pr(q_j at t+1 | q_i at t), state transition probability distribution
* B = {b_j(k)}, b_j(k) = Pr(v_k at t | q_j at t), observation symbol probability distribution in state j
* pi = {pi_i}, pi_i = Pr(q_i at t=1), initial state distribution
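In code, lambda = (A, B, pi) is just three tables that follow this notation. The toy numbers below (N = 2 states, M = 3 symbols) are invented for illustration:

```python
N, M = 2, 3                      # number of states (urns) and observation symbols (colors)

A  = [[0.7, 0.3],                # a_i,j  = Pr(q_j at t+1 | q_i at t)
      [0.4, 0.6]]
B  = [[0.5, 0.4, 0.1],           # b_j(k) = Pr(v_k at t | q_j at t)
      [0.1, 0.3, 0.6]]
pi = [0.6, 0.4]                  # pi_i   = Pr(q_i at t=1)

# Each row of A and B, and pi itself, is a probability distribution,
# so every one of them should sum to 1.
for dist in A + B + [pi]:
    assert abs(sum(dist) - 1.0) < 1e-9
```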
For an observation sequence, O = O_1 O_2 … O_T, the generation procedure is as follows (a code sketch follows the steps):
1. Choose an initial state, i_1, according to the initial state distribution, pi, and set t = 1.
2. Choose O_t according to b_i_t(k), the symbol probability distribution in state i_t.
3. Choose i_t+1 according to {a_i_t,i_t+1}, i_t+1 = 1,2,…,N, the state transition probability distribution for state i_t.
4. Set t = t+1; return to step 2 if t < T; otherwise, terminate the procedure.
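These steps translate almost line-for-line into a sampling routine. This sketch reuses the toy A, B, pi from the block above and uses 0-based indices for states and symbols:

```python
import random

def generate(A, B, pi, T):
    """Generate an observation sequence O_1 ... O_T from lambda = (A, B, pi)."""
    pick = lambda dist: random.choices(range(len(dist)), weights=dist)[0]
    state = pick(pi)                   # step 1: initial state drawn from pi (t = 1)
    obs = []
    for _ in range(T):
        obs.append(pick(B[state]))     # step 2: emit O_t from b_{i_t}(k)
        state = pick(A[state])         # step 3: next state from a_{i_t, i_t+1}
    return obs                         # step 4: stop once t reaches T

print(generate(A, B, pi, T=5))         # e.g. [0, 2, 1, 1, 0]
```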
HMMs are represented by the symbol lambda = (A, B, pi), and are specified by a choice of the number of states N, the number of discrete symbols M, and the values of A, B, and pi. Three problems for HMMs are:
1. Evaluation Problem – Given a model and a sequence of observations, how do we “score” or evaluate the model? (A sketch of one solution follows this list.)
2. Estimation Problem – How do we uncover the hidden part of the model (i.e., the state sequence)?
3. Training Problem – How do we optimize the model parameters to best describe how an observed sequence came about?
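To give a feel for problem 1, here is a sketch of the forward algorithm, the standard dynamic-programming way to score a model. It assumes the list-of-lists A, B, pi layout from the earlier blocks and 0-based observation symbols.

```python
def forward_probability(A, B, pi, obs):
    """Compute Pr(O | lambda) by summing over all hidden state sequences."""
    N = len(pi)
    # alpha[i] = Pr(O_1 ... O_t, state i at time t | lambda)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][obs[t]]
                 for j in range(N)]
    return sum(alpha)

print(forward_probability(A, B, pi, obs=[0, 2, 1]))   # score the model on a short sequence
```

This runs in O(N^2 * T) time instead of enumerating all N^T possible state sequences, which is what makes evaluation tractable.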
Discussion:
This is a somewhat decent paper to suggest for introducing HMMs. I really liked the “Urn and Ball” example as a simple and concrete way to describe an HMM structure. On the other hand, the coin examples used to illustrate the HMM execution could have used some more clarification. I would just skim the first half of the paper to get a feel for HMMs, and then focus on the second half to get a good understanding of how to begin implementing one. That’s assuming anyone can even make out what it says…