Derive maximum likelihood hidden state sequence using Viterbi algorithm
Given read counts and HMM parameters (optimized by nbh_em), derive the sequence of hidden states that maximizes the joint likelihood of observed and latent data.
nbh_vit(count, TRANS, alpha, beta)
count: A vector of integers, conceptually representing the read counts within bins of a chromosome.
TRANS: Optimized transition probability matrix, a square matrix of probabilities (0 ≤ p ≤ 1) with dimension equal to the length of alpha and beta, and with each row summing to 1 (within a numerical tolerance of 1e-6).
alpha: Optimized shape parameter of the NB, a vector of positive values with length equal to that of beta and to the dimension of TRANS.
beta: Optimized inverse scale parameter of the NB, a vector of positive values with length equal to that of alpha and to the dimension of TRANS.
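To make these argument constraints concrete, here is a minimal toy call. It is a hedged sketch: the parameter values are illustrative only, and the package providing nbh_vit is assumed to be loaded.

    set.seed(1)
    count <- rnbinom(200, size = 2, mu = 5)   # toy bin counts
    TRANS <- matrix(c(0.9, 0.1,
                      0.2, 0.8), nrow = 2, byrow = TRUE)   # each row sums to 1
    alpha <- c(2, 10)     # NB shape parameter per state (illustrative)
    beta  <- c(0.5, 0.5)  # NB inverse scale parameter per state (illustrative)
    res <- nbh_vit(count, TRANS, alpha, beta)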
Given a K-state HMM with NB emission (NBH), the goal is to find the latent states corresponding to the observed data that maximize the joint likelihood ln p(X, Z) = ln p(x_1, …, x_N, z_1, …, z_N). The optimal solution is obtained via the Viterbi algorithm, which belongs to the more general framework of dynamic programming.
Briefly, starting from the second node of the Markov chain, for every state of z_2 we select the state of z_1 that maximizes ln p(x_1, x_2, z_1, z_2), and record that choice. We then move on to the next node, and the next, until reaching the last node. At the last node, we choose the state that leads to the maximum ln p(X, Z). Finally, we backtrack through the recorded choices at all of the intermediate nodes to form the final solution.
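For readers who want the recursion spelled out, below is a minimal self-contained sketch of this procedure in R. It is not the package implementation: the uniform initial state distribution and the Gamma-Poisson NB parameterization (shape alpha, inverse scale beta) are assumptions.

    # Minimal Viterbi sketch for a K-state NB-emission HMM.
    # Assumptions (not from this page): uniform initial state probabilities;
    # NB parameterized so that dnbinom(size = alpha, prob = beta / (1 + beta)).
    viterbi_nb <- function(count, TRANS, alpha, beta) {
      K <- nrow(TRANS); N <- length(count)
      # N x K matrix of log emission probabilities
      logEmis <- sapply(seq_len(K), function(k)
        dnbinom(count, size = alpha[k], prob = beta[k] / (1 + beta[k]), log = TRUE))
      omega <- matrix(-Inf, N, K)  # omega[n, k]: best log joint prob ending in state k
      back  <- matrix(0L, N, K)    # back-pointers recording the best predecessor
      omega[1, ] <- log(1 / K) + logEmis[1, ]
      for (n in 2:N) {
        for (k in seq_len(K)) {
          cand <- omega[n - 1, ] + log(TRANS[, k])  # reach state k from each state
          back[n, k]  <- which.max(cand)
          omega[n, k] <- max(cand) + logEmis[n, k]
        }
      }
      z <- integer(N)              # backtrack from the best final state
      z[N] <- which.max(omega[N, ])
      for (n in (N - 1):1) z[n] <- back[n + 1, z[n + 1]]
      list(class = z, logl = max(omega[N, ]))
    }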
A list containing:
ML sequence of latent states
Log-likelihood corresponding to the latent state sequence (class)
The function is expected to run after learning the model parameters of the HMM using nbh_em and (optionally) disambiguating the multihits using nbm_em. However, nothing prevents the user from running it with a random set of HMM parameters. Also, note that the Viterbi algorithm finds the most probable sequence of states, which is not the same as maximizing the posterior probabilities of the individual latent variables. For instance, an observed data point may be classified as coming from state 2 in the most probable chain even though its marginal posterior probability for state 2 is low.
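A hedged sketch of that intended workflow, reusing the toy inputs above as EM initializations; the nbh_em argument order and the names of its returned components (fit$TRANS, fit$alpha, fit$beta) are assumptions for illustration:

    fit <- nbh_em(count, TRANS, alpha, beta)              # learn HMM parameters
    dec <- nbh_vit(count, fit$TRANS, fit$alpha, fit$beta) # decode
    dec$class   # most probable (Viterbi) state sequence
    dec$logl    # log joint likelihood of that sequence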
Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286. doi:10.1109/5.18626
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Information Science and Statistics, pp. 605-631. Springer Science.
Meng, X. L., & Rubin, D. B. (1993). Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika, 80(2), 267-278.
Fessler, J. A., & Hero, A. O. (1994). Space-alternating generalized expectation-maximization algorithm. IEEE Transactions on Signal Processing, 42(10), 2664-2677.