

Viterbi Algorithm for Hidden Markov Model

Description

Provides methods for the generic function Viterbi, which predicts the most likely sequence of hidden Markov states given the observed dataset. There is currently no method for objects of class "mmpp".

Usage

## S3 method for class 'dthmm'
Viterbi(object, ...)
## S3 method for class 'mmglm0'
Viterbi(object, ...)
## S3 method for class 'mmglm1'
Viterbi(object, ...)
## S3 method for class 'mmglmlong1'
Viterbi(object, ...)

Arguments

object

an object with class "dthmm", "mmglm0", "mmglm1" or "mmglmlong1".

...

other arguments.

Details

The purpose of the Viterbi algorithm is to globally decode the underlying hidden Markov state at each time point. It does this by determining the sequence of states (k_1^*, ..., k_n^*) which maximises the joint probability of the hidden states given the entire observed process, i.e.

(k_1^*, ..., k_n^*) = argmax_{k_1, ..., k_n} Pr{ C_1=k_1, ..., C_n=k_n | X^{(n)}=x^{(n)} } .

The algorithm is taken from Zucchini (2005); however, we calculate sums of the logarithms of probabilities rather than products of probabilities, which lessens the chance of numerical underflow. Since the logarithmic function is monotonically increasing, the argmax is unchanged. Note that argmax can be evaluated with the R function which.max.
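
To make the recursion concrete, below is a standalone sketch of the log-space Viterbi recursion for a Poisson HMM (an illustration only, not the package's internal code; the function name viterbi_log and its argument layout are ours):

viterbi_log <- function(x, Pi, delta, lambda) {
    #  x: observed counts; Pi: m x m transition matrix;
    #  delta: initial distribution; lambda: Poisson mean per state
    n <- length(x)
    m <- nrow(Pi)
    logPi <- log(Pi)
    nu <- matrix(NA_real_, nrow=n, ncol=m)       #  nu[i,k]: max log prob of a path ending in state k
    back <- matrix(NA_integer_, nrow=n, ncol=m)  #  backpointers
    nu[1, ] <- log(delta) + dpois(x[1], lambda, log=TRUE)
    for (i in 2:n) {
        for (k in 1:m) {
            s <- nu[i-1, ] + logPi[, k]          #  sums of logs, not products
            back[i, k] <- which.max(s)
            nu[i, k] <- max(s) + dpois(x[i], lambda[k], log=TRUE)
        }
    }
    states <- integer(n)
    states[n] <- which.max(nu[n, ])
    for (i in (n-1):1) states[i] <- back[i+1, states[i+1]]
    states
}

For the Poisson model in the Examples below, viterbi_log(x$x, Pi, delta, lambda) should agree with Viterbi(x).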

Determining the a posteriori most probable state at time i is referred to as local decoding, i.e.

k_i^* = argmax_k Pr{ C_i=k | X^{(n)}=x^{(n)} } .

Note that the above probabilities are calculated by the function Estep and are contained in the component u of its output, where u[i,j] is the probability of being in state j at time i; hence k_i^* is simply which.max(u[i,]).
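
For example, the whole series can be locally decoded in one step (a sketch using Estep with the model objects from the Examples below):

u <- Estep(x$x, Pi, delta, "pois", list(lambda=lambda))$u
local.states <- apply(u, 1, which.max)    #  k_i^* for every time point i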

The code for the methods "dthmm", "mmglm0", "mmglm1" or "mmglmlong1" can be viewed by typing Viterbi.dthmm, Viterbi.mmglm0, Viterbi.mmglm1 or Viterbi.mmglmlong1, respectively, on the R command line.
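
Alternatively, the same method code can be retrieved without typing its full name, e.g. getS3method("Viterbi", "dthmm").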

Value

A vector of length n containing integers (1, ..., m) representing the most likely hidden Markov state at each time point of the chain.

References

Cited references are listed on the HiddenMarkov manual page.

Examples

Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
               1/3, 1/3, 1/3,   0,   0,
                 0, 1/3, 1/3, 1/3,   0,
                 0,   0, 1/3, 1/3, 1/3,
                 0,   0,   0, 1/2, 1/2),
             byrow=TRUE, nrow=5)
delta <- c(0, 1, 0, 0, 0)    #  Markov chain starts in state 2
lambda <- c(1, 4, 2, 5, 3)
m <- nrow(Pi)

x <- dthmm(NULL, Pi, delta, "pois", list(lambda=lambda), discrete=TRUE)
x <- simulate(x, nsim=2000)

#------  Global Decoding  ------

states <- Viterbi(x)
states <- factor(states, levels=1:m)

#  Compare predicted states with true states
#  p[j,k] = Pr{Viterbi predicts state k | true state is j}
p <- matrix(NA, nrow=m, ncol=m)
for (j in 1:m){
    a <- (x$y==j)
    p[j,] <- table(states[a])/sum(a)
}
print(p)

#------  Local Decoding  ------

#   locally decode at i=100

print(which.max(Estep(x$x, Pi, delta, "pois", list(lambda=lambda))$u[100,]))

#---------------------------------------------------
#   simulate a beta HMM

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

y <- seq(0.01, 0.99, 0.01)
plot(y, dbeta(y, 2, 6), type="l", ylab="Density", col="blue")
points(y, dbeta(y, 6, 2), type="l", col="red")

n <- 100
x <- dthmm(NULL, Pi, delta, "beta",
           list(shape1=c(2, 6), shape2=c(6, 2)))
x <- simulate(x, nsim=n)

#   colour denotes actual hidden Markov state
plot(1:n, x$x, type="l", xlab="Time", ylab="Observed Process")
points((1:n)[x$y==1], x$x[x$y==1], col="blue", pch=15)
points((1:n)[x$y==2], x$x[x$y==2], col="red", pch=15)

states <- Viterbi(x)
#   mark the wrongly predicted states
wrong <- (states != x$y)
points((1:n)[wrong], x$x[wrong], pch=1, cex=2.5, lwd=2)
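
#   (added sketch, not in the original example)
#   overall proportion of wrongly decoded states
print(mean(wrong))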

Results


R version 3.3.1 (2016-06-21) -- "Bug in Your Hair"
Platform: x86_64-pc-linux-gnu (64-bit)

> library(HiddenMarkov)
> png(filename="/home/ddbj/snapshot/RGM3/R_CC/result/HiddenMarkov/Viterbi.Rd_%03d_medium.png", width=480, height=480)
> ### Name: Viterbi
> ### Title: Viterbi Algorithm for Hidden Markov Model
> ### Aliases: Viterbi Viterbi.dthmm Viterbi.mmglm0 Viterbi.mmglm1
> ###   Viterbi.mmglmlong1
> ### Keywords: methods optimize
> 
> ### ** Examples
> 
> Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
+                1/3, 1/3, 1/3,   0,   0,
+                  0, 1/3, 1/3, 1/3,   0,
+                  0,   0, 1/3, 1/3, 1/3,
+                  0,   0,   0, 1/2, 1/2),
+              byrow=TRUE, nrow=5)
> delta <- c(0, 1, 0, 0, 0)
> lambda <- c(1, 4, 2, 5, 3)
> m <- nrow(Pi)
> 
> x <- dthmm(NULL, Pi, delta, "pois", list(lambda=lambda), discrete=TRUE)
> x <- simulate(x, nsim=2000)
> 
> #------  Global Decoding  ------
> 
> states <- Viterbi(x)
> states <- factor(states, levels=1:m)
> 
> #  Compare predicted states with true states
> #  p[j,k] = Pr{Viterbi predicts state k | true state is j}
> p <- matrix(NA, nrow=m, ncol=m)
> for (j in 1:m){
+     a <- (x$y==j)
+     p[j,] <- table(states[a])/sum(a)
+ }
> print(p)
           [,1]       [,2]       [,3]       [,4]       [,5]
[1,] 0.81063123 0.07641196 0.04651163 0.00000000 0.06644518
[2,] 0.18644068 0.57203390 0.02118644 0.10381356 0.11652542
[3,] 0.43318966 0.15301724 0.11853448 0.04094828 0.25431034
[4,] 0.05818966 0.31250000 0.02370690 0.40732759 0.19827586
[5,] 0.14046823 0.13043478 0.13377926 0.13712375 0.45819398
> 
> #------  Local Decoding  ------
> 
> #   locally decode at i=100
> 
> print(which.max(Estep(x$x, Pi, delta, "pois", list(lambda=lambda))$u[100,]))
[1] 2
> 
> #---------------------------------------------------
> #   simulate a beta HMM
> 
> Pi <- matrix(c(0.8, 0.2,
+                0.3, 0.7),
+              byrow=TRUE, nrow=2)
> delta <- c(0, 1)
> 
> y <- seq(0.01, 0.99, 0.01)
> plot(y, dbeta(y, 2, 6), type="l", ylab="Density", col="blue")
> points(y, dbeta(y, 6, 2), type="l", col="red")
> 
> n <- 100
> x <- dthmm(NULL, Pi, delta, "beta",
+            list(shape1=c(2, 6), shape2=c(6, 2)))
> x <- simulate(x, nsim=n)
> 
> #   colour denotes actual hidden Markov state
> plot(1:n, x$x, type="l", xlab="Time", ylab="Observed Process")
> points((1:n)[x$y==1], x$x[x$y==1], col="blue", pch=15)
> points((1:n)[x$y==2], x$x[x$y==2], col="red", pch=15)
> 
> states <- Viterbi(x)
> #   mark the wrongly predicted states
> wrong <- (states != x$y)
> points((1:n)[wrong], x$x[wrong], pch=1, cex=2.5, lwd=2)
> 
> dev.off()
null device 
          1 
>