
local_decoding_algorithm {HMMpa}    R Documentation

Algorithm for Decoding Hidden Markov Models (local)

Description

The function decodes a hidden Markov model into a sequence of most likely hidden states. In contrast to the Viterbi_algorithm, which finds the globally most likely state sequence, this algorithm determines the most likely hidden state for each time point separately.
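
In other words, local decoding maximizes, for each time point t, the smoothed probability P(S_t = i | x_1, ..., x_T) of being in state i given the complete observation series; the decoded sequence is then a row-wise argmax over these probabilities. A minimal sketch of this final step (assuming a precomputed T x m probability matrix, here called state_probs, which is not part of this function's interface):

## Sketch: given smoothed state probabilities (a hypothetical T x m
## matrix 'state_probs', one row per time point, one column per state),
## local decoding reduces to a row-wise argmax.
local_argmax <- function(state_probs) {
  apply(state_probs, 1, which.max)
}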

Usage

local_decoding_algorithm(x, m, delta, gamma, distribution_class, 
      distribution_theta, discr_logL = FALSE, discr_logL_eps = 0.5)

Arguments

x

a vector object containing the time-series of observations that are assumed to be realizations of the (hidden Markov state dependent) observation process of the model.

m

a (finite) number of states in the hidden Markov chain.

delta

a vector object containing values for the marginal probability distribution of the m states of the Markov chain at the time point t=1.

gamma

a matrix (ncol=nrow=m) containing values for the transition matrix of the hidden Markov chain.

distribution_class

a single character string object with the abbreviated name of the m observation distributions of the Markov dependent observation process. The following distributions are supported by this algorithm: Poisson (pois); generalized Poisson (genpois); normal (norm); geometric (geom).

distribution_theta

a list object containing the parameter values for the m observation distributions that are dependent on the hidden Markov state (see the structural sketch after this argument list).

discr_logL

a logical object. If TRUE, the discrete log-likelihood is calculated instead of the general log-likelihood (available only for distribution_class = "norm"). Default is FALSE.

discr_logL_eps

a single numerical value used to approximate the discrete log-likelihood for a hidden Markov model based on normal distributions (i.e. for distribution_class = "norm"). The default value is 0.5.
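
The arguments delta, gamma and distribution_theta are usually taken from a trained model (as in the example below), but they can also be built by hand. The following sketch shows the expected shapes for a hypothetical m = 2 Poisson HMM; the lambda slot is consistent with the $distribution_theta$lambda output shown under Results, while the parameter names for the other supported distributions would need to be checked against their documentation:

## Sketch: hand-built inputs for a hypothetical m = 2 Poisson HMM.
## delta must sum to 1; each row of gamma must sum to 1.
m     <- 2
delta <- c(0.5, 0.5)
gamma <- matrix(c(0.9, 0.1,
                  0.2, 0.8), nrow = m, byrow = TRUE)
## For distribution_class = "pois": one lambda per state.
distribution_theta <- list(lambda = c(4, 20))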

Value

local_decoding_algorithm returns a list containing the following two components:

state_probabilities

a (T,m)-matrix (where T denotes the length of the observation time-series and m the number of states of the HMM) containing the conditional probability of each state i=1,...,m at each time point t=1,...,T given all observations x, as calculated by the algorithm. See MacDonald & Zucchini (2009, Paragraph 5.3.1) for further details.

decoding

a numerical vector containing the locally most likely sequence of hidden states as decoded by the local_decoding_algorithm.
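
Both components are consistent by construction: each decoded state is the column index of the row maximum of state_probabilities. A quick sanity check (a sketch, using the object names from the example below):

## Sanity check (sketch): the decoding equals the row-wise argmax
## of the smoothed state probabilities.
all(local_decoding$decoding ==
    apply(local_decoding$state_probabilities, 1, which.max))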

Author(s)

The basic algorithm for a Poisson-HMM can be found in MacDonald & Zucchini (2009, Paragraph A.2.6). Extension and implementation by Vitali Witowski (2013).

References

MacDonald, I. L., Zucchini, W. (2009) Hidden Markov Models for Time Series: An Introduction Using R, Boca Raton: Chapman & Hall.

See Also

Viterbi_algorithm, HMM_decoding

Examples


################################################################
### Fictitious observations ####################################
################################################################

x <- c(1,16,19,34,22,6,3,5,6,3,4,1,4,3,5,7,9,8,11,11,
  14,16,13,11,11,10,12,19,23,25,24,23,20,21,22,22,18,7,
  5,3,4,3,2,3,4,5,4,2,1,3,4,5,4,5,3,5,6,4,3,6,4,8,9,12,
  9,14,17,15,25,23,25,35,29,36,34,36,29,41,42,39,40,43,
  37,36,20,20,21,22,23,26,27,28,25,28,24,21,25,21,20,21,
  11,18,19,20,21,13,19,18,20,7,18,8,15,17,16,13,10,4,9,
  7,8,10,9,11,9,11,10,12,12,5,13,4,6,6,13,8,9,10,13,13,
  11,10,5,3,3,4,9,6,8,3,5,3,2,2,1,3,5,11,2,3,5,6,9,8,5,
  2,5,3,4,6,4,8,15,12,16,20,18,23,18,19,24,23,24,21,26,
  36,38,37,39,45,42,41,37,38,38,35,37,35,31,32,30,20,39,
  40,33,32,35,34,36,34,32,33,27,28,25,22,17,18,16,10,9,
  5,12,7,8,8,9,19,21,24,20,23,19,17,18,17,22,11,12,3,9,
  10,4,5,13,3,5,6,3,5,4,2,5,1,2,4,4,3,2,1) 


### Train hidden Markov model for m=4

m_trained_HMM <- 
    HMM_training(x = x, 
      min_m = 4, 
      max_m = 4, 
      distribution_class = "pois")$trained_HMM_with_selected_m


### Decode the trained HMM using the local-decoding algorithm 
### to get the locally most likely sequence of hidden states 
### for the time-series of observations

local_decoding <- 
    local_decoding_algorithm(x = x, 
       m = m_trained_HMM$m, 
       delta = m_trained_HMM$delta, 
       gamma = m_trained_HMM$gamma, 
       distribution_class = m_trained_HMM$distribution_class, 
       distribution_theta = m_trained_HMM$distribution_theta)


### Most likely sequence of hidden states

print(local_decoding$decoding)

plot(local_decoding$decoding)
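
Because local decoding optimizes each time point separately, the result can differ from the single globally most likely path. A comparison with the Viterbi_algorithm from this package (assuming its arguments mirror the call above and that it returns an analogous decoding component) might look like:

### Sketch: compare local (pointwise) decoding with global (Viterbi)
### decoding of the same trained HMM; the two may disagree at some
### time points.

global_decoding <- 
    Viterbi_algorithm(x = x, 
       m = m_trained_HMM$m, 
       delta = m_trained_HMM$delta, 
       gamma = m_trained_HMM$gamma, 
       distribution_class = m_trained_HMM$distribution_class, 
       distribution_theta = m_trained_HMM$distribution_theta)

which(local_decoding$decoding != global_decoding$decoding)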

Results


> library(HMMpa)
> 
> 
> ################################################################
> ### Fictitious observations ####################################
> ################################################################
> 
> x <- c(1,16,19,34,22,6,3,5,6,3,4,1,4,3,5,7,9,8,11,11,
+   14,16,13,11,11,10,12,19,23,25,24,23,20,21,22,22,18,7,
+   5,3,4,3,2,3,4,5,4,2,1,3,4,5,4,5,3,5,6,4,3,6,4,8,9,12,
+   9,14,17,15,25,23,25,35,29,36,34,36,29,41,42,39,40,43,
+   37,36,20,20,21,22,23,26,27,28,25,28,24,21,25,21,20,21,
+   11,18,19,20,21,13,19,18,20,7,18,8,15,17,16,13,10,4,9,
+   7,8,10,9,11,9,11,10,12,12,5,13,4,6,6,13,8,9,10,13,13,
+   11,10,5,3,3,4,9,6,8,3,5,3,2,2,1,3,5,11,2,3,5,6,9,8,5,
+   2,5,3,4,6,4,8,15,12,16,20,18,23,18,19,24,23,24,21,26,
+   36,38,37,39,45,42,41,37,38,38,35,37,35,31,32,30,20,39,
+   40,33,32,35,34,36,34,32,33,27,28,25,22,17,18,16,10,9,
+   5,12,7,8,8,9,19,21,24,20,23,19,17,18,17,22,11,12,3,9,
+   10,4,5,13,3,5,6,3,5,4,2,5,1,2,4,4,3,2,1) 
> 
> 
> ### Train hidden Markov model for m=4
> 
> m_trained_HMM <- 
+     HMM_training(x = x, 
+       min_m = 4, 
+       max_m = 4, 
+       distribution_class = "pois")$trained_HMM_with_selected_m
[1] "HMM with m = 4"
[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 1

$logL
[1] -759.8965

[[6]]
[1] "##############################"

[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 2

$logL
[1] -735.2429

[[6]]
[1] "##############################"

[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 3

$logL
[1] -734.133

[[6]]
[1] "##############################"

[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 4

$logL
[1] -734.002

[[6]]
[1] "##############################"

[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 5

$logL
[1] -733.9805

[[6]]
[1] "##############################"

[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 6

$logL
[1] -733.9757

[[6]]
[1] "##############################"

[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 7

$logL
[1] -733.9744

[[6]]
[1] "##############################"

[[1]]
[1] "##############################"

$m
[1] "train (EM) HMM with m = 4"

$distribution
[1] "pois"

$iteration
[1] 8

$logL
[1] -733.9739

[[6]]
[1] "##############################"

$x
  [1]  1 16 19 34 22  6  3  5  6  3  4  1  4  3  5  7  9  8 11 11 14 16 13 11 11
 [26] 10 12 19 23 25 24 23 20 21 22 22 18  7  5  3  4  3  2  3  4  5  4  2  1  3
 [51]  4  5  4  5  3  5  6  4  3  6  4  8  9 12  9 14 17 15 25 23 25 35 29 36 34
 [76] 36 29 41 42 39 40 43 37 36 20 20 21 22 23 26 27 28 25 28 24 21 25 21 20 21
[101] 11 18 19 20 21 13 19 18 20  7 18  8 15 17 16 13 10  4  9  7  8 10  9 11  9
[126] 11 10 12 12  5 13  4  6  6 13  8  9 10 13 13 11 10  5  3  3  4  9  6  8  3
[151]  5  3  2  2  1  3  5 11  2  3  5  6  9  8  5  2  5  3  4  6  4  8 15 12 16
[176] 20 18 23 18 19 24 23 24 21 26 36 38 37 39 45 42 41 37 38 38 35 37 35 31 32
[201] 30 20 39 40 33 32 35 34 36 34 32 33 27 28 25 22 17 18 16 10  9  5 12  7  8
[226]  8  9 19 21 24 20 23 19 17 18 17 22 11 12  3  9 10  4  5 13  3  5  6  3  5
[251]  4  2  5  1  2  4  4  3  2  1

$m
[1] 4

$logL
[1] -733.9739

$AIC
[1] 1529.948

$BIC
[1] 1640.329

$delta
[1] 1.000000e+00 6.467102e-12 2.681984e-39 1.869155e-98

$gamma
             [,1]         [,2]       [,3]         [,4]
[1,] 9.371685e-01 5.109318e-02 0.01173831 1.239794e-32
[2,] 4.597015e-02 8.893127e-01 0.06471713 3.651488e-20
[3,] 2.636410e-02 4.946516e-02 0.89286806 3.130269e-02
[4,] 1.493758e-12 3.941635e-21 0.05290459 9.470954e-01

$distribution_class
[1] "pois"

$distribution_theta
$distribution_theta$lambda
[1]  4.103977 10.146465 21.080724 35.555718


$estimated_mean_values
[1]  4.103977 10.146465 21.080724 35.555718

> 
> 
> ### Decode the trained HMM using the local-decoding algorithm 
> ### to get the locally most likely sequence of hidden states 
> ### for the time-series of observations
> 
> local_decoding <- 
+     local_decoding_algorithm(x = x, 
+        m = m_trained_HMM$m, 
+        delta = m_trained_HMM$delta, 
+        gamma = m_trained_HMM$gamma, 
+        distribution_class = m_trained_HMM$distribution_class, 
+        distribution_theta = m_trained_HMM$distribution_theta)
> 
> 
> ### Most likely sequence of hidden states
> 
> print(local_decoding$decoding)
  [1] 1 3 3 3 3 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
 [38] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4
 [75] 4 4 4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2
[112] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1
[149] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 3 3 3 3 3 3 3 3 3 3 3
[186] 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 3 3 3 3 3 3 2 2 2
[223] 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[260] 1
> 
> plot(local_decoding$decoding)