Results 1 - 10 of 10 found.

backward (Package: HMM) : Computes the backward probabilities

The backward function computes the backward probabilities. The backward probability for state X and observation at time k is defined as the probability of observing the sequence of observations e_(k+1), ..., e_n given that the state at time k is X. That is:
b[X,k] := Prob(E_(k+1) = e_(k+1), ..., E_n = e_n | X_k = X),
where E_1, ..., E_n = e_1, ..., e_n is the sequence of observed emissions and X_k is a random variable representing the state at time k.
● Data Source: CranContrib
● Keywords: methods
● Alias: backward
● 0 images
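
A minimal usage sketch: the two-state model and the short observation sequence below are illustrative assumptions, not taken from the package documentation. The HMM package returns the backward probabilities on a natural-log scale.

    library(HMM)
    # Illustrative two-state model emitting symbols "L" and "R"
    hmm <- initHMM(States = c("A", "B"), Symbols = c("L", "R"),
                   transProbs = matrix(c(0.8, 0.2,
                                         0.2, 0.8), nrow = 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.6, 0.4,
                                            0.4, 0.6), nrow = 2, byrow = TRUE))
    obs <- c("L", "L", "R", "R")
    logBackward <- backward(hmm, obs)   # matrix b[X,k], states x time, log scale
    exp(logBackward)                    # back-transform to probabilities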

baumWelch (Package: HMM) : Inferring the parameters of a Hidden Markov Model via the Baum-Welch algorithm

For an initial Hidden Markov Model (HMM) and a given sequence of observations, the Baum-Welch algorithm infers optimal parameters for the HMM. Since the Baum-Welch algorithm is a variant of the Expectation-Maximisation algorithm, it converges to a local solution which might not be the global optimum.
● Data Source: CranContrib
● Keywords: methods
● Alias: baumWelch
● 0 images
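
A hedged sketch of the call; the starting model and observation sequence are made-up illustrations. baumWelch() returns a list holding the re-estimated model and the per-iteration convergence differences.

    library(HMM)
    # Illustrative starting model (deliberately non-uniform so EM has something to adjust)
    hmm <- initHMM(c("A", "B"), c("L", "R"),
                   transProbs = matrix(c(0.7, 0.3, 0.3, 0.7), 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.6, 0.4, 0.4, 0.6), 2, byrow = TRUE))
    obs <- c("L", "L", "R", "R", "L", "R", "R", "L")
    bw  <- baumWelch(hmm, obs, maxIterations = 50)   # EM re-estimation
    bw$hmm          # re-estimated transition and emission probabilities
    bw$difference   # convergence measure per iteration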

dishonestCasino (Package: HMM) : Example application for Hidden Markov Models

The dishonest casino gives an example of the application of Hidden Markov Models. The example is taken from Durbin et al. 1999: a dishonest casino uses two dice, one of which is fair and the other loaded. The probabilities of the fair die are (1/6, ..., 1/6) for throwing ("1", ..., "6"). The probabilities of the loaded die are (1/10, ..., 1/10, 1/2) for throwing ("1", ..., "5", "6"). The observer does not know which die is actually in use (the state is hidden), but the sequence of throws (observations) can be used to infer which die (state) was used.
● Data Source: CranContrib
● Keywords: design
● Alias: dishonestCasino
● 1 image
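
The worked example ships with the package and needs no arguments; as described above, it simulates the fair/loaded dice process and plots the throws together with the inferred states.

    library(HMM)
    dishonestCasino()   # runs the fair/loaded dice demo and produces a plot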

forward (Package: HMM) : Computes the forward probabilities

The forward function computes the forward probabilities. The forward probability for state X up to the observation at time k is defined as the probability of observing the sequence of observations e_1, ..., e_k and that the state at time k is X. That is:
f[X,k] := Prob(E_1 = e_1, ..., E_k = e_k, X_k = X),
where E_1, ..., E_n = e_1, ..., e_n is the sequence of observed emissions and X_k is a random variable representing the state at time k.
● Data Source: CranContrib
● Keywords: methods
● Alias: forward
● 0 images
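
A minimal sketch mirroring the backward example; the model and observation sequence are again illustrative, and the forward probabilities are likewise returned on a natural-log scale.

    library(HMM)
    hmm <- initHMM(States = c("A", "B"), Symbols = c("L", "R"),
                   transProbs = matrix(c(0.8, 0.2, 0.2, 0.8), 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.6, 0.4, 0.4, 0.6), 2, byrow = TRUE))
    obs <- c("L", "L", "R", "R")
    logForward <- forward(hmm, obs)   # matrix f[X,k], states x time, log scale
    exp(logForward)                   # back-transform to probabilities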

HMM (Package: HMM)

Modelling, analysis and inference with discrete time and discrete space Hidden Markov Models.
● Data Source: CranContrib
● Keywords: package
● Alias: HMM
● 0 images

initHMM (Package: HMM) : Initialisation of HMMs

This function initialises a general discrete time and discrete space Hidden Markov Model (HMM). An HMM consists of an alphabet of states and emission symbols. An HMM assumes that the states are hidden from the observer, while only the emissions of the states are observable. The HMM is designed to make inference on the states through the observation of emissions. The stochastic behaviour of the HMM is fully described by the initial starting probabilities of the states, the transition probabilities between states and the emission probabilities of the states.
● Data Source: CranContrib
● Keywords: models
● Alias: initHMM
● 0 images
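
A minimal sketch of initialising a model; the state names, symbols and probability values below are assumptions chosen to match the dishonest-casino description above, not the package's own example.

    library(HMM)
    hmm <- initHMM(States = c("Fair", "Loaded"),
                   Symbols = as.character(1:6),
                   startProbs = c(0.5, 0.5),
                   transProbs = matrix(c(0.95, 0.05,
                                         0.10, 0.90), nrow = 2, byrow = TRUE),
                   emissionProbs = rbind(rep(1/6, 6),              # fair die
                                         c(rep(1/10, 5), 1/2)))    # loaded die
    print(hmm)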

posterior (Package: HMM) : Computes the posterior probabilities for the states

This function computes the posterior probabilities of being in state X at time k for a given sequence of observations and a given Hidden Markov Model.
● Data Source: CranContrib
● Keywords: methods
● Alias: posterior
● 0 images
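
A short sketch; the model and observations are illustrative. posterior() returns a states-by-time matrix, so each column is the posterior distribution over states at one time point.

    library(HMM)
    hmm <- initHMM(c("A", "B"), c("L", "R"),
                   transProbs = matrix(c(0.8, 0.2, 0.2, 0.8), 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.6, 0.4, 0.4, 0.6), 2, byrow = TRUE))
    obs  <- c("L", "L", "R", "R")
    post <- posterior(hmm, obs)   # Prob(X_k = X | observations), states x time
    post[, 2]                     # posterior over states at time k = 2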

simHMM (Package: HMM) : Simulate states and observations for a Hidden Markov Model

Simulates a path of states and observations for a given Hidden Markov Model.
● Data Source: CranContrib
● Keywords: models
● Alias: simHMM
● 0 images
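
A minimal sketch with an illustrative model; simHMM() returns a list whose $states and $observation components hold the simulated hidden path and the emitted symbols.

    library(HMM)
    hmm <- initHMM(c("A", "B"), c("L", "R"),
                   transProbs = matrix(c(0.9, 0.1, 0.1, 0.9), 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.7, 0.3, 0.3, 0.7), 2, byrow = TRUE))
    sim <- simHMM(hmm, length = 20)
    sim$states        # simulated hidden state path
    sim$observation   # simulated emissions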

viterbi (Package: HMM) : Computes the most probable path of states

The Viterbi algorithm computes the most probable path of states for a sequence of observations, given a Hidden Markov Model.
● Data Source: CranContrib
● Keywords: methods
● Alias: viterbi
● 0 images
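
A usage sketch with the same kind of illustrative model; viterbi() returns the most probable state sequence as a vector of state names.

    library(HMM)
    hmm <- initHMM(c("A", "B"), c("L", "R"),
                   transProbs = matrix(c(0.8, 0.2, 0.2, 0.8), 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.6, 0.4, 0.4, 0.6), 2, byrow = TRUE))
    obs  <- c("L", "L", "R", "R")
    path <- viterbi(hmm, obs)   # most probable hidden state path
    print(path)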

viterbiTraining (Package: HMM) : Inferring the parameters of a Hidden Markov Model via Viterbi-training

For an initial Hidden Markov Model (HMM) and a given sequence of observations, the Viterbi-training algorithm infers optimal parameters for the HMM. Viterbi-training usually converges much faster than the Baum-Welch algorithm, but the underlying algorithm is theoretically less justified. Be careful: the algorithm converges to a local solution which might not be the global optimum.
● Data Source: CranContrib
● Keywords: methods
● Alias: viterbiTraining
● 0 images
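
A sketch analogous to the baumWelch example above (the starting model and observations are illustrative); viterbiTraining() likewise returns a list with the re-estimated model and the per-iteration differences.

    library(HMM)
    hmm <- initHMM(c("A", "B"), c("L", "R"),
                   transProbs = matrix(c(0.7, 0.3, 0.3, 0.7), 2, byrow = TRUE),
                   emissionProbs = matrix(c(0.6, 0.4, 0.4, 0.6), 2, byrow = TRUE))
    obs <- c("L", "L", "R", "R", "L", "R", "R", "L")
    vt  <- viterbiTraining(hmm, obs, maxIterations = 50)
    vt$hmm          # re-estimated model parameters
    vt$difference   # convergence measure per iteration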