
9.6.3. sklearn.hmm.GMMHMM

class sklearn.hmm.GMMHMM(n_components=1, n_mix=1, startprob=None, transmat=None, startprob_prior=None, transmat_prior=None, gmms=None, cvtype=None)

Hidden Markov Model with Gaussian mixture emissions

See also

GaussianHMM
HMM with Gaussian emissions

Examples

>>> from sklearn.hmm import GMMHMM
>>> GMMHMM(n_components=2, n_mix=10, cvtype='diag')
GMMHMM(cvtype='diag',
    gmms=[GMM(cvtype='diag', n_components=10), GMM(cvtype='diag', n_components=10)],
    n_components=2, n_mix=10, startprob=array([ 0.5,  0.5]),
    startprob_prior=1.0,
    transmat=array([[ 0.5,  0.5],
       [ 0.5,  0.5]]),
    transmat_prior=1.0)

Attributes

transmat : array, shape (n_components, n_components)
    Matrix of transition probabilities.
startprob : array, shape (n_components,)
    Initial state occupation distribution.
n_components : int (read-only)
    Number of states in the model.
gmms : array of GMM objects, length n_components
    GMM emission distributions for each state.

Methods

eval(X) Compute the log likelihood of X under the HMM.
decode(X) Find most likely state sequence for each point in X using the Viterbi algorithm.
rvs(n=1) Generate n samples from the HMM.
init(X) Initialize HMM parameters from X.
fit(X) Estimate HMM parameters from X using the Baum-Welch algorithm.
predict(X) Like decode, find most likely state sequence corresponding to X.
score(X) Compute the log likelihood of X under the model.
__init__(n_components=1, n_mix=1, startprob=None, transmat=None, startprob_prior=None, transmat_prior=None, gmms=None, cvtype=None)

Create a hidden Markov model with GMM emissions.

Parameters :

n_components : int

Number of states.

decode(obs, maxrank=None, beamlogprob=-inf)

Find most likely state sequence corresponding to obs.

Uses the Viterbi algorithm.

Parameters :

obs : array_like, shape (n, n_features)

List of n_features-dimensional data points. Each row corresponds to a single data point.

maxrank : int

Maximum rank to evaluate for rank pruning. If not None, only consider the top maxrank states in the inner sum of the forward algorithm recursion. Defaults to None (no rank pruning). See The HTK Book for more details.

beamlogprob : float

Width of the beam-pruning beam in log-probability units. Defaults to -numpy.Inf (no beam pruning). See The HTK Book for more details.

Returns :

viterbi_logprob : float

Log probability of the maximum likelihood path through the HMM

states : array_like, shape (n,)

Index of the most likely states for each observation

See also

eval
Compute the log probability under the model and posteriors
score
Compute the log probability under the model
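The Viterbi recursion that decode runs can be sketched in a few lines of pure Python. This is an illustrative, unpruned re-implementation, not the library's code: viterbi is a hypothetical name, and framelogprob stands in for the per-frame emission log-likelihoods that the per-state GMMs would supply.

```python
import math

def viterbi(startprob, transmat, framelogprob):
    """Unpruned log-space Viterbi (illustrative sketch).

    startprob: length-K initial state probabilities.
    transmat: K x K transition probabilities.
    framelogprob: T x K per-frame emission log-likelihoods.
    Returns (log probability of the best path, state sequence).
    """
    K, T = len(startprob), len(framelogprob)
    delta = [[0.0] * K for _ in range(T)]   # best log prob ending in state j at time t
    psi = [[0] * K for _ in range(T)]       # backpointers
    for j in range(K):
        delta[0][j] = math.log(startprob[j]) + framelogprob[0][j]
    for t in range(1, T):
        for j in range(K):
            # Pick the predecessor state maximizing the path score.
            best_i = max(range(K),
                         key=lambda i: delta[t - 1][i] + math.log(transmat[i][j]))
            psi[t][j] = best_i
            delta[t][j] = (delta[t - 1][best_i]
                           + math.log(transmat[best_i][j])
                           + framelogprob[t][j])
    # Backtrack from the best final state.
    last = max(range(K), key=lambda j: delta[T - 1][j])
    path = [last]
    for t in range(T - 1, 0, -1):
        path.append(psi[t][path[-1]])
    path.reverse()
    return delta[T - 1][last], path
```

The maxrank and beamlogprob arguments of decode prune the inner max above; this sketch always evaluates all K predecessor states.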
eval(obs, maxrank=None, beamlogprob=-inf)

Compute the log probability under the model and compute posteriors

Implements rank and beam pruning in the forward-backward algorithm to speed up inference in large models.

Parameters :

obs : array_like, shape (n, n_features)

Sequence of n_features-dimensional data points. Each row corresponds to a single point in the sequence.

maxrank : int

Maximum rank to evaluate for rank pruning. If not None, only consider the top maxrank states in the inner sum of the forward algorithm recursion. Defaults to None (no rank pruning). See The HTK Book for more details.

beamlogprob : float

Width of the beam-pruning beam in log-probability units. Defaults to -numpy.Inf (no beam pruning). See The HTK Book for more details.

Returns :

logprob : array_like, shape (n,)

Log probabilities of the sequence obs

posteriors : array_like, shape (n, n_components)

Posterior probabilities of each state for each observation

See also

score
Compute the log probability under the model
decode
Find most likely state sequence corresponding to obs
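The unpruned forward-backward computation behind eval can be sketched as follows. This is an illustrative stand-in, not the library's implementation; forward_backward, logsumexp, and framelogprob are hypothetical names.

```python
import math

def logsumexp(vals):
    """Numerically stable log(sum(exp(vals)))."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def forward_backward(startprob, transmat, framelogprob):
    """Log-space forward-backward without pruning (illustrative sketch).

    Returns the total log-likelihood and the per-frame posterior
    state probabilities.
    """
    K, T = len(startprob), len(framelogprob)
    logstart = [math.log(p) for p in startprob]
    logtrans = [[math.log(p) for p in row] for row in transmat]
    # Forward pass: alpha[t][j] = log p(obs[0..t], state_t = j)
    alpha = [[0.0] * K for _ in range(T)]
    for j in range(K):
        alpha[0][j] = logstart[j] + framelogprob[0][j]
    for t in range(1, T):
        for j in range(K):
            alpha[t][j] = logsumexp([alpha[t - 1][i] + logtrans[i][j]
                                     for i in range(K)]) + framelogprob[t][j]
    loglik = logsumexp(alpha[T - 1])
    # Backward pass: beta[t][i] = log p(obs[t+1..] | state_t = i)
    beta = [[0.0] * K for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(K):
            beta[t][i] = logsumexp([logtrans[i][j] + framelogprob[t + 1][j]
                                    + beta[t + 1][j] for j in range(K)])
    # Posteriors: gamma[t][j] = p(state_t = j | obs)
    posteriors = [[math.exp(alpha[t][j] + beta[t][j] - loglik)
                   for j in range(K)] for t in range(T)]
    return loglik, posteriors
```

Rank pruning (maxrank) and beam pruning (beamlogprob) restrict the states considered inside the logsumexp sums; this sketch always sums over all K states.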
fit(obs, n_iter=10, thresh=0.01, params='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', init_params='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', maxrank=None, beamlogprob=-inf, **kwargs)

Estimate model parameters.

An initialization step is performed before entering the EM algorithm. If you want to avoid this step, set the keyword argument init_params to the empty string ‘’. Likewise, if you would like just to do an initialization, call this method with n_iter=0.

Parameters :

obs : list

List of array-like observation sequences (shape (n_i, n_features)).

n_iter : int, optional

Number of iterations to perform.

thresh : float, optional

Convergence threshold.

params : string, optional

Controls which parameters are updated in the training process. Can contain any combination of ‘s’ for startprob, ‘t’ for transmat, ‘m’ for means, and ‘c’ for covars, etc. Defaults to all parameters.

init_params : string, optional

Controls which parameters are initialized prior to training. Can contain any combination of ‘s’ for startprob, ‘t’ for transmat, ‘m’ for means, and ‘c’ for covars, etc. Defaults to all parameters.

maxrank : int, optional

Maximum rank to evaluate for rank pruning. If not None, only consider the top maxrank states in the inner sum of the forward algorithm recursion. Defaults to None (no rank pruning). See “The HTK Book” for more details.

beamlogprob : float, optional

Width of the beam-pruning beam in log-probability units. Defaults to -numpy.Inf (no beam pruning). See “The HTK Book” for more details.

Notes

In general, logprob should be non-decreasing unless aggressive pruning is used. Decreasing logprob is generally a sign of overfitting (e.g. a covariance parameter getting too small). You can fix this by getting more training data, or decreasing covars_prior.
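The stopping rule described above (at most n_iter iterations, with an early exit once the log-likelihood gain falls below thresh) can be sketched independently of the Baum-Welch details. Here score_and_update is a hypothetical callable standing in for one combined E+M step that returns the current log-likelihood:

```python
def em_fit(score_and_update, n_iter=10, thresh=1e-2):
    """Generic EM-style loop mirroring fit's stopping rule (sketch).

    Runs up to n_iter iterations, stopping early once the
    improvement in log-likelihood drops below thresh.
    Returns the history of log-likelihood values.
    """
    history = []
    for _ in range(n_iter):
        logprob = score_and_update()
        history.append(logprob)
        # Converged: the gain over the previous iteration is below thresh.
        if len(history) > 1 and history[-1] - history[-2] < thresh:
            break
    return history
```

With aggressive pruning the per-iteration gain can be negative, in which case this loop also stops, matching the note above that logprob should normally be non-decreasing.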

predict(obs, **kwargs)

Find most likely state sequence corresponding to obs.

Parameters :

obs : array_like, shape (n, n_features)

List of n_features-dimensional data points. Each row corresponds to a single data point.

maxrank : int

Maximum rank to evaluate for rank pruning. If not None, only consider the top maxrank states in the inner sum of the forward algorithm recursion. Defaults to None (no rank pruning). See The HTK Book for more details.

beamlogprob : float

Width of the beam-pruning beam in log-probability units. Defaults to -numpy.Inf (no beam pruning). See The HTK Book for more details.

Returns :

states : array_like, shape (n,)

Index of the most likely states for each observation

predict_proba(obs, **kwargs)

Compute the posterior probability for each state in the model

Parameters :

obs : array_like, shape (n, n_features)

List of n_features-dimensional data points. Each row corresponds to a single data point.

See eval() for a list of accepted keyword arguments.

Returns :

T : array-like, shape (n, n_components)

Returns the probability of the sample for each state in the model.

rvs(n=1, random_state=None)

Generate random samples from the model.

Parameters :

n : int

Number of samples to generate.

Returns :

obs : array_like, length n

List of samples
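The hidden-state half of sampling can be sketched with the standard library alone. This is an illustrative sketch, not the library's code; sample_states is a hypothetical name, and a real rvs would additionally draw an emission from each visited state's GMM:

```python
import random

def sample_states(startprob, transmat, n, rng=None):
    """Draw a length-n hidden state sequence from an HMM (sketch)."""
    rng = rng or random.Random()

    def draw(probs):
        # Inverse-CDF draw from a discrete distribution.
        r, acc = rng.random(), 0.0
        for k, p in enumerate(probs):
            acc += p
            if r < acc:
                return k
        return len(probs) - 1

    states = [draw(startprob)]
    for _ in range(1, n):
        states.append(draw(transmat[states[-1]]))
    return states
```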

score(obs, maxrank=None, beamlogprob=-inf)

Compute the log probability under the model.

Parameters :

obs : array_like, shape (n, n_features)

Sequence of n_features-dimensional data points. Each row corresponds to a single data point.

maxrank : int

Maximum rank to evaluate for rank pruning. If not None, only consider the top maxrank states in the inner sum of the forward algorithm recursion. Defaults to None (no rank pruning). See The HTK Book for more details.

beamlogprob : float

Width of the beam-pruning beam in log-probability units. Defaults to -numpy.Inf (no beam pruning). See The HTK Book for more details.

Returns :

logprob : array_like, shape (n,)

Log probabilities of each data point in obs

See also

eval
Compute the log probability under the model and posteriors
decode
Find most likely state sequence corresponding to obs
set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The former have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns : self
startprob

Initial state occupation distribution.

transmat

Matrix of transition probabilities.