scikits.learn.decomposition.ProbabilisticPCA

class scikits.learn.decomposition.ProbabilisticPCA(n_components=None, copy=True, whiten=False)

Additional layer on top of PCA that adds a probabilistic evaluation

Principal component analysis (PCA)

Linear dimensionality reduction using Singular Value Decomposition of the data and keeping only the most significant singular vectors to project the data to a lower dimensional space.

This implementation uses the scipy.linalg implementation of the singular value decomposition. It only works for dense arrays and is not scalable to large dimensional data.

The time complexity of this implementation is O(n ** 3) assuming n ~ n_samples ~ n_features.

Parameters :

n_components: int, None or string :

Number of components to keep. If n_components is not set, all components are kept:

n_components == min(n_samples, n_features)

If n_components == ‘mle’, Minka’s MLE is used to guess the dimension.

If 0 < n_components < 1, select the number of components such that the explained variance ratio is greater than n_components.

copy: bool :

If False, data passed to fit are overwritten

whiten: bool, optional :

When True (False by default) the components_ vectors are divided by n_samples times singular values to ensure uncorrelated outputs with unit component-wise variances.

Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of downstream estimators by making their data respect some hard-wired assumptions.
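A minimal sketch of how these constructor options might be combined follows; the array X_demo and its values are illustrative and not part of this class's documentation:

>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> rng = np.random.RandomState(0)
>>> X_demo = rng.randn(100, 5)                       # illustrative data, 100 samples x 5 features
>>> pca_frac = PCA(n_components=0.95).fit(X_demo)    # keep components until 95% of variance is explained
>>> pca_mle = PCA(n_components='mle').fit(X_demo)    # let Minka's MLE guess the dimension
>>> pca_white = PCA(n_components=2, whiten=True)     # uncorrelated, unit-variance outputs
>>> X_white = pca_white.fit(X_demo).transform(X_demo)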

Notes

For n_components=’mle’, this class uses the method of Thomas P. Minka: Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604

Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction). For this reason, it is important to always use the same estimator object to transform data in a consistent fashion.

Examples

>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(copy=True, n_components=2, whiten=False)
>>> print pca.explained_variance_ratio_
[ 0.99244289  0.00755711]
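Since this page documents ProbabilisticPCA, a minimal sketch of the probabilistic layer on the same toy data X follows; the variable names are illustrative and the numerical output is not reproduced:

>>> from scikits.learn.decomposition import ProbabilisticPCA
>>> ppca = ProbabilisticPCA(n_components=1).fit(X)
>>> log_likelihoods = ppca.score(X)   # one log-likelihood value per row of X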

Methods

__init__(n_components=None, copy=True, whiten=False)
fit(X, y=None, homoscedastic=True)

In addition to PCA.fit, learns a covariance model.

Parameters :

X: array of shape (n_samples, n_dim) :

The data to fit

homoscedastic: bool, optional :

If True, average variance across remaining dimensions
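A minimal sketch of the homoscedastic flag, using illustrative random training data (the shapes and names are assumptions for demonstration only):

>>> import numpy as np
>>> from scikits.learn.decomposition import ProbabilisticPCA
>>> rng = np.random.RandomState(1)
>>> X_train = rng.randn(50, 4)                       # illustrative training data
>>> ppca = ProbabilisticPCA(n_components=2)
>>> ppca = ppca.fit(X_train, homoscedastic=False)    # keep one noise variance per remaining dimension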

fit_transform(X, y=None, **params)

Fit the model from data in X and return the transformed data.

Parameters :

X: array-like, shape (n_samples, n_features) :

Training vector, where n_samples is the number of samples and n_features is the number of features.

Returns :

X_new: array-like, shape (n_samples, n_components) :
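A minimal sketch of fit_transform as a one-step fit-and-project on the training data; the data and variable names are illustrative:

>>> import numpy as np
>>> from scikits.learn.decomposition import ProbabilisticPCA
>>> rng = np.random.RandomState(2)
>>> X_train = rng.randn(30, 3)                       # illustrative data
>>> X_reduced = ProbabilisticPCA(n_components=2).fit_transform(X_train)
>>> X_reduced.shape                                  # (n_samples, n_components)
(30, 2)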

inverse_transform(X)

Return an input X_original whose transform would be X

Note: if whitening is enabled, inverse_transform does not compute the exact inverse operation of transform.
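A minimal sketch of the round trip: with all components kept and whiten=False, transform followed by inverse_transform should recover the original data up to numerical precision (the data here are illustrative):

>>> import numpy as np
>>> from scikits.learn.decomposition import PCA
>>> rng = np.random.RandomState(3)
>>> X_full = rng.randn(20, 3)                        # illustrative data
>>> pca = PCA(n_components=3).fit(X_full)            # keep all components, no whitening
>>> X_back = pca.inverse_transform(pca.transform(X_full))
>>> np.allclose(X_full, X_back)                      # exact round trip expected in this setting
True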

score(X)

Return a score associated with new data.

Parameters :

X: array of shape (n_samples, n_dim) :

The data to test

Returns :

ll: array of shape (n_samples) :

log-likelihood of each row of X under the current model
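A minimal sketch of scoring held-out data under the fitted model; the train/test split, shapes, and variable names are illustrative:

>>> import numpy as np
>>> from scikits.learn.decomposition import ProbabilisticPCA
>>> rng = np.random.RandomState(4)
>>> X_train = rng.randn(80, 5)                       # illustrative training data
>>> X_test = rng.randn(20, 5)                        # illustrative held-out data
>>> ppca = ProbabilisticPCA(n_components=2).fit(X_train)
>>> ll = ppca.score(X_test)                          # log-likelihood of each held-out sample
>>> ll.shape
(20,)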

transform(X)

Apply the dimensionality reduction learned on the training data.
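A minimal sketch of projecting unseen data with the estimator fitted on the training data, which keeps the component signs consistent as noted above (data and names are illustrative):

>>> import numpy as np
>>> from scikits.learn.decomposition import ProbabilisticPCA
>>> rng = np.random.RandomState(5)
>>> X_train = rng.randn(60, 4)                       # illustrative training data
>>> X_new = rng.randn(10, 4)                         # illustrative unseen data
>>> ppca = ProbabilisticPCA(n_components=2).fit(X_train)
>>> X_new_reduced = ppca.transform(X_new)            # same learned projection applied to new data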