8.5.2. sklearn.decomposition.ProbabilisticPCA
- class sklearn.decomposition.ProbabilisticPCA(n_components=None, copy=True, whiten=False)
Additional layer on top of PCA that adds a probabilistic evaluation.

Principal component analysis (PCA):
Linear dimensionality reduction using Singular Value Decomposition of the data and keeping only the most significant singular vectors to project the data to a lower dimensional space.
This implementation uses the scipy.linalg implementation of the singular value decomposition. It only works for dense arrays and is not scalable to large dimensional data.
The time complexity of this implementation is O(n ** 3) assuming n ~ n_samples ~ n_features.
Parameters : n_components : int, None or string
Number of components to keep. If n_components is not set, all components are kept:
n_components == min(n_samples, n_features)
If n_components == ‘mle’, Minka’s MLE is used to guess the dimension. If 0 < n_components < 1, select the number of components such that the amount of variance that needs to be explained is greater than the fraction specified by n_components (see the sketch after this parameter list).
copy : bool
If False, data passed to fit are overwritten
whiten : bool, optional
When True (False by default) the components_ vectors are divided by n_samples times singular values to ensure uncorrelated outputs with unit component-wise variances.
Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making their data respect some hard-wired assumptions.
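The three ways of specifying n_components described above could be exercised as follows (a minimal sketch against the API documented on this page; the data shape is an arbitrary assumption):

import numpy as np
from sklearn.decomposition import ProbabilisticPCA

X = np.random.randn(100, 10)

# Keep all min(n_samples, n_features) = 10 components (default).
ppca_all = ProbabilisticPCA().fit(X)

# Let Minka's MLE guess the dimensionality.
ppca_mle = ProbabilisticPCA(n_components='mle').fit(X)

# Keep just enough components to explain 95% of the variance.
ppca_frac = ProbabilisticPCA(n_components=0.95).fit(X)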
Notes
For n_components=’mle’, this class uses the method of Thomas P. Minka: Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604
Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction). For this reason, it is important to always use the same estimator object to transform data in a consistent fashion.
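To illustrate the point above (a hedged sketch, not part of the original docs): fit one estimator object and reuse it for every transform, rather than refitting.

import numpy as np
from sklearn.decomposition import ProbabilisticPCA

rng = np.random.RandomState(0)
X = rng.randn(50, 5)

# Fit once; a second, independent fit on the same data may return
# components with flipped signs, so coordinates produced by two
# separately fitted objects need not agree.
ppca = ProbabilisticPCA(n_components=2).fit(X)
X_proj = ppca.transform(X)                      # consistent basis
X_new_proj = ppca.transform(rng.randn(10, 5))   # same basis for new data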
Examples
>>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(copy=True, n_components=2, whiten=False)
>>> print pca.explained_variance_ratio_
[ 0.99244...  0.00755...]
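Since the example above exercises the plain PCA class, here is an adaptation to ProbabilisticPCA itself (a sketch against the API documented on this page; the repr line and shapes are assumptions, and exact log-likelihood values are omitted):

>>> from sklearn.decomposition import ProbabilisticPCA
>>> ppca = ProbabilisticPCA(n_components=1)
>>> ppca.fit(X)
ProbabilisticPCA(copy=True, n_components=1, whiten=False)
>>> ll = ppca.score(X)   # one log-likelihood per row of X
>>> ll.shape
(6,)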
Attributes

components_ : array, [n_components, n_features]
    Components with maximum variance.
explained_variance_ratio_ : array, [n_components]
    Percentage of variance explained by each of the selected components. If n_components is not set then all components are stored and the sum of explained variances is equal to 1.0.

Methods

fit(X[, y, homoscedastic])    In addition to PCA.fit, learns a covariance model.
fit_transform(X[, y])         Fit the model with X and apply the dimensionality reduction on X.
get_params([deep])            Get parameters for the estimator.
inverse_transform(X)          Transform data back to its original space.
score(X[, y])                 Return a score associated with new data.
set_params(**params)          Set the parameters of the estimator.
transform(X)                  Apply the dimensionality reduction on X.

- __init__(n_components=None, copy=True, whiten=False)
- fit(X, y=None, homoscedastic=True)
In addition to PCA.fit, learns a covariance model.
Parameters : X : array of shape (n_samples, n_dim)
The data to fit
homoscedastic : bool, optional
If True, average variance across remaining dimensions
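A short sketch of the two noise models (an illustrative assumption; per the description above, homoscedastic=False keeps one variance per remaining dimension instead of the averaged one):

import numpy as np
from sklearn.decomposition import ProbabilisticPCA

X = np.random.randn(100, 8)
ppca = ProbabilisticPCA(n_components=3)

# Default: average the variance of the 5 discarded dimensions
# into a single isotropic noise level.
ppca.fit(X, homoscedastic=True)

# Alternative: model a separate noise variance per dimension.
ppca.fit(X, homoscedastic=False)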
- fit_transform(X, y=None, **params)
Fit the model with X and apply the dimensionality reduction on X.
Parameters : X : array-like, shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
Returns : X_new : array-like, shape (n_samples, n_components)
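For example (a sketch; up to the SVD sign ambiguity noted earlier, this matches calling fit(X) followed by transform(X)):

import numpy as np
from sklearn.decomposition import ProbabilisticPCA

X = np.random.randn(20, 4)
ppca = ProbabilisticPCA(n_components=2)

# Fit and project in one call.
X_new = ppca.fit_transform(X)
print(X_new.shape)  # (20, 2)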
- get_params(deep=True)
Get parameters for the estimator
Parameters : deep : boolean, optional
If True, will return the parameters for this estimator and contained subobjects that are estimators.
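For instance (a sketch; the key set mirrors the constructor arguments, and dict ordering is not guaranteed):

from sklearn.decomposition import ProbabilisticPCA

ppca = ProbabilisticPCA(n_components=2, whiten=True)
print(ppca.get_params())
# e.g. {'copy': True, 'n_components': 2, 'whiten': True}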
- inverse_transform(X)
Transform data back to its original space, i.e., return an input X_original whose transform would be X
Parameters : X : array-like, shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of components.
Returns : X_original : array-like, shape (n_samples, n_features)
Notes
If whitening is enabled, inverse_transform does not compute the exact inverse operation of transform.
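A round-trip sketch (assumes whiten=False and all components kept, in which case the reconstruction is exact up to floating-point error):

import numpy as np
from sklearn.decomposition import ProbabilisticPCA

X = np.random.randn(30, 5)
ppca = ProbabilisticPCA(n_components=5).fit(X)  # keep every component

# transform followed by inverse_transform recovers the input.
X_back = ppca.inverse_transform(ppca.transform(X))
print(np.allclose(X, X_back))  # True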
- score(X, y=None)
Return a score associated with new data
Parameters : X : array of shape (n_samples, n_dim)
The data to test
Returns : ll : array of shape (n_samples)
log-likelihood of each row of X under the current model
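Usage sketch (held-out data gets one log-likelihood value per row, under the covariance model learned in fit; the shapes here are assumptions of this example):

import numpy as np
from sklearn.decomposition import ProbabilisticPCA

X_train = np.random.randn(100, 5)
X_test = np.random.randn(20, 5)

ppca = ProbabilisticPCA(n_components=2).fit(X_train)
ll = ppca.score(X_test)   # shape (20,), one value per test row
print(ll.mean())          # average held-out log-likelihood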
- set_params(**params)
Set the parameters of the estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Returns : self
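For example, with a pipeline that embeds a ProbabilisticPCA step, the <component>__<parameter> form addresses the nested estimator (a sketch; the step names 'ppca' and 'clf' are made up for this illustration):

from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import ProbabilisticPCA

pipe = Pipeline([('ppca', ProbabilisticPCA()),
                 ('clf', LogisticRegression())])

# Nested form: step name, double underscore, parameter name.
pipe.set_params(ppca__n_components=3)

# Simple estimators take plain parameter names.
pipe.named_steps['ppca'].set_params(whiten=True)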
- transform(X)
Apply the dimensionality reduction on X.
Parameters : X : array-like, shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features.
Returns : X_new : array-like, shape (n_samples, n_components)
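Finally, a projection sketch (new data must have the same number of features as the training data; the shapes are assumptions of this example):

import numpy as np
from sklearn.decomposition import ProbabilisticPCA

X_train = np.random.randn(50, 6)
X_new = np.random.randn(5, 6)

ppca = ProbabilisticPCA(n_components=3).fit(X_train)

# Project new observations onto the learned components.
X_proj = ppca.transform(X_new)
print(X_proj.shape)  # (5, 3)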