
8.5.5. sklearn.decomposition.KernelPCA

class sklearn.decomposition.KernelPCA(n_components=None, kernel='linear', gamma=0, degree=3, coef0=1, alpha=1.0, fit_inverse_transform=False, eigen_solver='auto', tol=0, max_iter=None)

Kernel Principal Component Analysis (KPCA)

Non-linear dimensionality reduction through the use of kernels.

Parameters :

n_components: int or None :

Number of components. If None, all non-zero components are kept.

kernel: “linear” | “poly” | “rbf” | “sigmoid” | “precomputed” :

Kernel. Default: “linear”

degree : int, optional

Degree of the polynomial kernel (ignored by other kernels). Default: 3.

gamma : float, optional

Kernel coefficient for rbf, poly and sigmoid kernels. Default: 1/n_features.

coef0 : float, optional

Independent term in poly and sigmoid kernels.

alpha: float :

Hyperparameter of the ridge regression that learns the inverse transform (when fit_inverse_transform=True). Default: 1.0

fit_inverse_transform: bool :

Learn the inverse transform (i.e. learn to find the pre-image of a point). Default: False

eigen_solver: string [‘auto’|’dense’|’arpack’] :

Select eigensolver to use. If n_components is much less than the number of training samples, arpack may be more efficient than the dense eigensolver.

tol: float :

Convergence tolerance for arpack. Default: 0 (optimal value will be chosen by arpack)

max_iter : int

Maximum number of iterations for arpack. Default: None (optimal value will be chosen by arpack)
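As a minimal usage sketch (assuming scikit-learn and NumPy are installed; the gamma value and toy data here are illustrative, not defaults):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy data: 10 samples with 5 features.
rng = np.random.RandomState(0)
X = rng.rand(10, 5)

# Project onto the first 2 kernel principal components using an RBF kernel.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10)
X_kpca = kpca.fit_transform(X)
print(X_kpca.shape)  # (10, 2)
```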

Attributes

lambdas_, alphas_ :

Eigenvalues and eigenvectors of the centered kernel matrix.

dual_coef_ :

Inverse transform matrix.

X_transformed_fit_ :

Projection of the fitted data on the kernel principal components.

References:

Kernel PCA was introduced in: Bernhard Schoelkopf, Alexander J. Smola, and Klaus-Robert Mueller. 1999. Kernel principal component analysis. In Advances in kernel methods, MIT Press, Cambridge, MA, USA, 327-352.

Methods

fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit the model from data in X and transform X.
inverse_transform(X) Transform X back to original space.
set_params(**params) Set the parameters of the estimator.
transform(X) Transform X.
__init__(n_components=None, kernel='linear', gamma=0, degree=3, coef0=1, alpha=1.0, fit_inverse_transform=False, eigen_solver='auto', tol=0, max_iter=None)
fit(X, y=None)

Fit the model from data in X.

Parameters :

X: array-like, shape (n_samples, n_features) :

Training vector, where n_samples is the number of samples and n_features is the number of features.

Returns :

self : object

Returns the instance itself.

fit_transform(X, y=None, **params)

Fit the model from data in X and transform X.

Parameters :

X: array-like, shape (n_samples, n_features) :

Training vector, where n_samples is the number of samples and n_features is the number of features.

Returns :

X_new: array-like, shape (n_samples, n_components) :
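A hedged sketch (assuming scikit-learn and NumPy are installed) of the relationship between fit_transform and a separate fit/transform on the same data — within one fitted model the two projections agree up to floating-point error:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.RandomState(0)
X = rng.rand(12, 4)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
X_a = kpca.fit_transform(X)  # fit and project in one call
X_b = kpca.transform(X)      # project the same data with the fitted model

# Both arrays have shape (n_samples, n_components) = (12, 2) and
# differ only by numerical round-off.
print(X_a.shape, X_b.shape)
```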

inverse_transform(X)

Transform X back to original space.

Parameters :

X: array-like, shape (n_samples, n_components) :

Returns :

X_new: array-like, shape (n_samples, n_features) :

References:

“Learning to Find Pre-Images”, G BakIr et al, 2004.
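A sketch of a pre-image round trip (assuming scikit-learn and NumPy are installed): inverse_transform is only available when the model was constructed with fit_inverse_transform=True, in which case a ridge regression (regularized by alpha) maps points from the transformed space back to the input space. The reconstruction is approximate, not exact:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.RandomState(0)
X = rng.rand(20, 3)

# fit_inverse_transform=True learns the ridge regression that
# approximates pre-images; alpha is its regularization strength.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True, alpha=0.1)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)
print(X_back.shape)  # (20, 3) -- back in the original feature space
```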

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The former have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns :

self :
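A short sketch of both forms (assuming scikit-learn is installed; the single-step pipeline is purely illustrative):

```python
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import Pipeline

# On a simple estimator, set_params updates parameters and returns self.
kpca = KernelPCA().set_params(kernel="rbf", n_components=2)
print(kpca.get_params()["kernel"])  # rbf

# On a nested object, <component>__<parameter> addresses the sub-estimator.
pipe = Pipeline([("kpca", KernelPCA())])
pipe.set_params(kpca__n_components=3)
print(pipe.get_params()["kpca__n_components"])  # 3
```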
transform(X)

Transform X.

Parameters :

X: array-like, shape (n_samples, n_features) :

Returns :

X_new: array-like, shape (n_samples, n_components) :
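A sketch of projecting held-out samples with a fitted model (assuming scikit-learn and NumPy are installed; data and gamma are illustrative):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.RandomState(0)
X_train = rng.rand(15, 4)
X_new = rng.rand(5, 4)

# transform projects new samples onto components learned from X_train.
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=2.0).fit(X_train)
X_new_kpca = kpca.transform(X_new)
print(X_new_kpca.shape)  # (5, 3)
```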