This documentation is for scikit-learn version 0.11-git.

8.15.1. sklearn.manifold.LocallyLinearEmbedding

class sklearn.manifold.LocallyLinearEmbedding(n_neighbors=5, out_dim=2, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, neighbors_algorithm='auto')

Locally Linear Embedding

Parameters :

n_neighbors : integer

number of neighbors to consider for each point.

out_dim : integer

number of coordinates for the manifold.

reg : float

regularization constant, multiplies the trace of the local covariance matrix of the distances.

eigen_solver : string, {‘auto’, ‘arpack’, ‘dense’}

auto : algorithm will attempt to choose the best method for the input data.

arpack : use Arnoldi iteration in shift-invert mode. For this method, M may be a dense matrix, sparse matrix, or general linear operator.

dense : use standard dense matrix operations for the eigenvalue decomposition. For this method, M must be an array or matrix type. This method should be avoided for large problems.

tol : float, optional

Tolerance for the ‘arpack’ method. Not used if eigen_solver==’dense’.

max_iter : integer

maximum number of iterations for the arpack solver. Not used if eigen_solver==’dense’.

method : string [‘standard’ | ‘hessian’ | ‘modified’ | ‘ltsa’]

standard : use the standard locally linear embedding algorithm. See reference [1].

hessian : use the Hessian eigenmap method. This method requires n_neighbors > out_dim * (1 + (out_dim + 1) / 2). See reference [2].

modified : use the modified locally linear embedding algorithm. See reference [3].

ltsa : use the local tangent space alignment algorithm. See reference [4].

hessian_tol : float, optional

Tolerance for the Hessian eigenmapping method. Only used if method == ‘hessian’.

modified_tol : float, optional

Tolerance for the modified LLE method. Only used if method == ‘modified’.

neighbors_algorithm : string [‘auto’|’brute’|’kd_tree’|’ball_tree’]

algorithm to use for the nearest neighbors search, passed to the neighbors.NearestNeighbors instance.
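As an illustrative sketch of how these parameters fit together (the random data below is a toy example, and note that later scikit-learn releases rename out_dim to n_components):

>>> import numpy as np
>>> from sklearn.manifold import LocallyLinearEmbedding
>>> X = np.random.rand(100, 3)   # 100 toy samples with 3 features
>>> # with method='hessian' and out_dim=2, the constraint above gives
>>> # out_dim * (1 + (out_dim + 1) / 2) = 5, so n_neighbors must be at least 6
>>> lle = LocallyLinearEmbedding(n_neighbors=10, out_dim=2, method='hessian',
...                              eigen_solver='auto', neighbors_algorithm='auto')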

Attributes

embedding_vectors_ : array-like, shape [out_dim, n_samples]

Stores the embedding vectors.

reconstruction_error_ : float

Reconstruction error associated with embedding_vectors_.

nbrs_ : NearestNeighbors object

Stores the nearest neighbors instance, including BallTree or KDTree if applicable.
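A brief sketch of inspecting the fitted attributes, continuing with the toy data X from the example above:

>>> lle = LocallyLinearEmbedding(n_neighbors=10, out_dim=2).fit(X)
>>> err = lle.reconstruction_error_   # float: reconstruction error of the embedding
>>> knn = lle.nbrs_                   # the fitted NearestNeighbors instance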

Methods

fit(X[, y]) Compute the embedding vectors for data X
fit_transform(X[, y]) Compute the embedding vectors for data X and transform X.
set_params(**params) Set the parameters of the estimator.
transform(X) Transform new points into embedding space.
__init__(n_neighbors=5, out_dim=2, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, neighbors_algorithm='auto')
fit(X, y=None)

Compute the embedding vectors for data X

Parameters :

X : array-like of shape [n_samples, n_features]

training set.

Returns :

self : returns an instance of self.
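A minimal sketch on the toy data X from above; y is ignored (Locally Linear Embedding is unsupervised) and fit returns the estimator itself, so construction and fitting can be chained:

>>> lle = LocallyLinearEmbedding(n_neighbors=10, out_dim=2).fit(X)   # fit returns self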

fit_transform(X, y=None)

Compute the embedding vectors for data X and transform X.

Parameters :

X : array-like of shape [n_samples, n_features]

training set.

Returns :

X_new : array-like, shape (n_samples, out_dim)
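An illustrative call on the toy data X from above; the result has one row per sample and out_dim columns:

>>> X_new = LocallyLinearEmbedding(n_neighbors=10, out_dim=2).fit_transform(X)
>>> X_new.shape
(100, 2)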

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The former have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns :

self : the estimator instance.
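A sketch of both forms of set_params; the pipeline step name ‘lle’ below is a hypothetical label chosen for the example:

>>> from sklearn.manifold import LocallyLinearEmbedding
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.neighbors import KNeighborsClassifier
>>> lle = LocallyLinearEmbedding().set_params(n_neighbors=15, method='modified')
>>> pipe = Pipeline([('lle', LocallyLinearEmbedding()),
...                  ('knn', KNeighborsClassifier())])
>>> pipe = pipe.set_params(lle__n_neighbors=15)   # nested <component>__<parameter> form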
transform(X)

Transform new points into embedding space.

Parameters :

X : array-like, shape = [n_samples, n_features]

Returns :

X_new : array, shape = [n_samples, out_dim]
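A brief sketch of embedding previously unseen points with a fitted estimator, continuing with the toy data X from above (the new points are random and purely illustrative):

>>> X_test = np.random.rand(10, 3)   # new points with the same number of features
>>> lle = LocallyLinearEmbedding(n_neighbors=10, out_dim=2).fit(X)
>>> Y_test = lle.transform(X_test)   # embeds X_test using its neighbors in X
>>> Y_test.shape
(10, 2)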

Notes

Because of the scaling performed by this method, it is discouraged to use it together with methods that are not scale-invariant (like SVMs).