This documentation is for scikit-learn version 0.10.
8.5.13. sklearn.decomposition.fastica

sklearn.decomposition.fastica(X, n_components=None, algorithm='parallel', whiten=True, fun='logcosh', fun_prime='', fun_args={}, max_iter=200, tol=0.0001, w_init=None)

Perform Fast Independent Component Analysis.

Parameters :

X : array-like, shape = [n_samples, n_features]

Training vector, where n_samples is the number of samples and n_features is the number of features.

n_components : int, optional

Number of components to extract. If None no dimension reduction is performed.

algorithm : {‘parallel’, ‘deflation’}, optional

Apply a parallel or deflational FastICA algorithm.

whiten : boolean, optional

If True, perform an initial whitening of the data. Do not set to False unless the data is already white, as you will otherwise get incorrect results. If whiten is False, the data is assumed to have already been preprocessed: it should be centered, normed and white.

fun : string or function, optional

The functional form of the G function used in the approximation to neg-entropy. Can be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function, in which case its derivative must be provided via the fun_prime argument (see the sketch after this parameter list).

fun_prime : empty string (‘’) or function, optional

See fun.

fun_args : dictionary, optional

If empty and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}

max_iter : int, optional

Maximum number of iterations to perform.

tol : float, optional

A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.

w_init : (n_components, n_components) array, optional

Initial un-mixing array. If None (default), an array of random values drawn from a normal distribution is used.

source_only : boolean, optional

If True, only the source matrix S is returned.
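As mentioned under fun above, here is a minimal sketch of tuning the built-in ‘logcosh’ contrast through fun_args and of supplying a custom contrast function. The toy data is hypothetical, and the assumed calling convention for a user-supplied fun (called on an array, with fun_args expanded as keyword arguments) should be checked against the installed version:

import numpy as np
from sklearn.decomposition import fastica

rng = np.random.RandomState(0)
X = rng.laplace(size=(1000, 3))   # hypothetical non-Gaussian data

# Built-in contrast, tuned via fun_args ('alpha' is documented for 'logcosh').
K, W, S = fastica(X, fun='logcosh', fun_args={'alpha': 1.5})

# Custom contrast mirroring the 'logcosh' choice: a tanh nonlinearity with its
# derivative supplied via fun_prime (the calling convention is an assumption here).
def g(x, alpha=1.0):
    return np.tanh(alpha * x)

def g_prime(x, alpha=1.0):
    return alpha * (1.0 - np.tanh(alpha * x) ** 2)

K, W, S = fastica(X, fun=g, fun_prime=g_prime, fun_args={'alpha': 1.0})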

Returns :

K : (n_components, n_features) array or None

If whiten is True, K is the pre-whitening matrix that projects the data onto the first n_components principal components. If whiten is False, K is None.

W : (n_components, n_components) array

Estimated un-mixing matrix. The mixing matrix can be obtained by:

w = np.dot(W, K)                                   # full un-mixing matrix
A = np.dot(w.T, np.linalg.inv(np.dot(w, w.T)))     # mixing matrix (pseudo-inverse of w)

S : (n_components, n_samples) array

Estimated source matrix.
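Putting the returned pieces together, here is a minimal end-to-end sketch under the shapes and return order documented above (the toy signals and mixing matrix are hypothetical):

import numpy as np
from sklearn.decomposition import fastica

# Two independent non-Gaussian sources mixed linearly.
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # shape (n_samples, 2)
mixing = np.array([[1.0, 0.5],
                   [0.5, 2.0]])
X = np.dot(sources, mixing.T)                             # observed mixtures, shape (n_samples, n_features)

K, W, S = fastica(X, n_components=2, fun='logcosh', max_iter=200, tol=1e-4)

# K whitens the data, W un-mixes the whitened data, and S holds the
# estimated sources, one component per row.
print(K.shape, W.shape, S.shape)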

Notes

The data matrix X is considered to be a linear combination of non-Gaussian (independent) components, i.e. X = A S, where the columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to ‘un-mix’ the data by estimating an un-mixing matrix W, where S = W K X.
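To make the relation S = W K X concrete, here is a rough consistency check under the documented shapes (the toy data is hypothetical; the internal whitening may rescale the data, so only per-component proportionality is checked):

import numpy as np
from sklearn.decomposition import fastica

rng = np.random.RandomState(0)
X = rng.laplace(size=(500, 3))                        # hypothetical non-Gaussian data

K, W, S = fastica(X, whiten=True)

unmixing = np.dot(W, K)                               # full un-mixing matrix, shape (n_components, n_features)
S_check = np.dot(unmixing, (X - X.mean(axis=0)).T)    # apply it to the centered data

# Each estimated source should be proportional to the corresponding row of S_check.
for s_row, c_row in zip(S, S_check):
    print(abs(np.corrcoef(s_row, c_row)[0, 1]))       # expected to be close to 1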

This implementation was originally written for data of shape [n_features, n_samples]; the input is now transposed before the algorithm is applied, which makes it slightly faster for Fortran-ordered input.

Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430