9.11.1. sklearn.lda.LDA

class sklearn.lda.LDA(n_components=None, priors=None)

Linear Discriminant Analysis (LDA)

Parameters :

n_components : int, optional

Number of components (<= n_classes - 1) to keep for dimensionality reduction

priors : array, optional, shape = [n_classes]

Priors on classes

See also

QDA

Examples

>>> import numpy as np
>>> from sklearn.lda import LDA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = LDA()
>>> clf.fit(X, y)
LDA(n_components=None, priors=None)
>>> print(clf.predict([[-0.8, -1]]))
[1]

Attributes

means_ : array-like, shape = [n_classes, n_features]
    Class means
xbar_ : array-like, shape = [n_features]
    Overall mean
priors_ : array-like, shape = [n_classes]
    Class priors (sum to 1)
covariance_ : array-like, shape = [n_features, n_features]
    Covariance matrix (shared by all classes)
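
The fitted attributes can be inspected directly after calling fit. The following is a minimal sketch reusing the toy data from the example above; exact numerical formatting may vary between versions.

>>> import numpy as np
>>> from sklearn.lda import LDA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = LDA().fit(X, y)
>>> clf.means_.shape     # one mean vector per class: (n_classes, n_features)
(2, 2)
>>> clf.priors_.sum()    # estimated class priors sum to one
1.0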

Methods

decision_function(X)
    Return the decision function values for each class on an array of test vectors X.
fit(X, y[, store_covariance, tol])
    Fit the LDA model according to the given training data and parameters.
fit_transform(X[, y])
    Fit to data, then transform it.
predict(X)
    Perform classification on an array of test vectors X.
predict_log_proba(X)
    Return posterior log-probabilities of classification for each class on an array of test vectors X.
predict_proba(X)
    Return posterior probabilities of classification for each class on an array of test vectors X.
score(X, y)
    Return the mean accuracy on the given test data and labels.
set_params(**params)
    Set the parameters of the estimator.
transform(X)
    Project the data so as to maximize class separation (large separation between projected class means and small variance within each class).
__init__(n_components=None, priors=None)

decision_function(X)

This function returns the decision function values for each class on an array of test vectors X.

Parameters : X : array-like, shape = [n_samples, n_features]
Returns : C : array, shape = [n_samples, n_classes]
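
A minimal sketch, continuing from the fitted clf in the example above. The per-class output shape follows this page's description; note that some releases collapse the binary case to a one-dimensional array.

>>> scores = clf.decision_function([[-0.8, -1]])
>>> scores.shape   # expected (1, 2) here: one decision value per sample and class
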
fit(X, y, store_covariance=False, tol=0.0001)

Fit the LDA model according to the given training data and parameters.

Parameters :

X : array-like, shape = [n_samples, n_features]

Training vector, where n_samples is the number of samples and n_features is the number of features.

y : array, shape = [n_samples]

Target values (integers)

store_covariance : boolean

If True, the covariance matrix (shared by all classes) is computed and stored in the self.covariance_ attribute.
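
The pooled covariance estimate is only available when it is requested at fit time. A minimal sketch on the toy data from the example above:

>>> clf = LDA()
>>> clf = clf.fit(X, y, store_covariance=True)   # also estimate the shared covariance
>>> clf.covariance_.shape                        # (n_features, n_features)
(2, 2)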

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters :

X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

Returns :

X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.
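
fit_transform simply combines fitting and projection in one call. A minimal sketch with the two-class toy data from the example above, where at most n_classes - 1 = 1 discriminant component can be kept:

>>> X_new = LDA(n_components=1).fit_transform(X, y)
>>> X_new.shape   # (n_samples, n_components)
(6, 1)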

predict(X)

This function does classification on an array of test vectors X.

The predicted class C for each sample in X is returned.

Parameters : X : array-like, shape = [n_samples, n_features]
Returns : C : array, shape = [n_samples]
predict_log_proba(X)

This function returns posterior log-probabilities of classification for each class on an array of test vectors X.

Parameters : X : array-like, shape = [n_samples, n_features]
Returns : C : array, shape = [n_samples, n_classes]
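
A minimal sketch, continuing from the fitted clf above; exponentiating the log-probabilities recovers the ordinary posteriors returned by predict_proba:

>>> log_proba = clf.predict_log_proba([[-0.8, -1]])
>>> np.allclose(np.exp(log_proba), clf.predict_proba([[-0.8, -1]]))
True
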
predict_proba(X)

This function returns posterior probabilities of classification for each class on an array of test vectors X.

Parameters : X : array-like, shape = [n_samples, n_features]
Returns : C : array, shape = [n_samples, n_classes]
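
A minimal sketch, continuing from the fitted clf above. Each row holds the posterior probability of one sample under each class and sums to one:

>>> proba = clf.predict_proba([[-0.8, -1]])
>>> proba.shape                    # (n_samples, n_classes)
(1, 2)
>>> round(float(proba.sum()), 6)   # posteriors over the classes sum to one
1.0
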
score(X, y)

Returns the mean accuracy on the given test data and labels.

Parameters :

X : array-like, shape = [n_samples, n_features]

Test samples.

y : array-like, shape = [n_samples]

Labels for X.

Returns :

z : float
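
Assuming the usual scikit-learn classifier convention that score reports the fraction of correctly classified samples, the perfectly separable toy data from the example above scores 1.0:

>>> clf = LDA().fit(X, y)
>>> clf.score(X, y)   # all six training points are classified correctly
1.0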

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns : self
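
A minimal sketch on the estimator itself; the double-underscore form only matters when the LDA instance is nested inside a larger object such as a pipeline (a hypothetical pipeline step named 'lda' would be addressed as lda__n_components, for example):

>>> clf = LDA()
>>> clf = clf.set_params(n_components=1, priors=[0.5, 0.5])   # set_params returns the estimator itself
>>> clf.n_components
1
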
transform(X)

Project the data so as to maximize class separation (large separation between projected class means and small variance within each class).

Parameters : X : array-like, shape = [n_samples, n_features]
Returns : X_new : array, shape = [n_samples, n_components]
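
As a closing sketch on the toy data from the example above: with two classes there is a single discriminant direction, and the projected class means end up clearly separated (the sign of the projection is arbitrary):

>>> Z = LDA(n_components=1).fit(X, y).transform(X)
>>> Z.shape                                  # (n_samples, n_components)
(6, 1)
>>> Z[y == 1].mean() != Z[y == 2].mean()     # projected class means differ
True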