8.25.1. sklearn.qda.QDA¶
- class sklearn.qda.QDA(priors=None)¶
Quadratic Discriminant Analysis (QDA)
A classifier with a quadratic decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule.
The model fits a Gaussian density to each class.
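For reference, a sketch of the standard decision rule this implies: with a Gaussian density N(mu_k, Sigma_k) and prior pi_k fitted for each class k, Bayes' rule gives a log-posterior that is quadratic in x,

\log P(y = k \mid x) = \log \pi_k - \tfrac{1}{2} \log \lvert \Sigma_k \rvert - \tfrac{1}{2} (x - \mu_k)^\top \Sigma_k^{-1} (x - \mu_k) + \mathrm{const},

and predict returns the class k maximizing this expression. The fitted mu_k, pi_k and (optionally) Sigma_k are exposed as the means_, priors_ and covariances_ attributes below.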
Parameters : priors : array, optional, shape = [n_classes]
Priors on classes
See also
- sklearn.lda.LDA
- Linear discriminant analysis
Examples
>>> from sklearn.qda import QDA
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = QDA()
>>> clf.fit(X, y)
QDA(priors=None)
>>> print(clf.predict([[-0.8, -1]]))
[1]
Attributes
means_ : array-like, shape = [n_classes, n_features]
Class means
priors_ : array-like, shape = [n_classes]
Class priors (sum to 1)
covariances_ : list of array-like, shape = [n_features, n_features]
Covariance matrices of each class
Methods
decision_function(X) Apply decision function to an array of samples.
fit(X, y[, store_covariances, tol]) Fit the QDA model according to the given training data and parameters.
get_params([deep]) Get parameters for the estimator.
predict(X) Perform classification on an array of test vectors X.
predict_log_proba(X) Return posterior log-probabilities of classification.
predict_proba(X) Return posterior probabilities of classification.
score(X, y) Returns the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of the estimator.
- __init__(priors=None)¶
- decision_function(X)¶
Apply decision function to an array of samples.
Parameters : X : array-like, shape = [n_samples, n_features]
Array of samples (test vectors).
Returns : C : array, shape = [n_samples, n_classes]
Decision function values related to each class, per sample.
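Continuing the doctest above, a minimal sketch of inspecting the decision function; the output shape follows the documented [n_samples, n_classes]:
>>> scores = clf.decision_function(np.array([[-0.8, -1], [2, 2]]))
>>> scores.shape
(2, 2)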
- fit(X, y, store_covariances=False, tol=0.0001)¶
Fit the QDA model according to the given training data and parameters.
Parameters : X : array-like, shape = [n_samples, n_features]
Training vector, where n_samples is the number of samples and n_features is the number of features.
y : array, shape = [n_samples]
Target values (integers)
store_covariances : boolean
If True, the covariance matrices are computed and stored in the self.covariances_ attribute.
tol : float, optional, default 1.0e-4
Threshold used for rank estimation.
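A short sketch reusing X and y from the example above; with store_covariances=True, the covariances_ attribute holds one [n_features, n_features] matrix per class:
>>> clf2 = QDA()
>>> clf2.fit(X, y, store_covariances=True)
QDA(priors=None)
>>> len(clf2.covariances_)
2
>>> clf2.covariances_[0].shape
(2, 2)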
- get_params(deep=True)¶
Get parameters for the estimator
Parameters : deep : boolean, optional
If True, will return the parameters for this estimator and contained subobjects that are estimators.
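For example, with the estimator constructed in the doctest above, only the constructor parameter priors is reported:
>>> clf.get_params()
{'priors': None}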
- predict(X)¶
Perform classification on an array of test vectors X.
The predicted class C for each sample in X is returned.
Parameters : X : array-like, shape = [n_samples, n_features]
Array of samples (test vectors).
Returns : C : array, shape = [n_samples]
Predicted class label per sample.
- predict_log_proba(X)¶
Return posterior probabilities of classification.
Parameters : X : array-like, shape = [n_samples, n_features]
Array of samples/test vectors.
Returns : C : array, shape = [n_samples, n_classes]
Posterior log-probabilities of classification per class.
- predict_proba(X)¶
Return posterior probabilities of classification.
Parameters : X : array-like, shape = [n_samples, n_features]
Array of samples/test vectors.
Returns : C : array, shape = [n_samples, n_classes]
Posterior probabilities of classification per class.
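A small sketch continuing the example above: each row of predict_proba sums to one, and predict_log_proba agrees with its elementwise logarithm.
>>> proba = clf.predict_proba(np.array([[-0.8, -1]]))
>>> proba.shape
(1, 2)
>>> bool(np.allclose(proba.sum(axis=1), 1.0))
True
>>> bool(np.allclose(clf.predict_log_proba(np.array([[-0.8, -1]])), np.log(proba)))
True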
- score(X, y)¶
Returns the mean accuracy on the given test data and labels.
Parameters : X : array-like, shape = [n_samples, n_features]
Test samples.
y : array-like, shape = [n_samples]
Labels for X.
Returns : z : float
Mean accuracy of self.predict(X) with respect to y.
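For example, scoring on the training data from the doctest above (the two toy classes are well separated, so training accuracy is 1.0):
>>> clf.score(X, y)
1.0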
- set_params(**params)¶
Set the parameters of the estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Returns : self
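A minimal sketch of both forms, reusing the estimator from the example above plus a single-step Pipeline whose step name 'qda' is chosen here only to illustrate the <component>__<parameter> syntax:
>>> _ = clf.set_params(priors=[0.5, 0.5])
>>> clf.priors
[0.5, 0.5]
>>> from sklearn.pipeline import Pipeline
>>> pipe = Pipeline([('qda', QDA())])
>>> _ = pipe.set_params(qda__priors=[0.5, 0.5])
>>> pipe.get_params()['qda__priors']
[0.5, 0.5]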