8.2.5. sklearn.covariance.GraphLasso
- class sklearn.covariance.GraphLasso(alpha=0.01, mode='cd', tol=0.0001, max_iter=100, verbose=False)
Sparse inverse covariance estimation with an l1-penalized estimator.
Parameters :
alpha : positive float, optional
    The regularization parameter: the higher alpha, the more regularization, and the sparser the inverse covariance.
cov_init : 2D array (n_features, n_features), optional
    The initial guess for the covariance.
mode : {‘cd’, ‘lars’}
    The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where p > n. Elsewhere prefer cd, which is more numerically stable.
tol : positive float, optional
    The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped.
max_iter : integer, optional
    The maximum number of iterations.
verbose : boolean, optional
    If verbose is True, the objective function and dual gap are printed at each iteration.
See also
graph_lasso, GraphLassoCV
Attributes
covariance_ : array-like, shape (n_features, n_features)
    Estimated covariance matrix.
precision_ : array-like, shape (n_features, n_features)
    Estimated pseudo inverse matrix.
Methods
error_norm(comp_cov[, norm, scaling, squared])
    Computes the Mean Squared Error between two covariance estimators.
fit(X[, y])
get_params([deep])
    Get parameters for the estimator.
mahalanobis(observations)
    Computes the Mahalanobis distances of given observations.
score(X_test[, assume_centered])
    Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params)
    Set the parameters of the estimator.
- __init__(alpha=0.01, mode='cd', tol=0.0001, max_iter=100, verbose=False)
- error_norm(comp_cov, norm='frobenius', scaling=True, squared=True)
Computes the Mean Squared Error between two covariance estimators, in the sense of the Frobenius norm.
Parameters :
comp_cov : array-like, shape = [n_features, n_features]
    The covariance to compare with.
norm : str
    The type of norm used to compute the error. Available error types:
    - ‘frobenius’ (default): sqrt(tr(A^t.A))
    - ‘spectral’: sqrt(max(eigenvalues(A^t.A)))
    where A is the error matrix (comp_cov - self.covariance_).
scaling : bool
    If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squared : bool
    Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.
Returns :
    The Mean Squared Error (in the sense of the Frobenius norm) between `self` and `comp_cov` covariance estimators.
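Continuing the hypothetical sketch above (model fitted on X generated from true_cov), the error against a reference covariance could be computed as follows:

# Squared Frobenius error between the estimate and the reference,
# divided by n_features (scaling=True); these are the defaults.
err = model.error_norm(true_cov, norm='frobenius', scaling=True, squared=True)
print(err)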
- get_params(deep=True)
Get parameters for the estimator.
Parameters :
deep : boolean, optional
    If True, will return the parameters for this estimator and contained subobjects that are estimators.
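For instance, with the hypothetical model from the fitting sketch above:

# Returns a dict of constructor parameters, e.g. alpha, mode, tol,
# max_iter, verbose (exact keys depend on the scikit-learn version).
print(model.get_params(deep=True))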
- mahalanobis(observations)
Computes the Mahalanobis distances of given observations.
The provided observations are assumed to be centered. One may want to center them using a location estimate first.
Parameters :
observations : array-like, shape = [n_observations, n_features]
    The observations, of which we compute the Mahalanobis distances.
Returns :
mahalanobis_distance : array, shape = [n_observations,]
    Mahalanobis distances of the observations.
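A short sketch, reusing the hypothetical model and data from above; since the method assumes centered observations, a location estimate (here the sample mean) is subtracted first:

# Center the observations, then compute distances under the fitted model.
X_centered = X - X.mean(axis=0)
distances = model.mahalanobis(X_centered)  # shape: (n_observations,)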
- score(X_test, assume_centered=False)
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
Parameters :
X_test : array-like, shape = [n_samples, n_features]
    Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features.
Returns :
res : float
    The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
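A sketch of held-out evaluation, again with the hypothetical X from the fitting example:

# Fit on one part of the data and evaluate the Gaussian log-likelihood
# of the remainder under the estimated covariance.
X_train, X_held_out = X[:150], X[150:]
model.fit(X_train)
print(model.score(X_held_out))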
- set_params(**params)
Set the parameters of the estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Returns :
    self
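A sketch of both forms; pairing GraphLasso with StandardScaler in a Pipeline is purely illustrative:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Simple estimator: plain keyword form.
model.set_params(alpha=0.02)

# Nested object: the <component>__<parameter> form reaches each component.
pipe = Pipeline([('scale', StandardScaler()), ('glasso', GraphLasso())])
pipe.set_params(glasso__alpha=0.1)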