This documentation is for scikit-learn version 0.10.


8.2.12. sklearn.covariance.graph_lasso

sklearn.covariance.graph_lasso(emp_cov, alpha, cov_init=None, mode='cd', tol=0.0001, max_iter=100, verbose=False, return_costs=False, eps=2.2204460492503131e-16)

L1-penalized covariance estimator (graphical lasso)

Parameters

emp_cov : 2D ndarray, shape (n_features, n_features)

Empirical covariance from which to compute the covariance estimate

alpha : positive float

The regularization parameter: the higher alpha, the more regularization, the sparser the inverse covariance

cov_init : 2D array (n_features, n_features), optional

The initial guess for the covariance

mode : {'cd', 'lars'}

The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where the number of features is greater than the number of samples (p > n); otherwise prefer coordinate descent ('cd'), which is more numerically stable.

tol : positive float, optional

The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped

max_iter : integer, optional

The maximum number of iterations

verbose : boolean, optional

If verbose is True, the objective function and dual gap are printed at each iteration

return_costs : boolean, optional

If return_costs is True, the objective function and dual gap at each iteration are returned

eps : float, optional

The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems.

Returns

covariance : 2D ndarray, shape (n_features, n_features)

The estimated covariance matrix

precision : 2D ndarray, shape (n_features, n_features)

The estimated (sparse) precision matrix

costs : list of (objective, dual_gap) pairs

The list of values of the objective function and the dual gap at each iteration. Returned only if return_costs is True
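
A minimal usage sketch (not part of the original page), assuming the 0.10-era API documented above; note that much later scikit-learn releases renamed this function graphical_lasso:

>>> import numpy as np
>>> from sklearn.covariance import empirical_covariance, graph_lasso
>>> # Toy data: 60 samples, 5 features
>>> X = np.random.RandomState(0).randn(60, 5)
>>> # graph_lasso operates on an empirical covariance, not on raw data
>>> emp_cov = empirical_covariance(X)
>>> # Higher alpha yields a sparser estimated precision matrix
>>> covariance, precision = graph_lasso(emp_cov, alpha=0.1)
>>> covariance.shape
(5, 5)
>>> # With return_costs=True, the per-iteration (objective, dual_gap)
>>> # pairs are returned as a third value
>>> covariance, precision, costs = graph_lasso(emp_cov, alpha=0.1,
...                                            return_costs=True)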

Notes

The algorithm employed to solve this problem is the GLasso algorithm, from Friedman, Hastie, and Tibshirani, "Sparse inverse covariance estimation with the graphical lasso", Biostatistics (2008). It is the same algorithm as in the R glasso package.

One possible difference from the R glasso package is that here the diagonal coefficients are not penalized.
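
For orientation, here is a sketch of the objective this solver minimizes, written in the standard graphical-lasso form (S is the empirical covariance, K the precision matrix); the penalty is applied only to off-diagonal entries, consistent with the unpenalized diagonal noted above, though the implementation's exact scaling conventions may differ:

\hat{K} = \operatorname*{arg\,min}_{K \succ 0} \left[ \operatorname{tr}(S K) - \log \det K + \alpha \sum_{i \neq j} |K_{ij}| \right]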