
9.2.20. sklearn.linear_model.lars_path

sklearn.linear_model.lars_path(X, y, Xy=None, Gram=None, max_iter=500, alpha_min=0, method='lar', overwrite_X=False, eps=2.2204460492503131e-16, overwrite_Gram=False, verbose=False)

Compute the Least Angle Regression and Lasso path

Parameters:

X: array, shape: (n_samples, n_features)

Input data

y: array, shape: (n_samples,)

Input targets

max_iter: integer, optional

Maximum number of iterations to perform; set to infinity for no limit.

Gram: None, ‘auto’, or array, shape: (n_features, n_features), optional

Precomputed Gram matrix (X’ * X). If ‘auto’, the Gram matrix is precomputed from the given X when there are more samples than features.

alpha_min: float, optional

Minimum correlation along the path. It corresponds to the regularization parameter alpha in the Lasso.

method: {‘lar’, ‘lasso’}

Specifies the returned model. Select ‘lar’ for Least Angle Regression, ‘lasso’ for the Lasso.

eps: float, optional

The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems.

Returns:

alphas: array, shape: (max_features + 1,)

Maximum of covariances (in absolute value) at each iteration.

active: array, shape: (max_features,)

Indices of active variables at the end of the path.

coefs: array, shape: (n_features, max_features + 1)

Coefficients along the path
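A minimal usage sketch of the function on synthetic data (the regression problem, random seed, and variable names below are illustrative, not part of the API):

```python
import numpy as np
from sklearn.linear_model import lars_path

# Small synthetic regression problem: 50 samples, 5 features,
# only three features carry true signal.
rng = np.random.RandomState(0)
X = rng.randn(50, 5)
true_coef = np.array([3.0, -2.0, 0.0, 0.0, 1.5])
y = X @ true_coef + 0.01 * rng.randn(50)

# Compute the full Lasso path using the LARS algorithm.
alphas, active, coefs = lars_path(X, y, method='lasso')

# alphas decreases monotonically along the path; coefs has one
# column of coefficients per step of the path.
print(alphas.shape)          # (n_steps,)
print(coefs.shape)           # (n_features, n_steps)
print(active)                # indices of features in the active set
```

Reading a single column of `coefs` gives the model at the corresponding `alphas` value, which is how the path is typically plotted or used for model selection.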

See also

LassoLars, Lars, decomposition.sparse_encode, decomposition.sparse_encode_parallel

Notes