This documentation is for scikit-learn version 0.10.

8.14.1.19. sklearn.linear_model.lasso_path

sklearn.linear_model.lasso_path(X, y, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, fit_intercept=True, normalize=False, copy_X=True, verbose=False, **params)

Compute the Lasso path with coordinate descent.

The optimization objective for Lasso is:

(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
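
For illustration only, a minimal sketch of evaluating this objective for a candidate coefficient vector w (the helper name lasso_objective is hypothetical, not part of scikit-learn):

import numpy as np

def lasso_objective(X, y, w, alpha):
    # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
    n_samples = X.shape[0]
    residual = y - np.dot(X, w)
    return np.dot(residual, residual) / (2.0 * n_samples) + alpha * np.abs(w).sum()
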
Parameters :

X : numpy array of shape [n_samples, n_features]

Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication.

y : numpy array of shape [n_samples]

Target values

eps : float, optional

Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3

n_alphas : int, optional

Number of alphas along the regularization path

alphas : numpy array, optional

List of alphas at which to compute the models. If None, the alphas are set automatically.

precompute : True | False | 'auto' | array-like

Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the choice is made automatically. The Gram matrix can also be passed as an argument.

Xy : array-like, optional

The precomputed value of np.dot(X.T, y). Useful only when the Gram matrix is precomputed.

fit_intercept : bool

Whether to fit an intercept.

normalize : boolean, optional

If True, the regressors X are normalized

copy_X : boolean, optional, default True

If True, X will be copied; else, it may be overwritten.

verbose : bool or integer

Amount of verbosity

params : kwargs

Keyword arguments passed to the Lasso objects.

Returns :

models : a list of models fitted along the regularization path, one per alpha

Notes

See examples/plot_lasso_coordinate_descent_path.py for an example.

To avoid unnecessary memory duplication, the X argument should be passed directly as a Fortran-contiguous numpy array.
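
A minimal usage sketch, assuming the API documented here (where lasso_path returns a list of fitted Lasso models whose alpha and coef_ attributes can be inspected); the data below is synthetic and for illustration only:

import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.RandomState(0)
X = np.asfortranarray(rng.randn(50, 10))  # Fortran-contiguous to avoid an extra copy
y = X[:, 0] + 0.1 * rng.randn(50)         # target driven mostly by the first feature

# Compute the path; one fitted model per alpha
models = lasso_path(X, y, eps=1e-3, n_alphas=100)
alphas = [m.alpha for m in models]              # regularization strengths along the path
coefs = np.array([m.coef_ for m in models])     # shape (n_alphas, n_features)

# Optionally, precompute the Gram matrix and X.T * y and pass them in
gram = np.dot(X.T, X)
Xy = np.dot(X.T, y)
models_pre = lasso_path(X, y, precompute=gram, Xy=Xy)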