
9.2.8. sklearn.linear_model.Lars

class sklearn.linear_model.Lars(fit_intercept=True, verbose=False, normalize=True, precompute='auto', n_nonzero_coefs=500, eps=2.2204460492503131e-16, overwrite_X=False)

Least Angle Regression model a.k.a. LAR

Parameters :

n_nonzero_coefs : int, optional

Target number of non-zero coefficients. Use np.inf for no limit.

fit_intercept : boolean

Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (e.g. data is expected to be already centered).

verbose : boolean or integer, optional

Sets the verbosity amount.

normalize : boolean, optional

If True, the regressors X are normalized.

precompute : True | False | ‘auto’ | array-like

Whether to use a precomputed Gram matrix to speed up calculations. If set to ‘auto’, the choice is made automatically. The Gram matrix can also be passed as an argument.

overwrite_X : boolean, optional

If True, X will not be copied. Default is False.

eps : float, optional

The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the ‘tol’ parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
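A minimal sketch of how the main parameter interacts with fitting (illustrative data, assuming a scikit-learn installation): n_nonzero_coefs stops the LARS path once the requested number of features is active, yielding a sparse coefficient vector.

```python
import numpy as np
from sklearn import linear_model

# Illustrative data: the target depends only on the first feature.
X = np.array([[0.0, 1.0], [1.0, 0.1], [2.0, -0.9]])
y = np.array([0.0, 1.0, 2.0])

# Stop the LARS path after a single active feature.
clf = linear_model.Lars(n_nonzero_coefs=1)
clf.fit(X, y)

# At most one coefficient is non-zero.
print(np.count_nonzero(clf.coef_))
```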

See also

lars_path, LassoLARS, LarsCV, LassoLarsCV, decomposition.sparse_encode, decomposition.sparse_encode_parallel

References

http://en.wikipedia.org/wiki/Least_angle_regression

Examples

>>> from sklearn import linear_model
>>> clf = linear_model.Lars(n_nonzero_coefs=1)
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111]) 
Lars(eps=..., fit_intercept=True, n_nonzero_coefs=1,
   normalize=True, overwrite_X=False, precompute='auto', verbose=False)
>>> print(clf.coef_)
[ 0. -1.11...]

Attributes

coef_ array, shape = [n_features] parameter vector (w in the formulation formula)
intercept_ float independent term in the decision function.

Methods

fit(X, y[, overwrite_X]) Fit the model using X, y as training data.
predict(X) Predict using the linear model
score(X, y) Returns the coefficient of determination of the prediction
set_params(**params) Set the parameters of the estimator.
__init__(fit_intercept=True, verbose=False, normalize=True, precompute='auto', n_nonzero_coefs=500, eps=2.2204460492503131e-16, overwrite_X=False)
fit(X, y, overwrite_X=False)

Fit the model using X, y as training data.

Parameters :

X : array-like, shape = [n_samples, n_features]

Training data.

y : array-like, shape = [n_samples]

Target values.

Returns :

self : object

Returns an instance of self.

predict(X)

Predict using the linear model

Parameters :

X : numpy array of shape [n_samples, n_features]

Returns :

C : array, shape = [n_samples]

Returns predicted values.
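As a sketch of predict on a fitted model (illustrative one-feature data): predictions are the linear combination X @ coef_ plus intercept_.

```python
import numpy as np
from sklearn import linear_model

# Illustrative data: y equals the single feature exactly.
X = np.array([[-1.0], [0.0], [1.0]])
y = np.array([-1.0, 0.0, 1.0])

clf = linear_model.Lars(n_nonzero_coefs=1).fit(X, y)

# predict computes X @ coef_ + intercept_ for new samples.
pred = clf.predict(np.array([[2.0]]))
```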

score(X, y)

Returns the coefficient of determination R^2 of the prediction

Parameters :

X : array-like, shape = [n_samples, n_features]

Training set.

y : array-like, shape = [n_samples]

True values for X.

Returns :

z : float

R^2 of self.predict(X) wrt. y.
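The score can be checked against the definition of the coefficient of determination, R^2 = 1 - sum((y - y_pred)^2) / sum((y - mean(y))^2), as in this sketch (illustrative data):

```python
import numpy as np
from sklearn import linear_model

# Illustrative, slightly noisy linear data.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.1, 1.9, 3.0])

clf = linear_model.Lars(n_nonzero_coefs=1).fit(X, y)

# Manual coefficient of determination:
# 1 - residual sum of squares / total sum of squares.
y_pred = clf.predict(X)
r2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```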

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns : self
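A sketch of set_params on this estimator (illustrative parameter values): because the method returns self, it can be chained with other calls.

```python
from sklearn import linear_model

clf = linear_model.Lars(n_nonzero_coefs=1)

# set_params returns the estimator itself, so the call can be chained.
clf.set_params(n_nonzero_coefs=2, fit_intercept=False)
```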