9.2.3. sklearn.linear_model.RidgeCV

class sklearn.linear_model.RidgeCV(alphas=array([ 0.1, 1., 10. ]), fit_intercept=True, normalize=False, score_func=None, loss_func=None, cv=None)

Ridge regression with built-in cross-validation.

By default, it performs Generalized Cross-Validation, which is a form of efficient Leave-One-Out cross-validation. Currently, only the n_features > n_samples case is handled efficiently.
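A minimal usage sketch (the import path follows the class name above; the data, the alphas grid, and the random seed are illustrative only, not taken from this page):

>>> import numpy as np
>>> from sklearn.linear_model import RidgeCV
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(50, 5)                                   # 50 samples, 5 features
>>> y = np.dot(X, [1.0, 2.0, 0.0, -1.0, 0.5]) + 0.1 * rng.randn(50)
>>> reg = RidgeCV(alphas=np.array([0.1, 1.0, 10.0]))       # cv=None: efficient Leave-One-Out (GCV)
>>> reg = reg.fit(X, y)
>>> predictions = reg.predict(X)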

Parameters :

alphas : numpy array of shape [n_alpha]

Array of alpha values to try. Small positive values of alpha improve the conditioning of the problem and reduce the variance of the estimates. Alpha corresponds to (2*C)^-1 in other linear models such as LogisticRegression or LinearSVC (see the sketch after this parameter list).

fit_intercept : boolean

Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. the data is expected to be already centered).

normalize : boolean, optional

If True, the regressors X are normalized.

loss_func : callable, optional

Function that takes two arguments and compares them in order to evaluate the performance of prediction (smaller is better). If None is passed, the score of the estimator is maximized.

score_func : callable, optional

Function that takes two arguments and compares them in order to evaluate the performance of prediction (larger is better). If None is passed, the score of the estimator is maximized.

cv : cross-validation generator, optional

If None, Generalized Cross-Validation (efficient Leave-One-Out) will be used.
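To make the alpha/C correspondence stated under alphas concrete: alpha = 1 / (2*C), so a penalty expressed as C in LogisticRegression or LinearSVC translates as below. This is only a sketch of the stated relation; the loss functions of those models still differ from the squared loss used by Ridge.

>>> def alpha_from_C(C):
...     return 1.0 / (2.0 * C)          # alpha corresponds to (2*C)^-1
>>> def C_from_alpha(alpha):
...     return 1.0 / (2.0 * alpha)
>>> alpha_from_C(1.0)
0.5
>>> C_from_alpha(0.1)
5.0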

See also

Ridge

Methods

fit(X, y[, sample_weight]) Fit Ridge regression model
predict(X) Predict using the linear model
score(X, y) Returns the coefficient of determination of the prediction
set_params(**params) Set the parameters of the estimator.
__init__(alphas=array([ 0.1, 1., 10. ]), fit_intercept=True, normalize=False, score_func=None, loss_func=None, cv=None)
fit(X, y, sample_weight=1.0)

Fit Ridge regression model

Parameters :

X : array-like, shape = [n_samples, n_features]

Training data

y : array-like, shape = [n_samples] or [n_samples, n_responses]

Target values

sample_weight : float or array-like of shape [n_samples]

Sample weight

Returns :

self : Returns self.
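A hedged sketch of fit with per-sample weights, assuming array sample weights are accepted as documented above; the tiny dataset is illustrative. Since fit returns self, the call can be chained:

>>> import numpy as np
>>> from sklearn.linear_model import RidgeCV
>>> X = np.array([[0.0], [1.0], [2.0], [3.0]])
>>> y = np.array([0.1, 1.1, 1.9, 3.2])
>>> weights = np.array([1.0, 1.0, 2.0, 2.0])    # later samples count double
>>> reg = RidgeCV(alphas=np.array([0.1, 1.0, 10.0])).fit(X, y, sample_weight=weights)
>>> y_new = reg.predict(np.array([[1.5]]))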

predict(X)

Predict using the linear model

Parameters :

X : numpy array of shape [n_samples, n_features]

Returns :

C : array, shape = [n_samples]

Returns predicted values.

score(X, y)

Returns the coefficient of determination R^2 of the prediction

Parameters :

X : array-like, shape = [n_samples, n_features]

Training set.

y : array-like, shape = [n_samples]

True values for X.

Returns :

z : float

R^2 of self.predict(X) with respect to y.
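The coefficient of determination is the usual R^2 = 1 - SS_res / SS_tot. A sketch of the equivalent manual computation, assuming that standard definition, which should agree with score up to floating point:

>>> import numpy as np
>>> def r2(y_true, y_pred):
...     ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
...     ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)   # total sum of squares
...     return 1.0 - ss_res / ss_tot
>>> y_true = np.array([1.0, 2.0, 3.0, 4.0])
>>> float(r2(y_true, y_true))                              # perfect prediction
1.0
>>> float(r2(y_true, np.full(4, np.mean(y_true))))         # predicting the mean
0.0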

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it is possible to update each component of a nested object, as in the sketch below.

Returns :

self :
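A sketch of both cases. Pipeline is assumed to be importable from sklearn.pipeline; the StandardScaler step is purely illustrative and its name or location may differ across versions. Since set_params returns self, the calls can be chained or reassigned:

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler       # illustrative step; name may differ by version
>>> from sklearn.linear_model import RidgeCV
>>> reg = RidgeCV()
>>> reg = reg.set_params(fit_intercept=False)              # simple estimator: plain parameter name
>>> pipe = Pipeline([("scale", StandardScaler()), ("ridge", RidgeCV())])
>>> pipe = pipe.set_params(ridge__fit_intercept=True)      # nested object: <component>__<parameter>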