Contents

6.3.1. scikits.learn.linear_model.BayesianRidge

class scikits.learn.linear_model.BayesianRidge(n_iter=300, eps=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, fit_intercept=True, verbose=False)

Bayesian ridge regression

Fit a Bayesian ridge model and optimize the regularization parameters lambda (precision of the weights) and alpha (precision of the noise).
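
For fixed precisions alpha and lambda, the posterior mean of the weights has a closed form; the NumPy sketch below (the helper name is hypothetical, not part of the library) shows that single step. The full estimator additionally re-estimates alpha and lambda at each iteration, which this sketch omits.

```python
import numpy as np

def bayesian_ridge_mean(X, y, alpha, lam):
    """Posterior mean of the weights for fixed precisions.

    alpha is the noise precision and lam the weight precision;
    the mean solves (lam * I + alpha * X^T X) w = alpha * X^T y.
    """
    n_features = X.shape[1]
    A = lam * np.eye(n_features) + alpha * X.T @ X
    return np.linalg.solve(A, alpha * X.T @ y)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
# With a high noise precision and a weak weight prior, the posterior
# mean approaches the least-squares solution w = 1.
w = bayesian_ridge_mean(X, y, alpha=1e6, lam=1e-6)
```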

Parameters :

X : array, shape = (n_samples, n_features)

Training vectors.

y : array, shape = (n_samples)

Target values for training vectors.

n_iter : int, optional

Maximum number of iterations. Default is 300.

eps : float, optional

Stop the algorithm once w has converged to within this tolerance. Default is 1.e-3.

alpha_1 : float, optional

Hyper-parameter : shape parameter for the Gamma distribution prior over the alpha parameter. Default is 1.e-6.

alpha_2 : float, optional

Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter. Default is 1.e-6.

lambda_1 : float, optional

Hyper-parameter : shape parameter for the Gamma distribution prior over the lambda parameter. Default is 1.e-6.

lambda_2 : float, optional

Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter. Default is 1.e-6.

compute_score : boolean, optional

If True, compute the objective function at each iteration of the optimization. Default is False.

fit_intercept : boolean, optional

Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. the data is expected to be already centered). Default is True.
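
The centering referred to above can be sketched in plain NumPy: fit the weights on mean-centered data, then recover the offset from the means. This is an illustrative sketch of the general idea (using ordinary least squares as the solver), not the library's internal code.

```python
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([10.0, 11.0, 12.0])

# Center features and targets, fit on the centered problem.
X_mean, y_mean = X.mean(axis=0), y.mean()
Xc, yc = X - X_mean, y - y_mean

# Any linear solver works on the centered data; least squares here.
coef, *_ = np.linalg.lstsq(Xc, yc, rcond=None)

# The intercept is recovered from the means.
intercept = y_mean - X_mean @ coef
```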

Notes

See examples/linear_model/plot_bayesian_ridge.py for an example.

Examples

>>> from scikits.learn import linear_model
>>> clf = linear_model.BayesianRidge()
>>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
BayesianRidge(n_iter=300, verbose=False, lambda_1=1e-06, lambda_2=1e-06,
       fit_intercept=True, eps=0.001, alpha_2=1e-06, alpha_1=1e-06,
       compute_score=False)
>>> clf.predict([[1, 1]])
array([ 1.])

Attributes

coef_ : array, shape = (n_features)

Coefficients of the regression model (mean of distribution).

alpha_ : float

Estimated precision of the noise.

lambda_ : array, shape = (n_features)

Estimated precisions of the weights.

scores_ : float

If computed, value of the objective function (to be maximized).

Methods

fit(X, y) : Fit the model; returns self.
predict(X) : Predict using the model; returns an array.
__init__(n_iter=300, eps=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, fit_intercept=True, verbose=False)
fit(X, y, **params)

Fit the model

Parameters :

X : numpy array of shape [n_samples, n_features]

Training data.

y : numpy array of shape [n_samples]

Target values.

Returns :

self : returns an instance of self.

predict(X)

Predict using the linear model

Parameters :

X : numpy array of shape [n_samples, n_features]

Samples.

Returns :

C : array, shape = [n_samples]

Returns predicted values.
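
Prediction for a fitted linear model is the affine map defined by the learned coefficients and intercept. The sketch below uses stand-in values for coef_ and intercept_ (chosen to match the doctest above, where [[0, 0], [1, 1], [2, 2]] maps to [0, 1, 2]), not values produced by an actual fit.

```python
import numpy as np

# Stand-in fitted parameters (hypothetical, not from a real fit):
# each of the two identical features contributes half the target.
coef_ = np.array([0.5, 0.5])
intercept_ = 0.0

def predict(X):
    """Predict targets as X @ coef_ + intercept_."""
    return np.asarray(X) @ coef_ + intercept_

y_pred = predict([[1, 1], [2, 2]])
```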

score(X, y)

Returns the coefficient of determination of the prediction

Parameters :

X : array-like, shape = [n_samples, n_features]

Training set.

y : array-like, shape = [n_samples]

True values for X.

Returns :

z : float

Coefficient of determination of the prediction.
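
The coefficient of determination is the standard R^2 statistic, 1 - SS_res / SS_tot. The helper name below is hypothetical; this is a sketch of the formula, not the library's implementation.

```python
import numpy as np

def r2_score_sketch(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2) # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect prediction scores 1.0; a constant prediction at the mean of y scores 0.0.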