9.2.17. sklearn.linear_model.BayesianRidge¶
- class sklearn.linear_model.BayesianRidge(n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, fit_intercept=True, normalize=False, overwrite_X=False, verbose=False)¶
Bayesian ridge regression
Fit a Bayesian ridge model and optimize the regularization parameters lambda (precision of the weights) and alpha (precision of the noise).
Parameters : X : array, shape = (n_samples, n_features)
Training vectors.
y : array, shape = (n_samples,)
Target values for training vectors.
n_iter : int, optional
Maximum number of iterations. Default is 300.
tol : float, optional
Stop the algorithm if w has converged. Default is 1.e-3.
alpha_1 : float, optional
Hyper-parameter: shape parameter for the Gamma distribution prior over the alpha parameter. Default is 1.e-6.
alpha_2 : float, optional
Hyper-parameter: inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter. Default is 1.e-6.
lambda_1 : float, optional
Hyper-parameter: shape parameter for the Gamma distribution prior over the lambda parameter. Default is 1.e-6.
lambda_2 : float, optional
Hyper-parameter: inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter. Default is 1.e-6.
compute_score : boolean, optional
If True, compute the objective function at each step of the model. Default is False.
fit_intercept : boolean, optional
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered). Default is True.
normalize : boolean, optional
If True, the regressors X are normalized before regression. Default is False.
overwrite_X : boolean, optional
If True, X may be overwritten and will not be copied. Default is False.
verbose : boolean, optional
Verbose mode when fitting the model. Default is False.
Notes
See examples/linear_model/plot_bayesian_ridge.py for an example.
Examples
>>> from sklearn import linear_model
>>> clf = linear_model.BayesianRidge()
>>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
BayesianRidge(alpha_1=1e-06, alpha_2=1e-06, compute_score=False,
       fit_intercept=True, lambda_1=1e-06, lambda_2=1e-06, n_iter=300,
       normalize=False, overwrite_X=False, tol=0.001, verbose=False)
>>> clf.predict([[1, 1]])
array([ 1.])
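Beyond the basic fit above, the estimated precisions listed under Attributes can be inspected after fitting. A minimal sketch (fitted values shown here will depend on the data, and the exact type of some attributes varies between scikit-learn versions):

```python
from sklearn import linear_model

# compute_score=True records the objective function during fitting,
# which populates the scores_ attribute.
clf = linear_model.BayesianRidge(compute_score=True)
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])

print(clf.coef_)    # mean of the posterior distribution over the weights
print(clf.alpha_)   # estimated precision of the noise
print(clf.lambda_)  # estimated precision(s) of the weights
print(clf.scores_)  # objective function values, if computed
```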
Attributes
coef_ : array, shape = (n_features,)
Coefficients of the regression model (mean of distribution).
alpha_ : float
Estimated precision of the noise.
lambda_ : array, shape = (n_features,)
Estimated precisions of the weights.
scores_ : float
If computed, value of the objective function (to be maximized).
Methods
fit(X, y) : self : Fit the model.
predict(X) : array : Predict using the model.
- __init__(n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, fit_intercept=True, normalize=False, overwrite_X=False, verbose=False)¶
- fit(X, y)¶
Fit the model
Parameters : X : numpy array of shape [n_samples,n_features]
Training data
y : numpy array of shape [n_samples]
Target values
Returns : self : returns an instance of self.
- predict(X)¶
Predict using the linear model
Parameters : X : numpy array of shape [n_samples, n_features]
Returns : C : array, shape = [n_samples]
Returns predicted values.
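As a small illustration with toy data, predictions on new inputs come from the fitted linear model and have shape (n_samples,):

```python
import numpy as np
from sklearn import linear_model

# Fit on three collinear points where y = mean(x).
clf = linear_model.BayesianRidge()
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])

# Predict on unseen points; the result is a 1-D array, one value per row.
X_new = np.array([[0.5, 0.5], [1.5, 1.5]])
y_pred = clf.predict(X_new)
print(y_pred)
```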
- score(X, y)¶
Returns the coefficient of determination R² of the prediction.
Parameters : X : array-like, shape = [n_samples, n_features]
Training set.
y : array-like, shape = [n_samples]
Returns : z : float
R² of self.predict(X) wrt. y.
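For example, scoring the model on the same toy data it was fit on gives an R² close to 1.0 (a perfect fit), since R² = 1 − SS_res / SS_tot:

```python
from sklearn import linear_model

clf = linear_model.BayesianRidge()
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])

# R^2 of the predictions on (X, y); 1.0 means a perfect fit,
# lower values indicate a worse fit.
r2 = clf.score([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
print(r2)
```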
- set_params(**params)¶
Set the parameters of the estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Returns : self :
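A short sketch of both forms, using a Pipeline as the nested object (the step name 'ridge' is an arbitrary label chosen for this example):

```python
from sklearn import linear_model
from sklearn.pipeline import Pipeline

# On a simple estimator, parameters are set directly by name.
clf = linear_model.BayesianRidge()
clf.set_params(tol=0.01, compute_score=True)
print(clf.tol)  # 0.01

# On a nested object, <component>__<parameter> reaches into a component.
pipe = Pipeline([('ridge', linear_model.BayesianRidge())])
pipe.set_params(ridge__tol=0.01)
print(pipe.named_steps['ridge'].tol)  # 0.01
```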