This documentation is for scikit-learn version 0.11-git.

8.8.6. sklearn.feature_selection.RFE

class sklearn.feature_selection.RFE(estimator, n_features_to_select, step=1)

Feature ranking with recursive feature elimination.

Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and weights are assigned to each of them. Then, the features whose absolute weights are smallest are pruned from the current set of features. The procedure is repeated recursively on the pruned set until the desired number of features to select is eventually reached.
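The following is a minimal sketch of that elimination loop, for illustration only (it is not the scikit-learn implementation); it assumes the estimator exposes a coef_ attribute after fitting, and rfe_sketch is a hypothetical helper name:

>>> import numpy as np
>>> def rfe_sketch(estimator, X, y, n_features_to_select, step=1):
...     features = np.arange(X.shape[1])            # indices of the surviving features
...     while len(features) > n_features_to_select:
...         estimator.fit(X[:, features], y)        # refit on the current feature subset
...         weights = np.abs(np.ravel(estimator.coef_))
...         n_remove = min(step, len(features) - n_features_to_select)
...         worst = np.argsort(weights)[:n_remove]  # features with the smallest absolute weights
...         features = np.delete(features, worst)
...     return features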

Parameters :

estimator : object

A supervised learning estimator with a fit method that updates a coef_ attribute holding the fitted parameters. Important features must correspond to high absolute values in the coef_ array.

For instance, this is the case for most supervised learning algorithms such as Support Vector Classifiers and Generalized Linear Models from the svm and linear_model modules.

n_features_to_select : int

The number of features to select.

step : int or float, optional (default=1)

If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the fraction of features (rounded down) to remove at each iteration.
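For example (an illustrative reading of this parameter, not part of the original description): starting from 10 features with n_features_to_select=5, step=3 removes three features on the first pass and only the remaining two on the second, so the target is not overshot, while step=0.2 removes int(0.2 * 10) = 2 features per iteration.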

Notes

References:

[R57] Guyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002.

Examples

The following example shows how to retrieve the 5 informative features in the Friedman #1 dataset.

>>> from sklearn.datasets import make_friedman1
>>> from sklearn.feature_selection import RFE
>>> from sklearn.svm import SVR
>>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
>>> estimator = SVR(kernel="linear")
>>> selector = RFE(estimator, n_features_to_select=5, step=1)
>>> selector = selector.fit(X, y)
>>> selector.support_ 
array([ True,  True,  True,  True,  True,
        False, False, False, False, False], dtype=bool)
>>> selector.ranking_
array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5])
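
Continuing this example, the fitted attributes described below can be inspected directly (a brief usage note; the indices follow from the support_ mask shown above):

>>> selector.n_features_
5
>>> import numpy as np
>>> np.flatnonzero(selector.support_)   # column indices of the selected features
array([0, 1, 2, 3, 4])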

Attributes

n_features_ : int

The number of selected features.

support_ : array of shape [n_features]

The mask of selected features.

ranking_ : array of shape [n_features]

The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1.

Methods

fit(X, y) Fit the RFE model and then the underlying estimator on the selected features.
predict(X) Reduce X to the selected features and then predict using the underlying estimator.
score(X, y) Reduce X to the selected features and then return the score of the underlying estimator.
set_params(**params) Set the parameters of the estimator.
transform(X) Reduce X to the selected features during the elimination.
__init__(estimator, n_features_to_select, step=1)
fit(X, y)

Fit the RFE model and then the underlying estimator on the selected features.

Parameters :

X : array of shape [n_samples, n_features]

The training input samples.

y : array of shape [n_samples]

The target values.

predict(X)

Reduce X to the selected features and then predict using the underlying estimator.

Parameters :

X : array of shape [n_samples, n_features]

The input samples.

Returns :

y : array of shape [n_samples]

The predicted target values.
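
For instance, using the selector fitted in the example above (a usage sketch; the predicted values depend on the fitted SVR and are not shown):

>>> y_pred = selector.predict(X)
>>> y_pred.shape
(50,)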

score(X, y)

Reduce X to the selected features and then return the score of the underlying estimator.

Parameters :

X : array of shape [n_samples, n_features]

The input samples.

y : array of shape [n_samples]

The target values.
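
For instance, with the selector fitted in the example above (a usage sketch; the returned value is whatever the underlying estimator's score method computes, here the R^2 of the SVR on the 5 selected features, so it is not reproduced):

>>> r2 = selector.score(X, y)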

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns :

self :

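For example, with the RFE/SVR pair from the example above (a sketch; estimator__C assumes the wrapped estimator exposes a C parameter, as SVR does):

>>> _ = selector.set_params(step=2)             # parameter of RFE itself
>>> _ = selector.set_params(estimator__C=10.0)  # nested parameter of the wrapped SVR
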
transform(X)

Reduce X to the selected features during the elimination.

Parameters :

X : array of shape [n_samples, n_features]

The input samples.

Returns :

X_r : array of shape [n_samples, n_selected_features]

The input samples with only the features selected during the elimination.
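
For instance, with the selector fitted in the example above (a usage sketch; the shape follows from the 50 samples and the 5 selected features):

>>> X_r = selector.transform(X)
>>> X_r.shape
(50, 5)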