This documentation is for scikit-learn version 0.11-git.

8.3.4. sklearn.cross_validation.LeaveOneOut

class sklearn.cross_validation.LeaveOneOut(n, indices=True)

Leave-One-Out cross validation iterator.

Provides train/test indices to split data into train/test sets. Each sample is used once as a test set (singleton) while the remaining samples form the training set.

Due to the high number of test sets (which equals the number of samples), this cross-validation method can be very costly. For large datasets one should favor KFold, StratifiedKFold or ShuffleSplit; a rough cost comparison is sketched below.
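
To make the cost argument concrete, the sketch below (not part of the original reference) simply counts the train/test splits each iterator produces for a hypothetical dataset of 1000 samples; it assumes that the second positional argument of KFold is the number of folds, which is passed positionally to stay version-agnostic.

>>> from sklearn import cross_validation
>>> n_samples = 1000
>>> loo = cross_validation.LeaveOneOut(n_samples)
>>> len(loo)                      # one train/test split, hence one model fit, per sample
1000
>>> # assumption: KFold(n, k) takes the number of folds as its second positional argument
>>> kf = cross_validation.KFold(n_samples, 10)
>>> sum(1 for _ in kf)            # only 10 fits for the same data
10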

Parameters:

n : int

Total number of elements.

indices : boolean, optional (default True)

Return train/test splits as arrays of indices, rather than as a boolean mask array. Integer indices are required when dealing with sparse matrices, since those cannot be indexed by boolean masks.
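
As a hedged illustration of the indices flag (this snippet is not part of the original reference), the loop below requests boolean masks instead of integer indices; the exact printed form of the masks assumes NumPy's default array formatting.

>>> from sklearn import cross_validation
>>> # assumption: with indices=False each iteration yields boolean masks
>>> loo_masks = cross_validation.LeaveOneOut(2, indices=False)
>>> for train_mask, test_mask in loo_masks:
...    print "TRAIN:", train_mask, "TEST:", test_mask
TRAIN: [False  True] TEST: [ True False]
TRAIN: [ True False] TEST: [False  True]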

See also

LeaveOneLabelOut, for splitting the data according to explicit, domain-specific labels

Examples

>>> import numpy as np
>>> from sklearn import cross_validation
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = cross_validation.LeaveOneOut(2)
>>> len(loo)
2
>>> print loo
sklearn.cross_validation.LeaveOneOut(n=2)
>>> for train_index, test_index in loo:
...    print "TRAIN:", train_index, "TEST:", test_index
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
...    print X_train, X_test, y_train, y_test
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]
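
A common follow-up is to pass the iterator to a scoring helper. The sketch below is not part of the original reference; it assumes that cross_validation.cross_val_score is available in this version and accepts a custom cv iterator, and it uses the iris dataset with a linear SVC purely as illustrative choices.

>>> from sklearn import cross_validation, datasets, svm
>>> iris = datasets.load_iris()
>>> clf = svm.SVC(kernel='linear')
>>> loo = cross_validation.LeaveOneOut(len(iris.target))
>>> # assumption: cross_val_score accepts any cross-validation iterator via cv
>>> scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=loo)
>>> scores.shape                  # one score per left-out sample
(150,)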
Methods

__init__(n, indices=True)