This documentation is for scikit-learn version 0.11-git.

8.13.1. sklearn.semi_supervised.LabelPropagation

class sklearn.semi_supervised.LabelPropagation(kernel='rbf', gamma=20, n_neighbors=7, alpha=1, max_iters=30, tol=0.001)

Label Propagation classifier

Parameters :

kernel : {‘knn’, ‘rbf’}

String identifier for the kernel function to use. Only ‘rbf’ and ‘knn’ kernels are currently supported.

gamma : float

Parameter for the rbf kernel.

n_neighbors : integer > 0

Parameter for the knn kernel.

alpha : float

clamping factor

max_iters : integer

Maximum number of iterations allowed.

tol : float

Convergence tolerance: threshold to consider the system at steady state
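
For instance, a minimal sketch (the estimator name here is illustrative, not part of the original docs) of constructing the classifier with the knn kernel instead of the default rbf kernel:

>>> from sklearn.semi_supervised import LabelPropagation
>>> knn_model = LabelPropagation(kernel='knn', n_neighbors=10, max_iters=100)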

See also

LabelSpreading
Alternative label propagation strategy, more robust to noise.

References

Xiaojin Zhu and Zoubin Ghahramani. Learning from Labeled and Unlabeled Data with Label Propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002. http://pages.cs.wisc.edu/~jerryzhu/pub/CMU-CALD-02-107.pdf

Examples

>>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelPropagation
>>> label_prop_model = LabelPropagation()
>>> iris = datasets.load_iris()
>>> random_unlabeled_points = np.where(np.random.random_integers(0, 1,
...    size=len(iris.target)))
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
LabelPropagation(...)
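
The fitted model can then be queried for predictions on the full data set (a hedged follow-up to the example above; the predicted values depend on the random unlabeled mask):

>>> predicted_labels = label_prop_model.predict(iris.data)
>>> predicted_labels.shape
(150,)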

Methods

fit(X, y) Fit a semi-supervised label propagation model to X.
get_params([deep]) Get parameters for the estimator
predict(X) Performs inductive inference across the model.
predict_proba(X) Predict probability for each possible outcome.
score(X, y) Returns the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of the estimator.
__init__(kernel='rbf', gamma=20, n_neighbors=7, alpha=1, max_iters=30, tol=0.001)
fit(X, y)

Fit a semi-supervised label propagation model to X.

All input data is provided as a matrix X (labeled and unlabeled) together with a corresponding label vector y, in which unlabeled samples carry a dedicated marker value (-1).

Parameters :

X : array-like, shape = [n_samples, n_features]

A matrix of shape [n_samples, n_samples] will be created from this.

y : array_like, shape = [n_samples]

Target values, with unlabeled points marked as -1. All unlabeled samples will be transductively assigned labels.

Returns :

self : returns an instance of self.
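
A minimal sketch (toy arrays; the names are illustrative, not taken from the original docs) of the expected label encoding when calling fit:

>>> import numpy as np
>>> from sklearn.semi_supervised import LabelPropagation
>>> X = np.array([[0.0], [0.1], [1.0], [1.1]])
>>> y = np.array([0, -1, 1, -1])   # -1 marks the unlabeled samples
>>> LabelPropagation().fit(X, y)
LabelPropagation(...)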

get_params(deep=True)

Get parameters for the estimator

Parameters :

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.
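
For example (an illustrative sketch; the keys simply mirror the constructor signature above):

>>> from sklearn.semi_supervised import LabelPropagation
>>> model = LabelPropagation(gamma=30)
>>> model.get_params()['gamma']
30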

predict(X)

Performs inductive inference across the model.

Parameters :

X : array_like, shape = [n_samples, n_features]

Returns :

y : array_like, shape = [n_samples]

Predictions for input data
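
A short sketch of inductive prediction on previously unseen points (toy data as above; the exact output is illustrative and assumes the two clusters are well separated):

>>> import numpy as np
>>> from sklearn.semi_supervised import LabelPropagation
>>> X = np.array([[0.0], [0.1], [1.0], [1.1]])
>>> y = np.array([0, -1, 1, -1])
>>> model = LabelPropagation().fit(X, y)
>>> model.predict(np.array([[0.05], [1.05]]))
array([0, 1])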

predict_proba(X)

Predict probability for each possible outcome.

Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution).

Parameters :

X : array_like, shape = [n_samples, n_features]

Returns :

probabilities : array, shape = [n_samples, n_classes]

Normalized probability distributions across class labels
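
An illustrative sketch showing that each returned row is a distribution over the classes seen during training (toy data as above):

>>> import numpy as np
>>> from sklearn.semi_supervised import LabelPropagation
>>> X = np.array([[0.0], [0.1], [1.0], [1.1]])
>>> y = np.array([0, -1, 1, -1])
>>> proba = LabelPropagation().fit(X, y).predict_proba(X)
>>> proba.shape
(4, 2)
>>> np.allclose(proba.sum(axis=1), 1.0)
True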

score(X, y)

Returns the mean accuracy on the given test data and labels.

Parameters :

X : array-like, shape = [n_samples, n_features]

Test samples.

y : array-like, shape = [n_samples]

Labels for X.

Returns :

z : float

Mean accuracy of self.predict(X) with respect to y.
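
For example (a sketch with toy data; the evaluation points and score are illustrative), score compares self.predict(X) against the given labels:

>>> import numpy as np
>>> from sklearn.semi_supervised import LabelPropagation
>>> X = np.array([[0.0], [0.1], [1.0], [1.1]])
>>> y = np.array([0, -1, 1, -1])
>>> model = LabelPropagation().fit(X, y)
>>> model.score(np.array([[0.0], [1.0]]), np.array([0, 1]))
1.0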

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns :

self :
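
A small sketch of updating parameters in place (the parameter names match the constructor signature above):

>>> from sklearn.semi_supervised import LabelPropagation
>>> model = LabelPropagation()
>>> model.set_params(gamma=10, max_iters=50)
LabelPropagation(...)
>>> model.gamma
10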