Recognizing hand-written digits

An example showing how scikit-learn can be used to recognize images of hand-written digits.

This example is commented in the tutorial section of the user manual.

[Figure plot_digits_classification_1.png: the first four training digits with their true labels (top row) and four test digits with their predicted labels (bottom row)]

Script output:

Classification report for classifier SVC(C=100, cache_size=200, class_weight=None, coef0=0.0, degree=3,
  gamma=0.001, kernel=rbf, probability=False, scale_C=True, shrinking=True,
  tol=0.001):
             precision    recall  f1-score   support

          0       1.00      0.98      0.99        88
          1       0.93      0.92      0.93        91
          2       0.98      0.93      0.95        86
          3       0.95      0.81      0.88        91
          4       0.99      0.92      0.96        92
          5       0.88      0.90      0.89        91
          6       0.99      0.99      0.99        91
          7       0.95      1.00      0.97        89
          8       0.91      0.84      0.88        88
          9       0.75      0.96      0.84        92

avg / total       0.93      0.93      0.93       899


Confusion matrix:
[[86  0  0  0  1  1  0  0  0  0]
 [ 0 84  0  0  0  1  0  0  0  6]
 [ 0  0 80  3  0  0  0  0  0  3]
 [ 0  0  0 74  0  4  0  4  5  4]
 [ 0  0  0  0 85  0  0  1  2  4]
 [ 0  0  0  0  0 82  1  0  0  8]
 [ 0  1  0  0  0  0 90  0  0  0]
 [ 0  0  0  0  0  0  0 89  0  0]
 [ 0  5  2  0  0  2  0  0 74  5]
 [ 0  0  0  1  0  3  0  0  0 88]]
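
In the matrix, row i counts the test samples whose true label is digit i and column j counts how often they were predicted as digit j, so the diagonal holds the correct classifications. As a quick illustration (a minimal sketch, assuming NumPy is installed), appending the following lines to the end of the script below recovers the overall accuracy from the confusion matrix, reusing the `expected` and `predicted` arrays defined there:

import numpy as np

cm = metrics.confusion_matrix(expected, predicted)
print "Accuracy: %0.2f" % (np.trace(cm) / float(cm.sum()))

This prints roughly 0.93 (832 correct predictions out of 899), in line with the averaged scores in the report above.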

Python source code: plot_digits_classification.py

print __doc__

# Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>
# License: Simplified BSD

# Standard scientific Python imports
import pylab as pl

# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics

# The digits dataset
digits = datasets.load_digits()

# The data that we are interested in is made of 8x8 images of digits.
# Let's have a look at the first 4 images, stored in the `images`
# attribute of the dataset. If we were working from image files, we
# could load them using pylab.imread. For these images, we know which
# digit they represent: it is given in the 'target' attribute of the dataset.
for index, (image, label) in enumerate(zip(digits.images, digits.target)[:4]):
    pl.subplot(2, 4, index + 1)
    pl.axis('off')
    pl.imshow(image, cmap=pl.cm.gray_r, interpolation='nearest')
    pl.title('Training: %i' % label)
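
# Both attributes are NumPy arrays: `digits.images` has shape
# (n_samples, 8, 8) and `digits.target` has shape (n_samples,), holding
# the digit labels 0 to 9.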

# To apply a classifier to this data, we need to flatten the images, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
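# `data` now has shape (n_samples, 64): one row of 64 pixel values per image.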

# Create a classifier: a support vector classifier
classifier = svm.SVC(C=100, gamma=0.001)
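# With the default RBF kernel, `gamma` is the kernel coefficient and `C`
# the penalty parameter of the error term.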

# We learn the digits on the first half of the dataset
classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2])
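# This is a simple hold-out evaluation: the samples are not shuffled, so
# the first half of the dataset serves as the training set and the second
# half, below, as the test set.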

# Now predict the value of the digit on the second half:
expected = digits.target[n_samples / 2:]
predicted = classifier.predict(data[n_samples / 2:])

print "Classification report for classifier %s:\n%s\n" % (
    classifier, metrics.classification_report(expected, predicted))
print "Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted)

for index, (image, prediction) in enumerate(
    zip(digits.images[n_samples / 2:], predicted)[:4]):
    pl.subplot(2, 4, index + 5)
    pl.axis('off')
    pl.imshow(image, cmap=pl.cm.gray_r, interpolation='nearest')
    pl.title('Prediction: %i' % prediction)

pl.show()