
3.7. Naive Bayes

Naive Bayes algorithms are a set of supervised learning methods based on applying Bayes’ theorem with the “naive” assumption of independence between every pair of features. Given a class variable c and a dependent feature vector f_1 through f_n, Bayes’ theorem gives the following relationship (the evidence p(f_1,\dots,f_n) is constant for a given input and can be dropped):

p(c \mid f_1,\dots,f_n) \propto p(c) p(f_1,\dots,f_n \mid c)

Using the naive independence assumption, this relationship simplifies to:

p(c \mid f_1,\dots,f_n) \propto p(c) \prod_{i=1}^{n} p(f_i \mid c)

\Downarrow

\hat{c} = \arg\max_c p(c) \prod_{i=1}^{n} p(f_i \mid c),

so we can use Maximum A Posteriori (MAP) estimation to estimate p(c) and p(f_i \mid c).
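Written in log space, this decision rule simply adds the log prior to the summed log conditional probabilities and takes the argmax over classes. The following is a minimal NumPy sketch (not scikit-learn code); the probabilities are made up for illustration and p(c) and p(f_i \mid c) are assumed to be already estimated:

    import numpy as np

    log_prior = np.log([0.6, 0.4])                 # log p(c) for classes c = 0, 1
    log_likelihood = np.log([[0.2, 0.5, 0.3],      # log p(f_i | c = 0) for the observed f_1..f_3
                             [0.4, 0.1, 0.6]])     # log p(f_i | c = 1)

    # MAP decision rule: c_hat = argmax_c [ log p(c) + sum_i log p(f_i | c) ]
    c_hat = int(np.argmax(log_prior + log_likelihood.sum(axis=1)))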

The different naive Bayes classifiers differ by the assumption on the distribution of p(f_i \mid c).

In spite of their apparently over-simplified assumptions, naive Bayes classifiers have worked quite well in many real-world situations, famously document classification and spam filtering. They require only a small amount of training data to estimate the necessary parameters. (For theoretical reasons why naive Bayes works well, and on which types of data it does, see the references below.)

Naive Bayes learners and classifiers can be extremely fast compared to more sophisticated methods. The decoupling of the class conditional feature distributions means that each distribution can be independently estimated as a one dimensional distribution. This in turn helps to alleviate problems stemming from the curse of dimensionality.

References:

3.7.1. Gaussian Naive Bayes

GaussianNB implements the Gaussian Naive Bayes algorithm for classification. The likelihood of the features is assumed to be Gaussian:

p(f_i \mid c) = \frac{1}{\sqrt{2\pi\sigma^2_c}} \exp\left(-\frac{(f_i - \mu_c)^2}{2\sigma^2_c}\right)

The parameters of the distribution, \sigma_c and \mu_c, are estimated using maximum likelihood.
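As a minimal usage sketch (assuming the current scikit-learn API; the iris data is used only for illustration), fitting the model estimates \mu_c and \sigma_c for each class and feature:

    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    clf = GaussianNB()
    clf.fit(X, y)                  # estimates mu_c and sigma_c per class/feature pair
    print(clf.predict(X[:5]))      # predicted class labels for the first five samples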

Examples:

  • example_naive_bayes.py

3.7.2. Multinomial Naive Bayes

MultinomialNB implements the Multinomial Naive Bayes algorithm for classification. Multinomial Naive Bayes models the distribution of words in a document as a multinomial. The distribution is parametrized by the vector \overline{\theta_c} = (\theta_{c1},\ldots,\theta_{cn}), where c is the class of the document, n is the size of the vocabulary and \theta_{ci} is the probability of word i appearing in a document of class c. The likelihood of a document d is then

p(d \mid \overline{\theta_c}) = \frac{(\sum_i f_i)!}{\prod_i f_i!} \prod_i (\theta_{ci})^{f_i}

where f_{i} is the frequency count of word i in d. Since the multinomial coefficient does not depend on the class, combining this likelihood with the class prior and taking logarithms gives the MAP class estimate

\hat{c} = \arg\max_c \left[ \log p(c) + \sum_i f_i \log \theta_{ci} \right]

The vector of parameters \overline{\theta_c} is estimated by a smoothed version of maximum likelihood,

\hat{\theta}_{ci} = \frac{ N_{ci} + \alpha_i }{N_c + \alpha }

where N_{ci} is the number of times word i appears in the training documents of class c and N_{c} = \sum_i N_{ci} is the total count of words in the documents of class c. The smoothing priors \alpha_i, with \alpha = \sum_i \alpha_i, account for words not seen in the learning samples and prevent zero probabilities in further computations.
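A minimal text-classification sketch, assuming the current scikit-learn API (CountVectorizer supplies the word counts f_i, and the alpha parameter sets the smoothing prior; the tiny corpus and labels are made up for illustration):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    docs = ["the match was a great game",
            "the election results were announced",
            "the team won the final game",
            "parliament passed the new law"]
    labels = [1, 0, 1, 0]                    # 1 = sports, 0 = politics (hypothetical)

    vectorizer = CountVectorizer()           # builds the vocabulary and the count features f_i
    X = vectorizer.fit_transform(docs)

    clf = MultinomialNB(alpha=1.0)           # alpha is the smoothing prior
    clf.fit(X, labels)
    print(clf.predict(vectorizer.transform(["a great final match"])))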

3.7.3. Bernoulli Naive Bayes

BernoulliNB implements the naive Bayes training and classification algorithms for data that is distributed according to multivariate Bernoulli distributions; that is, samples are represented as binary-valued (boolean) feature vectors. If handed any other kind of data, a BernoulliNB instance binarizes its input (depending on the binarize parameter).

In the case of text classification, word occurrence vectors (rather than word count vectors) may be used to train and use this classifier. BernoulliNB might perform better on some datasets, especially those with shorter documents, because it explicitly penalizes the non-occurrence of words/features where MultinomialNB would only record a zero count. For text classification, however, MultinomialNB will generally perform better; it is advisable to evaluate both models if time permits.
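A minimal sketch, again assuming the current scikit-learn API; the binary occurrence vectors and labels below are made up for illustration (with raw count data, the default binarize=0.0 would threshold the features automatically):

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    # 1 if the word occurs in the document, 0 otherwise (hypothetical occurrence vectors)
    X = np.array([[1, 0, 1, 1],
                  [0, 1, 0, 1],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1]])
    y = np.array([1, 0, 1, 0])

    clf = BernoulliNB(binarize=None)         # input is already binary, so no thresholding is needed
    clf.fit(X, y)
    print(clf.predict(X[:2]))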
