
scikits.learn.feature_extraction.text.Vectorizer

Vectorizer(
    analyzer=WordNGramAnalyzer(
        stop_words=set(['all', 'six', 'less', 'being', 'indeed', 'over', 'move', 'anyway', 'four', 'not', 'own', 'through', 'yourselves', 'fify', 'where', 'mill', 'only', 'find', 'before', 'one', 'whose', 'system', 'how', 'somewhere', 'with', 'thick', 'show', 'had', 'enough', 'should', 'to', 'must', 'whom', ..., 'amoungst', 'yours', 'their', 'rather', 'without', 'so', 'five', 'the', 'first', 'whereas', 'once']),
        max_n=1, token_pattern='\b\w\w+\b', charset='utf-8', min_n=1,
        preprocessor=RomanPreprocessor()),
    max_df=1.0, max_features=None, use_tf=True, use_idf=True)

Convert a collection of raw documents to a matrix of token features, weighted by TF-IDF by default.

Equivalent to CountVectorizer followed by TfidfTransformer.
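
As a rough sketch of that equivalence (the corpus is made up, and CountVectorizer and TfidfTransformer are assumed to live in the same module and to expose the usual fit/transform methods; this is not a verified recipe):

    from scikits.learn.feature_extraction.text import (
        Vectorizer, CountVectorizer, TfidfTransformer)

    docs = ["the cat sat on the mat", "the dog sat", "a cat ran away"]

    # One step: tokenize, count and TF-IDF-weight in a single estimator.
    vectors = Vectorizer().fit_transform(docs)

    # Two steps documented as equivalent: count first, then re-weight the counts.
    counts = CountVectorizer().fit_transform(docs)
    transformer = TfidfTransformer()
    transformer.fit(counts)
    weighted = transformer.transform(counts)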

Methods

Vectorizer.fit(raw_documents)

Learn the vocabulary and weighting needed to convert documents to array data
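
A minimal sketch of fitting (the training corpus is illustrative):

    vec = Vectorizer()
    train_documents = ["some raw training text", "more raw training text"]
    vec.fit(train_documents)  # learns the document-to-array mapping
    # The fitted vectorizer can then turn other documents into vectors
    # with transform(); see below.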

Vectorizer.fit_transform(raw_documents)

Learn the representation and return the vectors.

Parameters:

raw_documents : iterable
    An iterable which yields either str, unicode or file objects.

Returns:

vectors : array, [n_samples, n_features]
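
A small sketch of typical fit_transform use (the corpus is illustrative only):

    vec = Vectorizer()
    X = vec.fit_transform(["first document", "second document", "and a third"])
    # X is the document-term matrix: one row per input document and one
    # column per extracted feature, i.e. shape [n_samples, n_features].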

Vectorizer.transform(raw_documents, copy=True)

Return the vectors for raw_documents, using the representation learned during fit.

Parameters:

raw_documents : iterable
    An iterable which yields either str, unicode or file objects.

Returns:

vectors : array, [n_samples, n_features]
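
A hedged sketch of the usual fit-then-transform pattern, where previously unseen documents are vectorized with the representation learned from a training corpus (all documents here are made up; copy is left at its default of True):

    vec = Vectorizer()
    vec.fit(["training text one", "training text two"])

    # Vectorize new documents using the representation learned during fit.
    new_vectors = vec.transform(["an unseen document"])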