2. Document Classification

Now that we have a good understanding of the TF-IDF term-document matrix, we can treat each term as a feature and each document (row) as an instance, or training sample, for a classifier. The classifier can be any traditional supervised learning model that handles tabular data, where one column stores the label of each sample and every other column is a feature variable; in this case, each term/word is a feature. An illustrative table is shown below.

ID  best      it        of        the       times     was       worst     age       wisdom    foolishness  class label
0   0.844727  0.117126  0.117126  0.117126  0.480957  0.117126  0.844727  0.480957  0.844727  0.844727     positive
1   0.000000  0.117126  0.117126  0.117126  0.480957  0.117126  0.844727  0.000000  0.000000  0.000000     negative
2   0.000000  0.117126  0.117126  0.117126  0.000000  0.117126  0.000000  0.480957  0.844727  0.000000     positive
3   0.000000  0.117126  0.117126  0.117126  0.000000  0.117126  0.000000  0.480957  0.000000  0.844727     negative
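
A table like this can be produced with a few lines of scikit-learn and pandas. The sketch below uses four toy documents and hypothetical sentiment labels; since the exact numbers depend on the TfidfVectorizer settings, they will not match the table above exactly:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Four toy documents, one per row of the table
docs = [
    "it was the best of times",
    "it was the worst of times",
    "it was the age of wisdom",
    "it was the age of foolishness",
]
labels = ["positive", "negative", "positive", "negative"]  # hypothetical labels

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)  # sparse term-document matrix

# One column per term, plus the class label column
# (get_feature_names_out requires scikit-learn >= 1.0)
df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names_out())
df["class label"] = labels
print(df)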

The goal of this guide is to explore some of the main scikit-learn tools on a popular classification task: analyzing a collection of text documents (newsgroup posts) and classifying them into one of twenty different topics.

In this notebook we will see how to:

  • load the file contents and the categories

  • extract feature vectors suitable for machine learning

  • train a linear model to perform categorization

  • use a grid search strategy to find a good configuration of both the feature extraction components and the classifier

Original notebook credit: the scikit-learn tutorial "Working with Text Data".

2.1. Loading the 20 newsgroups dataset

The dataset is called “Twenty Newsgroups”. Here is the official description:

The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. To the best of our knowledge, it was originally collected by Ken Lang, probably for his paper “Newsweeder: Learning to filter netnews,” though he does not explicitly mention this collection. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering.

In the following we will use the built-in dataset loader for 20 newsgroups from scikit-learn.

In order to get faster execution times for this first example we will work on a partial dataset with only 4 categories out of the 20 available in the dataset:

categories = ['alt.atheism', 'soc.religion.christian',
               'comp.graphics', 'sci.med']

We can now load the list of files matching those categories as follows (this may take a while; about 65.1 s on a desktop computer with a 16-core AMD CPU and 32 GB of RAM):

from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
      categories=categories, shuffle=True, random_state=42)

The returned dataset is a scikit-learn "bunch": a simple holder object whose fields can be accessed either as Python dict keys or as object attributes for convenience. For instance, target_names holds the list of the requested category names:

twenty_train.target_names
['alt.atheism', 'comp.graphics', 'sci.med', 'soc.religion.christian']
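
Because the bunch behaves like a dict, the same field can also be read with dict-style indexing:

twenty_train['target_names']   # equivalent to twenty_train.target_names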

The files themselves are loaded in memory in the data attribute. For reference the filenames are also available:

len(twenty_train.data)
2257
twenty_train.filenames[0]
'C:\\Users\\wei\\scikit_learn_data\\20news_home\\20news-bydate-train\\comp.graphics\\38440'

Let’s print the first three lines of the first loaded file:

print("\n".join(twenty_train.data[0].split("\n")[:3]))
From: sd345@city.ac.uk (Michael Collier)
Subject: Converting images to HP LaserJet III?
Nntp-Posting-Host: hampton

Below is how to access the class label (i.e., the target column) of the first document.

print(twenty_train.target_names[twenty_train.target[0]])
comp.graphics

Supervised learning algorithms will require a category label for each document in the training set. In this case the category is the name of the newsgroup which also happens to be the name of the folder holding the individual documents.

For speed and space efficiency reasons scikit-learn loads the target attribute as an array of integers that corresponds to the index of the category name in the target_names list. The category integer id of each sample is stored in the target attribute:

twenty_train.target[:10]
array([1, 1, 3, 3, 3, 3, 3, 2, 2, 2], dtype=int64)

It is possible to get back the category names as follows:

for t in twenty_train.target[:10]:
    print(twenty_train.target_names[t])
comp.graphics
comp.graphics
soc.religion.christian
soc.religion.christian
soc.religion.christian
soc.religion.christian
soc.religion.christian
sci.med
sci.med
sci.med

You might have noticed that the samples were shuffled randomly when we called fetch_20newsgroups(..., shuffle=True, random_state=42): this is useful if you wish to select only a subset of samples to quickly train a model and get a first idea of the results before re-training on the complete dataset later.
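
For example, a quick first model could be trained on just a slice of the shuffled data (the subset size of 400 here is arbitrary):

# Take the first 400 shuffled samples for a quick first experiment
small_data = twenty_train.data[:400]
small_target = twenty_train.target[:400]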

2.2. Extracting features from text files

In order to perform machine learning on text documents, we first need to turn the text content into numerical feature vectors.

2.2.1. Bags of words

The most intuitive way to do so is to use a bag-of-words representation:

  1. Assign a fixed integer id to each word occurring in any document of the training set (for instance by building a dictionary from words to integer indices).

  2. For each document #i, count the number of occurrences of each word w and store it in X[i, j] as the value of feature #j where j is the index of word w in the dictionary.
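
These two steps can be sketched in plain Python (toy documents for illustration; the CountVectorizer class introduced below does the same job efficiently):

docs = ["the cat sat", "the dog sat on the mat"]

# Step 1: assign a fixed integer id to each word occurring in the training set
vocab = {}
for doc in docs:
    for word in doc.split():
        if word not in vocab:
            vocab[word] = len(vocab)

# Step 2: X[i][j] holds the count of the word with index j in document i
X = [[0] * len(vocab) for _ in docs]
for i, doc in enumerate(docs):
    for word in doc.split():
        X[i][vocab[word]] += 1

print(vocab)  # {'the': 0, 'cat': 1, 'sat': 2, 'dog': 3, 'on': 4, 'mat': 5}
print(X)      # [[1, 1, 1, 0, 0, 0], [2, 0, 1, 1, 1, 1]]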

The bag-of-words representation implies that n_features is the number of distinct words in the corpus: this number is typically larger than 100,000.

If n_samples == 10000, storing X as a NumPy array of type float32 would require 10000 x 100000 x 4 bytes = 4 GB of RAM, which is barely manageable on today's computers.

Fortunately, most values in X will be zeros, since a given document uses at most a few thousand distinct words. For this reason we say that bags of words are typically high-dimensional sparse datasets. We can save a lot of memory by storing only the non-zero parts of the feature vectors in memory.

scipy.sparse matrices are data structures that do exactly this, and scikit-learn has built-in support for these structures.
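
Here is a minimal illustration of the idea, with two arbitrary non-zero entries:

import numpy as np
from scipy.sparse import csr_matrix

# Store only the non-zero entries, e.g. X[0, 42] = 3 and X[2, 7] = 1
X = csr_matrix(([3, 1], ([0, 2], [42, 7])), shape=(10000, 100000), dtype=np.float32)

print(X.shape)  # (10000, 100000)
print(X.nnz)    # 2 -- only the non-zero values are actually stored
# The equivalent dense float32 array would need 10000 x 100000 x 4 bytes = 4 GB.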

2.2.2. Tokenizing text with scikit-learn

Text preprocessing, tokenizing and filtering of stopwords are all included in CountVectorizer, which builds a dictionary of features and transforms documents to feature vectors:

 from sklearn.feature_extraction.text import CountVectorizer
 
 count_vect = CountVectorizer()
 X_train_counts = count_vect.fit_transform(twenty_train.data)
 X_train_counts.shape
(2257, 35788)

Class CountVectorizer supports counts of N-grams of words or consecutive characters. Once fitted, the vectorizer has built a dictionary of feature indices:

# The feature index assigned to the word 'algorithm' in the vocabulary
count_vect.vocabulary_.get(u'algorithm')
4690

The value returned by vocabulary_.get is the column index assigned to the word in the count matrix, not the word's frequency in the corpus.

Note

The method count_vect.fit_transform performs two actions: it learns the vocabulary and transforms the documents into count vectors. It’s possible to separate these steps by calling count_vect.fit(twenty_train.data) followed by X_train_counts = count_vect.transform(twenty_train.data), but doing so would tokenize and vectorize each text file twice.
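
The equivalent two-step version looks like this (note that the corpus is tokenized twice):

count_vect = CountVectorizer()
count_vect.fit(twenty_train.data)                          # learn the vocabulary
X_train_counts = count_vect.transform(twenty_train.data)   # map documents to count vectors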

2.2.3. From occurrences to frequencies

Occurrence count is a good start but there is an issue: longer documents will have higher average count values than shorter documents, even though they might talk about the same topics.

To avoid these potential discrepancies it suffices to divide the number of occurrences of each word in a document by the total number of words in the document: these new features are called tf for Term Frequencies.
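
As a toy illustration of plain term frequencies (TfidfTransformer, used below, applies a different normalization, L2 by default):

words = "it was the best of times it was the worst of times".split()
tf = {w: words.count(w) / len(words) for w in set(words)}
print(tf['it'])   # 2 occurrences / 12 words = 0.1666...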

Another refinement on top of tf is to downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus.

This downscaling is called tf–idf, for "Term Frequency times Inverse Document Frequency", which we discussed earlier.

As discussed in the previous notebooks, both tf and tf–idf can be computed as follows using the TfidfTransformer class:

from sklearn.feature_extraction.text import TfidfTransformer

tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
X_train_tf.shape
(2257, 35788)

In the example code above, we first use the fit(..) method to fit the estimator to the data, and then the transform(..) method to transform the count matrix to a tf representation (idf is disabled here via use_idf=False). The two steps can be combined to achieve the same end result faster by skipping redundant processing, using the fit_transform(..) method as shown below and as mentioned in the note in the previous section:

tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
(2257, 35788)
X_train_tfidf.toarray()
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]])

2.3. Training a classifier

Now that we have our features, we can train a classifier to try to predict the category of a post. Let’s start with a naïve Bayes classifier, which provides a nice baseline for this task. scikit-learn includes several variants of this classifier; the one most suitable for word counts is the multinomial variant:

from sklearn.naive_bayes import MultinomialNB

clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)

To try to predict the outcome on a new document we need to extract the features using almost the same feature extracting chain as before. The difference is that we call transform instead of fit_transform on the transformers, since they have already been fit to the training set:

docs_new = ['God is love', 'OpenGL on the GPU is fast']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)

predicted = clf.predict(X_new_tfidf)

for doc, category in zip(docs_new, predicted):
  print('%r => %s' % (doc, twenty_train.target_names[category]))
'God is love' => soc.religion.christian
'OpenGL on the GPU is fast' => comp.graphics

2.4. Building a pipeline

In order to make the vectorizer => transformer => classifier sequence easier to work with, scikit-learn provides a Pipeline class (sklearn.pipeline.Pipeline) that behaves like a compound classifier:

from sklearn.pipeline import Pipeline
text_clf = Pipeline([
     ('vect', CountVectorizer()),
     ('tfidf', TfidfTransformer()),
     ('clf', MultinomialNB())])

The names vect, tfidf and clf (classifier) are arbitrary. We will use them to perform grid search for suitable hyperparameters below. We can now train the model with a single command:

text_clf.fit(twenty_train.data, twenty_train.target)
Pipeline(steps=[('vect', CountVectorizer()), ('tfidf', TfidfTransformer()),
                ('clf', MultinomialNB())])

2.5. Evaluation of the performance on the test set

Evaluating the predictive accuracy of the model is simply a comparison of the predicted and the actual labels.

import numpy as np
twenty_test = fetch_20newsgroups(subset='test', 
              categories=categories, shuffle=True, random_state=42)
docs_test = twenty_test.data
predicted = text_clf.predict(docs_test)
np.mean(predicted == twenty_test.target)
0.8348868175765646

We achieved 83.5% accuracy. Let’s see if we can do better with a linear support vector machine (SVM), which is widely regarded as one of the best text classification algorithms (although it’s also a bit slower than naïve Bayes). We can change the learner by simply plugging a different classifier object into our pipeline:

from sklearn.linear_model import SGDClassifier
text_clf = Pipeline([
     ('vect', CountVectorizer()),
     ('tfidf', TfidfTransformer()),
     ('clf', SGDClassifier(loss='hinge', penalty='l2',
                           alpha=1e-3, random_state=42,
                           max_iter=5, tol=None))])

text_clf.fit(twenty_train.data, twenty_train.target)
Pipeline(steps=[('vect', CountVectorizer()), ('tfidf', TfidfTransformer()),
                ('clf',
                 SGDClassifier(alpha=0.001, max_iter=5, random_state=42,
                               tol=None))])
predicted = text_clf.predict(docs_test)
np.mean(predicted == twenty_test.target)
0.9101198402130493

We achieved 91.0% accuracy using the SVM. scikit-learn provides further utilities for more detailed performance analysis of the results:

from sklearn import metrics
print(metrics.classification_report(twenty_test.target, predicted,
      target_names=twenty_test.target_names))
                        precision    recall  f1-score   support

           alt.atheism       0.95      0.80      0.87       319
         comp.graphics       0.87      0.98      0.92       389
               sci.med       0.94      0.89      0.91       396
soc.religion.christian       0.90      0.95      0.93       398

              accuracy                           0.91      1502
             macro avg       0.91      0.91      0.91      1502
          weighted avg       0.91      0.91      0.91      1502
metrics.confusion_matrix(twenty_test.target, predicted)
array([[256,  11,  16,  36],
       [  4, 380,   3,   2],
       [  5,  35, 353,   3],
       [  5,  11,   4, 378]], dtype=int64)

As expected the confusion matrix shows that posts from the newsgroups on atheism and Christianity are more often confused for one another than with computer graphics.

Note

SGD stands for Stochastic Gradient Descent. This is a simple optimization algorithm that is known to scale well when the dataset has many samples.

By setting loss="hinge" and penalty="l2" we are configuring the classifier model to tune its parameters for the linear Support Vector Machine cost function.

Alternatively, we could have used sklearn.svm.LinearSVC (Linear Support Vector Classifier), which provides an alternative optimizer for the same cost function based on the liblinear C++ library.
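
A minimal sketch of that alternative, reusing the same pipeline structure (default LinearSVC parameters; scores will differ somewhat from the SGD model above):

from sklearn.svm import LinearSVC
text_clf_svc = Pipeline([
     ('vect', CountVectorizer()),
     ('tfidf', TfidfTransformer()),
     ('clf', LinearSVC())])
text_clf_svc.fit(twenty_train.data, twenty_train.target)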