Training a linear model to classify IMDb movie reviews as positive or negative


Sentiment analysis[1], an important area in Natural Language Processing, is the process of automatically detecting the affective state of a text. Sentiment analysis is widely applied to voice-of-customer materials such as product reviews on online shopping websites like Amazon, movie reviews, and social media. It can be a basic task of classifying the polarity of a text as positive or negative, or it can go beyond polarity and look at emotional states such as “happy”, “angry”, etc.

Here we will build a classifier that is able to distinguish movie reviews as being either positive or negative. For that, we will use the Large Movie Review Dataset v1.0[2] of IMDb movie reviews. This dataset contains 50,000 movie reviews divided evenly into 25k for training and 25k for testing. The labels are balanced between the two classes (positive and negative). Reviews with a score <= 4 out of 10 are labeled negative and those with a score >= 7 out of 10 are labeled positive; neutral reviews are not included in the labeled data. The dataset also contains unlabeled reviews for unsupervised learning, but we will not use them here. There are no more than 30 reviews for any particular movie, because ratings of the same movie tend to be correlated. All reviews for a given movie are either in the train set or the test set, but not both, to avoid a test accuracy gain from memorizing movie-specific terms.


Data preprocessing

After the dataset has been downloaded and extracted from the archive, we have to transform it into a form more suitable for feeding into a machine learning model. We will start by combining all review data into two pandas DataFrames representing the train and test datasets, and then saving them as csv files: imdb_train.csv and imdb_test.csv.

The DataFrames will have the following form:

review       label
review1      0
review2      1
…            …

where:

  • review1, review2, … = the actual text of a movie review
  • 0 = negative review
  • 1 = positive review

But machine learning algorithms work only with numerical values. We can’t just feed the raw text into a machine learning model and have it learn from that. We have to somehow represent the text by numbers or vectors of numbers. One way of doing this is the Bag-of-words model[3], in which a piece of text (often called a document) is represented by a vector of the counts of vocabulary words in that document. This model doesn’t take into account grammar rules or word ordering; all it considers is the frequency of words. If we use the counts of each word independently, we call this representation a unigram. In general, an n-gram representation counts each combination of n consecutive words from the vocabulary that appears in a given document.

For example, consider these two documents (a deliberately simple pair):

d1 = “the movie was good”
d2 = “the movie was not good”

The vocabulary of all words encountered in these two sentences is:

V = {the, movie, was, good, not}

The unigram representations of d1 and d2, counting each vocabulary word in the order above, are:

d1 → (1, 1, 1, 1, 0)
d2 → (1, 1, 1, 1, 1)

And the bigrams of d1 and d2 are:

d1 → {“the movie”, “movie was”, “was good”}
d2 → {“the movie”, “movie was”, “was not”, “not good”}

Notice how the bigram “not good” captures the negation that the unigram counts miss.

Often, we can achieve slightly better results if, instead of raw word counts, we use something called term frequency times inverse document frequency (or tf-idf)[4]. It may sound complicated, but it is not; the intuition is the following. What is the problem with using just the frequency of terms inside a document? Some terms may have a high frequency inside a document and yet not be very relevant for describing it, because they also have a high frequency across the whole collection of documents. For example, a collection of movie reviews may contain terms specific to movies/cinematography that are present in almost all documents (they have a high document frequency); encountering those terms in a review doesn’t tell us much about whether it is positive or negative. We need a way of relating term frequency (how frequent a term is inside a document) to document frequency (how frequent a term is across the whole collection of documents). That is:

tf-idf(t, d) = tf(t, d) × idf(t)

Now, there are several ways to define both the term frequency and the inverse document frequency, but the most common is to put them on a logarithmic scale:

tf(t, d) = log(1 + count(t, d))

idf(t) = log(N / (1 + df(t)))

where:

  • count(t, d) = the number of times term t appears in document d
  • df(t) = the number of documents in which term t appears
  • N = the total number of documents in the collection

We add 1 inside the first logarithm to avoid getting -∞ when the count is 0. In the second formula we add 1 to the document frequency, as if one extra fake document contained every term, to avoid division by zero.
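As a quick sanity check of these formulas (using natural logarithms and made-up numbers): for a term appearing 3 times in a document, in a collection of N = 1000 documents of which 10 contain the term, tf = log(1 + 3) ≈ 1.39 and idf = log(1000 / 11) ≈ 4.51, so tf-idf ≈ 6.25. A term appearing in 900 of the 1000 documents would instead get idf = log(1000 / 901) ≈ 0.10, so even a high in-document count yields a small tf-idf value.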

Before we transform our data into vectors of counts or tf-idf values, we should remove English stopwords[6][7]. Stopwords are words that are very common in a language and are usually removed in the preprocessing stage of text-related tasks like sentiment analysis or search.

Note that we should construct our vocabulary based only on the training set. When we process the test data to make predictions, we should use only the vocabulary constructed in the training phase; the remaining words will be ignored.

Now, let’s create the DataFrames and save them as csv files:
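Here is a minimal sketch of this step, assuming the archive was extracted to a local aclImdb/ folder with the standard train/pos, train/neg, test/pos, test/neg layout (variable and file names are illustrative):

```python
import os
import pandas as pd

def load_reviews(folder):
    """Read all reviews under folder/pos and folder/neg into one DataFrame."""
    rows = []
    for subdir, label in (('pos', 1), ('neg', 0)):
        path = os.path.join(folder, subdir)
        for fname in os.listdir(path):
            with open(os.path.join(path, fname), encoding='utf-8') as f:
                rows.append({'review': f.read(), 'label': label})
    return pd.DataFrame(rows)

imdb_train = load_reviews('aclImdb/train')
imdb_test = load_reviews('aclImdb/test')

imdb_train.to_csv('imdb_train.csv', index=False)
imdb_test.to_csv('imdb_test.csv', index=False)
```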

Text vectorization

Fortunately, for the text vectorization part all the hard work is already done in the Scikit-Learn classes CountVectorizer[8] and TfidfTransformer[5]. We will use these classes to transform our csv files into unigram and bigram matrices (using both counts and tf-idf values). (It turns out that using only the n-grams for a single large n does not give good accuracy; we usually use all n-grams up to some n. So, when we say bigrams here we actually mean uni+bigrams, and when we say unigrams it is just unigrams.) Each row in those matrices will represent a document (review) in our dataset, and each column will represent values associated with each word in the vocabulary (in the case of unigrams) or with each combination of at most two words from the vocabulary (in the case of bigrams).

CountVectorizer has an ngram_range parameter which expects a tuple of size 2 that controls which n-grams to include. After we construct a CountVectorizer object, we should call the .fit() method with the actual text as a parameter so that it can learn the required statistics of our collection of documents. Then, calling the .transform() method with our collection of documents returns the counts matrix for the specified n-gram range. To obtain the tf-idf values, the TfidfTransformer class should be used. It has .fit() and .transform() methods that are used similarly to those of CountVectorizer, but they take as input the counts matrix obtained in the previous step, and .transform() returns a matrix of tf-idf values. We should call .fit() only on the training data and then store the fitted objects. When we want to evaluate the test score, or whenever we want to make a prediction, we should use these objects to transform the data before feeding it into our classifier.
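A sketch of the uni+bigram case, assuming imdb_train and imdb_test are the DataFrames built above (the unigram matrices are built the same way with ngram_range=(1, 1)):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

# ngram_range=(1, 1) -> unigrams only; (1, 2) -> unigrams + bigrams
bigram_vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words='english')
bigram_vectorizer.fit(imdb_train['review'])
X_train_bigram_counts = bigram_vectorizer.transform(imdb_train['review'])

bigram_tfidf = TfidfTransformer()
bigram_tfidf.fit(X_train_bigram_counts)
X_train_bigram_tfidf = bigram_tfidf.transform(X_train_bigram_counts)

y_train = imdb_train['label'].values

# for the test data (or any new review): transform only, never re-fit
X_test_bigram_counts = bigram_vectorizer.transform(imdb_test['review'])
X_test_bigram_tfidf = bigram_tfidf.transform(X_test_bigram_counts)
```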

Note that the matrices generated for our train and test data will be huge; if we stored them as normal numpy arrays they would not even fit into RAM. But most of the entries in these matrices are zero, so these Scikit-Learn classes use Scipy sparse matrices[9] (csr_matrix[10], to be exact), which store only the non-zero entries and save a LOT of space.
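A toy example of how the CSR format stores only the non-zero entries:

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[0, 0, 3, 0],
                  [1, 0, 0, 0]])
sparse = csr_matrix(dense)

print(sparse.data)     # [3 1]   -> only the non-zero values
print(sparse.indices)  # [2 0]   -> their column indices
print(sparse.indptr)   # [0 1 2] -> where each row starts in data/indices
```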

We will use a linear classifier trained with stochastic gradient descent, sklearn.linear_model.SGDClassifier[11], as our model. First we will generate and save our data in 4 forms: unigram and bigram matrices, with both counts and tf-idf values for each. Then we will train and evaluate our model on each of these 4 data representations, using SGDClassifier with its default parameters. After that, we will choose the data representation that led to the best score and tune the hyper-parameters of our model on that data form using cross-validation, in order to obtain the best results.


Choosing the data format

Now, for each data form we split it into train & validation sets, train an SGDClassifier, and output the scores.
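A sketch of this step, assuming the four matrices from the vectorization step (the unigram ones built analogously with ngram_range=(1, 1)) and the y_train labels; the helper name is illustrative:

```python
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

def train_and_show_scores(X, y, title):
    # hold out 25% of the training data as a validation set
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, train_size=0.75, stratify=y, random_state=0)
    clf = SGDClassifier()  # default parameters
    clf.fit(X_tr, y_tr)
    print(title)
    print(f'Train score: {round(clf.score(X_tr, y_tr), 2)} ; '
          f'Validation score: {round(clf.score(X_val, y_val), 2)}')

train_and_show_scores(X_train_unigram_counts, y_train, 'Unigram Counts')
train_and_show_scores(X_train_unigram_tfidf, y_train, 'Unigram Tf-Idf')
train_and_show_scores(X_train_bigram_counts, y_train, 'Bigram Counts')
train_and_show_scores(X_train_bigram_tfidf, y_train, 'Bigram Tf-Idf')
```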

This is what we get:

Unigram Counts
Train score: 0.99 ; Validation score: 0.87

Unigram Tf-Idf
Train score: 0.95 ; Validation score: 0.89

Bigram Counts
Train score: 1.0 ; Validation score: 0.89

Bigram Tf-Idf
Train score: 0.98 ; Validation score: 0.9

The best data form seems to be bigrams with tf-idf values, as it achieves the highest validation accuracy (0.9); we will use it next for hyper-parameter tuning.


Using Cross-Validation for hyperparameter tuning

For this part we will use RandomizedSearchCV[12], which samples parameters randomly, either from a list that we give or from a distribution that we specify via scipy.stats (e.g. uniform). It then estimates the test error by cross-validation, and after all iterations we can find the best estimator, the best parameters and the best score in the attributes best_estimator_, best_params_ and best_score_.

Because the search space of parameters we want to test is very big and may need a huge number of iterations to find the best combination, we will split the set of parameters in two and do the hyper-parameter tuning in two phases. First we will find the optimal combination of loss, learning_rate and eta0 (the initial learning rate), as sketched below; then we will do the same for penalty and alpha.
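A sketch of the first phase; the exact candidate lists, the eta0 distribution, and the n_iter/cv settings here are illustrative assumptions:

```python
from scipy.stats import uniform
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV

# phase 1: search over loss, learning_rate and eta0
distributions = dict(
    loss=['hinge', 'squared_hinge', 'modified_huber'],
    learning_rate=['optimal', 'invscaling', 'adaptive'],
    eta0=uniform(loc=1e-7, scale=1e-2),
)

search = RandomizedSearchCV(
    estimator=SGDClassifier(),
    param_distributions=distributions,
    cv=5, n_iter=50, n_jobs=-1, random_state=0)
search.fit(X_train_bigram_tfidf, y_train)

print('Best params:', search.best_params_)
print('Best score:', search.best_score_)
```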

The output that we get is:

Best params:
{
    'eta0': 0.008970361272584921,
    'learning_rate': 'optimal',
    'loss': 'squared_hinge'
}

Best score: 0.90564

Because learning_rate = ‘optimal’ came out best, we will ignore eta0 (the initial learning rate), as it is not used when learning_rate=’optimal’; we got a value for eta0 only because of the randomness involved in the search.
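The second phase works the same way, keeping the phase-1 winners fixed (again, the candidate list and the alpha distribution are illustrative):

```python
# phase 2: tune penalty and alpha with the phase-1 winners fixed
distributions = dict(
    penalty=['l1', 'l2', 'elasticnet'],
    alpha=uniform(loc=1e-6, scale=1e-4),
)

search = RandomizedSearchCV(
    estimator=SGDClassifier(loss='squared_hinge', learning_rate='optimal'),
    param_distributions=distributions,
    cv=5, n_iter=50, n_jobs=-1, random_state=0)
search.fit(X_train_bigram_tfidf, y_train)

print('Best params:', search.best_params_)
print('Best score:', search.best_score_)
```

The output we get this time is: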

Best params:
{
    'alpha': 1.2101013664295101e-05,
    'penalty': 'l2'
}

Best score: 0.90852

So, the best parameters that I got are:

loss: squared_hinge
learning_rate: optimal
penalty: l2
alpha: 1.2101013664295101e-05

Saving the best classifier
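A minimal sketch using joblib (the file name is illustrative; RandomizedSearchCV with the default refit=True has already refit the best estimator on the full training data):

```python
import joblib

# persist the best estimator found by the search so we can reuse it later
best_clf = search.best_estimator_
joblib.dump(best_clf, 'classifier.joblib')
```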


Testing the model
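To evaluate on the held-out test set, we transform the raw test reviews with the objects fitted on the training data and score the saved classifier (variable names follow the earlier sketches):

```python
model = joblib.load('classifier.joblib')

# transform only -- the vectorizer and tf-idf transformer were fit on train data
X_test_counts = bigram_vectorizer.transform(imdb_test['review'])
X_test_tfidf = bigram_tfidf.transform(X_test_counts)

print('Test accuracy:', model.score(X_test_tfidf, imdb_test['label']))
```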

And we got 90.18% test accuracy. That’s not bad for our simple linear model; there are more advanced methods that give better results. The current state of the art on this dataset is 97.42%[13].


References

[1] Sentiment Analysis — Wikipedia
[2] Learning Word Vectors for Sentiment Analysis
[3] Bag-of-words model — Wikipedia
[4] Tf-idf — Wikipedia
[5] TfidfTransformer — Scikit-learn documentation
[6] Stop words — Wikipedia
[7] A list of English stopwords
[8] CountVectorizer — Scikit-learn documentation
[9] Scipy sparse matrices
[10] Compressed Sparse Row matrix
[11] SGDClassifier — Scikit-learn documentation
[12] RandomizedSearchCV — Scikit-learn documentation
[13] Sentiment Classification using Document Embeddings trained with Cosine Similarity


The Jupyter notebook can be found here.

I hope you found this information useful and thanks for reading!

This article is also posted on Medium here. Feel free to have a look!


Dorian

Passionate about Data Science, AI, Programming & Math
