
6 Practices to enhance the performance of a Text Classification Model

Introduction

A few months back, I was working on creating a sentiment classifier for Twitter data. After trying the common approaches, I was still struggling to get good accuracy.

Text classification problems and algorithms have been around for a while now. They are widely used for email spam filtering by the likes of Google and Yahoo, for sentiment analysis of Twitter data, and for automatic news categorization in Google Alerts.

However, when dealing with enormous amounts of text data, a model’s performance and accuracy become a challenge. The performance of a text classification model depends heavily on the words used in the corpus and the features created for classification. I used several practices to improve the results of my model.


In this article, I’ve illustrated the six best practices I used to enhance the performance and accuracy of a text classification model:

 

1. Domain Specific Features in the Corpus

For a classification problem, it is important to choose the training and test corpora carefully. Domain knowledge plays an integral part in creating a variety of features for the classification algorithm to act on.


For example, if the problem is “Sentiment Classification on social media data”, the training corpus should consist of data from social sources such as Twitter and Facebook.

On the other hand, if the problem is “Sentiment Classification for news data”, the corpus should consist of data from news sources. This is because the vocabulary of a corpus varies with its domain. Social media contains a lot of slang and informal keywords like “awsum, lol, gooood”, which are absent from formal corpora such as news and blogs.

Let’s take the example of a naive bayes classification problem where the task is to classify statements into two classes: “Class A” and “Class B”. We have a training corpus and a test corpus. As mentioned above, the training corpus should contain features from a relevant corpus, so it should consist of data points such as:
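A hypothetical training corpus for this problem might look like the following sketch. The individual statements are made up for illustration; they simply mirror the (statement, label) format of the test data shown later and the informal social-media vocabulary the domain calls for:

```python
# Hypothetical training corpus drawn from social media sources,
# in the same (statement, label) format as the test data below.
training_data = [
    ('I luv my new phone, its super fast', 'Class A'),
    ('this is an awsum company lol', 'Class A'),
    ('feeling gooood about the new features', 'Class A'),
    ('I do not like this phone at all', 'Class B'),
    ('so tired of all this stuff', 'Class B'),
    ('worst battery ever, very disappointed', 'Class B'),
]
```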

 

2. Use An Exhaustive Stopword List

Stopwords are the most commonly used words in a corpus, such as “a, the, of, on”, etc. These words define the structure of a sentence but are of no use in defining its context. Treating such words as feature words results in poor text classification performance, so they can simply be removed from the corpus. Apart from language stopwords, there are other supporting words that carry less importance than the remaining terms. These include:


  • Language stopwords – a, of, on, the, etc.
  • Location stopwords – country names, city names, etc.
  • Time stopwords – names of months and days (january, february, monday, tuesday, today, tomorrow, …), etc.
  • Numeral stopwords – words describing numerical terms (hundred, thousand, …), etc.

After removing these entities, the test data would be transformed to the following:

test_data = [
    (' luv  phones.', 'Class A'),
    ('   amaaaazingg company!', 'Class A'),
    ('  feeling very gooood about  features lol.', 'Class A'),
    ('   bestest phones.', 'Class A'),
    ("  awesomee player", 'Class A'),
    ('  not like  phone #apple  ', 'Class B'),
    ('  tired   stuff.', 'Class B'),
    ('   worst fears! . check  out here: http://goo.gl/qdjk3rf ', 'Class B'),
    (' boss lives,   horrible.', 'Class B')
]

 

3. Noise Free Corpus

In most data science problems, it is recommended to run a classification algorithm on a cleaned corpus rather than a noisy one. A noisy corpus contains unimportant entities such as punctuation marks, numerical values, links, and URLs. Removing these entities from the text increases accuracy, because the size of the possible feature space decreases.

That said, these entities should only be removed if the classification problem does not use them. For example, emoticons such as :), :P, 🙁 are important for sentiment classification but may not be important for other text classification tasks.

After eliminating noisy entities like hashtags and URL text, the test corpus would look like:

test_data = [
    ('luv phones', 'Class A'),
    ('amaaaazingg company', 'Class A'),
    ('feeling very gooood about features lol', 'Class A'),
    ('bestest phones', 'Class A'),
    ("awesomee player", 'Class A'),
    ('not like phone apple  ', 'Class B'),
    ('tired stuff', 'Class B'),
    ('worst fears check out here', 'Class B'),
    ('boss lives horrible', 'Class B')
]
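This cleaning step can be sketched with a few regular expressions. The patterns below are illustrative, not a complete noise model:

```python
import re

def clean_text(text):
    """Strip URLs, hashtag/mention markers, punctuation, extra spaces."""
    text = re.sub(r'http\S+', '', text)        # remove links and urls
    text = re.sub(r'[#@]', '', text)           # strip hashtag/mention markers
    text = re.sub(r'[^\w\s]', '', text)        # drop punctuation marks
    return re.sub(r'\s+', ' ', text).strip()   # collapse whitespace

print(clean_text('worst fears! check out here: http://goo.gl/qdjk3rf'))
# -> worst fears check out here
```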


4. Eliminating features with extremely low frequency

Keywords that occur with very low frequency in the corpus usually do not play a role in text classification, and getting rid of these rarely occurring features results in better model performance. For example, suppose the frequency counts of the words in a corpus show that the terms “fig” and “dale” occur far less often than the other terms. If we choose a threshold of 10, all keywords with a lower frequency can be ignored, resulting in good accuracy.

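Frequency-based pruning can be sketched with a simple counter and threshold. The counts below are made up for illustration; only “fig” and “dale” are taken from the example in the text:

```python
from collections import Counter

# Hypothetical corpus-wide word frequencies.
word_counts = Counter({'phone': 48, 'company': 31, 'player': 25,
                       'fig': 2, 'dale': 1})

# Keep only features at or above the chosen threshold.
MIN_FREQ = 10
features = [w for w, c in word_counts.items() if c >= MIN_FREQ]
print(features)  # low-frequency 'fig' and 'dale' are dropped
```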


5. Normalized Corpus

Words are an integral part of any classification technique. However, words often appear with different variations in the text depending on their grammatical role (verb, adjective, noun, etc.). It is always good practice to normalize terms to their root forms, a technique known as lemmatization. For example, the words:

  • Playing
  • Player
  • Plays
  • Play
  • Players
  • Played

can all be normalized down to the word “play” as far as the classifier is concerned. Performance is better when there is a single feature for all the variations rather than a separate feature for each variation. After normalization, the test data would look like:

test_data = [
    ('luv phone', 'Class A'),
    ('amazing company', 'Class A'),
    ('feeling very good about feature lol', 'Class A'),
    ('best phone', 'Class A'),
    ("awesome player", 'Class A'),
    ('not like phone apple', 'Class B'),
    ('tired stuff', 'Class B'),
    ('worst fear check out here', 'Class B'),
    ('boss live horrible', 'Class B')
]

 

6. Use Complex Features: n-grams and part of speech tags

In some cases, combinations of words provide better significance as features than single words. A combination of N words together is called an N-gram. Bigrams are generally the most informative N-gram combinations, and adding bigrams to the feature set will improve the accuracy of a text classification model.

Similarly, considering part of speech tags combined with words/n-grams gives an extra feature space that also improves classification. For example, it’s better to train the model so that the word “book”, when used as a NOUN, means “a book of pages”, and when used as a VERB, means “to book a ticket or something else”.
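Extending unigram features with bigrams can be sketched as below. A real pipeline might additionally attach part-of-speech tags to each token (e.g. via NLTK’s pos_tag) to disambiguate cases like “book” as noun versus verb:

```python
def extract_features(text, n=2):
    """Return unigram features plus n-gram features (bigrams by default)."""
    tokens = text.lower().split()
    features = list(tokens)                   # unigrams
    for i in range(len(tokens) - n + 1):      # sliding window of n tokens
        features.append(' '.join(tokens[i:i + n]))
    return features

print(extract_features('not like phone'))
# -> ['not', 'like', 'phone', 'not like', 'like phone']
```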


 

End Notes

In this article, we discussed a few practices to improve the accuracy of a text classifier. These gave me an improvement of ~10% to 20% in accuracy depending on the use case.

This is obviously not a complete list, but it provides a nice introduction to optimizing text classification algorithms. If you feel there are any other techniques I have missed, feel free to share them in the comments section below.

If you like what you just read and want to continue your analytics learning, subscribe to our emails, follow us on twitter or like our facebook page.

Shivam Bansal

22 Jul 2022

Shivam Bansal is a data scientist with exhaustive experience in Natural Language Processing and Machine Learning in several domains. He is passionate about learning and always looks forward to solving challenging analytical problems.
