Aspect-Based Sentiment Analysis Using Python

Sentiment Analysis is an exciting field in text analytics that helps us understand a speaker's thoughts about an entity (which can be any object) or a person. This analysis can be at the speaker level, where only the high-level sentiment of the overall text matters to us. Sometimes this high-level sentiment does not satisfy our needs, and we need to dive deeper into the granularity of emotions.

To achieve such granularity, discovering these granules (entities) is of utmost importance, followed by their related sentiments. In NLP (Natural Language Processing), these "entities" are called aspects, and this whole process is called Aspect-Based Sentiment Analysis (ABSA).

In this blog, we will see how to carry out ABSA, the motivation for doing so and the main steps involved in solving our problem.

Key takeaways from this article

  • Problem Statement for ABSA
  • Dataset Explained
  • Aspect Term Extraction (ATE)
  • Polarity Sentiment Classification (PSC)
  • Aviso Case Study

Problem Statement for Aspect Based Sentiment Analysis

Before solving any problem, we need to understand the need for it, so let's understand the need to perform ABSA. Most current approaches attempt to detect the overall polarity of a sentence or paragraph regardless of the varied things discussed (e.g., laptops, restaurants) and their aspects (e.g., keypad and charger for laptops; food and service for restaurants). Consider a review about a restaurant:

 “The food was good, but the service was bad” 

The overall sentiment will be near neutral, as the sentence says both good and bad things about the restaurant. Does this overall sentiment serve any meaningful purpose here? The answer is no. This is where ABSA comes into the picture and shines bright. It identifies the aspects present, which are food and service. This is known as ATE (Aspect Term Extraction). It is then followed by PSC (Polarity Sentiment Classification) to identify the sentiment for each aspect, i.e. for food it's positive, whereas for service it's negative.

Visual representation of aspect-based sentiment analysis, where sentiment is classified for each granular entity (aspect) and a rule-based model is used to find the user's behaviour.

This approach becomes particularly helpful on social networking sites, review websites, blogs etc., and helps us understand a writer's or user's genuine sentiment at a more granular level than the overall feeling.

The approach discussed here uses a heuristic-driven solution and dependency parsing to solve ATE on the restaurant dataset. After this, dependency parsing and the TextBlob library are used for PSC. In this blog, our primary focus is on ATE, with only a brief overview of PSC.

Dataset Explained

The SemEval 2014 dataset is the most commonly available dataset for ABSA online. It contains customer reviews of restaurants and laptops. We will only be working with restaurant reviews; hence only the file Restaurants_Train.xml, which contains customer reviews of restaurants, is considered. This can be downloaded from this link. An XML sample is shown below. Many might be wondering what XML is: XML stands for Extensible Markup Language, which helps us define and store data in a shareable manner.

Data snippet of the dataset used to build the ABSA model; it is in XML format and will be converted into a CSV file later.
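For reference, a small hand-written fragment in the same SemEval 2014 style looks roughly like this (the sentence and attribute values here are illustrative, not copied from the file):

<sentences>
    <sentence id="813">
        <text>The food was good, but the service was bad.</text>
        <aspectTerms>
            <aspectTerm term="food" polarity="positive" from="4" to="8"/>
            <aspectTerm term="service" polarity="negative" from="27" to="34"/>
        </aspectTerms>
        <aspectCategories>
            <aspectCategory category="food" polarity="positive"/>
            <aspectCategory category="service" polarity="negative"/>
        </aspectCategories>
    </sentence>
</sentences>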

Let's look into the fields first:

  • text: This contains the sentence, i.e. review of the customer.
  • aspectTerms: This contains multiple aspectTerm elements, each consisting of an aspect (e.g., food, staff), its polarity, and the character span of its occurrence in the review sentence.
  • aspectCategory: This tells us the category of the aspects. For example, wall painting, table, and chair can all be aspects in a sentence, but they all refer to decoration.

Please note that the data is currently in XML format. There are two options available: use this file in code directly, or convert it into a CSV and then use it. Here the file is transformed into a CSV file. We keep only those fields required for our discussion so far, i.e. the review text, its aspects and the polarity of each aspect.

The following code is used to convert it to CSV.

import xml.etree.ElementTree as ET

xmlfile = "Restaurants_Train.xml"
tree = ET.parse(xmlfile)
root = tree.getroot()

f_w = open('Resturants_refined.csv','w')
for sentence in root.findall('sentence'):
    for aspectTerms in sentence.findall('aspectTerms'):
        text = ''
        for aspect in aspectTerms.findall('aspectTerm'):
            # each aspect is stored as ##term#polarity
            text = text + '##' + aspect.get('term') + '#' + aspect.get('polarity')
        # the review text and its aspects end up separated by ### (the '#' written
        # here plus the leading '##' of the first aspect)
        f_w.write(sentence.findall('text')[0].text + '#' + text.strip() + '\n')

f_w.close()
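With the conversion above, each line of the CSV pairs a review with its aspects: the text and the aspect block are separated by ###, and the individual aspect#polarity pairs by ##. For our running example, a line would look roughly like this:

The food was good but the service was bad###food#positive##service#negative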

Let's look into the approach of Aspect Term Extraction followed by Polarity Sentiment Classification.

Aspect Term Extraction Approach (ATE)

The model is a hierarchical rule-based model. The following steps, as shown in the diagram, are followed.

How to extract granular terms using ATE (Aspect Term Extraction) approach

The main components of our approach are (each level can also be seen as a layer, since the next level runs only after the previous one is done):

Level 1: Identifying POS tags and dependency parsing for the text.

Level 2: Identifying aspects based on rules (rule-based approach) and results from point 1.

Level 3: Applying a new layer over the results of point 2 to increase accuracy. In this layer, neighbouring aspects are combined, and sub-aspects are pruned (dropped), as they are already part of a larger aspect.

Steps Involved in Level 1

First, let's understand what exactly is POS tagging and dependency parsing.

POS Tagging

POS tagging stands for Parts of Speech tagging. It is also called grammatical tagging and marks each word in a text with its corresponding part of speech. Some of the common parts of speech that are usually identified are:

  • DT → Determiner
  • JJ → Adjective
  • NN → Noun
  • NNS → Noun Plural
  • CC → Coordinating Conjunction
  • VBD → Verb (past tense)

The NLTK library is used to get the part of speech for each word of the sentence, as shown below. Please note that the input is a list of tokens (strings), not the string itself.

import nltk
nltk.download('averaged_perceptron_tagger')  # one-time download of the POS tagger model
nltk.pos_tag("The food was good but the service was bad".split(' '))

##output
[('The', 'DT'), ('food', 'NN'), ('was', 'VBD'), ('good', 'JJ'), ('but', 'CC'), ('the', 'DT'), ('service', 'NN'), ('was', 'VBD'), ('bad', 'JJ')]

NLTK stands for Natural Language Toolkit. It consists of various modules and is very helpful for many NLP tasks, such as removing stop words, tokenizing, stemming, parsing etc., and in our case, POS tagging.

Dependency Parsing

Dependency parsing helps us determine the grammatical structure of a sentence. It tells us the different relations that exist between its words. An example is shown below.

Dependency parsing helps in finding the relations among the words present in a sentence.

"Food" has the determiner "the", and that relation is shown above with an arrow. The "nsubj" relation between "bad" and "service" is shown with an arrow going from "bad" → "service": "service" is the dependent (nominal subject) of the head word "bad". Some common relations observed in dependency parsing are:

  • det → determiner
  • nsubj → nominal subject
  • cop → copula

More dependency relations are available; their details can be found at this link. The library we use for dependency parsing is StanfordCoreNLP. It is a one-stop destination for many NLP tasks like NER tagging, POS tagging and dependency parsing. It is written in Java and has a Python wrapper available. Different models are available for different languages, such as English, Arabic, Chinese etc. More details about this library can be found on its official website. It is used in the following way:

from stanfordcorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP('stanford-corenlp-4.5.2')   # path to the unzipped CoreNLP folder
sentence = "The food was good"
dependency_parsed = nlp.dependency_parse(sentence)
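The wrapper returns a list of (relation, head_index, dependent_index) tuples, where word indices are 1-based and the ROOT is attached to index 0. For the sentence above, the parse looks roughly like this (the exact output can vary with the CoreNLP version):

[('ROOT', 0, 4), ('det', 2, 1), ('nsubj', 4, 2), ('cop', 4, 3)]

This is the format that the rule-based extraction code below relies on.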

Steps Involved in Level 2

Now that we know what POS tagging and dependency parsing are, let's switch gears and identify the aspects, i.e. Level 2. Even though we will be using a rule-based approach, it's always good to understand all possible approaches.

  • Rule-based approach: Rules which are generic enough and follow a broad pattern serve as the backbone for this approach. This is usually preferred when less data is available to observe, but a logical view can be formed.
  • Statistics-based approach: Here, rules are based on statistics, i.e. the available data. The statistics help us formulate rules that fit the data as well as possible.
  • ML-based approach: This approach involves model training, where a model is trained on the available data for better results. A popular example is E2E-ABSA, a BERT-based model that gives us aspects and their sentiments in a single step.

Before extracting aspects, let's define what an aspect is. An aspect is an entity or feature a user talks about, and in this blog we identify it based on certain rules. We work under the assumption that an aspect mainly consists of nouns.

The rules discussed below are based on patterns observed and are used to filter out the actual aspects from all the Nouns detected in the text. These rules are directly picked from the research paper mentioned at the end of the blog.

Rule 1: If a word is a noun preceded by another noun, then concatenate it and its preceding word. This combined word is an aspect.

Example sentence to show the implementation of rule 1 for ABSA.

Here "computer size" will be an aspect as 2 noun's come together.

Rule 2: If the word is a noun and is in a "dobj" (direct object; simply "obj" in newer Universal Dependencies versions) relationship with a verb in the sentence, that word is an aspect.

Rule 3: If the word is a noun and is in a "nsubj" relationship with an adjective in the sentence, that word is an aspect.

Example 2 for ABSA for the complete sentence

In the above example, the word "vibe" is a noun (its POS tag is NN) and is in an "nsubj" relationship with the word "good", which is an adjective (JJ); hence "vibe" is an aspect.

Rule 4: If the word is a noun and is in a modifier relationship with a copula verb in the sentence, then that word is an aspect. 

The code to implement these rules is shown below. Please note that nouns can carry different tags: singular nouns (NN, like "ball"), plural nouns (NNS, like "balls"), proper nouns (NNP, like "Harshit") and plural proper nouns (NNPS, like "Indians").

noun_dict = {'NNS': 1, 'NN': 1, 'NNP': 1, 'NNPS': 1}
adjective_list = {'JJ': 1, 'JJR': 1, 'JJS': 1}
verb_list = {'VB': 1, 'VBD': 1, 'VBN': 1, 'VBP': 1, 'VBZ': 1}
relation_considered = {'nsubj': 1, 'obj': 1, 'amod': 1, 'advmod': 1, 'cop': 1}
modifier = {'amod': 1, 'advmod': 1}   # modifier relations used in Rule 4

def rules_dependency_parsing_aspects(pos_tag, dependency_parsed):

    pos_tag_dict = {}                    # word index (1-based) -> 'noun' / 'adj' / 'verb'
    aspects = []
    dependency_parsed_pruned_dict = {}   # word index -> relations it takes part in
    copula_verb_dict = {}                # word indices acting as copula verbs

    # label each word of interest with its coarse POS class
    for i in range(len(pos_tag)):
        if pos_tag[i][1] in noun_dict:
            pos_tag_dict[i + 1] = 'noun'
        if pos_tag[i][1] in adjective_list:
            pos_tag_dict[i + 1] = 'adj'
        if pos_tag[i][1] in verb_list:
            pos_tag_dict[i + 1] = 'verb'

    # keep only the relations we care about, indexed by both words involved
    for item in dependency_parsed:
        if item[0] in relation_considered:
            if item[1] in pos_tag_dict and item[2] in pos_tag_dict:
                dependency_parsed_pruned_dict.setdefault(item[1], []).append(item)
                dependency_parsed_pruned_dict.setdefault(item[2], []).append(item)
                if item[0] == 'cop':
                    if pos_tag_dict[item[1]] == 'verb':
                        copula_verb_dict[item[1]] = 'copula verb'
                    if pos_tag_dict[item[2]] == 'verb':
                        copula_verb_dict[item[2]] = 'copula verb'

    # Rules 2, 3 and 4: a noun is an aspect if it is the object of a verb,
    # the nominal subject of an adjective, or a modifier of a copula verb
    for i in range(len(pos_tag)):
        if i + 1 in pos_tag_dict:
            if pos_tag_dict[i + 1] == 'noun' and i + 1 in dependency_parsed_pruned_dict:
                for item in dependency_parsed_pruned_dict[i + 1]:
                    if item[0] == 'obj' and (pos_tag_dict[item[1]] == 'verb' or pos_tag_dict[item[2]] == 'verb'):
                        aspects.append(pos_tag[i][0])
                    if item[0] == 'nsubj' and (pos_tag_dict[item[1]] == 'adj' or pos_tag_dict[item[2]] == 'adj'):
                        aspects.append(pos_tag[i][0])
                    if item[0] in modifier and (item[1] in copula_verb_dict or item[2] in copula_verb_dict):
                        aspects.append(pos_tag[i][0])

    # Rule 1: two consecutive nouns are combined into a single aspect
    for i in range(1, len(pos_tag)):
        if pos_tag[i][1] in noun_dict:
            if pos_tag[i - 1][1] in noun_dict:
                aspects.append(pos_tag[i - 1][0] + ' ' + pos_tag[i][0])
            elif pos_tag[i - 1][1] in adjective_list:
                aspects.append(pos_tag[i][0])
    return aspects
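As a quick sanity check (a usage sketch on our side, assuming the CoreNLP instance created earlier is still running), the function can be applied to our running example:

sentence = "The food was good but the service was bad"
pos_tagged = nltk.pos_tag(sentence.split(' '))
dependency_parsed = nlp.dependency_parse(sentence)
print(rules_dependency_parsing_aspects(pos_tagged, dependency_parsed))
# expected output (may vary slightly with parser version): ['food', 'service']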

Let's evaluate it on our CSV file of the restaurant training dataset. There can be multiple aspects in a sentence, as seen in our primary example: "The food was good, but the service was bad". The extracted aspects for a review are considered correct only if all the aspects annotated for it are identified; otherwise, the review is counted as wrong.

Cases where aspects were correctly identified = 677. 

The total number of examples = 2013

Accuracy = (cases where aspects were correctly identified) / (total cases available) = 677/2013 ≈ 34% (this accuracy is after Level 2 only)

Please note that the above accuracy is significantly low due to the following reasons:

1) Wrong Annotation in Dataset:

  • Sentence: It may be a bit packed on weekends, but the vibe is good, and it is the best food you will find in the area. 
  • Annotated aspects in the dataset: food, vibe and packed. 
  • Aspects identified by us: food, vibe.

We can observe that "packed" is not a noun and cannot be an aspect. This is not a one-off case, and it makes it evident that not all sentences in the dataset are correctly annotated.

2) Partial correctness is not rewarded: Due to the stringent evaluation, all cases where we correctly predict only a proper subset of the annotated aspects are marked as wrong. This was observed in the example from the previous point, where our rules identified the words "food" and "vibe" as aspects but not the word "packed".

Level 3

Please note that Level 3 of our algorithm has not been applied in the accuracy reported above. This was deliberate, so that we can see the incremental impact of each level on the accuracy. Let's understand it using an example.

  • Sentence: Deep Fried Skewers are good and still rare to find in NYC.
  • Annotated aspects in the dataset: Deep Fried Skewers.
  • Aspects identified by us: Skewers, Deep Fried, Fried Skewers.

Level 3-Rule 1: This step states that if any two aspects are present next to each other in the text, they should be clubbed together to form a single aspect. 

Impact: Aspects "Deep Fried" and "Skewers" clubbed together to result in a single aspect: Deep Fried Skewers.
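A minimal sketch of this rule (an illustration on our side; merge_adjacent_aspects is a hypothetical helper, not code from the paper): repeatedly club any two extracted aspects that appear side by side in the original sentence.

def merge_adjacent_aspects(sentence, aspects):
    # keep merging any two aspects that occur next to each other in the
    # sentence until no more merges are possible
    merged = list(aspects)
    changed = True
    while changed:
        changed = False
        for a in merged:
            for b in merged:
                if a != b and (a + ' ' + b) in sentence:
                    merged.remove(a)
                    merged.remove(b)
                    merged.append(a + ' ' + b)
                    changed = True
                    break
            if changed:
                break
    return merged

merge_adjacent_aspects("Deep Fried Skewers are good and still rare to find in NYC",
                       ['Skewers', 'Deep Fried', 'Fried Skewers'])
# -> ['Fried Skewers', 'Deep Fried Skewers'] (Rule 2 below then prunes 'Fried Skewers')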

Level 3-Rule 2: Sub-aspects are dropped or pruned as they have already occurred in larger aspects. 

Impact: The aspects "Skewers" and "Fried Skewers" are dropped/pruned, as they act like sub-aspects of the aspect "Deep Fried Skewers."

The code for Level 3-Rule 2 is shown below; it portrays a brute-force approach. A brute-force approach is considered the most naive way to tackle a problem but, at the same time, the easiest to understand. It is called naive because no attention is paid to memory consumption or execution time; it is generally fine for small datasets but can give timeout errors on large ones.

aspects_final = []
## In the code below, aspects_all contains all the aspects extracted for a sentence
for i in range(len(aspects_all)):
    flag = 0
    aspect = aspects_all[i]
    for j in range(len(aspects_all)):
        # drop an aspect if it is a strict sub-string of any other aspect
        if j != i and aspect != aspects_all[j] and aspect in aspects_all[j]:
            flag = 1
            break
    if flag == 0:
        aspects_final.append(aspect)

After applying both these rules, we observe an increase in accuracy from 34% to 44%, an improvement of 10 percentage points. The following code shows how the accuracy can be calculated:

f = open('Resturants_refined.csv','r')
f_w = open('Resturants_evaluated.csv','w')   # output file for per-review results
tp = 0   # reviews where every annotated aspect was extracted
fp = 0   # reviews where at least one annotated aspect was missed

for line in f:
    value = 0
    line1 = line.split('###')
    # annotated aspects, e.g. "food#positive##service#negative"
    aspects_orig = set([item.split('#')[0] for item in line1[1].split('##') if item.strip() != ''])
    # split the review into sentences and strip simple punctuation
    s1 = []
    for part in line1[0].split('.'):
        text = part.replace(',', '').replace(':', '').replace("'", '')
        if text.strip() != '':
            s1.append(text)
    # extract aspects sentence by sentence and pool them for the whole review
    aspects_total = []
    for sentence in s1:
        text_tok = nltk.word_tokenize(sentence)
        pos_tagged = nltk.pos_tag(text_tok)
        dependency_parsed = nlp.dependency_parse(sentence)
        aspects = rules_dependency_parsing_aspects(pos_tagged, dependency_parsed)
        aspects_total = aspects_total + aspects
    # a review counts as correct only if all annotated aspects were extracted
    if len(set(aspects_orig) - set(aspects_total)) == 0:
        value = 1
        tp = tp + 1
    else:
        fp = fp + 1
    f_w.write(line1[0] + '@' + str(value) + '@' + '#'.join(aspects_orig) + '@' + '#'.join(aspects_total) + '\n')

f.close()
f_w.close()
accuracy = tp / (tp + fp)

NOTE: Not all the rules mentioned in the paper under Level 2 are implemented as part of this blog. The remaining rules can be found in the paper, and the authors state that they reached a recall of 81.9% on the same dataset used in this blog. To read about recall in detail, please refer to this blog.

The Drawback of ATE Architecture

A recall of 81.9% looks pretty good to us. But this architecture has a major flaw that makes it difficult to use in real life. Consider the statement:

“I went outside to eat food on Tuesday. It tasted very good and was at an affordable price”

Here "food" should have been identified as an aspect that is appreciated in the following sentence. This is not possible, as dependency parsing only works within a single sentence, not across sentences.

We have analyzed the ATE. Let's look into how PSC can be done using a straightforward approach.

Polarity Sentiment Classification (PSC)

The aspects have been identified, but the sentiment of each aspect is still unknown to us. Here PSC comes into the picture, as it helps us identify the sentiment of every aspect we have extracted. The steps to implement a simple PSC are shown below:

Step 1: Perform dependency parsing. 

Step 2: Merge each adjective with its corresponding adverb (if any) mentioned in the text. Consider this sentence: "The food we had on Tuesday was very delicious." Here the adjective "delicious" is picked along with the adverb connected to it ("very" in this case).

Step 3: Use the TextBlob library to find the sentiment of these combined words. TextBlob is a Python library which provides APIs for various NLP tasks such as classification, translation etc. In our case, it is used to find the sentiment of the adjective-adverb pair we identified. The sentiment can be either positive, neutral or negative.

For example, the sentiment for the words "very delicious" will be positive, and the aspect will be the word "food". Hence the word "food" can be said to be a positive aspect here.
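A minimal sketch of these three steps (an illustration on our side, assuming the same (relation, head_index, dependent_index) parse format used earlier; the aspect_sentiment helper below is hypothetical):

from textblob import TextBlob

def aspect_sentiment(tokens, pos_tags, dependency_parsed, aspect):
    aspect_idx = tokens.index(aspect) + 1               # 1-based word index
    opinion_idx = None
    # find the adjective linked to the aspect through 'nsubj'
    for rel, head, dep in dependency_parsed:
        if rel == 'nsubj' and dep == aspect_idx and pos_tags[head - 1][1].startswith('JJ'):
            opinion_idx = head
    if opinion_idx is None:
        return 'neutral'
    phrase = tokens[opinion_idx - 1]
    # prepend the adverb ('advmod') attached to that adjective, if any
    for rel, head, dep in dependency_parsed:
        if rel == 'advmod' and head == opinion_idx:
            phrase = tokens[dep - 1] + ' ' + phrase     # e.g. "very delicious"
    # TextBlob polarity is a float in [-1, 1]
    polarity = TextBlob(phrase).sentiment.polarity
    if polarity > 0:
        return 'positive'
    if polarity < 0:
        return 'negative'
    return 'neutral'

For "The food we had on Tuesday was very delicious", this would return 'positive' for the aspect "food", provided the parser links "delicious" to "food" via nsubj and "very" to "delicious" via advmod.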

This approach has the following flaws:

  • Negation statements are not handled. So "the food is not delicious" will come out as positive for us. This is a crucial flaw for real-world applications.
  • The number of aspects in the sentence needs to be one for this approach to work. The alternative is breaking a single sentence into multiple sentences and treating each one as a separate input for PSC. For example, "The food was good, but the service was bad" is broken into "The food was good" and "The service was bad"; a rough splitting sketch is shown after this list.
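A rough sketch of such splitting (our own simplification, using commas and conjunctions as clause boundaries instead of a full parser):

import re

def split_into_clauses(sentence):
    # split on commas and on the conjunctions "but"/"and" so that each piece
    # carries (roughly) a single aspect for PSC
    parts = re.split(r',| but | and ', sentence)
    return [part.strip() for part in parts if part.strip()]

split_into_clauses("The food was good, but the service was bad")
# -> ['The food was good', 'the service was bad']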

Our ABSA model is complete once these two steps are done. Let's think a step further. The reviews given by customers to a restaurant are available to us, and we can do sentiment analysis on them using the method discussed above. Now suppose we also want to group these reviews into categories such as food, decoration, service, presentation etc. This can be done with the help of topic modelling, which helps identify the type of each statement. More details can be read in the blog here.

Future Approach

The current approach discussed in this blog is suitable for simple use cases but fails at the industry level, where situations can be much more complex. Large Language Models (LLMs) are often used for ATE and PSC to tackle this complexity, or sometimes separate models are built for each.

E2E-ABSA is a BERT-based model which identifies aspects along with their sentiments using a single model. This classification model can be fine-tuned on custom datasets as per our needs.
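As an illustration of this direction (an assumption on our side, not part of the original approach), the Hugging Face transformers library can load a token-classification model fine-tuned for end-to-end ABSA; the checkpoint name below is a placeholder, not a real model:

from transformers import pipeline

# "your-org/absa-token-classifier" is a hypothetical checkpoint name used only
# for illustration; substitute any model fine-tuned for end-to-end ABSA tagging.
absa = pipeline("token-classification",
                model="your-org/absa-token-classifier",
                aggregation_strategy="simple")
print(absa("The food was good, but the service was bad"))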

Aviso Case Study

Aviso.AI: Aviso helps its customers' sales representatives increase their success rate of closing deals. This is achieved by analyzing complex data fields, two of which are:

  1. Various revenue-related risks
  2. Analyzing the call between the sales rep and the buyer to provide enriched insights

Analyzing the call between the sales rep and the buyer falls under CI (Conversational Intelligence), and ABSA plays a vital role in CI. Let's understand this using an example: consider a call between the sales rep of company C1 and buyer B1. Say the buyer B1 praises C1's dashboard. Consequently, C1's dashboard (an aspect) has a positive sentiment (positive aspect) from the buyer's perspective, and the chances of the deal being closed increase if the sales rep focuses on this aspect (the dashboard) more than on other things.

Interview Questions

  • What are the different steps in ABSA?
  • What is the need for ABSA?
  • Why is a rule-based approach considered and not directly a model built?
  • What is dependency parsing?
  • Explain more about POS tagging.

Research Paper

  1. An Unsupervised Hierarchical Rule-Based Model for Aspect Term Extraction Augmented with Pruning Strategies

Conclusion

In this blog, we have seen a fundamental approach for identifying aspects and considered its various benefits and drawbacks. A brief overview of PSC was given, and the importance of ABSA was presented along with an industry example. Overall, it's a growing field that is gaining traction, as it's much more valuable and practical than traditional sentiment analysis.

Enjoy Learning!
