This post contains a few basic examples of how to use the limbic package. First, a quick overview of the lexicon-based classifier is described, and then a few notes on how a machine learning model was trained and how it can be used to predict the emotions for a given text.
Why build this package?
The objective of this package is simple: if you need to compute emotion analysis on a word or a set of words, this should be able to help. For now, it only supports plain text and subtitles, but the idea is to extend it to other formats (pdf, email, among others). In the meantime, this post includes a basic example of how to use it on plain text and another example of how to use it on a collection of subtitles for a series (all episodes across all seasons of a show). The name of the package is based on the limbic system, a set of brain structures that support different functions, like emotions and behavior, among others.
In this package, there are two strategies to compute the emotions:
- Via lexicon-based word matching, which is quite straightforward; examples of its usage are described below.
- Via a multi-label machine learning classifier trained with the specific purpose of identifying emotions and their strength in full sentences.
Limbic also has a set of tools that are easy to reuse and extend for different use cases. For example, it contains tools for the analysis of subtitles in a show, but it can be easily extended to analyze books, papers, websites, or customer reviews, and even pushed to further applications like comparing a movie script with its book or comparing properties of movies in a sequel, among others.
Installing the package
In the meantime, while I finish adding this as a PyPI package, you can install it by building the source code from the repository: first install all the dependencies from the requirements.txt file, plus the dependencies for spaCy, the NLP framework used throughout this package. However, it might be easier to use pip install directly from the GitHub repository, as shown below,
pip install git+https://github.com/glhuilli/limbic.git
python -m spacy download en_core_web_sm
Importing a lexicon-based emotion classifier
The only thing you need to create a new lexicon-based emotion classifier is, of course, the lexicon. However, in case you are dealing with a specific context, it’s possible to use a terms mapping dictionary, which will automatically replace terms in the input you want to process.
The lexicon has to be loaded by the user, and it can be either a custom lexicon or a lexicon from the NRC. To load a lexicon, you can use either the generic load_lexicon method or load_nrc_lexicon, which is tailored for some NRC lexicons.
To use the generic load_lexicon method, you can do the following:
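A minimal sketch of this step; the import path below is an assumption based on the package layout, so check the repository for the exact module,

from limbic.emotion.utils import load_lexicon  # import path is an assumption

lexicon = load_lexicon('../data/lexicon.csv')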
where the hypothetical file ../data/lexicon.csv is a csv file with the header term,emotion,score.
To use the load_nrc_lexicon method, you need to download one of the supported NRC files and do the following:
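For example, a sketch assuming the affect_intensity file was downloaded to a local data directory; both the import path and the file name are assumptions,

from limbic.emotion.nrc_utils import load_nrc_lexicon  # import path is an assumption

lexicon = load_nrc_lexicon('../data/NRC-AffectIntensity-Lexicon.txt', 'affect_intensity')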
The supported files are the affect_intensity lexicon, the emotion lexicon (aka EmoLex), and the vad lexicon.
Once the lexicon is ready, you can create the LexiconLimbicModel,
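as in the following sketch (the import path is an assumption based on the package layout),

from limbic.emotion.models import LexiconLimbicModel  # import path is an assumption

lb = LexiconLimbicModel(lexicon)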
Also, there’s an option to use a terms mapping dictionary, which has to be of type Dict[str, str], where a given term or collection of terms will be mapped to another term or collection of terms when computing the emotions. This is specifically helpful for texts with a specific context that you would like to include in the model. An example of this is included below (check out this example to use a mapping dictionary).
Important note
In case you are using NRC lexicons, you need to know that there are some constraints on using them for profit. Please refer to the NRC website for more information on how to notify them and work with their data. Otherwise, you are free to use limbic however you want under the MIT license.
Emotions from Terms
Once the limbic model is loaded, you can get the emotions for either a single term or a full sentence. For example, you can get the emotions associated with the word love or hate. Alternatively, you can get the emotions associated with not love and not hate, which is done by passing an is_negated parameter to the method.
For each term, a list of Emotion named tuples is returned. Each Emotion has the fields category, which indicates one of the emotions assigned to the term, value, which quantifies how strongly that emotion category is associated with the term, and the term itself. In case the method is called with is_negated=True, the term has a dash as a prefix, e.g., term=love with is_negated=True will generate an Emotion with term=-love.
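A sketch of the calls that would produce the output below, assuming the model exposes a get_term_emotions method (verify the exact name in the repository),

lb.get_term_emotions('love')
lb.get_term_emotions('hate')
lb.get_term_emotions('love', is_negated=True)  # method name is an assumption
lb.get_term_emotions('hate', is_negated=True)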
--------------------------------------------------
Emotions for love, hate, not love, and not hate.
--------------------------------------------------
love -> [Emotion(category='joy', value=0.828, term='love')]
hate -> [Emotion(category='anger', value=0.828, term='hate'),
Emotion(category='fear', value=0.484, term='hate'),
Emotion(category='sadness', value=0.656, term='hate')]
LOVE (negated) -> [Emotion(category='sadness', value=0.828, term='-love')]
Hate (negated) -> [Emotion(category='fear', value=0.828, term='-hate'),
Emotion(category='anger', value=0.484, term='-hate'),
Emotion(category='joy', value=0.656, term='-hate')]
Negated terms
The categories supported for the is_negated parameter are the ones included in Plutchik’s wheel of emotions, shown below (source: Wikipedia).
Here, each emotion is placed in a wheel where any emotion is facing its “opposite” on the other side. For example, joy is on the opposite side of sadness, rage of terror, and so on. When terms are negated, the opposite emotion will be used. For example, love has an emotion of joy with score 0.828 (following the NRC affect_intensity lexicon). Then love negated will have an emotion of sadness with score 0.828.
Emotions for sentences
Like getting the emotions of a term, limbic has a method for getting the emotions of a full or partial sentence. Each sentence contains multiple terms, some of which could have one or multiple emotions. Note that, in some cases, a sentence could have some negated terms that need to be considered. Some examples of how to process sentences and the expected output are presented below.
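First, a simple sentence with two emotion-bearing terms. The method name and the input text are assumptions reconstructed from the output that follows,

lb.get_sentence_emotions('I love and enjoy this')  # hypothetical input matching the output below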
[Emotion(category='joy', value=0.828, term='love'),
Emotion(category='joy', value=0.812, term='enjoy')]
Then you can try checking a sentence with negated terms,
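for instance (the input is a hypothetical reconstruction from the output below),

lb.get_sentence_emotions("I don't love but I enjoy this")  # hypothetical input with a negated term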
[Emotion(category='sadness', value=0.828, term='-love'),
Emotion(category='joy', value=0.812, term='enjoy')]
Now, if you try to get the emotions from the following sentence without context, you can get unexpected results,
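as in this call (the sentence text is the one discussed in the terms mapping section below),

lb.get_sentence_emotions("I don't love but I enjoy this sentence")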
[Emotion(category='sadness', value=0.828, term='-love'),
Emotion(category='joy', value=0.812, term='enjoy'),
Emotion(category='anger', value=0.203, term='sentence'),
Emotion(category='fear', value=0.266, term='sentence'),
Emotion(category='sadness', value=0.234, term='sentence')]
Emotions using the terms mapping
Note that in the last example, I don't love but I enjoy this sentence, the word sentence could be placed under two different contexts: sentence as in a set of words, or sentence as in punishment. If you are under the context that sentence is just a collection of words, you can use the terms_mapping when defining the limbic object.
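A sketch of how this could look; the mapping target (text) is an arbitrary neutral term chosen for illustration,

terms_mapping = {'sentence': 'text'}  # illustrative mapping to a term with no emotion entries
lb = LexiconLimbicModel(lexicon, terms_mapping=terms_mapping)
lb.get_sentence_emotions("I don't love but I enjoy this sentence")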
[Emotion(category='sadness', value=0.828, term='-love'),
Emotion(category='joy', value=0.812, term='enjoy')]
Using a Machine Learning model for Emotion Analysis
Similar to the example above using the lexicon-based model, this is just a quick walkthrough of how to load and use the machine learning model built in TensorFlow, which is included in the limbic package.
First, you need to understand the constraints and limitations of the model:
- It was built only for a very narrow set of emotions (called Affection Emotions in limbic), which are “joy”, “sadness”, “anger”, and “fear”.
- Using a synthetic dataset created with the lexicon-based model over a very particular dataset (the top ~90 full books from different websites), a bidirectional RNN combined with a CNN was trained to predict multiple emotions as a multi-label classification problem. As future work, I’ll add a section on how the network was designed step by step and the reasons for some of the decisions I took when defining the layers’ parameters. For now, you can see this in the code itself, where I added as many comments as I could for each step of the network construction code.
- Note that, given that the data used for training was generated using the lexicon-based model, any biases that could come from that model will be included in the resulting trained model.
- Emotions were not computed using any context disambiguation, so any unfortunate term-emotion association from the lexicon-based model could also be included in the trained ML model.
- Parameters for the ML model were not tuned with a full hyper-parameter optimization, which means it might not be the best version of itself. The same goes for the benchmark experiments with other models (FastText and scikit-learn-based models). This is part of the future work still needed for this model to perform optimally.
- Negations are not being captured correctly in the training phase, so a full dive into why, and what can be done to improve this, is necessary and will be considered as future work.
The general idea behind the definition of the model is the following (a sketch of this architecture is included after the list):
- Use an embeddings layer first, leveraging pre-trained embeddings (e.g., GloVe in this case).
- Create a bi-directional layer to capture contextual info from upstream and downstream directions.
- Allow some drop-out to include some regularization into the model and avoid over-fitting.
- Use a convolution layer to extract some relationships between the hypotheses from the previous layer.
- Use a pooling layer (avg and max) to group and allow more signal encoding from previous convolutions.
- Use the sigmoid activation function in the output layer for the multi-label problem.
- Use a binary cross-entropy loss function, which is well suited for the multi-label classification problem.
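To make this concrete, below is an illustrative TensorFlow/Keras sketch of such an architecture. The layer sizes, vocabulary size, and dropout rate are assumptions for illustration, not the exact values used in limbic,

import tensorflow as tf
from tensorflow.keras import layers

MAX_WORDS = 150  # inputs are clipped to 150 words, as noted below
NUM_EMOTIONS = 4  # joy, sadness, anger, fear

inputs = layers.Input(shape=(MAX_WORDS,))
# Embedding layer; in practice, pre-trained GloVe vectors would be loaded as weights
x = layers.Embedding(input_dim=20000, output_dim=100)(inputs)
# Bi-directional recurrent layer to capture context in both directions
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Dropout(0.5)(x)  # regularization to reduce over-fitting
# Convolution over the recurrent outputs
x = layers.Conv1D(64, kernel_size=3, activation='relu')(x)
# Average and max pooling, concatenated for more signal encoding
avg_pool = layers.GlobalAveragePooling1D()(x)
max_pool = layers.GlobalMaxPooling1D()(x)
x = layers.concatenate([avg_pool, max_pool])
# One sigmoid output per emotion for the multi-label setup
outputs = layers.Dense(NUM_EMOTIONS, activation='sigmoid')(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')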
Using the model is fairly simple. All you need to do is create a TfLimbicModel and pass it the sentence you want to extract the emotions from,
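as in this sketch, assuming the class exposes a predict-style method that returns the raw probabilities (both the import path and the method name are assumptions; check the repository for the exact usage),

from limbic.emotion.models.tf_limbic_model import TfLimbicModel  # import path is an assumption

tf_model = TfLimbicModel()
tf_model.predict('I love and enjoy this sentence')  # method name and input are assumptions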
array([0.53164005, 0.95961887, 0.11894947, 0.05770496], dtype=float32)
The output represents the probability that each emotion was found in the input sentence. I included in limbic a more expressive way of returning such probabilities via the get_sentence_emotions method, where each emotion is mapped to the right probability of being included in the sentence,
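for example, with the same hypothetical input as above,

tf_model.get_sentence_emotions('I love and enjoy this sentence')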
[EmotionValue(category='sadness', value=0.53164005),
EmotionValue(category='joy', value=0.95961887),
EmotionValue(category='fear', value=0.11894947),
EmotionValue(category='anger', value=0.057704955)]
Note that this works within the boundaries of a full sentence. If the sentence is longer than 150 words, it will be clipped to the first 150 words, given how the input layer of the network was constructed.
Improving this model is ongoing work, and I’ll be updating this post and the code accordingly. If you have any suggestions on how to improve it, please let me know! Contributions and comments are more than welcome.