
Using Hugging Face Transformers for Emotion Detection in Text



 

Hugging Face hosts a variety of transformer-based Language Models (LMs) specialized in addressing language understanding and language generation tasks, including but not limited to:

  • Text classification
  • Named Entity Recognition (NER)
  • Text generation
  • Question-answering
  • Summarization
  • Translation

A particular, and quite common, case of text classification is sentiment analysis, where the goal is to identify the sentiment of a given text. The “simplest” sentiment analysis LMs are trained to determine the polarity of an input text, such as a customer review of a product, as positive vs. negative, or as positive vs. negative vs. neutral. These two specific problems are formulated as binary or multi-class classification tasks, respectively.

There are also LMs that, while still identifiable as sentiment analysis models, are trained to categorize texts into several emotions such as anger, happiness, sadness, and so on.

This Python-based tutorial focuses on loading and illustrating the use of a Hugging Face pre-trained model for classifying the main emotion associated with an input text. We will use the emotions dataset publicly available on the Hugging Face hub. This dataset contains thousands of Twitter messages written in English.

 

Loading the Dataset

We’ll start by loading the training data within the emotions dataset by running the following instructions:

<code>!pip install datasets

from datasets import load_dataset

# Download the emotions dataset from the Hugging Face hub and keep its training split
all_data = load_dataset("jeffnyman/emotions")
train_data = all_data["train"]
</code>
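To get a quick look at what was loaded, you can simply print the training split (displaying the variable in a notebook cell shows the same information):

<code>print(train_data)</code>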

 

Below is a summary of what the training subset in the train_data variable contains:

<code>Dataset({
    features: ['text', 'label'],
    num_rows: 16000
})</code>

 

The training fold in the emotions dataset contains 16,000 instances, each associated with a Twitter message. For each instance, there are two features: one input feature containing the actual message text, and one output feature or label encoding its associated emotion as a numerical identifier (this mapping can also be confirmed from the dataset metadata, as shown after the list):

  • 0: sadness
  • 1: joy
  • 2: love
  • 3: anger
  • 4: fear
  • 5: surprise
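If the label column is stored as a ClassLabel feature, which is typically the case for this dataset on the hub, you can inspect this mapping directly from the dataset metadata:

<code>print(train_data.features["label"])
# Expected to show a ClassLabel with names ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
# if the column is typed that way</code>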

For instance, the first instance in the training fold is labeled with the ‘sadness’ emotion.
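One way to verify this is to index the training split directly, since dataset splits support plain Python indexing:

<code>print(train_data[0])</code>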

 

Output:

<code>{'text': 'i didnt feel humiliated', 'label': 0}</code>

 

Loading the Language Model

Once we have loaded the data, the next step is to load a suitable pre-trained LM from Hugging Face for our target emotion detection task. There are two main approaches to loading and utilizing LMs using Hugging Face’s Transformer library:

  1. Pipelines offer a very high level of abstraction, letting you load an LM and run inference with it almost instantly, in very few lines of code, at the cost of limited configurability.
  2. Auto classes provide a lower level of abstraction, requiring more coding skills but offering more flexibility to adjust model parameters as well as customize text preprocessing steps like tokenization (see the sketch after this list).
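For contrast, below is a minimal sketch of the auto-classes route. It assumes the same emotion model that the pipeline example further down loads, and uses PyTorch for the forward pass; the rest of this tutorial sticks with pipelines.

<code>import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and the classification model explicitly
model_name = "j-hartmann/emotion-english-distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize one input text and run it through the model
inputs = tokenizer("I love hugging face transformers!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its emotion name
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])</code>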

This tutorial gives you an easy start by focusing on loading models as pipelines. Pipelines require specifying at least the type of language task, and optionally a model name to load. Since emotion detection is a very specific form of text classification, the task argument to use when loading the model should be “text-classification”:

<code>from transformers import pipeline

# Build a text classification pipeline backed by a pre-trained emotion model
classifier = pipeline("text-classification", model="j-hartmann/emotion-english-distilroberta-base")</code>

 

Note that it is highly recommended to specify, via the ‘model’ argument, the name of a specific model on the Hugging Face hub capable of addressing our particular task of emotion detection. Otherwise, by default, we may load a text classification model that has not been trained on data for this specific 6-class classification problem.

You may ask yourself: “How do I know which model name to use?” The answer is simple: do a little bit of exploration on the Hugging Face website to find suitable models, or models trained on a specific dataset like the emotions data.

The next step is to start making predictions. Pipelines make this inference process incredibly easy: just call our newly instantiated pipeline variable and pass an input text to classify as an argument:

<code>example_tweet = "I love hugging face transformers!"
prediction = classifier(example_tweet)
print(prediction)
</code>

 

As a result, we get a predicted label and a confidence score: the closer this score is to 1, the more “reliable” the prediction is.

<code>[{'label': 'joy', 'score': 0.9825918674468994}]</code>

 

So, our input example “I love hugging face transformers!” confidently conveys a sentiment of joy.
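If you want the scores for every emotion rather than just the top prediction, recent versions of the text-classification pipeline accept a top_k argument (older versions exposed the same behavior through return_all_scores=True); a minimal sketch:

<code># Ask the pipeline to return a score for every class, not just the most likely one
all_scores = classifier(example_tweet, top_k=None)
print(all_scores)</code>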

You can pass multiple input texts to the pipeline to perform several predictions simultaneously, as follows:

<code>example_tweets = ["I love hugging face transformers!", "I really like coffee but it's too bitter..."]
prediction = classifier(example_tweets)
print(prediction)</code>

 

The second input in this example proved much more challenging for the model to classify confidently:

<code>[{'label': 'joy', 'score': 0.9825918674468994}, {'label': 'sadness', 'score': 0.38266682624816895}]</code>

 

Lastly, we can also pass a batch of instances from a dataset like our previously loaded ‘emotions’ data. This example passes the first 10 training inputs to our LM pipeline to classify their emotions, then prints a list containing each predicted label, leaving the confidence scores aside:

<code>train_batch = train_data[:10]["text"]
predictions = classifier(train_batch)
labels = [x['label'] for x in predictions]
print(labels)
</code>

 

Output:

<code>['sadness', 'sadness', 'anger', 'joy', 'anger', 'sadness', 'surprise', 'fear', 'joy', 'joy']</code>

 

For comparison, here are the original labels given to these 10 training instances:

<code>print(train_data[:10]["label"])</code>

 

Output:

<code>[0, 0, 3, 2, 3, 0, 5, 4, 1, 2]</code>

 

By looking at the emotions each numerical identifier is associated with, we can see that 8 out of 10 predictions match the real labels given to these 10 instances.
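This count can also be double-checked programmatically. The sketch below reuses the labels list from the batch prediction snippet and the id-to-emotion mapping listed earlier:

<code># Map the numeric labels to emotion names and count how many predictions match
id2name = ["sadness", "joy", "love", "anger", "fear", "surprise"]
true_labels = [id2name[i] for i in train_data[:10]["label"]]
matches = sum(pred == true for pred, true in zip(labels, true_labels))
print(matches, "out of", len(true_labels), "predictions match")</code>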

Now that you know how to use Hugging Face transformer models to detect text emotions, why not explore other use cases and language tasks where pre-trained LMs can help?
 
 

Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
