
Emotional score of sentences using Spacy


I have a series of 100,000+ sentences and I want to rank how emotional they are.

I am quite new to the NLP world, but this is how I managed to get started (adapted from the spaCy 101 docs):

import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")  # load a model first; the matcher needs nlp.vocab
matcher = Matcher(nlp.vocab)

def set_sentiment(matcher, doc, i, matches):
    # callback: bump the doc's sentiment score for every emotional match
    doc.sentiment += 0.1

myemotionalwordlist = ['you', 'superb', 'great', 'free']

sentence0 = 'You are a superb great free person'
sentence1 = 'You are a great person'
sentence2 = 'Rocks are made of minerals'

sentences = [sentence0, sentence1, sentence2]

# Use LOWER rather than ORTH so that 'you' also matches 'You'
pattern2 = [[{"LOWER": emotionalword, "OP": "+"}] for emotionalword in myemotionalwordlist]
# spaCy 2.x signature; in spaCy 3 this becomes matcher.add("Emotional", pattern2, on_match=set_sentiment)
matcher.add("Emotional", set_sentiment, *pattern2)  # match one or more emotional words

for sentence in sentences:
    doc = nlp(sentence)
    matches = matcher(doc)

    for match_id, start, end in matches:
        string_id = nlp.vocab.strings[match_id]
        span = doc[start:end]
    print("Sentiment", doc.sentiment)

myemotionalwordlist is a list of about 200 words that I've built manually.

My questions are:

(1-a) Counting the number of emotional words does not seem like the best approach. Does anyone have any suggestions for a better way of doing this?

(1-b) In case this approach is good enough, any suggestions on how I can extract emotional words from WordNet?

(2) What's the best way of scaling this up? I am thinking of adding all the sentences to a pandas data frame and then applying the match function to each of them.

Thanks in advance!


Solution

  • There are going to be two main approaches:

    • the one you have started, which is a list of emotional words, and counting how often they appear
    • showing a machine learning model examples of sentences you consider emotional and sentences you consider unemotional, and letting it work it out.

    The first way will get better as you give it more words, but you will eventually hit a limit. (Simply due to the ambiguity and flexibility of human language, e.g. while "you" is more emotive than "it", there are going to be a lot of unemotional sentences that use "you".)
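    A small refinement of the counting approach is to score by the fraction of tokens that are emotional, so long sentences aren't favoured just for being long. A minimal sketch (the word list and the emotion_score helper are illustrative, not from the question):

    ```python
    # Score a sentence by the proportion of its tokens found in an
    # emotional word list, instead of using a raw match count.
    emotional_words = {"you", "superb", "great", "free"}

    def emotion_score(sentence):
        # crude tokenisation: split on whitespace, strip punctuation, lowercase
        tokens = [t.strip(".,!?").lower() for t in sentence.split()]
        if not tokens:
            return 0.0
        hits = sum(1 for t in tokens if t in emotional_words)
        return hits / len(tokens)

    print(emotion_score("You are a superb great free person"))  # 4 of 7 tokens -> ~0.571
    print(emotion_score("Rocks are made of minerals"))          # 0.0
    ```

    This keeps scores comparable across sentences of different lengths, but it inherits all the limits of the word-list idea described above.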

    any suggestions on how I can extract emotional words from wordnet?

    Take a look at SentiWordNet, which adds a measure of positivity, negativity or neutrality to each WordNet entry. For "emotional" you could extract just those entries that have either a positive or negative score over e.g. 0.5. (Watch out for the non-commercial-only licence.)
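    A sketch of that extraction using NLTK's SentiWordNet corpus reader (assumes nltk is installed; the corpora are downloaded on first run, and the 0.5 threshold is just the example value from above):

    ```python
    # Build a candidate emotional word list from SentiWordNet:
    # keep lemmas whose positive or negative score meets a threshold.
    import nltk
    nltk.download("sentiwordnet", quiet=True)
    nltk.download("wordnet", quiet=True)
    from nltk.corpus import sentiwordnet as swn

    def emotional_words(threshold=0.5):
        words = set()
        for senti in swn.all_senti_synsets():
            if senti.pos_score() >= threshold or senti.neg_score() >= threshold:
                # lemma names use underscores for spaces, e.g. "well_behaved"
                words.update(l.replace("_", " ") for l in senti.synset.lemma_names())
        return words

    words = emotional_words(0.5)
    print(len(words))
    ```

    Note that SentiWordNet scores are per word sense, not per word, so a word can appear here because just one of its senses is strongly positive or negative.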

    The second approach will probably work better if you can feed it enough training data, though "enough" can sometimes be a lot. Other downsides are that the models often need much more compute power and memory (a serious issue if you need to run offline, or on a mobile device), and that they are a black box.

    I think the 2020 approach would be to start with a pre-trained BERT model (the bigger the better, see the recent GPT-3 paper), and then fine-tune it with a sample of your 100K sentences that you've manually annotated. Evaluate it on another sample, and annotate more training data for the ones it got wrong. Keep doing this until you get the desired level of accuracy.

    (Spacy has support for both approaches, by the way. What I called fine-tuning above is also called transfer learning; see https://spacy.io/usage/training#transfer-learning. Googling for "spacy sentiment analysis" will also find quite a few tutorials.)
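    As a concrete starting point for the second approach, here is a minimal sketch of training spaCy's (v3) text categorizer on a few hand-labelled sentences. The EMOTIONAL/NEUTRAL labels and the toy training data are made up for illustration; you would substitute your own annotated sample:

    ```python
    # Train a tiny spaCy v3 textcat model to label sentences as
    # EMOTIONAL or NEUTRAL, from a handful of annotated examples.
    import random
    import spacy
    from spacy.training import Example

    nlp = spacy.blank("en")
    nlp.add_pipe("textcat")

    train_data = [
        ("You are a superb great free person", {"cats": {"EMOTIONAL": 1.0, "NEUTRAL": 0.0}}),
        ("What a wonderful, lovely day", {"cats": {"EMOTIONAL": 1.0, "NEUTRAL": 0.0}}),
        ("Rocks are made of minerals", {"cats": {"EMOTIONAL": 0.0, "NEUTRAL": 1.0}}),
        ("The meeting starts at noon", {"cats": {"EMOTIONAL": 0.0, "NEUTRAL": 1.0}}),
    ]
    examples = [Example.from_dict(nlp.make_doc(t), ann) for t, ann in train_data]

    # initialize() infers the label set from the examples
    optimizer = nlp.initialize(lambda: examples)
    for _ in range(20):
        random.shuffle(examples)
        for ex in examples:
            nlp.update([ex], sgd=optimizer)

    doc = nlp("You are a great person")
    print(doc.cats)  # e.g. {"EMOTIONAL": 0.9..., "NEUTRAL": 0.0...}
    ```

    With only four training sentences this won't generalise; the point is the shape of the loop — annotate, train, evaluate on held-out sentences, then annotate more of the ones it gets wrong, exactly as described above.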