Tags: python, nltk, stemming, part-of-speech

Confused about the order of the stemmer and the POS tagger


So I was analyzing a text corpus, and I applied a stemmer to all the tokenized words. But I also have to find all the nouns in the corpus, so I then ran nltk.pos_tag(stemmed_sentence). My question is: am I doing it right?

A.] tokenize -> stem -> pos_tagging

OR

B.] tokenize -> stem          # stemming and POS tagging done separately
    tokenize -> pos_tagging

I've followed method A, but I'm confused as to whether it's the right way to do POS tagging.


Solution

  • Why don't you try it out?

    Here's an example:

    >>> from nltk.stem import PorterStemmer
    >>> from nltk import word_tokenize, pos_tag
    >>> sent = "This is a messed up sentence from the president's Orama and it's going to be sooo good, you're gonna laugh."
    
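    If the tokenizer or tagger models aren't installed yet, they can be fetched first (the resource names below assume a recent NLTK; older versions ship different tagger models):

    >>> import nltk
    >>> nltk.download('punkt')                       # model used by word_tokenize
    >>> nltk.download('averaged_perceptron_tagger')  # model used by pos_tag
    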

    This is the outcome of tokenizing.

    >>> word_tokenize(sent)
    ['This', 'is', 'a', 'messed', 'up', 'sentence', 'from', 'the', 'president', "'s", 'Orama', 'and', 'it', "'s", 'going', 'to', 'be', 'sooo', 'good', ',', 'you', "'re", 'gon', 'na', 'laugh', '.']
    

    This is the outcome of tokenize -> stem.

    >>> porter = PorterStemmer()
    >>> [porter.stem(word) for word in word_tokenize(sent)]
    [u'Thi', u'is', u'a', u'mess', u'up', u'sentenc', u'from', u'the', u'presid', u"'s", u'Orama', u'and', u'it', u"'s", u'go', u'to', u'be', u'sooo', u'good', u',', u'you', u"'re", u'gon', u'na', u'laugh', u'.']
    

    This is the outcome of tokenize -> stem -> POS tag.

    >>> pos_tag([porter.stem(word) for word in word_tokenize(sent)])
    [(u'Thi', 'NNP'), (u'is', 'VBZ'), (u'a', 'DT'), (u'mess', 'NN'), (u'up', 'RP'), (u'sentenc', 'NN'), (u'from', 'IN'), (u'the', 'DT'), (u'presid', 'JJ'), (u"'s", 'POS'), (u'Orama', 'NNP'), (u'and', 'CC'), (u'it', 'PRP'), (u"'s", 'VBZ'), (u'go', 'RB'), (u'to', 'TO'), (u'be', 'VB'), (u'sooo', 'RB'), (u'good', 'JJ'), (u',', ','), (u'you', 'PRP'), (u"'re", 'VBP'), (u'gon', 'JJ'), (u'na', 'NN'), (u'laugh', 'IN'), (u'.', '.')]
    

    This is the outcome of tokenize -> POS tag.

    >>> pos_tag(word_tokenize(sent))
    [('This', 'DT'), ('is', 'VBZ'), ('a', 'DT'), ('messed', 'VBN'), ('up', 'RP'), ('sentence', 'NN'), ('from', 'IN'), ('the', 'DT'), ('president', 'NN'), ("'s", 'POS'), ('Orama', 'NNP'), ('and', 'CC'), ('it', 'PRP'), ("'s", 'VBZ'), ('going', 'VBG'), ('to', 'TO'), ('be', 'VB'), ('sooo', 'RB'), ('good', 'JJ'), (',', ','), ('you', 'PRP'), ("'re", 'VBP'), ('gon', 'JJ'), ('na', 'NN'), ('laugh', 'IN'), ('.', '.')]
    
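    For the original goal of finding nouns, here's a minimal sketch filtering this last output (the Penn Treebank tags that pos_tag uses for nouns all start with 'NN'):

    >>> [word for word, tag in pos_tag(word_tokenize(sent)) if tag.startswith('NN')]
    ['sentence', 'president', 'Orama', 'na']
    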

So what's the right way? Compare the last two outputs: stemming first mangles the tokens ('messed' becomes 'mess', 'going' becomes 'go'), and the tagger, which relies on word endings like -ed and -ing, then mis-tags them (VBN becomes NN, VBG becomes RB). If you need the tags, run the tagger on the original tokens, as in option B.
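
And if both stems and tags are needed, option B can be done in one pass: tag the raw tokens first, then stem each word while keeping its tag. A sketch, reusing porter and sent from above:

    >>> tokens = word_tokenize(sent)
    >>> # tag the untouched tokens, then stem each word; the tag stays attached
    >>> [(porter.stem(word), tag) for word, tag in pos_tag(tokens)]
    [(u'Thi', 'DT'), (u'is', 'VBZ'), (u'a', 'DT'), (u'mess', 'VBN'), ...]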