
finding the POS of the root of a noun_chunk with spacy


When using spacy you can easily loop across the noun_phrases of a text as follows:

import spacy

S = 'This is an example sentence that should include several parts and also make clear that studying Natural language Processing is not difficult'
nlp = spacy.load('en_core_web_sm')
doc = nlp(S)

[chunk.text for chunk in doc.noun_chunks]
# = ['an example sentence', 'several parts', 'Natural language Processing']

You can also get the "root" of the noun chunk:

[chunk.root.text for chunk in doc.noun_chunks]
# = ['sentence', 'parts', 'Processing']

How can I get the POS of each of those words (even though it looks like the root of a noun_chunk is always a noun)? And how can I get the lemma, the shape, and the singular form of that particular word?

Is that even possible?

thx.


Solution

  • Each chunk.root is a Token, so you can read its attributes directly, including lemma_ and pos_ (or tag_ if you prefer the Penn Treebank POS tags).

    import spacy
    S='This is an example sentence that should include several parts and also make ' \
      'clear that studying Natural language Processing is not difficult'
    nlp = spacy.load('en_core_web_sm')
    doc = nlp(S)
    for chunk in doc.noun_chunks:
        print('%-12s %-6s  %s' % (chunk.root.text, chunk.root.pos_, chunk.root.lemma_))
    
    sentence     NOUN    sentence
    parts        NOUN    part
    Processing   NOUN    processing
    

    BTW... In this sentence "Processing" is a noun, so its lemma is "processing", not "process", which would be the lemma of the verb "processing".
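
  • The question also asks about the shape and the singular form. A sketch of both, assuming the same en_core_web_sm model: shape_ gives the orthographic pattern of the token (case and character classes, with runs longer than four characters truncated), and for plural nouns lemma_ already is the singular form, so no extra step is needed.

    ```python
    import spacy

    nlp = spacy.load('en_core_web_sm')
    doc = nlp('This is an example sentence that should include several parts '
              'and also make clear that studying Natural language Processing '
              'is not difficult')
    for chunk in doc.noun_chunks:
        root = chunk.root
        # tag_  : fine-grained Penn Treebank tag (e.g. NN vs NNS)
        # shape_: orthographic shape (e.g. 'Xxxxx' for a capitalized word)
        # lemma_: base form, which for a plural noun like "parts" is the
        #         singular "part"
        print('%-12s %-5s %-6s %s' % (root.text, root.tag_, root.shape_, root.lemma_))
    ```

    Note that shape_ is a lexical attribute and works even without a trained pipeline, whereas lemma_ and tag_ depend on the model's components.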