Tags: python, nltk, tokenize, stop-words

How to treat a phrase containing stopwords as a single token with Python nltk.tokenize


With nltk.tokenize it is easy to tokenize a string and then drop unnecessary stopwords. But how can I treat a phrase that contains stopwords as a single token, while still removing the other stopwords?

For example:

Input: Trump is the President of the United States.

Output: ['Trump','President of the United States']

How can I get a result that removes 'is' and the first 'the', but keeps 'of' and the second 'the'?


Solution

  • You can use nltk's Multi-Word Expression Tokenizer, which merges multi-word expressions into single tokens. You can create a lexicon of multi-word expressions and add entries to it like this:

    from nltk.tokenize import MWETokenizer
    mwetokenizer = MWETokenizer([('President','of','the','United','States')], separator=' ')
    mwetokenizer.add_mwe(('President','of','France'))
    

    Note that MWETokenizer takes already-tokenized text (a list of tokens) as input and re-tokenizes it. So first tokenize the sentence, e.g. with word_tokenize(), and then feed the result into the MWETokenizer:

    from nltk.tokenize import word_tokenize
    sentence = "Trump is the President of the United States, and Macron is the President of France."
    mwetokenized_sentence = mwetokenizer.tokenize(word_tokenize(sentence))
    # ['Trump', 'is', 'the', 'President of the United States', ',', 'and', 'Macron', 'is', 'the', 'President of France', '.']
    
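    Conceptually, the merge step is just a greedy scan over the token list. The following pure-Python sketch illustrates the idea (it is a simplification for clarity, not nltk's actual implementation, which uses a trie for efficient lookup):

    ```python
    def merge_mwes(tokens, mwes, separator=' '):
        """Greedily merge any listed multi-word expression into one token.

        Simplified sketch of what an MWE tokenizer does; `mwes` is a list
        of tuples of tokens, matched left-to-right at each position.
        """
        out, i = [], 0
        while i < len(tokens):
            for mwe in mwes:
                if tuple(tokens[i:i + len(mwe)]) == mwe:
                    out.append(separator.join(mwe))
                    i += len(mwe)
                    break
            else:
                # No expression starts here; keep the token as-is.
                out.append(tokens[i])
                i += 1
        return out

    tokens = ['Trump', 'is', 'the', 'President', 'of', 'the', 'United', 'States', '.']
    print(merge_mwes(tokens, [('President', 'of', 'the', 'United', 'States')]))
    # ['Trump', 'is', 'the', 'President of the United States', '.']
    ```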

    Then, filter out stop-words to get the final filtered tokenized sentence:

    from nltk.corpus import stopwords
    stop_words = set(stopwords.words('english'))
    filtered_sentence = [token for token in mwetokenized_sentence if token not in stop_words]
    print(filtered_sentence)
    

    Output:

    ['Trump', 'President of the United States', ',', 'Macron', 'President of France', '.']
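    One caveat: the entries in stopwords.words('english') are lowercase, so a capitalized stop-word (e.g. "The" at the start of a sentence) would slip through the filter above. Lowercasing each token before the membership test handles this; here is a sketch using a tiny hardcoded stop-word set in place of nltk's full list:

    ```python
    # Tiny stand-in for set(stopwords.words('english')); nltk's list is larger.
    stop_words = {'is', 'the', 'and', 'of'}

    tokens = ['The', 'President of France', 'is', 'visiting', '.']
    # Compare case-insensitively; merged MWE tokens contain the separator
    # (a space), so they can never equal a single stop-word anyway.
    filtered = [t for t in tokens if t.lower() not in stop_words]
    print(filtered)
    # ['President of France', 'visiting', '.']
    ```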