Tags: python, apache-spark, pyspark, nltk, rdd

How Do I Count POS Tags Using PySpark and NLTK?


I have some text (or a large file) and I need to use NLTK and PySpark to count the number of POS tags. I couldn't find a way to import the text file, so I tried a short string instead and failed.

The counting step needs to run in PySpark.

import nltk
import re

##textfile = sc.textFile('')
##or
##textstring = """This is just a bunch of words to use for this example.  John gave ##them to me last night but Kim took them to work.  Hi Stacy.  ###'''URL:http://example.com'''"""

tstring = sc.parallelize([textstring])

TOKEN_RE = re.compile(r"\b[\w']+\b")

dropURL = tstring.filter(lambda x: "URL" not in x)

words = dropURL.flatMap(lambda line: line.split(" "))

nltkwords = words.flatMap(lambda word: nltk.tag.pos_tag(nltk.regexp_tokenize(word, TOKEN_RE)))
# word_counts = nltkwords.map(lambda x: (x, 1))


nltkwords.take(50)

Solution

  • Here's an example for your test string. I think you're just missing a step: split the string on spaces before filtering. Otherwise the RDD holds the entire string as a single element, and the whole thing is removed because "URL" appears somewhere in it.

    import nltk
    import re
    
    # `sc` is the SparkContext that the pyspark shell provides.
    # pos_tag needs the tagger model: nltk.download('averaged_perceptron_tagger')
    textstring = """This is just a bunch of words to use for this example.  John gave ##them to me last night but Kim took them to work.  Hi Stacy.  ###'''URL:http://example.com'''"""
    
    TOKEN_RE = re.compile(r"\b[\w']+\b")
    # Split on spaces first, so the filter drops only the token that
    # contains "URL" instead of the whole string.
    text = sc.parallelize(textstring.split(' '))
    dropURL = text.filter(lambda x: "URL" not in x)
    
    words = dropURL.flatMap(lambda token: token.split(" "))
    
    # Tokenize each element with the regex and POS-tag it.
    nltkwords = words.flatMap(lambda word: nltk.tag.pos_tag(nltk.regexp_tokenize(word, TOKEN_RE)))
    
    nltkwords.collect()
    # [('This', 'DT'), ('is', 'VBZ'), ('just', 'RB'), ('a', 'DT'), ('bunch', 'NN'), ('of', 'IN'), ('words', 'NNS'), ('to', 'TO'), ('use', 'NN'), ('for', 'IN'), ('this', 'DT'), ('example', 'NN'), ('John', 'NNP'), ('gave', 'VBD'), ('them', 'PRP'), ('to', 'TO'), ('me', 'PRP'), ('last', 'JJ'), ('night', 'NN'), ('but', 'CC'), ('Kim', 'NNP'), ('took', 'VBD'), ('them', 'PRP'), ('to', 'TO'), ('work', 'NN'), ('Hi', 'NN'), ('Stacy', 'NN')]
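
    Note that this tags each word in isolation, which discards sentence context; that's why use comes back as NN rather than VB above. Here's a minimal sentence-level sketch, assuming NLTK's punkt model is installed (nltk.download('punkt')) so sent_tokenize works:

    # Split into sentences instead of words, drop any sentence fragment that
    # contains "URL" (this may drop neighboring words if punkt doesn't isolate
    # the fragment), then tag each whole sentence so the tagger sees context.
    sentences = sc.parallelize(nltk.sent_tokenize(textstring))
    no_url = sentences.filter(lambda s: "URL" not in s)
    tagged = no_url.flatMap(lambda s: nltk.pos_tag(nltk.regexp_tokenize(s, TOKEN_RE)))
    tagged.collect()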
    

    To count the occurrences of POS tags, you can do a reduceByKey:

    word_counts = nltkwords.map(lambda x: (x[1], 1)).reduceByKey(lambda x, y: x + y)
    
    word_counts.collect()
    # [('NNS', 1), ('TO', 3), ('CC', 1), ('DT', 3), ('JJ', 1), ('VBZ', 1), ('RB', 1), ('NN', 7), ('VBD', 2), ('PRP', 3), ('IN', 2), ('NNP', 2)]
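
    The question also asks about reading the text from a large file. A minimal sketch, assuming a plain-text file at the hypothetical path input.txt: sc.textFile (note the capital F) returns an RDD with one element per line, so the same split/filter/tag/count pipeline applies:

    # 'input.txt' is a placeholder path; sc.textFile yields one element per line.
    lines = sc.textFile('input.txt')
    words = lines.flatMap(lambda line: line.split(" ")).filter(lambda w: "URL" not in w)
    tagged = words.flatMap(lambda w: nltk.pos_tag(nltk.regexp_tokenize(w, TOKEN_RE)))
    tag_counts = tagged.map(lambda t: (t[1], 1)).reduceByKey(lambda x, y: x + y)
    tag_counts.collect()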