How do I construct unigrams, bigrams and trigrams for a large corpus and then compute the frequency of each of them? The results should be arranged from the most frequent to the least frequent n-gram.
```python
import nltk
from nltk.util import ngrams
from collections import Counter

text = "I need to write a program in NLTK that breaks a corpus (a large collection of \
txt files) into unigrams, bigrams, trigrams, fourgrams and fivegrams. \
I need to write a program in NLTK that breaks a corpus"
token = nltk.word_tokenize(text)
bigrams = ngrams(token, 2)
trigrams = ngrams(token, 3)
```
Try this:
```python
import nltk
from nltk.util import ngrams
from collections import Counter

text = '''I need to write a program in NLTK that breaks a corpus (a large
collection of txt files) into unigrams, bigrams, trigrams, fourgrams and
fivegrams. I need to write a program in NLTK that breaks a corpus'''

token = nltk.word_tokenize(text)
# Counter.most_common() returns (ngram, count) pairs sorted from most to least frequent.
most_frequent_bigrams = Counter(ngrams(token, 2)).most_common()
most_frequent_trigrams = Counter(ngrams(token, 3)).most_common()

for k, v in most_frequent_bigrams:
    print(k, v)
for k, v in most_frequent_trigrams:
    print(k, v)
```
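The same pattern generalizes to any n (unigrams through fivegrams): slide a window of n tokens over the text and feed the resulting tuples to `Counter`. Here is a minimal stdlib-only sketch of that idea; `str.split` is used as a stand-in for `nltk.word_tokenize` so the example runs without NLTK installed:

```python
from collections import Counter

def ngram_counts(tokens, n):
    # zip over n shifted views of the token list yields every
    # run of n consecutive tokens as a tuple (an n-gram).
    grams = zip(*(tokens[i:] for i in range(n)))
    return Counter(grams)

text = "to be or not to be"
tokens = text.split()  # stand-in for nltk.word_tokenize(text)

for n in (1, 2, 3):
    # most_common() already sorts from most to least frequent
    for gram, count in ngram_counts(tokens, n).most_common():
        print(gram, count)
```

For a real corpus of txt files, you would tokenize each file and update a single `Counter` per n across all of them, rather than building one list of tokens in memory.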