I am trying to tokenize a sentence and tag it with parts of speech. I believe the code is correct, but there is no output. What could be the problem? Here is the code:
import nltk
from nltk.tokenize import word_tokenize
text = word_tokenize("And now for something completely different")
nltk.pos_tag(text)
text = word_tokenize("They refuse to permit us to obtain the refuse permit")
nltk.pos_tag(text)
It seems the required NLTK data packages are missing: 'punkt' (used by word_tokenize) and 'averaged_perceptron_tagger' (used by nltk.pos_tag). You only need to download them once. Also, when this code runs as a script rather than in an interactive shell, the return value of nltk.pos_tag(text) is not displayed automatically, so wrap the calls in print(). Try this:
import nltk

# Download the required data (only needed the first time)
nltk.download('punkt')                       # tokenizer models used by word_tokenize
nltk.download('averaged_perceptron_tagger')  # model used by nltk.pos_tag

from nltk.tokenize import word_tokenize

text = word_tokenize("And now for something completely different")
print(nltk.pos_tag(text))

text = word_tokenize("They refuse to permit us to obtain the refuse permit")
print(nltk.pos_tag(text))

print("----End of execution----")