I want to calculate a conditional probability distribution for my language model, but first I need a conditional frequency distribution, which I am not able to generate. This is my code:
# -*- coding: utf-8 -*-
import io
import nltk
from nltk.util import ngrams
from nltk.tokenize import sent_tokenize
from preprocessor import utf8_to_ascii
with io.open("mypet.txt", 'r', encoding='utf8') as utf_file:
    file_content = utf_file.read()

ascii_content = utf8_to_ascii(file_content)
sentence_tokenize_list = sent_tokenize(ascii_content)

all_trigrams = []
for sentence in sentence_tokenize_list:
    sentence = sentence.rstrip('.!?')
    tokens = nltk.re.findall(r"\w+(?:[-']\w+)*|'|[-.(]+|\S\w*", sentence)
    trigrams = ngrams(tokens, 3, pad_left=True, pad_right=True, left_pad_symbol='<s>', right_pad_symbol="</s>")
    all_trigrams.extend(trigrams)

conditional_frequency_distribution = nltk.ConditionalFreqDist(all_trigrams)
conditional_probability_distribution = nltk.ConditionalProbDist(conditional_frequency_distribution, nltk.MLEProbDist)

for trigram in all_trigrams:
    print "{0}: {1}".format(conditional_probability_distribution[trigram[0]].prob(trigram[1]), trigram)
But I am getting this error:
line 23, in <module>
ValueError: too many values to unpack
This is my preprocessor.py file, which handles the UTF-8 characters:
# -*- coding: utf-8 -*-
import json
def utf8_to_ascii(utf8_text):
    with open("utf_to_ascii.json") as data_file:
        data = json.load(data_file)

    utf_table = data["chars"]
    for key, value in utf_table.items():
        utf8_text = utf8_text.replace(key, value)

    return utf8_text.encode('ascii')
And this is my utf_to_ascii.json file, which I use to map UTF-8 characters to their ASCII equivalents:
{
    "chars": {
        "“": "",
        "”": "",
        "’": "'",
        "—": "-",
        "–": "-"
    }
}
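As an illustration of what the replace-then-encode step in preprocessor.py does, here is a minimal sketch with the mapping inlined instead of loaded from the JSON file (the sample string is made up for demonstration):

```python
# Inlined version of the "chars" table from utf_to_ascii.json.
utf_table = {u'\u201c': '',   # left curly quote, dropped
             u'\u201d': '',   # right curly quote, dropped
             u'\u2019': "'",  # curly apostrophe -> straight apostrophe
             u'\u2014': '-',  # em dash -> hyphen
             u'\u2013': '-'}  # en dash -> hyphen

text = u'\u201cI\u2019m here\u201d \u2014 he said'  # “I’m here” — he said

# Replace each non-ASCII character with its ASCII substitute,
# so the final .encode('ascii') call cannot fail.
for key, value in utf_table.items():
    text = text.replace(key, value)

print(text.encode('ascii'))
```

After the replacements, only plain ASCII characters remain, which is why `utf8_to_ascii` can safely call `.encode('ascii')` at the end.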
Can someone suggest how I can build a conditional frequency distribution over trigrams in NLTK?
I finally figured out how to do it. In the code above, I convert each trigram into a bigram whose first element is itself a pair of words. For example, ('I', 'am', 'going') becomes (('I', 'am'), 'going'): a (condition, sample) pair where the condition is the tuple of the first two words. To achieve this I only had to change a few lines:
trigrams_as_bigrams = []
for sentence in sentence_tokenize_list:
    ....
    ....
    trigrams = ngrams(tokens, 3, pad_left=True, pad_right=True, left_pad_symbol='<s>', right_pad_symbol="</s>")
    trigrams_as_bigrams.extend([((t[0], t[1]), t[2]) for t in trigrams])
....
....
The rest of the code is the same as before, and it works fine for me. Thank you for your efforts.
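For anyone who wants to see the (condition, sample) idea end to end, here is a minimal standard-library sketch of the same counting and MLE steps that `ConditionalFreqDist` and `ConditionalProbDist` with `MLEProbDist` perform (the toy token list and the `mle_prob` helper are illustrative, not part of the original code):

```python
from collections import Counter, defaultdict

# Toy sentence, already tokenized and padded like in the code above.
tokens = ['<s>', '<s>', 'I', 'am', 'going', 'home', '</s>', '</s>']

# Build trigrams, then reshape each into a ((w1, w2), w3) pair:
# the first two words are the condition, the third is the sample.
trigrams = zip(tokens, tokens[1:], tokens[2:])
pairs = [((w1, w2), w3) for w1, w2, w3 in trigrams]

# Count samples per condition -- the role of ConditionalFreqDist.
cfd = defaultdict(Counter)
for condition, word in pairs:
    cfd[condition][word] += 1

# Maximum-likelihood estimate: count / total for that condition,
# which is what MLEProbDist computes per condition.
def mle_prob(condition, word):
    total = sum(cfd[condition].values())
    return cfd[condition][word] / float(total)

print(mle_prob(('I', 'am'), 'going'))  # 1.0 in this toy corpus
```

This is exactly why `ConditionalFreqDist` raised `ValueError: too many values to unpack` on raw trigrams: it unpacks each item into a (condition, sample) pair, so the 3-tuples must first be reshaped into 2-tuples.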