I have created tokenized data (text) within a DataFrame in Python. I just want to count the tokens and produce an output that shows the frequency of each element in the tokenized data.
Here is the code I used to create the tokenized data:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import re
def tokenize(txt):
    tokens = re.split(r'\W+', txt)  # split on runs of non-word characters
    return tokens
Complains['clean_text_tokenized'] = Complains['clean text'].apply(lambda x: tokenize(x.lower()))
# Complains['clean text'] is the original text column of the data
Complains['clean_text_tokenized'].head(10)
Here is the output of the tokenized data:
0 [comcast, cable, internet, speeds]
1 [payment, disappear, service, got, disconnected]
2 [speed, and, service]
3 [comcast, imposed, a, new, usage, cap, of, 300...
4 [comcast, not, working, and, no, service, to, ...
5 [isp, charging, for, arbitrary, data, limits, ...
6 [throttling, service, and, unreasonable, data,...
7 [comcast, refuses, to, help, troubleshoot, and...
8 [comcast, extended, outages]
9 [comcast, raising, prices, and, not, being, av...
Name: clean_text_tokenized, dtype: object
Any advice would be helpful.
You can use Counter:
from collections import Counter
# ... and then
def tokenize(txt):
    return Counter(re.split(r'\W+', txt))
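If you apply this version of tokenize per row, each row gets its own Counter; summing those Counters then gives the overall frequency across the whole column. A minimal sketch, assuming your Complains DataFrame from above (the token_counts column name is just for illustration):

Complains['token_counts'] = Complains['clean text'].apply(lambda x: tokenize(x.lower()))
# Counter supports +, so sum() merges the per-row counts into one Counter
total = sum(Complains['token_counts'], Counter())
print(total.most_common(10))  # ten most frequent tokens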
Here is a runnable Python test:
from collections import Counter
import pandas as pd

Complains = pd.DataFrame({'clean text': ['comcast, cable, internet, speeds',
                                         'payment, disappear, service, got, disconnected']})
# extract word tokens directly with a vectorized string method
Complains['clean_text_tokenized'] = Complains['clean text'].str.findall(r'\w+')
# flatten the per-row token lists and count every token
freq = Counter([item for sublist in Complains['clean_text_tokenized'].to_list() for item in sublist])
print(freq)
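Alternatively, you can stay entirely in pandas: Series.explode (available since pandas 0.25) turns each token list into one row per token, and value_counts then gives the frequency of every token as a Series. A sketch using the same Complains frame:

# one row per token, then count occurrences of each token
token_freq = Complains['clean_text_tokenized'].explode().value_counts()
print(token_freq)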