I am trying to create clusters from the text contained in an Excel file, but I'm getting the error "AttributeError: 'int' object has no attribute 'lower'".
Sample.xlsx is a file containing data like this:
I have created a list called corpus that holds the unique text from each row, and the error occurs while vectorizing the corpus.
```python
import pandas as pd
import numpy as np
data=pd.read_excel('sample.xlsx')
idea = data.iloc[:, 0:1]  # Select the first column, which holds the text.
# Convert the Excel column into a list of documents,
# where each document corresponds to a group of sentences.
corpus = []
for index, row in idea.iterrows():
    corpus.append(row['_index_text_data'])
# CountVectorizer, then TF-IDF transformer
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)  # ERROR RAISED ON THIS LINE
#vectorizer.get_feature_names()
#print(X.toarray())
from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer(smooth_idf=False)
tfidf = transformer.fit_transform(X)
print(tfidf.shape)
from sklearn.cluster import KMeans
num_clusters = 5  # Change it according to your data.
km = KMeans(n_clusters=num_clusters)
km.fit(tfidf)
clusters = km.labels_.tolist()
idea = {'Idea': corpus, 'Cluster': clusters}  # Dict mapping each doc to its cluster number.
frame = pd.DataFrame(idea, index=[clusters], columns=['Idea', 'Cluster'])  # Convert it into a DataFrame.
print("\n")
print(frame)  # Print each doc with its labeled cluster number.
print("\n")
print(frame['Cluster'].value_counts())  # Print the count of docs in each cluster.
```
Error: "AttributeError: 'int' object has no attribute 'lower'"
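For context, the error is easy to reproduce in isolation: `CountVectorizer` lowercases each document by default, so any non-string item (here, a numeric Excel cell read as an int) fails when `.lower()` is called on it. A minimal sketch with made-up data:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical corpus: the int mimics a numeric cell read from Excel.
mixed_corpus = ["some text", 42]

vectorizer = CountVectorizer()
try:
    vectorizer.fit_transform(mixed_corpus)
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'lower'
```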
If someone is looking for an answer to this question: convert the entire corpus to strings with `corpus = [str(item) for item in corpus]`, placed after the for loop in the code above.
New code:

```python
corpus = []
for index, row in idea.iterrows():
    corpus.append(row['_index_text_data'])
corpus = [str(item) for item in corpus]