Tags: python, cluster-analysis, feature-extraction, tf-idf

On what basis is my source vectorizing & clustering data?


I am taking input from a text file and want to build a semantic vocabulary; however, since no vocabulary is set, I am just passing a token list of words. I am not able to figure out on what basis vectorization & clustering happen when the vocabulary is not set. The documentation says "If not given, a vocabulary is determined from the input documents." However, I am only using one txt file as my input.

I have tried to create a vocabulary out of the WordNet synonym sets, but have not been able to get anywhere.

import string
import re
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans
from nltk.corpus import wordnet


src = open('Sample.txt', 'r')
pageData = src.read().splitlines()

# preprocessing
def clean_text(text):
    text = "".join([word.lower() for word in text if word not in string.punctuation])
    tokenize = re.split("\W+", text)  # tokenizing based on words
    return text

filter_data = clean_text(pageData)
# Feature Extraction
Tfidf_vectorizer = TfidfVectorizer(tokenizer=clean_text, analyzer='char',
                                   use_idf=True, stop_words=stopwords)
Tfidf_matrix = Tfidf_vectorizer.fit_transform(filter_data)  # checking the words in filter_data to find relevance
terms = Tfidf_vectorizer.get_feature_names()

# Clustering
km = KMeans(n_clusters=5, n_jobs=-1)
labels = km.fit_transform(Tfidf_matrix)
clusters = km.labels_.tolist()
X = Tfidf_matrix.todense()

Solution

  • The vocabulary here is a mapping of words to columns.

    If you don't predefine a vocabulary (which is necessary when processing multiple sources to get the same columns), it will simply be built by adding new columns as new words are seen; the sketch below illustrates both cases.
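
A minimal sketch of that behaviour with TfidfVectorizer (the two example sentences and the fixed_vocab list are made up for illustration): without a vocabulary argument the mapping is learned during fit and exposed as vocabulary_; with vocabulary= passed up front, the columns are fixed in advance.

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]  # hypothetical example input

# Case 1: no vocabulary given, so it is learned from the documents themselves
vec = TfidfVectorizer()
matrix = vec.fit_transform(docs)
print(vec.vocabulary_)  # e.g. {'cat': 0, 'dog': 1, ...}: word -> column index
print(matrix.shape)     # (2, number_of_distinct_words)

# Case 2: vocabulary predefined, so every source gets the same columns
fixed_vocab = ["cat", "dog", "mat", "log"]
vec_fixed = TfidfVectorizer(vocabulary=fixed_vocab)
matrix_fixed = vec_fixed.fit_transform(docs)
print(matrix_fixed.shape)  # (2, 4): columns are fixed regardless of the input text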