Please see the question listed here for more context.
I am attempting to use a document-term matrix, built with text2vec, to train a naive Bayes (nb) model using the caret package. However, I get this warning message:
Warning message:
In eval(xpr, envir = envir) : model fit failed for Fold01.Rep1: usekernel=FALSE, fL=0, adjust=1
Error in NaiveBayes.default(x, y, usekernel = FALSE, fL = param$fL, ...) :
  Zero variances for at least one class in variables:
Please help me understand this message and what steps I need to take to keep the model fit from failing. I have a feeling that I need to remove more sparse terms from the DTM, but I'm not sure.
Code to build the model:
control <- trainControl(method="repeatedcv", number=10, repeats=3, savePredictions=TRUE, classProbs=TRUE)
Train_PRDHA_String.df$Result <- ifelse(Train_PRDHA_String.df$Result == 1, "X", "Y")
options(warn=1)
options(warnings=2)
t4 = Sys.time()
svm_nb <- train(x = as.matrix(dtm_train), y = as.factor(Train_PRDHA_String.df$Result),
                method = "nb",
                trControl = control,
                tuneLength = 5,
                metric = "Accuracy")
print(difftime(Sys.time(), t4, units = 'sec'))
Code to build the document-term matrix (text2vec):
library(text2vec)
library(data.table)
#Define preprocessing function and tokenization function
preproc_func = tolower
token_func = word_tokenizer
#Combine both text fields - learn the vocabulary from both
union_txt = c(Train_PRDHA_String.df$MAKTX_Keyword, Train_PRDHA_String.df$PH_Level_04_Description_Keyword)
#Create an iterator over tokens with the itoken() function
it_train = itoken(union_txt,
                  preprocessor = preproc_func,
                  tokenizer = token_func,
                  ids = Train_PRDHA_String.df$ID,
                  progressbar = TRUE)
#Build Vocabulary
vocab = create_vocabulary(it_train)
vocab
#Dimensional Reduction
pruned_vocab = prune_vocabulary(vocab,
                                term_count_min = 10,
                                doc_proportion_max = 0.5,
                                doc_proportion_min = 0.001)
vectorizer = vocab_vectorizer(pruned_vocab)
#Start building a document-term matrix
#vectorizer = vocab_vectorizer(vocab)
#Create the DTM for Train_PRDHA_String.df$MAKTX_Keyword (vocabulary was learned above)
it1 = itoken(Train_PRDHA_String.df$MAKTX_Keyword, preproc_func,
token_func, ids = Train_PRDHA_String.df$ID)
dtm_train_1 = create_dtm(it1, vectorizer)
#Create the DTM for Train_PRDHA_String.df$PH_Level_04_Description_Keyword
it2 = itoken(Train_PRDHA_String.df$PH_Level_04_Description_Keyword, preproc_func,
token_func, ids = Train_PRDHA_String.df$ID)
dtm_train_2 = create_dtm(it2, vectorizer)
#Combine dtm_train_1 & dtm_train_2 into a single matrix
dtm_train = cbind(dtm_train_1, dtm_train_2)
#Normalise
dtm_train = normalize(dtm_train, "l1")
dim(dtm_train)
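To see which terms trigger the error, here is a minimal diagnostic sketch, assuming dtm_train and Train_PRDHA_String.df$Result exist exactly as built above (the 1e-12 tolerance is an arbitrary cutoff for "zero" variance). It computes per-class column variances of the sparse DTM and lists the columns that are constant within at least one class:
library(Matrix)
y <- as.factor(Train_PRDHA_String.df$Result)
zero_var_terms <- lapply(levels(y), function(lvl) {
  sub <- dtm_train[y == lvl, , drop = FALSE]
  #Column variance of a sparse matrix via E[x^2] - E[x]^2
  m  <- Matrix::colMeans(sub)
  m2 <- Matrix::colMeans(sub^2)
  colnames(sub)[(m2 - m^2) < 1e-12]
})
names(zero_var_terms) <- levels(y)
#Terms that are constant (usually all zero) inside at least one class
unique(unlist(zero_var_terms))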
It means that, when these variables are resampled, they have only one unique value. You can use preProc = "zv"
to get rid of the warning. It would help to have a small, reproducible example for questions like this.
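For example, a sketch of that suggestion applied to the train() call from the question (all other arguments unchanged), so that zero-variance predictors are dropped within each resampling fold:
svm_nb <- train(x = as.matrix(dtm_train), y = as.factor(Train_PRDHA_String.df$Result),
                method = "nb",
                preProc = "zv",   #drop zero-variance predictors before fitting
                trControl = control,
                tuneLength = 5,
                metric = "Accuracy")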