I've trained a naive Bayes model with R using the tidymodels framework.
The whole model is saved in an .rds file. Here's a snippet of that model's contents (the full model contains 181 or more such tables, so I can't post all of it here):
══ Workflow [trained] ══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: naive_Bayes()
── Preprocessor ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
0 Recipe Steps
── Model ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
$apriori
grouping
A B C D E F
0.1666667 0.1666667 0.1666667 0.1666667 0.1666667 0.1666667
$tables
$tables$var1
var
grouping 1 2 3 4
1 0.3173302108 0.3728337237 0.2304449649 0.0793911007
2 0.2104513064 0.3885985748 0.2923990499 0.1085510689
3 0.2561613144 0.5481702763 0.1784914115 0.0171769978
4 0.0038167939 0.1059160305 0.5477099237 0.3425572519
5 0.0009017133 0.0324616772 0.3841298467 0.5825067628
6 0.1474328780 0.4399434762 0.3655204899 0.0471031559
$tables$var2
var
grouping 1 2 3 4
1 0.2215456674 0.3592505855 0.2777517564 0.1414519906
2 0.1532066508 0.3446555819 0.3225653207 0.1795724466
3 0.1762509335 0.4458551158 0.3330843913 0.0448095594
4 0.0009541985 0.0324427481 0.4208015267 0.5458015267
5 0.0009017133 0.0189359784 0.2957619477 0.6844003607
6 0.1427225624 0.4371172869 0.3546867640 0.0654733867
$tables$var3
var
grouping 1 2 3 4 5
1 0.7679700304 0.1992507609 0.0320767970 0.0004682744 0.0002341372
2 0.3680835906 0.3526478271 0.2526715744 0.0256471147 0.0009498931
3 0.0432835821 0.2328358209 0.5201492537 0.1694029851 0.0343283582
4 0.0514775977 0.2278360343 0.4642516683 0.1954242135 0.0610104862
5 0.0117117117 0.0702702703 0.3144144144 0.3486486486 0.2549549550
6 0.0150659134 0.1012241055 0.4077212806 0.3436911488 0.1322975518
$tables$var4
var
grouping 1 2 3 4 5
1 0.6518379771 0.3289627722 0.0187309764 0.0002341372 0.0002341372
2 0.1260983139 0.2125385894 0.5079553550 0.1184991688 0.0349085728
3 0.3089552239 0.4783582090 0.2059701493 0.0037313433 0.0029850746
4 0.3441372736 0.4718779790 0.1811248808 0.0019065777 0.0009532888
5 0.0270270270 0.0360360360 0.3432432432 0.3612612613 0.2324324324
6 0.0127118644 0.0555555556 0.4119585687 0.3672316384 0.1525423729
I read that file into R, which works fine, and then want to use that model to predict values for a new data set:
model <- readRDS(file.choose())
new_pred <- predict(model,
                    dat_new,
                    type = "prob")
For me, personally, this runs just fine. But when I sent it to a client of mine, they got the following error:
Error in blueprint$forge$clean(blueprint = blueprint, new_data = new_data, :
attempt to apply non-function
I know that with so little information it is very difficult to figure out what's going on, but I'm trying anyway. Maybe the tidymodels experts here know where such an error might come from.
Any ideas?
Update to show how the model is created:
library(tidymodels)
library(discrim)

model_recipe <- recipe(outcome_var ~ ., data = dat_train)

model_final <- naive_Bayes(Laplace = 1) |>
  set_mode("classification") |>
  set_engine("klaR", prior = rep((1/6), 6))

model_final_wf <- workflow() |>
  add_recipe(model_recipe) |>
  add_model(model_final)

full_fit <- model_final_wf |>
  fit(data = dat_full)

saveRDS(full_fit, file = "my_model.rds")
You are getting this error because your client is using too old a version of {hardhat}.
In version 1.1.0 of {hardhat}, a lot of the internals were changed. This means that the $clean element is no longer present in the blueprint, which is what causes the error you are seeing.
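You can both confirm this by comparing the installed version on each machine (packageVersion() is base R, nothing model-specific):

packageVersion("hardhat")

If the client's version is older than 1.1.0 while the model was fitted with 1.1.0 or later, their hardhat still tries to call the $clean function that newer blueprints no longer contain.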
The recommended course of action is for both of you to use the same version of {hardhat}, preferably the most recent one, which at the time of writing is 1.2.0.
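Your client can install the current CRAN release of {hardhat}, or update the whole tidymodels family in one go:

install.packages("hardhat")

# or update every tidymodels package at once
tidymodels::tidymodels_update()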
Additionally: when sharing models like this, it is recommended that you also pass along the package versions to make sure everything stays in sync, e.g. with {renv}, or by using a more dedicated model-deployment tool such as {vetiver}.
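A minimal sketch of both options (the package calls are real; the board path is just a placeholder):

# Option 1: pin exact package versions with {renv}
renv::init()      # once, in the project where the model is fitted
renv::snapshot()  # writes renv.lock; ship this file along with the .rds
# on the client's machine, inside the same project:
renv::restore()   # installs the exact versions recorded in renv.lock

# Option 2: deploy the model with {vetiver} + {pins}
library(vetiver)
library(pins)

v <- vetiver_model(full_fit, "my_model")
board <- board_folder("path/to/shared/board")  # placeholder shared location
vetiver_pin_write(board, v)  # stores the model with its version metadata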