I am trying to implement a very simple ML problem, where I use text to predict an outcome. In R, a basic example would be:
1. Import some fake but funny text data
library(caret)
library(dplyr)
library(text2vec)
dataframe <- data_frame(id = c(1, 2, 3, 4),
                        text = c("this is a this", "this is another", "hello", "what???"),
                        value = c(200, 400, 120, 300),
                        output = c("win", "lose", "win", "lose"))
> dataframe
# A tibble: 4 x 4
     id text            value output
  <dbl> <chr>           <dbl> <chr>
1     1 this is a this    200 win
2     2 this is another   400 lose
3     3 hello             120 win
4     4 what???           300 lose
2. Use text2vec to get a sparse matrix representation of my text (see also https://github.com/dselivanov/text2vec/blob/master/vignettes/text-vectorization.Rmd)
# these are text2vec functions to tokenize and lowercase the text
prep_fun = tolower
tok_fun = word_tokenizer

# create the tokens
train_tokens = dataframe$text %>%
  prep_fun %>%
  tok_fun

it_train = itoken(train_tokens)
vocab = create_vocabulary(it_train)
vectorizer = vocab_vectorizer(vocab)
dtm_train = create_dtm(it_train, vectorizer)
> dtm_train
4 x 6 sparse Matrix of class "dgCMatrix"
  what hello another a is this
1    .     .       . 1  1    2
2    .     .       1 .  1    1
3    .     1       . .  .    .
4    1     .       . .  .    .
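One optional refinement (not part of the original workflow): raw counts let very frequent tokens such as "this" dominate, so text2vec also provides a TfIdf transformer to reweight them. A minimal sketch, assuming `dtm_train` from above:

```r
library(text2vec)

# Fit a TF-IDF model on the training dtm and reweight it in one step
tfidf = TfIdf$new()
dtm_train_tfidf = fit_transform(dtm_train, tfidf)

# The fitted `tfidf` object can later be applied to a test dtm with transform()
```

The same fitted `tfidf` must be reused for any test data so train and test columns are weighted identically.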
3. Finally, train the algo (for instance, using caret) to predict output using my sparse matrix.
mymodel <- train(x = dtm_train, y = dataframe$output, method = "xgbTree")
> confusionMatrix(mymodel)
Bootstrapped (25 reps) Confusion Matrix

(entries are percentual average cell counts across resamples)

          Reference
Prediction lose  win
      lose 17.6 44.1
      win  29.4  8.8

 Accuracy (average) : 0.264
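For completeness, scoring new text with this model works by pushing it through the same pipeline; the key detail is reusing the same fitted `vectorizer` so the new dtm has exactly the training columns. A sketch, assuming `prep_fun`, `tok_fun`, `vectorizer`, and `mymodel` from above are in scope (`new_text` is made-up illustration data):

```r
library(text2vec)
library(dplyr)

new_text <- c("this is new", "hello again")

# Same preprocessing and tokenization as training
new_tokens <- new_text %>% prep_fun %>% tok_fun
it_new <- itoken(new_tokens)

# Reuse the TRAINING vectorizer: same 6 columns, same order as dtm_train
dtm_new <- create_dtm(it_new, vectorizer)

# caret's predict should accept the sparse matrix for xgbTree;
# if not, convert with as.matrix(dtm_new)
predict(mymodel, newdata = dtm_new)
```

Unseen tokens ("new", "again") are simply dropped because they are not in the training vocabulary.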
My problem is: I see how to import data into h2o using spark_read_csv, rsparkling, and as_h2o_frame. However, for points 2. and 3. above I am completely lost. Can someone please give me some hints or tell me whether this approach is even possible with h2o?
Many thanks!!
You can solve this one of two ways: (1) in R first and then move to H2O for modeling, or (2) entirely in H2O using H2O's word2vec implementation.
Option 1: Use R data.frames and text2vec, then convert the sparse matrix to an H2OFrame and do the modeling in H2O.
# Use same code as above to get to this point, then:
library(h2o)
h2o.init()

# Convert the dgCMatrix to an H2OFrame and add the response column
train <- as.h2o(dtm_train)
train$y <- as.h2o(dataframe$output)

# Train any H2O model (e.g. GBM)
mymodel <- h2o.gbm(y = "y", training_frame = train,
                   distribution = "bernoulli", seed = 1)
Option 2: Train a word2vec embedding in H2O and apply it to your text to get the equivalent of the sparse matrix, then train an H2O machine learning model (e.g. GBM). I will try to edit this answer later with a working example using your data, but in the meantime, here is an example demonstrating the use of H2O's word2vec functionality in R.
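In the meantime, here is a hedged sketch of what that word2vec route could look like on the toy dataframe from the question. The function names (`h2o.tokenize`, `h2o.word2vec`, `h2o.transform`, `h2o.cbind`) come from h2o's R API; the parameter values are illustrative, not tuned, and a corpus this small will not produce meaningful embeddings:

```r
library(h2o)
h2o.init()

# Push the raw text into H2O and split each document into word tokens
# ("\\\\W+" is the split pattern used in H2O's own word2vec examples)
text_hf <- as.h2o(dataframe["text"])
words <- h2o.tokenize(text_hf$text, "\\\\W+")

# Train the embedding; tiny vec_size only because the toy corpus is tiny
w2v <- h2o.word2vec(words, vec_size = 5, min_word_freq = 1, epochs = 5)

# Average the word vectors per document -> one fixed-length row per text,
# the dense analogue of the sparse dtm from option 1
doc_vecs <- h2o.transform(w2v, words, aggregate_method = "AVERAGE")

# Bind the response and train a GBM, as in option 1
train <- h2o.cbind(doc_vecs, as.h2o(dataframe["output"]))
model <- h2o.gbm(y = "output", training_frame = train, seed = 1)
```

The main practical difference from option 1 is that the word2vec features are dense and fixed-length, so no vocabulary/vectorizer object needs to be carried over to score new text; only the fitted `w2v` model does.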