Model train system: AWS Ubuntu p2.xlarge, R 3.4.0, mxnet_1.0.1. Saved via:
mx.model.save(A3.MXmodel, "Action/A3.MXmodel", iteration = 3000)
Loading on same system works fine via:
A3.MXmodel <- mx.model.load("A3.MXmodel", iteration=3000)
A3.pred <- predict(A3.MXmodel, as.matrix(nNewVector))
A3.pred.label = max.col(t(A3.pred))-1
Moving the model files to a new system (an AMI clone of the first instance, but on g2.xlarge) and attempting to predict:
A3.pred <- predict(A3.MXmodel, as.matrix(nNewVector))
Leads to an immediate crash of RStudio, with no data saved and no error messages. I can confirm mxnet is working on the new instance via the installation check:
library(mxnet)
a <- mx.nd.ones(c(2,3), ctx = mx.gpu())
b <- a * 2 + 1
b
Do I have to specify somewhere on the new instance that the models are based on GPU devices? Can a model trained on a GPU instance be run on a CPU instance with a CPU mxnet build?
To answer the specific questions:
Do I have to specify somewhere on the new instance that the models are based on GPU devices?
No. The model's structure and parameters are stored, but nothing in the saved files encodes the hardware it was trained on.
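To see why, it helps to look at what mx.model.save actually writes. A sketch (untested, based on the mxnet R API's checkpoint convention): each save produces a JSON file describing the network graph and a binary file holding the parameter arrays, neither of which mentions a device.

```r
library(mxnet)

# mx.model.save writes two plain files per checkpoint: a "-symbol.json"
# file with the network structure, and a ".params" file (named with the
# iteration number) holding the learned weights. No CPU/GPU device
# information is recorded in either file.
mx.model.save(A3.MXmodel, "Action/A3.MXmodel", iteration = 3000)

# Inspect what was written; expect a "A3.MXmodel-symbol.json" and a
# params file whose name includes the iteration number.
list.files("Action", pattern = "A3.MXmodel")
```

Because the checkpoint is just graph + weights, the device context is chosen fresh each time the model is loaded and used.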
Can a model trained on a GPU instance be run on a CPU instance with a CPU mxnet build?
Yes, and that is often desirable: train on a GPU for speed, then run inference on a CPU, which is cheaper both computationally and in instance cost.
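A minimal sketch of that workflow, assuming the same files and nNewVector matrix from the question: load the checkpoint on the CPU-only instance and pass an explicit ctx to predict so inference runs on the CPU regardless of where training happened.

```r
library(mxnet)

# Load the checkpoint saved on the GPU instance; loading itself is
# device-agnostic.
A3.MXmodel <- mx.model.load("A3.MXmodel", iteration = 3000)

# predict() accepts a ctx argument; mx.cpu() forces inference onto the
# CPU, which works even for a model originally trained with mx.gpu().
A3.pred <- predict(A3.MXmodel, as.matrix(nNewVector), ctx = mx.cpu())
A3.pred.label <- max.col(t(A3.pred)) - 1
```

Passing ctx explicitly is also a useful diagnostic here: if prediction succeeds with mx.cpu() but crashes with mx.gpu(), the problem is in the GPU setup on the new instance rather than in the saved model.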