Is this expected behaviour?
library(mxnet)
hidden_u_1 <- 100
activ_hidden_1 <- 'tanh'
hidden_u_2 <- 1
learn_rate <- 0.001
initializer <- mx.init.uniform(1)
optimizer <- 'rmsprop' # alternative tried: 'sgd'
loss <- mx.metric.mse
device.cpu <- mx.cpu()
mini_batch <- 64 # alternative tried: 8
rounds <- 1 # alternative tried: 2
## data symbols
nn_data <- mx.symbol.Variable('data')
nn_label <- mx.symbol.Variable('label')
## first fully connected layer
flatten <- mx.symbol.Flatten(data = nn_data)
fc1 <- mx.symbol.FullyConnected(data = flatten, num_hidden = hidden_u_1)
activ1 <- mx.symbol.Activation(data = fc1, act_type = activ_hidden_1)
## second fully connected layer
fc2 <- mx.symbol.FullyConnected(data = activ1, num_hidden = hidden_u_2)
q_func <- mx.symbol.LinearRegressionOutput(data = fc2, label = nn_label, name = 'regr')
## toy training data: 64 samples x 10 features
train.x <- matrix(rnorm(640, 0, 1), ncol = 10)
train.x <- t(train.x) # 10 x 64: features in rows, samples in columns
dim(train.x) <- c(nrow(train.x), 1, 1, ncol(train.x)) # 10 x 1 x 1 x 64, batch dimension last
train.y <- rnorm(64, 0, 1)
## initialize and train the NN
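## sanity check (optional sketch): infer shapes through the symbol;
## mx.symbol.infer.shape is the mxnet R shape-inference helper
shapes <- mx.symbol.infer.shape(q_func, data = c(10, 1, 1, 64))
shapes$out.shapes # output of 'regr' is 1 x 64: one prediction per sample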
nn_model <- mx.model.FeedForward.create(
symbol = q_func,
X = train.x,
y = train.y,
ctx = device.cpu,
num.round = rounds,
array.batch.size = mini_batch,
optimizer = optimizer,
eval.metric = loss,
learning.rate = learn_rate,
initializer = initializer
)
One round (pass) of computing the loss on a minibatch with more than one sample always returns NaN, while two or more passes give finite values from the second pass onward. If the number of samples is n times the minibatch size (e.g., 64 samples with a batch size of 8), the loss is finite even after a single round.
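To see where the NaN comes from, the per-batch training metric can be logged with the stock callback (a minimal sketch reusing the symbol and data above; run_once is a hypothetical helper, and mx.callback.log.train.metric(1) is the standard mxnet R callback that prints the training metric after every batch):

run_once <- function(batch_size, n_rounds) {
  mx.model.FeedForward.create(
    symbol = q_func, X = train.x, y = train.y,
    ctx = device.cpu,
    num.round = n_rounds,
    array.batch.size = batch_size,
    optimizer = optimizer,
    eval.metric = loss,
    learning.rate = learn_rate,
    initializer = initializer,
    batch.end.callback = mx.callback.log.train.metric(1)
  )
}
m64_1 <- run_once(64, 1) # one batch per round, 1 round: loss reported as NaN
m64_2 <- run_once(64, 2) # one batch per round, 2 rounds: finite from round 2
m8_1 <- run_once(8, 1) # eight batches per round: finite already in round 1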