I am using a simple Keras model for series prediction.
I am feeding it input normalized across the entire series.
The model's accuracy looks fine during training. However, when I plot the output of model.predict(),
I can see that the values have been scaled somehow. It looks like some kind of normalization/standardization scaling.
Changing the training batch size affects the result. I tried setting the batch size to the size of the input set, so that the entire series is trained in a single batch; this improves the result, but the output is still scaled.
My assumption is that this has something to do with either per-batch input normalization or output normalization. I do not have any BatchNormalization
layers in my model.
Does Keras apply any default normalization/standardization to inputs or outputs, and if so, is there a way to disable it?
I am using Keras 2 with Tensorflow backend and Tensorflow 1.1.
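For reference, here is a minimal sketch of the setup (the series, window size, and layer sizes here are made up for illustration; the real model is larger):

```python
import numpy as np
from tensorflow import keras

# Hypothetical stand-in for the real data: a sine-wave series.
series = np.sin(np.linspace(0.0, 20.0, 200)).astype("float32")

# Normalization across the entire series, as described above.
mean, std = series.mean(), series.std()
normed = (series - mean) / std

# Sliding windows of 10 steps predicting the next step.
window = 10
X = np.stack([normed[i:i + window] for i in range(len(normed) - window)])
y = normed[window:]

model = keras.Sequential([
    keras.layers.Input(shape=(window,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# These are the values I then plot.
pred = model.predict(X, verbose=0).ravel()
```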
Keras does not insert BatchNormalization or any other normalization implicitly; inputs and outputs are never rescaled unless your own code does it.
You must be observing something else. The most likely explanation is that, since the model was trained on the normalized series, model.predict() returns values in that normalized space, and they need to be mapped back before being compared with the raw series.
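To make the inverse mapping concrete, here is a minimal sketch assuming z-score normalization computed over the whole series (the numbers are made up):

```python
import numpy as np

# Hypothetical raw series, normalized over its full length.
series = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 13.0])
mean, std = series.mean(), series.std()
normed = (series - mean) / std

# A model trained on `normed` predicts in normalized units, so a
# prediction p must be mapped back before comparing with `series`:
p = normed[3]              # pretend the model predicted this value exactly
restored = p * std + mean  # inverse of the z-score normalization
# restored == series[3] == 15.0
```

If you plot `pred * std + mean` instead of `pred`, the apparent scaling should disappear. The batch-size effect you saw is unrelated: it changes the gradient updates during training, not any normalization.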