Tags: python, machine-learning, neural-network, keras, normalization

Normalize the Validation Set for a Neural Network in Keras


So, I understand that normalization is important for training a neural network.

I also understand that the validation and test sets have to be normalized with the parameters (mean and standard deviation) from the training set (see e.g. this discussion: https://stats.stackexchange.com/questions/77350/perform-feature-normalization-before-or-within-model-validation).

My question is: How do I do this in Keras?

What I'm currently doing is:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

def Normalize(data):
    # z-score the data using its own mean and standard deviation
    mean_data = np.mean(data)
    std_data = np.std(data)
    norm_data = (data - mean_data) / std_data
    return norm_data

input_data, targets = np.loadtxt(fname='data', delimiter=';')
norm_input = Normalize(input_data)

model = Sequential()
model.add(Dense(25, input_dim=20, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

early_stopping = EarlyStopping(monitor='val_acc', patience=50) 
model.fit(norm_input, targets, validation_split=0.2, batch_size=15, callbacks=[early_stopping], verbose=1)

But here, I first normalize the data with respect to the whole data set and only then split off the validation set, which is wrong according to the discussion linked above.

It wouldn't be a big deal to save the mean and standard deviation from the training set (training_mean and training_std), but how can I apply the normalization with training_mean and training_std to the validation set separately?


Solution

  • You can split your data into a training and a testing set manually with sklearn.model_selection.train_test_split before fitting the model. Afterwards, normalize the training and testing data based on the mean and standard deviation of the training data. Finally, call model.fit with the validation_data argument.

    Code example

    import numpy as np
    from sklearn.model_selection import train_test_split
    
    data = np.random.randint(0, 100, 200).reshape(20, 10)  # 20 samples, 10 features
    target = np.random.randint(0, 2, 20)  # binary labels; randint's upper bound is exclusive, so use 2, not 1
    
    X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)
    
    def Normalize(data, mean_data=None, std_data=None):
        # Use the supplied statistics if given, otherwise compute them from `data`.
        # Compare against None explicitly: a mean of 0 is falsy, and a NumPy
        # array has no unambiguous truth value, so `if not mean_data` misfires.
        if mean_data is None:
            mean_data = np.mean(data)
        if std_data is None:
            std_data = np.std(data)
        norm_data = (data - mean_data) / std_data
        return norm_data, mean_data, std_data
    
    X_train, mean_data, std_data = Normalize(X_train)
    X_test, _, _ = Normalize(X_test, mean_data, std_data)
    
    # model and early_stopping are defined as in the question
    model.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=15, callbacks=[early_stopping], verbose=1)
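
  • Alternatively, sklearn.preprocessing.StandardScaler implements the same fit-on-train / transform-on-validation pattern and normalizes each feature column separately (note that the Normalize function above computes a single scalar mean and standard deviation over the whole array). A minimal sketch, assuming model and early_stopping are defined as in the question:

    Code example

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    data = np.random.randint(0, 100, 200).reshape(20, 10)
    target = np.random.randint(0, 2, 20)

    X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)

    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)  # fit() learns per-column mean/std from the training data only
    X_test = scaler.transform(X_test)        # transform() reuses the training statistics

    model.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=15, callbacks=[early_stopping], verbose=1)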