I am sure it's a simple problem. I have been trying to feed batches of shape (Batch_Size, 28, 28, 1) into an autoencoder. I want to introduce a Lambda layer before the Conv2D layer that picks each sample from the current batch and rolls it horizontally.
encoder_input = Input(shape=self.input_dim, name='encoder_input')
x = encoder_input
x = Conv2D(filters=32, kernel_size=3, strides=1,
           name='encoder_conv' + str(lyr), padding='same')(x)
I have done something along these lines using NumPy:
import numpy as np

def rotate(my_array):
    # pick a random horizontal shift in [-6, 6)
    shift = np.random.randint(-6, 6, size=1)
    if shift != 0:
        # roll the array along its width axis by that shift
        new_array = np.roll(my_array, shift[0], axis=1)
        return new_array
    else:
        return my_array

arr = np.random.randint(10, size=(10, 15, 1))
new_array = rotate(arr)
This makes new_array a horizontally rolled version of arr. I want to use tf.roll in the same way on my input batch.
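For reference, here is one way a Lambda layer with tf.roll could be wired in before the Conv2D layer. This is a sketch, not the author's exact model: the input shape (28, 28, 1), the layer names, and the fact that one random shift is applied to the whole batch (rather than per sample) are my assumptions.

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda, Conv2D
from tensorflow.keras.models import Model

def random_roll(x):
    # draw one random horizontal shift in [-6, 6) for the whole batch
    shift = tf.random.uniform([], minval=-6, maxval=6, dtype=tf.int32)
    # axis 2 is the width axis for a (Batch, Height, Width, Channels) tensor
    return tf.roll(x, shift=shift, axis=2)

encoder_input = Input(shape=(28, 28, 1), name='encoder_input')
x = Lambda(random_roll, name='random_roll')(encoder_input)
x = Conv2D(filters=32, kernel_size=3, strides=1,
           padding='same', name='encoder_conv_0')(x)
encoder = Model(encoder_input, x)
```

Note this shifts every image in the batch by the same amount; rolling each sample independently would need something like tf.map_fn over the batch dimension.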
Thanks David. Yes, indeed I found a better way to do it using ImageDataGenerator. What I realized is that when passing a NumPy array, it needs a third dimension indicating the number of channels; this makes ImageDataGenerator treat the array as an image.
def my_func(img):
    rotate = np.random.randint(-6, 6, size=1)
    new_img = np.roll(img, rotate[0], axis=1)
    return new_img
BATCH_SIZE = 32

gen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=my_func)
train_gen = gen.flow(x_train, y_train, batch_size=BATCH_SIZE)
test_gen = gen.flow(x_test, y_test, batch_size=BATCH_SIZE)

my_model.model.fit(train_gen,
                   validation_data=test_gen,
                   epochs=300,
                   callbacks=callbacks_list)
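To sanity-check that the generator really applies the roll, one can run a tiny dummy dataset through it and inspect a batch. The 2-image, 4x4 grayscale array below is a made-up example, not the author's data; since np.roll only shifts columns, the batch should keep its shape and its set of pixel values.

```python
import numpy as np
import tensorflow as tf

def my_func(img):
    # same preprocessing function as above: random horizontal roll
    shift = np.random.randint(-6, 6, size=1)
    return np.roll(img, shift[0], axis=1)

# tiny dummy grayscale "dataset" with the required channel dimension
x_small = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4, 1)
y_small = np.zeros(2)

gen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=my_func)
flow = gen.flow(x_small, y_small, batch_size=2, shuffle=False)
batch_x, batch_y = next(flow)
# batch_x has the same shape and the same pixel values as x_small,
# just with columns shifted within each image
```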