I'm loading the MNIST dataset as follows:
(X_train, y_train), (X_test, y_test) = mnist.load_data()
However, since I need to load and train on my own dataset, I wrote the small script below, which gives me the corresponding train values:
from glob import glob
import numpy as np

def load_train(path):
    X_train = []
    y_train = []
    print('Read train images')
    for j in range(10):
        files = glob(path + "*.jpeg")
        for fl in files:
            img = get_im(fl)   # get_im is my own helper that reads a single image
            print(fl)
            X_train.append(img)
            y_train.append(j)
    return np.asarray(X_train), np.asarray(y_train)
The pretrained model generates a numpy array of shape (64, 28, 28, 1) while training. I'm concatenating image_batch with the generated images as follows:
X = np.concatenate((image_batch, generated_images))
However, I'm getting the following error:
ValueError: all the input arrays must have same number of dimensions
image_batch has shape (64, 28, 28), while generated_images has shape (64, 28, 28, 1).
How do I expand the dimensions of image_batch (which comes from X_train) so that I can concatenate it with generated_images? Or is there another way to load custom images in place of mnist.load_data()?
NumPy provides a function, np.expand_dims(), which inserts a new axis into an array at the position given by the axis argument. In your case, use:

image_batch = np.expand_dims(image_batch, axis=3)
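A minimal sketch of how this fixes the concatenation, using random arrays as stand-ins for a batch sliced from your X_train and for the generator output (the shapes are the ones from your error):

import numpy as np

# Stand-ins for the real data: a batch from X_train and the generator output
image_batch = np.random.rand(64, 28, 28)
generated_images = np.random.rand(64, 28, 28, 1)

# Add a trailing channel axis so both arrays are 4-D
image_batch = np.expand_dims(image_batch, axis=3)
print(image_batch.shape)   # (64, 28, 28, 1)

# Concatenation along the default axis 0 now works
X = np.concatenate((image_batch, generated_images))
print(X.shape)             # (128, 28, 28, 1)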
Another approach is to use reshape, as suggested by @Ioannis Nasios:

image_batch = image_batch.reshape(64, 28, 28, 1)
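For completeness, a small sketch of the reshape route; using -1 for the batch dimension is an optional tweak so it also works when the last batch has fewer than 64 images, and indexing with np.newaxis adds the same axis:

import numpy as np

image_batch = np.random.rand(64, 28, 28)   # stand-in batch

a = image_batch.reshape(-1, 28, 28, 1)     # reshape; -1 infers the batch size
b = image_batch[..., np.newaxis]           # np.newaxis adds the same trailing axis
print(a.shape, b.shape)                    # (64, 28, 28, 1) (64, 28, 28, 1)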