# convolutional block 1
model.add(Convolution2D(64, 3, 3, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# convolutional block 2
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# fully connected classifier
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
This is the error I'm getting:
ValueError Traceback (most recent call last)
<ipython-input-21-a60216c72b54> in <module>()
----> 1 model.add(Convolution2D(64, 3, 3, border_mode='same', input_shape=(3, 32, 32)))
2 model.add(Activation('relu'))
3 model.add(Convolution2D(32, 3, 3))
4 model.add(Activation('relu'))
5 model.add(MaxPooling2D(pool_size=(2, 2)))
/home/pranshu_44/anaconda3/lib/python3.5/site-packages/keras/models.py in add(self, layer)
330 output_shapes=[self.outputs[0]._keras_shape])
331 else:
--> 332 output_tensor = layer(self.outputs[0])
333 if isinstance(output_tensor, list):
334 raise TypeError('All layers in a Sequential model '
/home/pranshu_44/anaconda3/lib/python3.5/site-packages/keras/engine/topology.py in __call__(self, x, mask)
527 # Raise exceptions in case the input is not compatible
528 # with the input_spec specified in the layer constructor.
--> 529 self.assert_input_compatibility(x)
530
531 # Collect input shapes to build layer.
/home/pranshu_44/anaconda3/lib/python3.5/site-packages/keras/engine/topology.py in assert_input_compatibility(self, input)
467 self.name + ': expected ndim=' +
468 str(spec.ndim) + ', found ndim=' +
--> 469 str(K.ndim(x)))
470 if spec.dtype is not None:
471 if K.dtype(x) != spec.dtype:
ValueError: Input 0 is incompatible with layer convolution2d_11: expected ndim=4, found ndim=2
I'm trying to do image classification on CIFAR-10, but I'm getting this error. According to the docs ([https://keras.io/layers/convolutional/][1]) my code looks correct, so I don't know why I'm getting this error.
The Conv2D layer requires an input with 4 dimensions, but apparently you are only giving it 2. But I'm sure you've already noticed this from the error message.
According to adventuresinmachinelearning:
The format of the data to be supplied is [i, j, k, l], where i is the number of training samples, j is the height of the image, k is the width, and l is the channel number.
I'm unfamiliar with the data you're using, but the value for l (the channel number) should be:
[For a] greyscale image, l will always be equal to 1 (if we had an RGB image, it would be equal to 3)
So basically you just have to:
import tensorflow as tf

# tf.reshape returns a new tensor; -1 lets TensorFlow infer the number of samples
reshaped = tf.reshape(your_image_tensor, [-1, 28, 28, 1])  # for a grayscale image
reshaped = tf.reshape(your_image_tensor, [-1, 28, 28, 3])  # for an RGB image
Make the appropriate changes for your own code; for CIFAR-10 that would look something like the sketch below. If you don't want to use tensorflow, I recommend you read this
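For CIFAR-10 specifically, the images are 32x32 RGB, so a minimal sketch (assuming your data arrives as flat vectors of length 32 * 32 * 3 = 3072, with your_image_tensor as a stand-in for your actual batch) would be:

import tensorflow as tf

# stand-in for your flattened CIFAR-10 batch: 10 samples of 3072 values each
your_image_tensor = tf.zeros([10, 32 * 32 * 3])

# reshape to (samples, height, width, channels); -1 lets TensorFlow infer the sample count
reshaped_images = tf.reshape(your_image_tensor, [-1, 32, 32, 3])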
Update: You can also reshape arrays with numpy.reshape(). For more, see: https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html
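For example, a minimal NumPy sketch (x_train here is a dummy array standing in for your flattened CIFAR-10 training data):

import numpy as np

# dummy stand-in for flattened CIFAR-10 data: 5 samples, each 32 * 32 * 3 = 3072 values
x_train = np.zeros((5, 3072))

# reshape to the 4-dimensional format Conv2D expects: (samples, height, width, channels)
x_train = x_train.reshape(-1, 32, 32, 3)
print(x_train.shape)  # (5, 32, 32, 3)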