Tags: keras, deep-learning, semantic-segmentation

How to fix an input shape error in a Keras model


I tried to perform semantic segmentation following this tutorial: https://github.com/nikhilroxtomar/UNet-Segmentation-in-Keras-TensorFlow/blob/master/unet-segmentation.ipynb I modified the notebook a bit and successfully trained the model, reaching about 50% accuracy.

The error appears when I try to predict on a single image. I already tried to reshape the input array, but it did not work. Here is the code:

test = X_train[0]
test.shape
>>> (480, 640, 4)
parallel_model.predict(test)
>>> ValueError: Error when checking input: expected input_3 to have 4 dimensions, but got array with shape (480, 640, 4)

Here is the model:

def UNet():
    f = [16, 32, 64, 128, 256]
    inputs = keras.layers.Input((480, 640, 4))

    p0 = inputs
    c1, p1 = down_block(p0, f[0]) #480x640 -> 240x320
    c2, p2 = down_block(p1, f[1]) #240x320 -> 120x160
    c3, p3 = down_block(p2, f[2]) #120x160 -> 60x80
    c4, p4 = down_block(p3, f[3]) #60x80 -> 30x40

    bn = bottleneck(p4, f[4])

    u1 = up_block(bn, c4, f[3]) #30x40 -> 60x80
    u2 = up_block(u1, c3, f[2]) #60x80 -> 120x160
    u3 = up_block(u2, c2, f[1]) #120x160 -> 240x320
    u4 = up_block(u3, c1, f[0]) #240x320 -> 480x640

    outputs = keras.layers.Conv2D(4, (1, 1), padding="same", activation="sigmoid")(u4)
    model = keras.models.Model(inputs, outputs)
    return model
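
The down_block, bottleneck and up_block helpers come from the linked notebook; roughly, they are the standard U-Net building blocks and look like this (simplified, not copied verbatim, assuming from tensorflow import keras):

def down_block(x, filters, kernel_size=(3, 3)):
    # two convolutions, then halve the spatial resolution
    c = keras.layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
    c = keras.layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(c)
    p = keras.layers.MaxPool2D((2, 2))(c)
    return c, p

def bottleneck(x, filters, kernel_size=(3, 3)):
    c = keras.layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
    c = keras.layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(c)
    return c

def up_block(x, skip, filters, kernel_size=(3, 3)):
    # upsample, concatenate with the skip connection, then convolve
    us = keras.layers.UpSampling2D((2, 2))(x)
    concat = keras.layers.Concatenate()([us, skip])
    c = keras.layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(concat)
    c = keras.layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(c)
    return c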

I know this is a noob error, but I really want to solve it!


Solution

  • Keras works with "batches", never with single images.

    That means it expects 4 dimensions: (batch_size, 480, 640, 4).

    If you want to predict on a single image, your input array needs the shape (1, 480, 640, 4).

    test = X_train[0]
    test.shape
    >>> (480, 640, 4)
    test = test.reshape((-1, 480, 640, 4))
    test.shape
    >>> (1, 480, 640, 4)
    

    Now you can call parallel_model.predict(test).
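
    Alternatively, np.expand_dims adds the leading batch axis explicitly. A minimal sketch, assuming X_train and the trained parallel_model from above:

    import numpy as np

    # Add a batch axis of size 1: (480, 640, 4) -> (1, 480, 640, 4)
    test = np.expand_dims(X_train[0], axis=0)
    pred = parallel_model.predict(test)
    print(pred.shape)  # (1, 480, 640, 4): one 4-channel mask per image in the batch

    # To predict several images at once, pass them as a single batch
    batch_pred = parallel_model.predict(X_train[:8])

    Both approaches do the same thing: they make the first dimension the batch dimension, which is what the model's input layer expects.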