I am using U-Net for image segmentation, based on the code outlined here.
My input images are 256x256x3, while the corresponding segmentation masks are 256x256.
I have changed the input size for the U-Net:
def unet(pretrained_weights = None,input_size = (256,256,3)):
and get a network whose output layer is 256x256x1:
conv2d_144 (Conv2D) (None, 256, 256, 1) 2 conv2d_143[0][0]
See the full architecture here.
When I try to run it with .fit_generator, I get the following error:
ValueError: Error when checking target: expected conv2d_144 to have shape (256, 256, 1) but got array with shape (256, 256, 3)
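A quick way to see where the mismatch comes from (a minimal sketch; trainGen is just a placeholder name for whatever generator is passed to .fit_generator) is to pull one batch and print the shapes Keras actually receives:

    # Grab one (image, mask) batch from the generator and inspect its shapes.
    # `trainGen` is a placeholder for the actual training generator.
    imgs, masks = next(trainGen)
    print(imgs.shape)   # e.g. (batch, 256, 256, 3) -- matches the 256x256x3 input layer
    print(masks.shape)  # e.g. (batch, 256, 256, 3) -- but conv2d_144 expects (batch, 256, 256, 1)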
What can I do to fix this? Please let me know what extra information I can give!
Thank you!
PS: I have three classes in the output masks; could that be the reason?
I've actually fixed it by one-hot encoding my segmentation masks, changing the activation function of the last layer to softmax, and setting the number of filters in that layer to match the number of classes!
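For reference, here is a minimal, self-contained sketch of that fix (using tensorflow.keras; the 16-filter layer is only a stand-in for the end of the U-Net decoder, and the random data is just there to make the example runnable):

    import numpy as np
    from tensorflow.keras import layers, models
    from tensorflow.keras.utils import to_categorical

    num_classes = 3  # three classes in the segmentation masks

    # One-hot encode the masks: (batch, 256, 256) integer labels 0..2
    # become (batch, 256, 256, 3), matching the new network output.
    masks = np.random.randint(0, num_classes, size=(4, 256, 256))  # dummy label maps
    targets = to_categorical(masks, num_classes=num_classes)       # shape (4, 256, 256, 3)

    # Stand-in for the end of the decoder: the change that matters is the final
    # Conv2D, which now has num_classes filters with softmax instead of a single
    # filter with sigmoid.
    inputs = layers.Input(shape=(256, 256, 3))
    x = layers.Conv2D(16, 3, activation='relu', padding='same')(inputs)
    outputs = layers.Conv2D(num_classes, 1, activation='softmax')(x)
    model = models.Model(inputs, outputs)

    # categorical_crossentropy matches the one-hot, per-pixel targets.
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    model.fit(np.random.rand(4, 256, 256, 3), targets, epochs=1)

With this setup the softmax over the channel axis gives a per-pixel probability for each of the three classes, and the targets have the same (256, 256, 3) shape, so the original shape error goes away.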