
Why do Keras.InceptionV3.preprocess_input and plt.imshow(img) make pictures so dark?


  1. Why does Keras and/or matplotlib make the pictures so dark?

  2. Will the preprocessed, darker images reduce my AI model's prediction accuracy?

This is the original image:

[original image]

This is the processed image:

[processed image]

Here is the code:

import matplotlib.pyplot as plt
import numpy as np
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input

def load_image(img_path, show=False):
    img = image.load_img(img_path, target_size=(299, 299))
    img_tensor = image.img_to_array(img)             # (height, width, channels)
    img_tensor = np.expand_dims(img_tensor, axis=0)  # (1, height, width, channels): the model expects a batch dimension
    img_tensor = preprocess_input(img_tensor)

    if show:
        plt.imshow(img_tensor[0])
        plt.axis('off')
        plt.show()

    return img_tensor
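If you want the plot to stay bright, one option is to display the image before calling preprocess_input; another is to undo the scaling just for display. A minimal sketch of the latter, assuming InceptionV3's preprocess_input follows the "tf" convention (x / 127.5 - 1); undo_inception_preprocess is a hypothetical helper name, not a Keras function:

```python
import numpy as np

def undo_inception_preprocess(x):
    # Invert x / 127.5 - 1 back to 0..1 floats, the range plt.imshow
    # expects for float input. Assumes the "tf" preprocessing convention.
    return (np.asarray(x, dtype=np.float32) + 1.0) / 2.0

pre = np.array([-1.0, -0.5, 0.0, 1.0], dtype=np.float32)
restored = undo_inception_preprocess(pre)
print(restored)  # values: 0.0, 0.25, 0.5, 1.0
```

With this, plt.imshow(undo_inception_preprocess(img_tensor[0])) would show the photo at normal brightness while the model still receives the preprocessed tensor.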

Solution

  • The preprocessing is (supposed to be) exactly the one used to train the Inception model. So, if you are going to use a pretrained Inception, this preprocessing is essential; without it, the Inception model will perform terribly.

    A lot of Keras models were trained with the "caffe" preprocessing, which centers the image data by subtracting the mean value of each channel.

    So, if an original image has channels ranging from 0 to 255, an image preprocessed this way will have values roughly from -127 to 128 (the exact bounds differ slightly per channel). (InceptionV3's own preprocess_input actually uses the "tf" convention, scaling pixels to the range -1 to 1, but the effect on the plot is the same.) This is why your images look "darker": the plotter expects values from 0 to 255 for integers, or 0 to 1 for floats, so the negative values are blacked out entirely and the intensity of the visible pixels is reduced.
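    The clipping effect can be reproduced in plain NumPy. This is a hedged sketch of "caffe"-style preprocessing (flip RGB to BGR, then subtract per-channel ImageNet means); the mean values below are the commonly cited ones and are an assumption here:

```python
import numpy as np

# Sketch of "caffe"-mode preprocessing (assumed means, BGR channel order).
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

# One black pixel and one white pixel, RGB, as uint8-style floats.
rgb = np.array([[[0, 0, 0], [255, 255, 255]]], dtype=np.float32)
centered = rgb[..., ::-1] - IMAGENET_MEANS_BGR  # flip RGB -> BGR, subtract means

# Channel values now span roughly -124 .. 151. A plotter expecting 0..255
# (or 0..1 for floats) clips the negatives to black, darkening the image.
print(centered.min(), centered.max())
```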

    But for your own model, or an untrained Inception, it won't make a huge difference.

    It might change the training speed, gradients, etc., but in general it will not be a big issue. The best range is something to experiment with; it depends a lot on the initialization of your weights, your optimizer, and so on.

    I usually go with images from 0 to 1.
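    As a sketch, that 0-to-1 preprocessing is a single division, and it has the convenient side effect that plt.imshow displays such floats correctly (the function name here is illustrative):

```python
import numpy as np

def to_unit_range(img):
    # Rescale uint8 pixels (0..255) to floats in 0..1 -- a preprocessing
    # choice that plt.imshow also happens to display correctly.
    return np.asarray(img, dtype=np.float32) / 255.0

unit = to_unit_range(np.array([[0, 127, 255]], dtype=np.uint8))
print(unit)  # values: 0.0, ~0.498, 1.0
```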

    The most important points are:

    • If you are using any pretrained model, you need to use the same preprocessing that was used for training that model
    • This includes your own model. If you trained your model with a certain preprocessing (normalizing to the 0-1 range, for instance), your model will only work correctly on images that follow the same preprocessing.
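The points above can be sketched as a pattern: pin the training-time preprocessing to one function and route every prediction through it, so training and inference can never drift apart (model_fn and predict here are illustrative stand-ins, not Keras API):

```python
import numpy as np

def preprocess(img):
    # The one preprocessing chosen at training time (0..1 normalization here).
    return np.asarray(img, dtype=np.float32) / 255.0

def predict(model_fn, raw_img):
    # Always feed raw images through the SAME preprocess used for training.
    return model_fn(preprocess(raw_img))

# Toy "model" that returns the mean pixel value of its input.
result = predict(lambda x: float(x.mean()), np.full((2, 2, 3), 255, dtype=np.uint8))
print(result)  # 1.0
```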