Tags: python, react-native, rgb, tensorflow.js, grayscale

Resizing an image and converting it to grayscale


I'm using my trained model in React Native; its input shape is (48, 48, 1). The inputs are RGB images, so I convert the image into a tensor3d, convert that to grayscale, and resize it. But after resizing, the model always returns the same prediction values. I can't figure out where my code is wrong, and the model's accuracy is otherwise fine.

const imageDataArrayBuffer = await response.arrayBuffer();
const imageData = new Uint8Array(imageDataArrayBuffer);
let imageTensor = decodeJpeg(imageData).resizeBilinear([48, 48]);

// NOTE: this line replaces the decoded image with an all-ones tensor
// (likely leftover test code), so the model always sees the same input.
imageTensor = tf.ones([48, 48, 3]);

// Standard luminance weights for RGB-to-grayscale conversion.
const rgb_weights = [0.2989, 0.5870, 0.1140];
imageTensor = tf.mul(imageTensor, rgb_weights);
imageTensor = tf.sum(imageTensor, -1);
imageTensor = tf.expandDims(imageTensor, -1);
imageTensor = imageTensor.resizeBilinear([48, 48]).reshape([-1, 48, 48, 1]);

let result = await model.predict(imageTensor).data();
alert("Result " +result);

Solution

  • One thing I noticed is that you're ending up with a float32 tensor: multiplying your int32 image data by the float rgb_weights produces a float32 result. If you add toInt() to the tf.mul result, the tensor keeps its int32 values.

    See this example I made, where your code correctly produces a grayscale image: LINK TO CODE

    If you remove the toInt() on line 8 of my example code, the image is no longer in the correct range.

    This also raises the question: what format does your model expect imageTensor to be in? Do you need to normalize the tensor back to values between 0 and 1 so your model's predictions are accurate? Make sure you're respecting the types for each tensor AND for the model.

    Before: RGB image input

    After: Grayscale 48x48