python, numpy, pytorch

Evaluating a PyTorch pretrained model using a single image from a dataset


Could someone help me with this problem? I am trying to evaluate a single image with a pretrained ML model and I receive the error stated at the bottom of this post.

As I understand it, a PyTorch model wants the data in the following format: batch, channel, height, width. I modified the tensor to be in this shape, but I still get the error.

Can someone explain to me why this error occurs?

I am very new to coding and ML, so I am sorry if this question is not very specific.

from monai.transforms import AddChannel
from skimage.io import imread
import numpy as np
import cv2
import torch
from torch.utils.data import DataLoader
from torchvision import models


# load the first training image from its file path
img_array = imread(train_imageinfo_list[0][0])

# resize to the 224 x 224 input size expected by VGG16
resized_img = cv2.resize(img_array, (224, 224))
img_tensor = torch.from_numpy(resized_img)

# add a channel dimension, then a batch dimension
channel_adder = AddChannel()
channel_image = channel_adder(img_tensor)
batch_image = channel_adder(channel_image)
img_tensor = batch_image

model = models.vgg16()
model.eval()
model(img_tensor)
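
Printing the tensor shape right before calling the model shows the same shape that appears in the error message:

print(img_tensor.shape)  # torch.Size([1, 1, 224, 224])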

ERROR: RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 1, 224, 224] to have 3 channels, but got 1 channels instead


Solution

  • Your model expects a 3-channel input; that is why you are getting the error. A naive and straightforward approach is to convert your grayscale image to RGB by repeating the channel dimension three times:

    >>> x = img_tensor.repeat(1,3,1,1) # assuming img_tensor shaped BCHW
    >>> y = model(x)
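
For completeness, here is a minimal end-to-end sketch of that fix, assuming the image on disk is single-channel grayscale; the file name image.png is just a placeholder, and the weights argument is the newer torchvision API (older releases use pretrained=True instead):

    import torch
    import cv2
    from torchvision import models

    # hypothetical grayscale image path; replace with your own file
    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)   # (H, W) uint8 array
    img = cv2.resize(img, (224, 224))

    x = torch.from_numpy(img).float() / 255.0             # scale pixel values to [0, 1]
    x = x.unsqueeze(0).unsqueeze(0)                        # (1, 1, 224, 224): add batch and channel dims
    x = x.repeat(1, 3, 1, 1)                               # (1, 3, 224, 224): repeat grayscale into 3 channels
    # note: ImageNet-pretrained models usually also expect normalization with the ImageNet mean/std

    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
    model.eval()                                           # evaluation mode (disables dropout)

    with torch.no_grad():                                  # no gradients needed for inference
        logits = model(x)                                  # (1, 1000) ImageNet class scores
    print(logits.argmax(dim=1))                            # predicted class index

An alternative to repeat is expand, which broadcasts the channel dimension as a view without copying memory.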