I'm attempting to resize an image captured with pyscreenshot.grab() to 28x28 pixels:
import pyscreenshot
from skimage.transform import resize

def captureAndSubsample():
    userImage = pyscreenshot.grab(bbox=(785, 335, 1125, 675))
    userImageResized = resize(userImage, (28, 28))
This code raises:

  File "C:\Users\brad\Desktop\Development\Code\main.py", line 178, in captureAndSubsample
    userImageResized = resize(userImage, (100, 100), 3)
  File "C:\Users\brad\Desktop\Development\Code\venv\lib\site-packages\skimage\transform\_warps.py", line 144, in resize
    image, output_shape = _preprocess_resize_output_shape(image, output_shape)
  File "C:\Users\brad\Desktop\Development\Code\venv\lib\site-packages\skimage\transform\_warps.py", line 56, in _preprocess_resize_output_shape
    input_shape = image.shape
  File "C:\Users\brad\Desktop\Development\Code\venv\lib\site-packages\PIL\Image.py", line 519, in __getattr__
    raise AttributeError(name)
AttributeError: shape
I've tried changing the input image to a shape:

def captureAndSubsample():
    userImage = pyscreenshot.grab(bbox=(785, 335, 1125, 675))
    userImage = shape(userImage)
    userImageResized = resize(userImage, (100, 100)).shape(340, 340)
But this fails in the same way, since shape just returns a tuple of the dimensions. Any and all help is appreciated.
Converting userImage to a numpy array before passing it to the scikit-image function does the trick:
import pyscreenshot
import numpy as np
from skimage.transform import resize

userImage = pyscreenshot.grab(bbox=(785, 335, 1125, 675))
# grab() returns a PIL Image, which has no .shape attribute;
# np.array() converts it to an ndarray that skimage can work with.
userImage = np.array(userImage)
userImageResized = resize(userImage, (28, 28))
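As an aside, since pyscreenshot.grab() returns a PIL Image, you can also resize with PIL itself and skip the numpy round-trip until you actually need an array. A minimal self-contained sketch (Image.new stands in for the screen capture here, so it runs without a display):

```python
from PIL import Image
import numpy as np

# Stand-in for pyscreenshot.grab(): a solid 340x340 RGB image.
userImage = Image.new("RGB", (340, 340), color=(255, 0, 0))

# PIL Images expose .size (a (width, height) tuple), not .shape,
# which is exactly why skimage raised AttributeError: shape.
print(userImage.size)          # (340, 340)

# Resize directly on the PIL Image.
userImageResized = userImage.resize((28, 28), Image.LANCZOS)
print(userImageResized.size)   # (28, 28)

# Convert to a numpy array only when needed (e.g. to feed a model);
# note numpy orders the shape as (height, width, channels).
arr = np.array(userImageResized)
print(arr.shape)               # (28, 28, 3)
```

One difference worth knowing: skimage's resize returns a float array scaled to [0, 1], while the PIL route keeps uint8 pixel values, so pick whichever matches what your downstream code expects.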