I am trying to read in a large number of images (more than 180k) and resize each one to a new height and width. I use the code below and it works for a small number of images, but for a large number the kernel dies. So, is there a more efficient way to resize the images, perhaps without reading them all in first?
import glob
import cv2

folders = glob.glob(r'path\to\images\*')

imagenames_list = []
for folder in folders:
    for f in glob.glob(folder + '/*.png'):
        imagenames_list.append(f)

read_images = []
for image in imagenames_list:
    read_images.append(cv2.imread(image))

def resize_images(img, new_width, new_height):
    size = (new_width, new_height)
    resized_img = cv2.resize(img, size)
    return resized_img

resized_img = [resize_images(img, new_width=128, new_height=32) for img in read_images]
The problem is that you are loading around 180,000 decoded images into the read_images list at once. That is far more than will fit in RAM, and running out of memory is what kills the kernel.
Since you have not mentioned what you intend to do with the resized images, here are two things you could try instead:
1. Write a resize_and_save(image_path, new_size, save_path) helper that loads one image, resizes it, saves the result to disk, and releases it right away. Call it for every path in imagenames_list so that only one image is in memory at a time.
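A minimal sketch of such a helper (the function name comes from the suggestion above; the input and output directories are placeholders you would replace with your own paths):

    import glob
    import os
    import cv2

    def resize_and_save(image_path, new_size, save_path):
        """Load one image, resize it, write it out; nothing is kept in memory."""
        img = cv2.imread(image_path)
        if img is None:  # skip unreadable or corrupt files
            return
        resized = cv2.resize(img, new_size)  # new_size is (width, height)
        cv2.imwrite(os.path.join(save_path, os.path.basename(image_path)), resized)

    # one image in memory at a time instead of 180k
    for f in glob.glob(r'path\to\images\*\*.png'):
        resize_and_save(f, (128, 32), r'path\to\resized')

Note that cv2.resize takes the target size as (width, height), while the resulting array has shape (height, width, channels).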
2. If you are using this to create batches for training, load and resize only as many images as one batch needs at a time, instead of resizing the whole dataset up front.
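The batching idea could look roughly like this generator (batch_generator and batch_size are illustrative names, not from your code; it yields one resized batch at a time so memory usage stays bounded by the batch size):

    import cv2

    def batch_generator(image_paths, batch_size, new_size=(128, 32)):
        """Yield lists of resized images, loading only batch_size files at once."""
        for start in range(0, len(image_paths), batch_size):
            batch = []
            for path in image_paths[start:start + batch_size]:
                img = cv2.imread(path)
                if img is not None:  # skip unreadable files
                    batch.append(cv2.resize(img, new_size))
            yield batch

    # usage sketch:
    # for batch in batch_generator(imagenames_list, 32):
    #     train_step(batch)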