I have a few million images stored as JPEGs, and I'd like to shrink each one by re-encoding it at 80% quality. Here's the bash loop I'm currently using (I'm on macOS):
for i in *.jpg; do convert "$i" -quality 80 "${i%.jpg}.jpg"; done
The above line converts the images sequentially. Is there a way to parallelize the conversion and speed it up? I don't have to use bash; I just want the fastest way to do the conversion.
Using Python, you can do it this way:
import glob
import os
import subprocess

from tqdm.contrib.concurrent import thread_map

def reduce_file(filepath):
    # "images/photo.jpg" -> "images/photo_reduced.jpg"
    base, ext = os.path.splitext(filepath)
    output = f"{base}_reduced{ext}"
    # Pass the arguments as a list so paths containing spaces survive intact
    subprocess.run(["convert", filepath, "-quality", "80", output], check=True)

thread_map(reduce_file, glob.glob("./images/*.jpg"))
This assumes your images are in images/*.jpg. A thread pool is enough here because the actual work happens in the convert child processes, so Python's GIL isn't a bottleneck.
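With a few million files, the cost of spawning one convert process per image can itself become the bottleneck. A minimal in-process sketch, assuming Pillow is installed (pip install pillow) and the same images/*.jpg layout, that re-encodes each file in worker processes instead:

import glob
import os
from concurrent.futures import ProcessPoolExecutor

from PIL import Image

def reduce_file(filepath):
    # "images/photo.jpg" -> "images/photo_reduced.jpg"
    base, ext = os.path.splitext(filepath)
    with Image.open(filepath) as im:
        im.save(f"{base}_reduced{ext}", quality=80)

if __name__ == "__main__":
    # Decoding/encoding now happens inside Python, so use processes, not
    # threads; a large chunksize keeps inter-process overhead low when
    # dispatching millions of paths.
    with ProcessPoolExecutor() as pool:
        for _ in pool.map(reduce_file, glob.glob("./images/*.jpg"), chunksize=256):
            pass

Pillow infers the JPEG format from the output extension, and quality=80 corresponds to the -quality 80 flag used above.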