Tags: bash, multithreading, compression, xz

How much can multiprocessing speed up a compression task?


I have a large compression task: compressing 10,000 large files (> 10 GB each).

I found that xz -z and gzip are slow.

So I want to use multiprocessing to parallelize the compression.

But I have heard that I/O-heavy tasks are ultimately limited by I/O speed.

I am not very familiar with hardware details.

So, can multiprocessing speed up compression?


Solution

  • Yes, it can. Compression is usually compute-bound, not I/O-bound. You can use pigz (a parallel gzip implementation) to get close to a factor-of-n speedup on n cores; see the sketch below.
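
A minimal sketch of what this could look like on the command line, assuming pigz is installed and a reasonably recent xz (5.2 or later, which supports the -T threading flag); the file and directory names are placeholders:

    # Parallel gzip: compress one large file using all available cores
    pigz -p "$(nproc)" bigfile.dat

    # Multi-threaded xz: -T0 means "use all available cores"
    xz -z -T0 bigfile.dat

    # For many independent files, running several single-threaded
    # compressors at once also works (one xz per file, nproc at a time)
    find . -maxdepth 1 -name '*.dat' -print0 | xargs -0 -P "$(nproc)" -n 1 xz -z

With 10,000 files of more than 10 GB each, disk throughput can still become the bottleneck once enough cores are busy, so it is worth benchmarking on a small subset before committing to one approach.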