I downloaded a compressed file in gz format of 4.5 GB with uncompressed file size of 15.7 GB.
When I extracted the file on Ubuntu 14.04 (using the archive manager), I got an error about insufficient disk space.
However, when I checked, I had more than 40 GB available.
Previous attempts to extract this file had failed, apparently for lack of disk space, but I was surprised by how much extra space was needed to finish the extraction.
I expected that some extra disk space would be needed for temporary files, but this much overhead seems excessive.
Is this the expected behavior? The same thing happened on an EC2 instance running Ubuntu Server 14.04 using gzip -d file.gz.
It may be that the archive manager creates a temporary file to hold the output, copies that file to the eventual destination, and then deletes the temporary. It may also be that the temporary copy is stored on another filesystem, one with less than 15.7 GB available.
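One way to check the second possibility is to compare the free space on a typical temporary location with the free space at the extraction destination. This is a diagnostic sketch, assuming the temporary files land somewhere like /tmp; the archive manager may use a different location:

```shell
# Show the filesystem and free space for /tmp (a common temp location)
# and for the current directory (the extraction destination).
# If they are on different filesystems, the smaller one may be the
# limiting factor during extraction.
df -h /tmp .
```

If the two paths appear on different lines with different "Avail" values, the temporary copy could be hitting the smaller filesystem even though the destination has plenty of room.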
You can probably bypass the use of temporary files by using zcat, gzcat, gunzip -c, or gunzip --to-stdout and redirecting the output to a file directly.