I am setting up Docker on a Google Cloud Compute Engine machine with 1 vCPU and 3.75 GB RAM.
If I simply run docker-compose up --build
, it does work, but the build process is sequential and slow. So I am using this bash script to build the images as background jobs and skip the usual sequential process.
#!/bin/bash
command=$1
shift

jobsList=""
taskList[0]=""
i=0

# Strip the non-digit fluff around the `jobs` output to get the job ID
function getJobId(){
    echo "$*" | sed 's/^[^0-9]*//' | sed 's/[^0-9].*$//'
}

for task in "$@"
do
    echo "Command is $command $task"
    docker-compose $command $task &> "${task}.text" &
    lastJob=$(getJobId "$(jobs %%)")
    jobsList="$jobsList $lastJob"
    echo "jobsList is $jobsList"
    taskList[$i]="$command $task"
    i=$(($i + 1))
done

i=0
for job in $jobsList
do
    wait %$job
    echo "${taskList[$i]} completed with status $?"
    i=$(($i + 1))
done
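As an aside, the same bookkeeping can be done with $! (the PID of the most recent background job) instead of parsing the output of jobs %%. A minimal runnable sketch of the pattern, with sleep standing in for docker-compose build:

```shell
# Launch each task in the background and record its PID via $!.
pids=""
for t in a b; do
    sleep 0 &               # stand-in for: docker-compose build "$t"
    pids="$pids $!"
done

# Wait for each PID and report its exit status; track overall failure.
fail=0
for pid in $pids; do
    if wait "$pid"; then
        echo "job $pid exited with status 0"
    else
        echo "job $pid failed with status $?"
        fail=1
    fi
done
echo "any failures: $fail"
```

wait with a PID returns that job's exit status directly, so no job-ID parsing is needed.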
and I use it in the following manner:
availableServices=$(docker-compose config --services)
while IFS='' read -r line || [[ -n "$line" ]]
do
    services+="$line "
done <<<"$availableServices"
./runInParallel.sh build $services
I string together the available services from docker-compose.yml and pass them to my script.
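The read loop above can also be collapsed into a single tr pipeline that joins the one-per-line service names with spaces (simulated here with printf so the snippet runs without docker-compose installed):

```shell
# Stand-in for: availableServices=$(docker-compose config --services)
availableServices=$(printf 'web\napi\nworker\n')

# Join one-per-line service names into a single space-separated string.
services=$(echo "$availableServices" | tr '\n' ' ')
echo "$services"
```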
But the issue is eventually all the processes fail with the following error:
npm WARN tar ENOSPC: no space left on device, write
Unhandled rejection Error: ENOSPC: no space left on device, write
I checked inodes, and on /dev/sda1
only about 50% were used.
Here's my output for the command df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           370M  892K  369M   1% /run
/dev/sda1       9.6G  9.1G  455M  96% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/loop0       55M   55M     0 100% /snap/google-cloud-sdk/64
/dev/loop2       55M   55M     0 100% /snap/google-cloud-sdk/62
/dev/loop1       55M   55M     0 100% /snap/google-cloud-sdk/63
/dev/loop3       79M   79M     0 100% /snap/go/3095
/dev/loop5       89M   89M     0 100% /snap/core/5897
/dev/loop4       90M   90M     0 100% /snap/core/6130
/dev/loop6       90M   90M     0 100% /snap/core/6034
/dev/sda15      105M  3.6M  101M   4% /boot/efi
tmpfs           370M     0  370M   0% /run/user/1001
and here's the output for df -i:
Filesystem      Inodes  IUsed  IFree IUse% Mounted on
udev            469499    385 469114    1% /dev
tmpfs           472727    592 472135    1% /run
/dev/sda1      1290240 636907 653333   50% /
tmpfs           472727      1 472726    1% /dev/shm
tmpfs           472727      8 472719    1% /run/lock
tmpfs           472727     18 472709    1% /sys/fs/cgroup
/dev/loop0       20782  20782      0  100% /snap/google-cloud-sdk/64
/dev/loop2       20680  20680      0  100% /snap/google-cloud-sdk/62
/dev/loop1       20738  20738      0  100% /snap/google-cloud-sdk/63
/dev/loop3        9417   9417      0  100% /snap/go/3095
/dev/loop5       12808  12808      0  100% /snap/core/5897
/dev/loop4       12810  12810      0  100% /snap/core/6130
/dev/loop6       12810  12810      0  100% /snap/core/6034
/dev/sda15           0      0      0     - /boot/efi
tmpfs           472727     10 472717    1% /run/user/1001
From your df -h
output, the root filesystem (/dev/sda1) has only 455MB of free space.
Whenever you run docker build, the Docker client (CLI) sends the entire contents of the Dockerfile's directory (the build context) to the Docker daemon, which builds the image.
So, for example, if you have three services whose build contexts are 300MB each, you can build them sequentially with 455MB of free space, but to build them all at the same time you need roughly 3 × 300MB = 900MB free for the Docker daemon to cache and build the images.
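Assuming the ~300MB-per-context figure above, one way to fail fast is to check the available space on / before launching the parallel builds. The threshold here is an assumption; tune it to your actual build-context sizes:

```shell
# Assumed requirement: three ~300MB build contexts held at once.
required_kb=$((3 * 300 * 1024))

# Available kilobytes on the root filesystem (GNU coreutils df).
avail_kb=$(df --output=avail -k / | tail -1 | tr -d ' ')

if [ "$avail_kb" -lt "$required_kb" ]; then
    echo "Not enough space: need ${required_kb}K, have ${avail_kb}K" >&2
else
    echo "Enough space for parallel builds"
fi
```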