I ran into a problem in my GitLab pipeline when running a container with an Apache service: the job failed because Apache was not up yet when the pipeline moved on to the next step. I worked around it by adding a 60-second sleep to the pipeline. Is there a way to resolve this without hardcoding the sleep?
I tried to address it with a HEALTHCHECK in the Dockerfile, but that seems to be the wrong approach. How do you handle such situations?
Stage code:
test_job:
  stage: test
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  tags:
    - shell
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker run -d -p 80:80 --env APP_ENV=$APP_ENV --env APP_DEBUG=$APP_DEBUG --name $CONTAINER_NAME $CONTAINER_IMAGE_NAME
    - sleep 60
    - curl http://localhost:80
    - docker stop $CONTAINER_NAME
    - docker container rm $CONTAINER_NAME
Part of the Dockerfile where I'm trying to do what is described above:
CMD composer install && \
    chmod -R 777 /var/www/html/config/autogenerated && \
    chmod -R 777 /var/www/html/var && \
    yarn install && \
    yarn encore production && \
    apache2-foreground

HEALTHCHECK --interval=10s --timeout=10s --retries=12 \
    CMD curl -f http://localhost:80 || exit 1
Is there a way to resolve this without hardcoding the sleep?
Typically, you wait by polling until the server responds:
- while ! curl http://localhost:80; do sleep 1; done
You could write a bit of bash to add a timeout yourself, but I am lazy and just use timeout:
- timeout 60 bash -c 'while ! curl http://localhost:80; do sleep 1; done'
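If the server never comes up, timeout kills the loop after 60 seconds and exits non-zero (124 for GNU timeout), so the job fails cleanly instead of hanging.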
or, to avoid putting everything inside quotes, with an exported function:
- f() { while ! curl http://localhost:80; do sleep 1; done; }; export -f f; timeout 60 bash -c f
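Since your Dockerfile already defines a HEALTHCHECK, a variant of the same idea is to poll the health status Docker derives from it instead of curling from the runner. A minimal sketch, assuming the container is started with --name $CONTAINER_NAME as in your job and that curl is actually available inside the image so the health check can run:
- timeout 60 bash -c 'until [ "$(docker inspect -f "{{.State.Health.Status}}" "$CONTAINER_NAME")" = healthy ]; do sleep 1; done'
The status reported by docker inspect moves from starting to healthy (or unhealthy) as the HEALTHCHECK runs, so the loop ends as soon as the first check passes, and timeout still caps the wait at 60 seconds. Either way, this line simply replaces the sleep 60 step in your script.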