Tags: docker, spring-boot, docker-compose, docker-swarm

Docker Swarm deployment takes a long time and the container kills the service


I've been experimenting with Docker. As part of that, I've been using a simple Spring Boot application:

https://github.com/siva54/simpleusercontrol

When I run the application in a plain Docker container, the application logs show the line below:

2017-07-03 02:27:25.388 INFO 5 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4963 ms

However, when I run the same thing with Docker Swarm, the application takes much longer:

2017-07-03 00:32:56.483 INFO 5 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 48699 ms

The application also doesn't start up. Instead, the logs (docker logs <>) end with the string "Killed". I'm guessing it might be due to the long startup time; possibly a timeout setting in Docker is killing the service.

Can anyone please help me set the timeout or fix the application so that it doesn't take such a huge amount of time to start?

Please find the links below for more info:

Docker version : Docker version 17.06.0-ce, build 02c1d87

Dockerfile (https://github.com/siva54/simpleusercontrol/blob/master/Dockerfile)

Docker Compose (https://github.com/siva54/simpleusercontrol/blob/master/docker-compose.yml)

If anyone wants to experiment with my application, you can pull the image using "siva54/simpleusercontrol".

If you can start the application and the link (localhost:8080/swagger-ui.html#/) works for you, the application is running correctly.
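For a quick check from the command line (assuming the port mapping to 8080 mentioned above), something like the following should return an HTTP 200 when the application is up:

curl -I http://localhost:8080/swagger-ui.html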

The following commands were used to run with swarm:

Initialize swarm

docker swarm init

Run the application

docker stack deploy -c docker-compose.yml app1
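The status of the deployed stack can then be checked with, for example:

docker stack services app1

docker stack ps app1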

The following was used to run without swarm:

docker run siva54/simpleusercontrol

All of this was done on machines created using Vagrant.


Solution

  • There are two things that jump out at me here.

    1. The Killed message in the logs is usually generated by the Linux kernel's OOM killer. I would check the output of dmesg on the host, and probably also docker ps -a, docker service ls, and docker stack ls, to see if they show any scheduling or exit-code errors (see the example commands after this list). This could be an indicator that the host doesn't have enough memory (for example, if the host has only 512 MB of memory, the application plus the host OS may be over the limit).
    2. The docker-compose.yml you link to defines a memory limit of 128 MB. But when I start a regular container from that image, it uses about 345 MB just idling. I am guessing that the 128 MB limit is causing the JVM garbage collector to work overtime, which explains the slow startup, and it is probably also what triggers the OOM killer to kill off the application. I'd try bumping the memory limit up to maybe 512 MB (see the compose snippet after this list).
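To confirm whether the OOM killer is involved, the kernel log and the state of the exited container can be checked; here <container-id> stands for whatever docker ps -a reports for the killed container. OOMKilled being true (usually together with exit code 137) confirms that the kernel killed the process:

dmesg | grep -i -E "out of memory|killed process"

docker ps -a

docker inspect -f 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' <container-id>

To raise the limit, the deploy section of the compose file can be adjusted along the following lines and the stack redeployed with the same docker stack deploy command as before. This is only a sketch: the service name is a placeholder and the rest of the file should stay as it is in the existing docker-compose.yml (it also assumes a version 3 compose file, which docker stack deploy requires anyway):

version: "3"
services:
  <your-service>:
    image: siva54/simpleusercontrol
    deploy:
      resources:
        limits:
          memory: 512M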