So I have:

```yaml
version: "3.6"
services:
  nginx:
    image: nginx
  app:
    image: node:latest
```
And my nginx config is:

```nginx
upstream project_app {
    server app:4000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://project_app;
    }
}
```
In order to update a container without downtime (rolling updates), I first scale the `app` service up to 2:

```shell
docker-compose up -d --no-deps --scale app=2 --no-recreate app
```
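A quick way to confirm the second container is actually up before going further — a sketch, assuming the container ends up named `project_app_1` (compose naming depends on the project name and version):

```shell
# List the containers backing the app service; after --scale app=2
# there should be two of them.
docker-compose ps app

# Wait until the new container reports a running state before proceeding.
until [ "$(docker inspect -f '{{.State.Running}}' project_app_1)" = "true" ]; do
  sleep 1
done
```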
It will create `project_app_1` alongside `project_app`. But at this step, even when the new `project_app_1` container is ready, all the traffic goes to `project_app`, the former container. To have them both used, I then need to run `docker-compose restart nginx`. Now, the traffic is routed to both `project_app` and `project_app_1`, which is really cool. I am now ready to kill `project_app`, which is outdated now.
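A minimal sketch of that last step, assuming the old container is literally named `project_app` (again, naming varies between compose versions):

```shell
# Stop the outdated container; Docker removes it from the embedded DNS,
# so only project_app_1 remains registered under the service name.
docker stop project_app

# With the static upstream config above, nginx still holds the old address,
# so restart it once more to drop the dead backend (same step as before).
docker-compose restart nginx

# Finally remove the stopped container.
docker rm project_app
```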
My questions are:

1. Do I need to restart nginx every time I scale up, so it picks up `project_app_1`, or is it somewhat automatic?
2. The reason `http://app:4000` works is because of DNS hostname config, right? Where can I learn more on this?

Thanks
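Regarding question 2, one way to see the DNS side for yourself — a sketch assuming the compose file above and that both containers are on the same default network:

```shell
# Docker's embedded DNS server (127.0.0.11) answers for service names.
# With --scale app=2, this should print one A record per app container.
docker-compose exec nginx getent hosts app
```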
PS: If you are curious about the whole script I use, I reported it on the associated GitHub issue.
So I finally found more info on this.
When writing `server app:4000;`, `app` is a DNS entry, which resolves to multiple instances.
It is possible to update those DNS entries without having to restart nginx. The details are here: https://serverfault.com/a/916786/182596
This reddit post and this nginx article also helped.
Basically, one has to set the Docker embedded DNS server (`127.0.0.11`) as the resolver in the nginx configuration. Using a variable in `proxy_pass` forces nginx to re-resolve the name at request time (honoring `valid=`) instead of only once at startup:

```nginx
resolver 127.0.0.11 valid=10s;

server {
    set $app app:4000;

    location / {
        proxy_pass http://$app;
    }
}
```
Once `docker-compose up -d --no-deps --scale app=2 --no-recreate app` is called, it starts routing to both instances.
The issue is that when scaling down, it takes up to the DNS entry TTL for nginx to notice that an address is not valid anymore. Hence, with `valid=10s`, I do have 50% of my traffic failing for [0-10s], which is decent but not perfect.
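One possible mitigation (my own addition, not from the linked answers): nginx retries the next backend on connection errors by default (`proxy_next_upstream error timeout`), and this is reported to apply to dynamically resolved address groups as well — assuming that holds for your nginx version, a failed connect to the removed container would be retried against the surviving one instead of returning an error. A hedged sketch:

```nginx
resolver 127.0.0.11 valid=10s;

server {
    set $app app:4000;

    location / {
        # If the first resolved address refuses the connection (container gone
        # but its DNS entry still cached), try another resolved address.
        proxy_next_upstream error timeout;
        proxy_next_upstream_tries 2;
        proxy_pass http://$app;
    }
}
```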
I'm currently investigating: