I have a 3-node swarm, each node with a static IP address: the leader, node-0, on 192.168.2.100; a backup manager, node-1, on 192.168.2.101; and a worker, node-2, on 192.168.2.102. node-0 is the leader that initialized the swarm, so the --advertise-addr is 192.168.2.100.

I can deploy services that can land on any node, and node-0 handles the load balancing. So if I have a database on node-2 (192.168.2.102:3306), it is still reachable from node-0 at 192.168.2.100:3306, even though the service is not running directly on node-0.
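For concreteness, a setup along these lines might look like the following sketch; the service name, image, and join token are placeholders, and the published port matches the 3306 example above.

```
# On node-0 (192.168.2.100): initialize the swarm, advertising this address
docker swarm init --advertise-addr 192.168.2.100

# On node-1 and node-2: join with the tokens printed by the init command
# (manager token for node-1, worker token for node-2)
docker swarm join --token <token> 192.168.2.100:2377

# On a manager: publish port 3306; the ingress routing mesh then accepts
# connections on 3306 on every node and forwards them to a running task
docker service create --name db --publish published=3306,target=3306 mysql:8
```

That routing mesh is why 192.168.2.100:3306 works even though the container is actually running on node-2.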
However, when I reboot node-0 (say it loses power), the next manager in line (node-1) assumes the leader role, as expected.

But now if I want to access a service, say an API or database, from a client (a computer that's not in the swarm), I have to use 192.168.2.101:3306 as my entry-point IP, because node-1 is handling the load balancing. So, from the outside world (other computers on the network), the IP address of the swarm has changed, which is unacceptable and impractical.

Is there a way to resolve this such that a given manager has priority over another manager? Otherwise, how is this sort of issue resolved so that the entry-point IP of the swarm does not depend on the acting leader?
Make all three of your nodes managers and use some sort of load-balanced DNS to point to all three of your manager nodes. If one of the managers goes down, your DNS will route to one of the other two managers (seamlessly, or slightly less seamlessly, depending on how sophisticated your DNS routing/health-check/failover setup is). When you come to scale out with more nodes, nodes 4, 5, 6, etc. can all be worker nodes, but you will benefit from having three managers rather than one.
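A minimal sketch of that approach, assuming the node hostnames match the ones above and that you control a DNS name such as swarm.example.com (both are placeholders):

```
# On the current leader: promote the worker so all three nodes are managers
docker node promote node-2

# Verify that every node reports a MANAGER STATUS of Leader or Reachable
docker node ls
```

Then point one DNS name at all three nodes, for example with round-robin A records:

```
; swarm.example.com resolves to all three nodes; a health-checking DNS or
; load balancer can drop a record when its node goes down
swarm.example.com.   60   IN   A   192.168.2.100
swarm.example.com.   60   IN   A   192.168.2.101
swarm.example.com.   60   IN   A   192.168.2.102
```

Clients then connect to swarm.example.com:3306, and because the routing mesh publishes the port on every node, whichever address DNS hands out will forward the connection to the running task.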