I implemented the following sample akka-cluster system; please see the diagram below:
┌────host_A:2551────┐        ┌────host_B:3000────┐
│ ┌───────────────┐ │        │ ┌───────────────┐ │
│ │  MasterActor  │─┼────┬──▶│ │  WorkerActor  │ │
│ └───────────────┘ │    │   │ └───────────────┘ │
└───────────────────┘    │   └───────────────────┘
                         │   ┌────host_C:3000────┐
                         ├──▶│    WorkerActor    │
                         │   └───────────────────┘
                         │   ┌────host_D:3000────┐
                         └──▶│       ....        │
                             └───────────────────┘
The MasterActor and WorkerActor are implemented in separate sbt modules and started via scalatra servlets, so an actor system is created in a ServletContextListener when a particular sbt module is deployed.
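For context, a minimal sketch of what such a listener could look like (the class name ClusterContextListener and the actor creation details are my assumptions, not the actual project code):

import javax.servlet.{ServletContextEvent, ServletContextListener}
import akka.actor.{ActorSystem, Props}
import com.typesafe.config.ConfigFactory

class ClusterContextListener extends ServletContextListener {
  private var system: ActorSystem = _

  // create the actor system when this sbt module is deployed
  override def contextInitialized(sce: ServletContextEvent): Unit = {
    system = ActorSystem("ClusterSystem", ConfigFactory.load())
    system.actorOf(Props[MasterActor], "master") // or Props[WorkerActor] in the worker module
  }

  // tear the actor system down when the module is undeployed
  override def contextDestroyed(sce: ServletContextEvent): Unit =
    system.terminate()
}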
The MasterActor and WorkerActor are subscribed to the cluster events (such as MemberJoined/MemberUp/etc.). The WorkerActor can be scaled out to different nodes. The following port restrictions are used:
2551 - for the MasterActor's cluster node
3000 - for the WorkerActor's cluster node
I need to focus on cluster events only, because other details were omitted from this topic; a sketch of the subscription is shown below.
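For reference, a minimal sketch of how the MasterActor might subscribe to cluster events with the classic Akka Cluster API (the reactions in receive are illustrative assumptions):

import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class MasterActor extends Actor {
  private val cluster = Cluster(context.system)

  // subscribe to member lifecycle events when the actor starts
  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberUp], classOf[UnreachableMember], classOf[MemberRemoved])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive: Receive = {
    case MemberUp(member) if member.hasRole("worker") =>
      // a worker node has registered in the cluster and can receive work
      println(s"Worker is up: ${member.address}")
    case UnreachableMember(member) =>
      println(s"Member unreachable: ${member.address}")
    case MemberRemoved(member, previousStatus) =>
      println(s"Member removed: ${member.address} (was $previousStatus)")
  }
}

With InitialStateAsEvents the current cluster state is replayed as events on subscription, so a master that starts after the workers still receives MemberUp for each of them.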
This works successfully on my local machine (and with virtual machines under VirtualBox), but I have faced issues after deploying to EC2/Docker.
For example, I use two EC2 hosts with the following IPs: 10.x.x.A and 10.x.x.B. My project can be deployed in EC2 in the following ways:
way #1: the MasterActor module at 10.x.x.A and the WorkerActor module at 10.x.x.B
I consider way #1, when the modules are deployed on different hosts. Since I don't know in advance which IP will be used for the MasterActor, I reserve seed nodes for each node, according to the port restrictions above. Please see the diagram below, which illustrates my infrastructure and the akka-cluster configuration.
┌──[ec2@10.x.x.A]─────────────────────────────────────────────┐
│ │
│ > ifconfig │
│ eth0 10.x.x.A │
│ docker0 172.17.0.1 │
│ │
│ │
│ ┌─────[docker:172.17.x.d1]──────────────────────────────┐ │
│ │ > ifconfig ┌─────────────────┐ │ │
│ │ eth0 172.17.x.d1 │ MasterActor │ │ │
│ │ └─────────────────┘ │ │
│ │ ClusterSystem { │ │
│ │ akka.remote.netty.tcp.hostname = "10.x.x.A" │ │
│ │ akka.remote.netty.tcp.port = "2551" │ │
│ │ akka.cluster.roles = ["master"] │ │
│ │ akka.remote.netty.tcp.bind-hostname = "172.17.x.d1" │ │
│ │ akka.remote.netty.tcp.bind-port = "2552" │ │
│ │ akka.cluster.seed-nodes = [ │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.A:2551", │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.A:3000", │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.B:2551", │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.B:3000" ] │ │
│ │ } │ │
│ │ │ │
│ └───────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
┌──[ec2@10.x.x.B]─────────────────────────────────────────────┐
│ │
│ > ifconfig │
│ eth0 10.x.x.B │
│ docker0 172.17.0.1 │
│ │
│ │
│ ┌─────[docker:172.17.x.d2]──────────────────────────────┐ │
│ │ > ifconfig ┌─────────────────┐ │ │
│ │ eth0 172.17.x.d2 │ WorkerActor │ │ │
│ │ └─────────────────┘ │ │
│ │ ClusterSystem { │ │
│ │ akka.remote.netty.tcp.hostname = "10.x.x.B" │ │
│ │ akka.remote.netty.tcp.port = "3000" │ │
│ │ akka.cluster.roles = ["worker"] │ │
│ │ akka.remote.netty.tcp.bind-hostname = "172.17.x.d2" │ │
│ │ akka.remote.netty.tcp.bind-port = "2552" │ │
│ │ akka.cluster.seed-nodes = [ │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.A:2551", │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.A:3000", │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.B:2551", │ │
│ │ "akka.tcp://ClusterSystem@10.x.x.B:3000" ] │ │
│ │ } │ │
│ │ │ │
│ └───────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Inside each EC2 instance I have shown the result of the ifconfig command; the same is shown inside each Docker container.
For the akka-cluster configuration I used this manual:
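To spell out the intent of that configuration: hostname/port form the externally advertised address (the EC2 IP that the other cluster nodes see), while bind-hostname/bind-port are what the transport actually binds to inside the Docker container. A hedged sketch of the same settings supplied programmatically (a standalone bootstrap purely for illustration; the values are taken from the MasterActor diagram above):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object MasterNode extends App {
  // hostname/port: the address advertised to the other cluster nodes (the EC2 IP)
  // bind-hostname/bind-port: the address Netty actually binds to inside the container
  val config = ConfigFactory.parseString(
    """
      |akka.remote.netty.tcp.hostname = "10.x.x.A"
      |akka.remote.netty.tcp.port = 2551
      |akka.remote.netty.tcp.bind-hostname = "172.17.x.d1"
      |akka.remote.netty.tcp.bind-port = 2552
    """.stripMargin).withFallback(ConfigFactory.load())

  val system = ActorSystem("ClusterSystem", config)
}

Note that with this NAT split, the Docker port mapping must forward the advertised port on the host to the bind port inside the container (e.g. host 2551 → container 2552).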
The main issue: the MasterActor node starts and registers itself in the akka-cluster successfully, but the WorkerActor node starts and is not registered in the akka-cluster.
The main questions: is this a correct configuration for my cluster system? Are there any mistakes?
Also I've found an issue which may be connected with the main one. It concerned host and port availability: I could not ping from 10.x.x.A to 10.x.x.B and vice versa.
After that connectivity issue was resolved, the cluster now works successfully.