Tags: scala, proxy, akka, sharding

Akka sharding proxy cannot contact coordinator


I'm building an app with two kinds of nodes (front and back) on Akka 2.5.1, and I'm using Akka Cluster Sharding for load and data distribution across the back nodes. The front node uses a shard proxy to send messages to the back. Shard initialisation is as follows:

val renditionManager: ActorRef =
  if (nodeRole == "back")
    clusterSharding.start(
      typeName = "Rendition",
      entityProps = Manager.props,
      settings = ClusterShardingSettings(system),
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
  else
    clusterSharding.startProxy(
      typeName = "Rendition",
      role = None,
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)

And I get some dead-letter log entries (most entries omitted for brevity):

[info] [INFO] [06/02/2017 11:39:13.770] [wws-renditions-akka.actor.default-dispatcher-26] [akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] Message [akka.cluster.sharding.ShardCoordinator$Internal$Register] from Actor[akka.tcp://[email protected]:2552/system/sharding/Rendition#1607279929] to Actor[akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] was not delivered. [8] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] [INFO] [06/02/2017 11:39:15.607] [wws-renditions-akka.actor.default-dispatcher-21] [akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] Message [akka.cluster.sharding.ShardCoordinator$Internal$RegisterProxy] from Actor[akka://wws-renditions/system/sharding/Rendition#-267271026] to Actor[akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] was not delivered. [9] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] [INFO] [06/02/2017 11:39:15.762] [wws-renditions-akka.actor.default-dispatcher-21] [akka://wws-renditions/system/sharding/replicator] Message [akka.cluster.ddata.Replicator$Internal$Status] from Actor[akka.tcp://[email protected]:2552/system/sharding/replicator#-126233532] to Actor[akka://wws-renditions/system/sharding/replicator] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

and if I try to use the proxy, it fails to deliver and shows:

[info] [WARN] [06/02/2017 12:12:28.047] [wws-renditions-akka.actor.default-dispatcher-15] [akka.tcp://[email protected]:2551/system/sharding/Rendition] Retry request for shard [51] homes from coordinator at [Actor[akka.tcp://[email protected]:2552/system/sharding/RenditionCoordinator/singleton/coordinator#-1550443839]]. [1] buffered messages. 

On the other hand, if I start a non-proxy shard region on both nodes (front and back), it works properly.

Any advice? Thanks.

UPDATE

I finally figured out why it was trying to contact shards on the wrong nodes. If a shard region is only meant to be started on one kind of node, you need to add the following configuration:

akka.cluster.sharding {
  role = "yourRole"
}

This way, Akka Sharding will only look for shard hosts on nodes tagged with the role "yourRole".
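For the role restriction to take effect, each node also has to advertise its role via cluster membership. A minimal sketch of the per-node configuration, assuming the role names "back" and "front" from the question (standard `akka.cluster.roles` setting, not from the original post):

```
# back node: advertises the "back" role and hosts the shard regions
akka.cluster.roles = ["back"]
akka.cluster.sharding.role = "back"

# front node: advertises only its own role; it runs just the proxy
# akka.cluster.roles = ["front"]
```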

The proxy is still not able to connect to the shard coordinator and deliver messages to the shards, and I get the following log trace:

[WARN] [06/06/2017 12:09:25.754] [cluster-nodes-akka.actor.default-dispatcher-16] [akka.tcp://[email protected]:2551/system/sharding/Manager] Retry request for shard [52] homes from coordinator at [Actor[akka.tcp://[email protected]:2552/system/sharding/ManagerCoordinator/singleton/coordinator#-2111378619]]. [1] buffered messages.

so help would be nice :)


Solution

  • Got it!

    I made two mistakes; for the first one, check the UPDATE section in the main question.

    The second was that, for some reason, two shard regions need to be up within the cluster (for testing purposes I was running only one). I have no clue whether this is stated anywhere in the Akka docs.
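Putting both fixes together, a sketch of how the start/startProxy calls could look (a sketch only, assuming the shard-hosting role is named "back"; identifiers follow the question's code, and `ClusterShardingSettings.withRole` is the standard Akka 2.5 API for restricting a region to a role):

```scala
val renditionManager: ActorRef =
  if (nodeRole == "back")
    clusterSharding.start(
      typeName = "Rendition",
      entityProps = Manager.props,
      // restrict this region (and its coordinator) to nodes with role "back"
      settings = ClusterShardingSettings(system).withRole("back"),
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
  else
    clusterSharding.startProxy(
      typeName = "Rendition",
      // tell the proxy which role hosts the actual shard regions
      role = Some("back"),
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
```

With the role set on both sides, the proxy resolves the coordinator singleton on a "back" node instead of looking for it locally.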