I have successfully deployed a cluster to Azure with the reverse proxy enabled on all nodes and HTTPS working. This is a multi-tenant cluster; each tenant has their own application, and some of the stateful services will need to manage WebSockets.
I managed to get an instance of Kestrel working locally with WebSockets, but in Azure I just get 404s. I think my port configuration is wrong. I've read all the reverse proxy documentation but still can't figure some things out.
Q1: Do all listeners on a stateful service that wish to receive messages from the reverse proxy have to listen on port 19081? I would have thought so, but the documentation uses a different port (10592) and a very long identifier (which I believe is the partition ID and replica ID combined), with no explanation of how the naming service maps the service name to the listening port.
As an example, let's take the fabric:/MyApp/MyService service that opens an HTTP listener on the following URL:
http://10.0.0.5:10592/3f0d39ad-924b-4233-b4a7-02617c6308a6-130834621071472715/
Am I meant to be using this very long ID as the listening address? I guess that rules out Kestrel, since multiple instances may try to listen on the same node, but maybe I can use WebListener/HttpListener so the port can be shared.
Q2: Is it compulsory to create a listener if I want the service to listen only to the reverse proxy? ListenerName seems to be a required parameter in the URI format for addressing services. In that case, is it impossible to call dynamically spawned hosts (such as WCF service hosts) that listen on generated paths, e.g. https://fabric:19081/MyApp/MySvc/SomeWcfPath1?
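For reference, the URI format I'm referring to is (as far as I can tell from the Service Fabric reverse proxy docs — the placeholder names below are mine, and ListenerName is one of the query-string parameters):

```
http(s)://<cluster FQDN or IP>:<reverse proxy port>/<ServiceInstanceName>/<suffix path>?PartitionKey=<key>&PartitionKind=<kind>&ListenerName=<name>
```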
Happy to post (broken) code, but I think this is more of a conceptual problem; once I understand the limitations and underlying architecture better, I can solve it myself.
Regards
The reason you see the very long URL for stateful services is that in many cases you will have multiple partitions and replicas on the same node. The default examples/templates follow a convention of:
http://+:port/partition/replica/newGuid
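A minimal sketch of how such an address might be built following that convention (the class and method names here are illustrative, not from any actual template):

```csharp
using System;

static class ListeningAddress
{
    // Sketch: build a per-replica listening address following the
    // port/partitionId/replicaId/guid convention described above.
    public static string Build(int port, Guid partitionId, long replicaId)
    {
        // The trailing GUID keeps the path unique even if a replica is
        // torn down and recreated with the same partition/replica IDs.
        return $"http://+:{port}/{partitionId}/{replicaId}/{Guid.NewGuid()}/";
    }
}
```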
Consider the following case: you have a stateful service with two partitions that both have a primary replica and two secondary replicas.
Partition 1: 2446223d-5998-45f3-90fc-2d9705bedb1d
Partition 2: 7af3a1f0-7845-4003-b192-6a8b64cc47fd
You can set up your secondary replicas to open communication channels as well. If you used only the partition ID in the listening address, you would end up with the following:
http://10.10.10.1:19081/2446223d-5998-45f3-90fc-2d9705bedb1d (Primary)
http://10.10.10.1:19081/2446223d-5998-45f3-90fc-2d9705bedb1d (Secondary)
http://10.10.10.1:19081/2446223d-5998-45f3-90fc-2d9705bedb1d (Secondary)
http://10.10.10.1:19081/7af3a1f0-7845-4003-b192-6a8b64cc47fd (Primary)
http://10.10.10.1:19081/7af3a1f0-7845-4003-b192-6a8b64cc47fd (Secondary)
http://10.10.10.1:19081/7af3a1f0-7845-4003-b192-6a8b64cc47fd (Secondary)
On a five node cluster you're going to end up with the same listener address more than once on a single node, and that is a conflict. So you need to make the address more unique by adding something else. The default OwinCommunicationListener implementation in the stateless Web API template uses the partition ID, the replica ID, and a random GUID. I don't believe partition ID + replica ID is enough to guarantee uniqueness, which is why they add a random GUID to the path.
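To make that concrete, here is a rough sketch of how a stateful service wires this up — a hypothetical ICommunicationListener that derives the unique address from its context, plus the replica listener registration. This mirrors the template pattern but is not the exact template code; "ServiceEndpoint" and "MyListener" are assumed names, and the actual web host startup is elided:

```csharp
using System;
using System.Collections.Generic;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

// Hypothetical listener: shown only to illustrate how the unique
// per-replica address is derived and returned.
internal sealed class MyCommunicationListener : ICommunicationListener
{
    private readonly StatefulServiceContext _context;

    public MyCommunicationListener(StatefulServiceContext context)
    {
        _context = context;
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // "ServiceEndpoint" must match an endpoint declared in ServiceManifest.xml.
        var endpoint = _context.CodePackageActivationContext.GetEndpoint("ServiceEndpoint");

        string address =
            $"http://+:{endpoint.Port}/{_context.PartitionId}/{_context.ReplicaId}/{Guid.NewGuid()}/";

        // ... start the web host listening on 'address' here ...

        // The address returned here is what gets published to the naming
        // service, and what the reverse proxy forwards requests to.
        return Task.FromResult(
            address.Replace("+", FabricRuntime.GetNodeContext().IPAddressOrFQDN));
    }

    public Task CloseAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    public void Abort() { }
}

internal sealed class MyService : StatefulService
{
    public MyService(StatefulServiceContext context) : base(context) { }

    protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
    {
        // listenOnSecondary: true lets secondary replicas open the listener
        // too, matching the scenario above.
        return new[]
        {
            new ServiceReplicaListener(
                context => new MyCommunicationListener(context),
                name: "MyListener",
                listenOnSecondary: true)
        };
    }
}
```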