I have a number of different RESTful
services running, designed to be as separate as possible (physically or logically), but sometimes they need to communicate with each other (when available). Since they are all web servers, I use HTTP to deliver messages from one to another and receive the responses.
The question is: is HTTP,
as a protocol, efficient enough for that? Since every request requires a new connection, I am looking for another solution that is ready for high load.
Another thing: say I have 10 instances of service A and only 5 instances of service B, with an internal load balancer in between, so when A calls B, the balancer picks the most available B. Given that, I am not sure whether keep-alive
would help here.
Is there a production-ready library for this? Something similar to pub/sub
, where a service publishes a request, some free service from a certain group processes it, and returns the response? Or, say, when service B polls service A, B sticks with that instance of A for a few more requests before searching for a free one again.
UPD. I am using the tornado
framework (Python), with nginx
as a load balancer (and plan to move to Amazon in the future).
Sorry if this question is too broad.
Thank you!
After some investigation, I found RabbitMQ
to be the solution to my problem. It comes with a broker and a powerful administration tool.
Using RPC over the broker, I implemented an asynchronous request-reply pattern with JSON-RPC, so internal services can communicate with each other quite fast. Multiple instances of the same service can attach to the broker, and requests are dispatched to them round-robin.
Also, this article helped me figure out how to do it.
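To make the request-reply idea concrete, here is a minimal, self-contained sketch of the pattern in Python. It uses an in-process `queue.Queue` to stand in for the broker (so it runs without RabbitMQ installed); with a real broker you would publish to a shared request queue and set the `reply_to` and `correlation_id` message properties instead. All names here (`service_b`, `call_b`, the queue variables) are my own illustrative choices, not part of any library API.

```python
import json
import queue
import threading
import uuid

# One shared request queue: multiple B instances consume from it,
# which gives round-robin-style dispatch across free instances.
request_queue = queue.Queue()

def service_b(instance_name):
    """One instance of service B: take a request, send the reply to the
    queue named in its reply_to field, tagged with the same
    correlation_id so the caller can match request and response."""
    while True:
        message = request_queue.get()
        if message is None:  # shutdown sentinel
            break
        request = json.loads(message["body"])
        response = {
            "jsonrpc": "2.0",
            "id": request["id"],
            "result": {"echo": request["params"], "served_by": instance_name},
        }
        message["reply_to"].put({
            "correlation_id": message["correlation_id"],
            "body": json.dumps(response),
        })

def call_b(method, params, timeout=5):
    """Service A side: publish a JSON-RPC request, then block until the
    reply with the matching correlation_id arrives."""
    reply_queue = queue.Queue()
    corr_id = str(uuid.uuid4())
    request = {"jsonrpc": "2.0", "id": corr_id, "method": method, "params": params}
    request_queue.put({
        "reply_to": reply_queue,
        "correlation_id": corr_id,
        "body": json.dumps(request),
    })
    while True:
        reply = reply_queue.get(timeout=timeout)
        if reply["correlation_id"] == corr_id:  # ignore stale replies
            return json.loads(reply["body"])["result"]

# Start two instances of service B sharing the same request queue.
workers = [
    threading.Thread(target=service_b, args=(f"B{i}",), daemon=True)
    for i in range(2)
]
for w in workers:
    w.start()

print(call_b("ping", {"x": 1})["echo"])  # {'x': 1}
```

The key point is the `correlation_id`: since replies are asynchronous, the caller needs it to match each response to its request, which is exactly what RabbitMQ's RPC pattern provides via message properties.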