In a two-node scenario using roundrobin, I want HAProxy to dispatch two requests to each node before switching to the next node.
I have a messaging application that makes one request to get a messageID and then a second request to send the message.
If I use a standard roundrobin algorithm on two backend servers, one server ends up getting only the messageID requests while the other does all the message sending.
This is not really balanced, as providing messageIDs is trivial for a server, while handling the messages, which can be up to a few hundred MB, is all done by the other node.
I had a look at weighted roundrobin, but it does not seem to work out when using a weight of 2 for both servers, as the weights appear to be calculated relative to each other.
I'd be glad for any hint on how to make HAProxy switch backend nodes after two requests instead of one.
This is my current configuration, which still leads to a clear one-here, one-there round-robin pattern:
### frontend XTA Entry TLS/CA
frontend GMM_XTA_Entry_TLS_CA
mode tcp
bind 10.200.0.20:8444
default_backend GMM_XTA_Entrypoint_TLS_CA
### backend XTA Entry TLS/CA
backend GMM_XTA_Entrypoint_TLS_CA
mode tcp
server GMMAPPLB1-XTA-CA 10.200.0.21:8444 check port 8444 inter 1s rise 2 fall 3 weight 2
server GMMAPPLB2-XTA-CA 10.200.0.22:8444 check port 8444 inter 1s rise 2 fall 3 weight 2
Well, as stated, I would need a "two requests here, two requests there" round-robin pattern, but it keeps doing "one here, one there".
Glad for any hint, cheers, Arend
To get the behavior you want, where requests go to each server two at a time, you can add an extra consecutive server line for each backend server, like so:
backend GMM_XTA_Entrypoint_TLS_CA
balance roundrobin
mode tcp
server GMMAPPLB1-XTA-CA_1 10.200.0.21:8444 check port 8444 inter 1s rise 2 fall 3
server GMMAPPLB1-XTA-CA_2 10.200.0.21:8444 track GMMAPPLB1-XTA-CA_1
server GMMAPPLB2-XTA-CA_1 10.200.0.22:8444 check port 8444 inter 1s rise 2 fall 3
server GMMAPPLB2-XTA-CA_2 10.200.0.22:8444 track GMMAPPLB2-XTA-CA_1
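This works because each real server now appears twice in the round-robin rotation, so two consecutive requests should land on the same machine. The track keyword makes the duplicate entries reuse the health-check state of the first server line, so each backend server is still only checked once.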
However, if you can use HAProxy 1.9 or above, you can also use the balance random option, which should distribute requests evenly across your servers at random. I think this may solve the balancing problem you described more directly. In addition, balance random will still spread your requests evenly if the mix of request types changes.
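If you go that route, a minimal sketch of the backend, reusing the server names and addresses from your configuration, might look like this (balance random requires HAProxy 1.9 or later):
### backend XTA Entry TLS/CA, using random balancing (HAProxy 1.9+)
backend GMM_XTA_Entrypoint_TLS_CA
balance random
mode tcp
server GMMAPPLB1-XTA-CA 10.200.0.21:8444 check port 8444 inter 1s rise 2 fall 3
server GMMAPPLB2-XTA-CA 10.200.0.22:8444 check port 8444 inter 1s rise 2 fall 3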