Tags: load-balancing, haproxy, round-robin, maxscale, keepalived

MaxScale cluster (master-master) setup


When deploying multiple MaxScale instances in a Master-Slave topology (failover from master to slave with Keepalived or similar) in front of a Galera Cluster in read-write-split mode, everything goes fine. But what about a Master-Master-like topology in a round-robin fashion, is this possible?

Ex.: Having one MaxScale at 10.0.0.1, a second at 10.0.0.2, and HAProxy in front of them with a round-robin or leastconn distribution algorithm (or even without HAProxy/a load balancer, where applications just connect randomly to one or the other), is this possible/well supported by MaxScale?
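
Something like this HAProxy configuration is what I have in mind (the ports are just examples, with 4006 standing in for whatever port the MaxScale listener uses):

    frontend mysql_in
        bind *:3306
        mode tcp
        default_backend maxscale_pool

    backend maxscale_pool
        mode tcp
        balance leastconn                     # or: balance roundrobin
        server maxscale1 10.0.0.1:4006 check
        server maxscale2 10.0.0.2:4006 check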


Solution

  • Usually you can connect to any number of MaxScale instances, as long as certain features are enabled to guarantee that all MaxScale instances pick the same server to send writes to.

    Galera Clusters

    If you are using a Galera cluster, this can be done in a safe and conflict-free manner by enabling the root_node_as_master parameter. With it, MaxScale uses the Galera cluster itself to select the node it sends writes to, which allows your applications to connect to either of the MaxScale instances (see the configuration sketch at the end of this subsection).

    Even without this parameter, nothing would break on the database side, but due to the way Galera works, writing to multiple nodes increases the likelihood of running into a conflict when you COMMIT your transaction.
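
    As a sketch, enabling it could look like the following maxscale.cnf excerpt, deployed identically on both MaxScale hosts (server names, credentials and the listener port are placeholders; root_node_as_master is the parameter discussed above):

        [Galera-Monitor]
        type=monitor
        module=galeramon
        servers=node1,node2,node3            # [server] sections defined elsewhere in the file
        user=maxscale
        password=maxscale_pw
        root_node_as_master=true             # let the Galera cluster pick the single write node

        [RW-Split-Service]
        type=service
        router=readwritesplit
        servers=node1,node2,node3
        user=maxscale
        password=maxscale_pw

        [RW-Split-Listener]
        type=listener
        service=RW-Split-Service
        protocol=MariaDBClient
        port=4006

    Because both MaxScale instances derive the write node from the Galera cluster itself, clients get the same write target no matter which instance they connect to.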

    Asynchronous Replication Clusters

    If you use MaxScale with a cluster that uses asynchronous replication, you can still do this as long as you configure your mariadbmon monitor to use cooperative_monitoring_locks. This causes the MaxScale instances to communicate via the database about which servers they see and which of them they choose for writes.

    An added benefit of cooperative_monitoring_locks is that you can enable the automated cluster management parameters auto_failover and auto_rejoin without having to worry about two MaxScale instances attempting to change the replication configuration at the same time: cooperative_monitoring_locks makes sure only one MaxScale does it.
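
    As a rough sketch, the monitor section could look like this, again identical on every MaxScale host (server names and credentials are placeholders; cooperative_monitoring_locks, auto_failover and auto_rejoin are the parameters discussed above):

        [Replication-Monitor]
        type=monitor
        module=mariadbmon
        servers=server1,server2,server3              # [server] sections defined elsewhere
        user=maxscale
        password=maxscale_pw
        cooperative_monitoring_locks=majority_of_all # coordinate through locks held in the database
        auto_failover=true                           # only the lock-holding MaxScale performs failover
        auto_rejoin=true

    The MaxScale instance that holds the majority of the monitor locks is the one that performs the cluster operations, so the instances never make conflicting changes to the replication configuration.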