Tags: raft, hashicorp

The double master phenomenon


Imagine a cluster running the Raft protocol with numerous nodes spread across two completely separated networks (e.g. two AWS VPCs). This cluster runs fine for a while and has exactly one master, as expected. All of a sudden something goes wrong and the connection between the two networks breaks! Now we have two groups of nodes. In the group that has lost connection to the master, the nodes start an election and elect another master!

Clients outside both networks can still see all nodes! Now clients are actually communicating with two clusters, each having its own state!

This definitely breaks replicated-log consistency. How exactly is this handled, or how should it be handled, in Raft?


Solution

  • Virtually every consensus protocol requires a majority of nodes to be available to elect a leader. In your example, there are two possibilities:

    1. When the partition happens, one of the networks holds a majority of the total number of nodes; that network will be able to elect a leader. E.g. one network has one node and the other has two: the second network may elect a leader, as it has two out of three nodes.
    2. When the partition happens, neither network holds a majority, hence no leader will be elected. E.g. each network has two nodes: four nodes in total, with three required for a majority, so no leader can be elected on either side.

    In either case, a stale master stuck in a minority partition cannot commit new entries, because committing requires replicating to a majority, so the replicated log stays consistent.
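The majority rule above can be sketched in a few lines of Go. This is an illustrative helper, not part of any real Raft library: `quorum` and `canElectLeader` are hypothetical names for the strict-majority check that decides whether a partition can still elect a leader.

```go
package main

import "fmt"

// quorum returns the minimum number of votes needed to elect a
// leader in a cluster of n nodes: a strict majority.
func quorum(n int) int {
	return n/2 + 1
}

// canElectLeader reports whether a partition holding `have` out of
// the cluster's `total` nodes can still elect a leader.
func canElectLeader(have, total int) bool {
	return have >= quorum(total)
}

func main() {
	// Case 1: a 3-node cluster split 2/1.
	// The 2-node side keeps a quorum; the 1-node side does not.
	fmt.Println(canElectLeader(2, 3)) // true
	fmt.Println(canElectLeader(1, 3)) // false

	// Case 2: a 4-node cluster split 2/2.
	// Neither side reaches the quorum of 3, so no leader is elected.
	fmt.Println(canElectLeader(2, 4)) // false
}
```

Note that because the quorum is a strict majority, two disjoint partitions can never both satisfy it, which is exactly what rules out two simultaneously committing masters.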