Scenario
I am restoring a backup taken from one replica set onto a different replica set; let's call them replica set A and replica set B. The backup is an AWS EBS snapshot.
The available backup is of set A and has to be restored onto set B.
I had initially copied the original configuration of a set B node with cfg = rs.config().
After mounting an EBS volume created from the set A snapshot on a node of set B, I am able to connect to the db. The configuration on the restored volume is that of set A, since the volume was created from the set A backup, which means all hostnames in the existing configuration belong to set A.
Issue
While trying to force the saved configuration, I am now running into the issue below:
rs.reconfig(cfg, {force: true})
{
    "ok" : 0,
    "errmsg" : "New and old configurations differ in replica set ID; old was 5c4a6ab3b5306ee3ec95dae4, and new is 59dc23bfa547d208144dd564",
    "code" : 103,
    "codeName" : "NewReplicaSetConfigurationIncompatible",
    "operationTime" : Timestamp(1616525693, 4976),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1616573470, 22),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
Question
How can I get the restored data serving under set B's configuration? Let me know if more details are needed to add clarity to the question.
Note: the hosts differ between set A and set B, and both sets follow a replication model with an arbiter node.
MongoDB performs some sanity checks when dealing with replica sets:
When starting up, and when a new replica set configuration document is received, mongod checks that the replica set ID matches what it already has, and that its own host name appears in the members list of the new configuration. If anything doesn't match, it transitions to a state that does not accept writes.
This helps ensure the consistency of the data across replica set members.
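You can see the two IDs the error is comparing for yourself: the on-disk replica set ID lives in the `settings.replicaSetId` field of the replication config persisted in the `local` database. A quick way to inspect it (a sketch; the port is an assumption):

```shell
# Show the replica set ID of the running config
mongo --port 27017 --eval 'printjson(rs.conf().settings.replicaSetId)'

# Or read it straight from the persisted config document in the local database
mongo --port 27017 --eval 'printjson(db.getSiblingDB("local").system.replset.findOne().settings.replicaSetId)'
```

In your case the restored volume carries set A's `replicaSetId`, while your saved `cfg` carries set B's, which is exactly the mismatch in the error message.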
The basic steps to restore a replica set from backup are in the docs; for more detail, see https://docs.mongodb.com/manual/tutorial/restore-replica-set-from-backup/index.html
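In outline, the documented procedure amounts to dropping the old replication metadata and initiating a fresh replica set, which generates a new replica set ID. A sketch of those steps (paths, port, set name, and host names are placeholders, not your actual values):

```shell
# 1. Start mongod as a standalone (no --replSet) on the restored data volume
mongod --dbpath /data/db --port 27017

# 2. Drop the local database, which holds the old config and old replica set ID
mongo --port 27017 --eval 'db.getSiblingDB("local").dropDatabase()'

# 3. Shut down the standalone, then restart with the new replica set name
mongod --dbpath /data/db --port 27017 --replSet setB

# 4. Initiate a fresh config; mongod generates a new replica set ID
mongo --port 27017 --eval 'rs.initiate({_id: "setB", members: [{_id: 0, host: "nodeB1.example.com:27017"}]})'

# 5. Once the primary is up, add the remaining members and the arbiter, e.g.:
#    rs.add("nodeB2.example.com:27017")
#    rs.addArb("arbiterB.example.com:27017")
```

Note that because the local database is dropped rather than reconfigured, rs.reconfig with the saved cfg is never needed; the new set simply starts from a clean config over the restored data.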