In a traditional master-slave architecture, there is one master and one or more slaves. The dataset residing on the master node is asynchronously synced to the slave nodes, and writes are restricted to the master node alone. If the master node fails, one of the slave nodes is promoted to master; this comes with some downtime and possibly some data loss as well.
A master-master architecture has multiple master nodes, and each master node may have its own slaves. Here writes can be done on multiple nodes, so if one master node goes down, another master node can keep serving write requests. My concerns are:
1. Do the master nodes in a master-master architecture share the same dataset? If yes, is the communication between the master nodes synchronous or asynchronous? If it is asynchronous, then when one of the master nodes goes down, the other master node will have inconsistent data, just like in the master-slave architecture.
2. If the master nodes have their own unique datasets, do they still communicate with each other? If yes, what is the mode of communication, and more importantly, what is the need for it, considering they hold unique datasets?
3. Apart from scaling write requests, is there any other drawback of the master-slave architecture that the master-master architecture solves?
All of these architectures assume a single dataset. Otherwise you are not talking replication, you are talking sharding, which is a whole other discipline. The two discussions are not incompatible, but are really separate issues solving separate problems.
Multi-master solutions are almost always allowed to be inconsistent. Eventual consistency or lazy consistency is often the tradeoff for higher performance and no-downtime failover. The alternative, which is quorum based, generally has an overall lower performance than a simple master/slave solution, as multiple servers must ack a change to allow it to commit.
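To make the quorum tradeoff concrete, here is a toy sketch (not any real database's client API; the function and structure names are made up for illustration) of why quorum writes cost latency: a write only commits once a majority of replicas acknowledge it, so a single slow or down node can still be tolerated, but every commit waits on multiple servers.

```python
def quorum_write(replicas, key, value, quorum):
    """Apply the write to every reachable replica; commit only if a
    majority (the quorum) acknowledged it."""
    acks = 0
    for replica in replicas:
        if replica.get("up", False):
            replica.setdefault("data", {})[key] = value
            acks += 1
    # The write is only considered committed when enough nodes ack'd it.
    return acks >= quorum

# Three replicas, one of them down: a majority (2 of 3) still acks,
# so the write commits despite the failure.
replicas = [{"up": True}, {"up": True}, {"up": False}]
committed = quorum_write(replicas, "balance", 100, quorum=2)
# committed is True; with only 1 of 3 nodes up it would be False.
```

The cost is visible in the loop: the caller cannot return success until it has heard from a majority, which is why a quorum-based system is generally slower per write than a single master acking alone.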
To address your questions:
1) Generally asynchronous, and yes, inconsistency can absolutely happen. Synchronous solutions are brittle, expensive, and slow, but they are occasionally needed to solve very particular problems.
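The failure mode described here can be sketched in a few lines (a toy simulation, not real replication code): with asynchronous replication the master acknowledges the write before shipping it to its peer, so a crash in between loses an already-acknowledged write.

```python
# Two masters replicating asynchronously to each other.
master_a = {"x": 1}
master_b = {"x": 1}
replication_queue = []  # changes waiting to be shipped to master_b

# Client writes to master_a; the client gets an ack immediately,
# BEFORE the change has reached master_b.
master_a["x"] = 2
replication_queue.append(("x", 2))

# master_a crashes before the queue is drained; master_b takes over.
replication_queue.clear()  # the in-flight change dies with the node
print(master_b["x"])       # still 1 -> the acknowledged write is gone
```

This is exactly the window a synchronous (or quorum) scheme closes, at the price of waiting for the remote ack on every write.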
2) No: in a master-master (replication) setup, the master nodes all share a single dataset.
3) Assuming you are talking eventual consistency, the thing you are solving is scaling and redundancy: any node can go down and the system keeps running uninterrupted. If you are talking ACID consistency, you are purely solving the guaranteed-write problem, not the scaling problem. If I were running a bank, I would likely require multiple masters with ACID consistency.
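How an eventually-consistent multi-master setup converges after accepting concurrent writes can be sketched with a simple last-write-wins merge (one common, but not the only, conflict-resolution policy; the names here are illustrative):

```python
def merge(local, remote):
    """Merge remote entries into local; the newest timestamp wins."""
    for key, (value, ts) in remote.items():
        if key not in local or ts > local[key][1]:
            local[key] = (value, ts)

# Both masters accept a write to the same key while disconnected.
# Values are stored as (value, timestamp) pairs.
a = {"name": ("alice", 1)}
b = {"name": ("bob", 2)}

# When the masters exchange changes, both keep the newer write,
# so they converge to the same state ("bob", 2) -- but the write
# of "alice" is silently discarded, which ACID would never allow.
merge(a, b)
merge(b, a)
```

The silently-dropped write is why this model is fine for scaling and redundancy but not for the bank scenario above.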