We have been implementing Hazelcast 2.5 as our distributed cache mechanism. Before fully adopting the Hazelcast distributed map, can we get some idea of how Hazelcast implements it, i.e. how it shares data between two JVMs? Is Hazelcast using its own extended Map for it?
We implement the Map interface (the ConcurrentMap interface, to be more precise), but underneath that interface the implementation is completely custom.
Hazelcast partitions your data based on the key of the map entry. By default there are 271 partitions, and these are spread over the members in the cluster. So with a 2-node cluster, each member owns roughly 135 partitions.
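A minimal sketch of the partitioning idea in plain Java (the real implementation hashes the serialized key rather than calling `hashCode()`, and partition ownership is managed dynamically by the cluster, so the round-robin ownership here is purely illustrative):

```java
public class PartitionSketch {
    static final int PARTITION_COUNT = 271; // Hazelcast's default

    // Simplified: real Hazelcast hashes the serialized form of the key,
    // not the key's hashCode().
    static int partitionId(Object key) {
        return Math.abs(key.hashCode() % PARTITION_COUNT);
    }

    public static void main(String[] args) {
        // Spread the 271 partitions over a 2-node cluster using a toy
        // round-robin ownership scheme: one member ends up with 136
        // partitions, the other with 135.
        int members = 2;
        int[] owned = new int[members];
        for (int p = 0; p < PARTITION_COUNT; p++) {
            owned[p % members]++;
        }
        System.out.println("member0=" + owned[0] + " member1=" + owned[1]);
        System.out.println("partition of \"foo\" = " + partitionId("foo"));
    }
}
```

Because every member applies the same hash-to-partition rule, any member (or client) can compute which node owns a given key without asking anyone.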
When a write is done, the correct partition is determined from the hash of the key. The write is then sent to the member owning that partition and processed there.
When a get is done, the correct partition is likewise determined from the hash of the key. The get is sent to the member owning that partition, and once the value is read, the result is sent back to the caller.
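The put/get routing described above can be sketched as a toy in-process "cluster": each node holds only the entries whose partition it owns, and both operations route through the same hashing rule, so a get always lands on the node that handled the put. This is an illustration, not Hazelcast's actual code (real Hazelcast routes over the network, hashes serialized keys, and keeps backup replicas):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoutingSketch {
    static final int PARTITION_COUNT = 271;

    // A "node" owning a subset of partitions, each with its own local store.
    static class Node {
        final Map<Object, Object> store = new HashMap<>();
    }

    final List<Node> nodes = new ArrayList<>();

    RoutingSketch(int memberCount) {
        for (int i = 0; i < memberCount; i++) nodes.add(new Node());
    }

    // key -> partition -> owning node (toy ownership scheme)
    Node ownerOf(Object key) {
        int partition = Math.abs(key.hashCode() % PARTITION_COUNT);
        return nodes.get(partition % nodes.size());
    }

    // Both operations are "sent" to the owner chosen by the same rule.
    void put(Object key, Object value) { ownerOf(key).store.put(key, value); }
    Object get(Object key) { return ownerOf(key).store.get(key); }

    public static void main(String[] args) {
        RoutingSketch cluster = new RoutingSketch(2);
        cluster.put("foo", "bar");
        System.out.println(cluster.get("foo")); // read from the same owner
    }
}
```

The key property is that routing is deterministic: no central directory is needed, because the partition id follows from the key alone.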
This is a very simplified explanation of how the Hazelcast map works.