
Partitioning and serialization overhead in Hazelcast


I have an IMap where the value class Abc is a nested class that implements Serializable. Even with no sync or async backups I get an out-of-memory exception. When I reduce the number of partitions from the default 271 to 3, it seems to work and all entries get loaded successfully. What partitioning and serialization overhead is incurred during MapLoading?


Solution

  • Do not reduce the partition count to 3; you are slowing the system down by reducing concurrency. There are multiple partition threads on each member of a cluster, and each partition thread owns a certain number of partitions. Reducing the count that low would drastically hurt performance. Moreover, since each partition has a single owner, with only 3 cluster-wide partitions you cannot have more than 3 data-owning members in the cluster (I think it may actually be 2).
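    If you do change the partition count (for example, to restore the default after experimenting), it is controlled by the `hazelcast.partition.count` system property. A minimal sketch in declarative XML configuration; the property name is from the Hazelcast manual, and the surrounding layout is the standard `hazelcast.xml`:

    ```xml
    <hazelcast xmlns="http://www.hazelcast.com/schema/config">
        <properties>
            <!-- Default is 271; keep it unless you have measured a reason to change it -->
            <property name="hazelcast.partition.count">271</property>
        </properties>
    </hazelcast>
    ```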

    You were able to avoid the OOME because there were fewer partition objects, but that won't last long. You need to tune the JVM heap size appropriately for the required cache usage.
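    Heap sizing is done with standard JVM flags when starting the member. The figures below are purely illustrative (size from your measured entry count and per-entry cost), and `JAVA_OPTS` is just a common launcher convention, not something Hazelcast itself reads:

    ```shell
    # Illustrative only: fixed 4 GB heap for a Hazelcast member.
    # Setting -Xms equal to -Xmx avoids heap-resize pauses during MapLoader bulk loads.
    JAVA_OPTS="-Xms4g -Xmx4g"
    ```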

    On serialization overhead: Java serialization is the worst option in terms of both payload size and latency. Use Hazelcast's own serialization mechanisms instead; here is the link: http://docs.hazelcast.org/docs/3.10.2/manual/html-single/index.html#serialization
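    To see why this matters, here is a stdlib-only sketch (no Hazelcast dependency; the `Abc` class and its fields are invented for illustration) comparing default Java serialization with the kind of field-by-field binary encoding that a Hazelcast `DataSerializable` `writeData` implementation produces:

    ```java
    import java.io.*;

    public class SerializationSizeDemo {
        // Plain Serializable value class, standing in for the nested Abc in the question
        static class Abc implements Serializable {
            private static final long serialVersionUID = 1L;
            int id;
            String name;
            Abc(int id, String name) { this.id = id; this.name = name; }
        }

        // Default Java serialization: includes stream header, class descriptor,
        // field names and types in every standalone payload
        static byte[] javaSerialize(Object o) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        }

        // Field-by-field binary encoding: the same shape a DataSerializable's
        // writeData(ObjectDataOutput) would write, just the raw field values
        static byte[] fieldSerialize(Abc a) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (DataOutputStream out = new DataOutputStream(bos)) {
                out.writeInt(a.id);
                out.writeUTF(a.name);
            }
            return bos.toByteArray();
        }

        public static void main(String[] args) throws IOException {
            Abc a = new Abc(42, "example");
            System.out.println("Java serialization: " + javaSerialize(a).length + " bytes");
            System.out.println("Field-by-field:     " + fieldSerialize(a).length + " bytes");
        }
    }
    ```

    With hundreds of thousands of entries, that per-entry metadata difference is a large part of why the heap fills during MapLoading.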