Was doing some internal testing of a clustering solution on top of Infinispan/JGroups and noticed that expired entries never became eligible for GC, due to a reference held by the expiration reaper, whenever the cluster had more than one node with expiration enabled and eviction disabled. Due to some system constraints, the versions below are being used :
In my example I am using a simple Java main scenario, inserting a specific number of entries and expecting them to expire after a specific time period. The expiration does happen, as can be confirmed both by accessing an expired entry and via the respective event listener (if one is configured), but the entries never seem to be removed from memory, even after an explicit GC or when getting close to an OOM error.
So the question is :
Is this really the expected default behavior, or am I missing a critical configuration for cluster replication / expiration / serialization?
Example :
Cache Manager :
return new DefaultCacheManager("infinispan.xml");
infinispan.xml :
<jgroups>
    <stack-file name="udp" path="jgroups.xml" />
</jgroups>
<cache-container default-cache="default">
    <transport stack="udp" node-name="${nodeName}" />
    <replicated-cache name="myLeakyCache" mode="SYNC">
        <expiration interval="30000" lifespan="3000" max-idle="-1"/>
    </replicated-cache>
</cache-container>
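For clarity, here is the same cache definition expressed programmatically, a sketch assuming Infinispan's ConfigurationBuilder API. Note that the lifespan is only 3 seconds while the reaper interval is 30 seconds; all times are in milliseconds :
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

Configuration cfg = new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.REPL_SYNC) // replicated-cache mode="SYNC"
    .expiration()
        .wakeUpInterval(30000) // interval: how often the expiration reaper runs
        .lifespan(3000)        // lifespan: entries expire 3s after creation
        .maxIdle(-1)           // max-idle disabled
    .build();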
Default UDP JGroups XML, as in the packaged example :
.....
<UDP
mcast_addr="${jgroups.udp.mcast_addr:x.x.x.x}"
mcast_port="${jgroups.udp.mcast_port:46655}"
bind_addr="${jgroups.bind.addr:y.y.y.y}"
tos="8"
ucast_recv_buf_size="200k"
ucast_send_buf_size="200k"
mcast_recv_buf_size="200k"
mcast_send_buf_size="200k"
max_bundle_size="64000"
ip_ttl="${jgroups.udp.ip_ttl:2}"
enable_diagnostics="false"
bundler_type="old"
thread_naming_pattern="pl"
thread_pool.enabled="true"
thread_pool.max_threads="30"
/>
The dummy cache entry :
public class CacheMemoryLeak implements Serializable {
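    // note: no hashCode()/equals() overrides, so Object identity semantics apply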
    private static final long serialVersionUID = 1L;
    Date date = new Date();
}
An example usage from the "service" :
Cache<String, Object> cache = cacheManager.getCache("myLeakyCache");
cache.put(key, new CacheMemoryLeak());
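For completeness, a minimal sketch of the reproduction scenario described above (the ExpirationRepro class name, entry count, and key format are made up for illustration; run it on more than one node so the cache actually replicates) :
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class ExpirationRepro {
    public static void main(String[] args) throws Exception {
        DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
        Cache<String, Object> cache = cacheManager.getCache("myLeakyCache");

        for (int i = 0; i < 100000; i++) {
            cache.put("key-" + i, new CacheMemoryLeak());
        }

        Thread.sleep(5000); // wait past the 3s lifespan

        System.out.println(cache.get("key-0")); // prints null: the entry has expired

        System.gc(); // yet a heap dump still shows the entries referenced by the reaper
    }
}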
Some info / tryouts :
It seems no one else has had the same issue, or others are using primitive objects as cache entries and thus have not noticed it. After reproducing the problem and fortunately tracing the root cause, the following points come up :
- Implement hashCode / equals for custom objects that are going to end up being transmitted through a replicated / synchronized cache; otherwise hashCode / equals would not be calculated efficiently.